Never Have an Unmaintainable Jupyter Notebook Again!


Data visualisation is a fundamental part of Data Science. The talk will start with a practical demonstration (using pandas, scikit-learn, and matplotlib) of how relying on summary statistics and predictions alone can leave you blind to the true nature of your datasets. I will make the point that visualisations are crucial in every step of the Data Science process and therefore that Jupyter Notebooks definitely do belong in Data Science. We will then look at how maintainability is a real challenge for Jupyter Notebooks, especially when trying to keep them under version control with git. Although there exists a plethora of code quality tools for Python scripts (flake8, black, mypy, etc.), most of them don't work on Jupyter Notebooks. To this end I will present nbQA, which allows any standard Python code quality tool to be run on a Jupyter Notebook. Finally, I will demonstrate how to use it within a workflow which lets practitioners keep the interactivity of their Jupyter Notebooks without having to sacrifice their maintainability.

26 min
02 Jul, 2021

Video Summary and Transcription

Jupyter Notebooks are important for data science, but maintaining them can be challenging. Visualizing data sets and using code quality tools like nbQA can help address these challenges. Tools like nbdime and pre-commit can assist with version control and future code quality. nbQA and the other code quality tools can be configured in the pyproject.toml file. nbQA has been integrated into various projects' continuous integration workflows. Moving code from notebooks to Python packages should be considered based on the need for reproducibility and self-contained solutions.


1. Introduction to Jupyter Notebooks

Short description:

We will discuss the importance of Jupyter Notebooks and the challenges of maintaining them. Then, I will demonstrate a workflow for keeping your Jupyter Notebooks maintainable.

Hello, friends. We are here today to talk about Jupyter Notebooks and how to keep them maintainable. We will start with a motivating example, in which I'll make the case for why you might care about using Jupyter Notebooks in the first place. Then, I'll address a couple of challenges which people often bring up when trying to keep their Jupyter Notebooks maintainable.

The first one has to do with version control, and anyone who's tried to look at the difference between two notebooks using git diff will know what I'm talking about. It's not easy. The second has to do with continuous integration and, more specifically, the lack of code-quality tools which are available to run on Jupyter Notebooks.

So, then, finally, I will demonstrate a workflow for keeping your Jupyter Notebooks maintainable. Let's dive straight in with our motivating example. I've prepared a pretty standard data science workflow here, absolutely standard. We'll go through it in a second. Now, you might be wondering why I'm showing you an absolutely standard data science workflow, and bear with me, there might be a twist at the end, might. So let's go through it.

2. Analyzing Summary Statistics

Short description:

We start by reading in four CSV files using pandas' read_csv. We print out summary statistics for all four data sets, which show that they are pretty similar.

We start by reading in four CSV files using pandas.read_csv, pretty standard. Each of these has two columns, x and y, pretty standard. So then we'll print out some summary statistics: the mean of x, the mean of y, the standard deviation of x, the standard deviation of y, and the correlation between x and y. We will do this for all four data sets, still pretty standard.

And then, using Scikit-learn, for each of these data sets we will fit a linear regression model, also pretty standard, and we will print out the mean squared error, also absolutely standard.
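In code, the script would look something like the sketch below (the CSV file names are illustrative, not from the talk):

```python
# A minimal sketch of the workflow described above: read four CSVs,
# print summary statistics, fit a linear regression, report the MSE.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

for i in range(1, 5):
    df = pd.read_csv(f"dataset_{i}.csv")  # hypothetical file names; columns x and y

    # Summary statistics
    print(f"Dataset {i}:")
    print(f"  mean(x)={df['x'].mean():.2f}  mean(y)={df['y'].mean():.2f}")
    print(f"  std(x)={df['x'].std():.2f}   std(y)={df['y'].std():.2f}")
    print(f"  corr(x, y)={df['x'].corr(df['y']):.2f}")

    # Fit a linear regression of y on x and report the mean squared error
    X = df[["x"]]
    model = LinearRegression().fit(X, df["y"])
    mse = mean_squared_error(df["y"], model.predict(X))
    print(f"  MSE={mse:.2f}")
```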

So where's the twist? Well, let's see what happens if we run this using Python. Right, look at that. If we look at what's been printed on the console, we'll see that the mean of x is the same for all four data sets, but so is the mean of y, the standard deviation of x, the standard deviation of y, the correlation between x and y, and the mean squared error from having fit a linear regression model is also almost identical. So if we look at this, we can tell that the four data sets must be pretty similar. That's what these summary statistics are telling us.

3. Analyzing Data Sets in Jupyter Notebooks

Short description:

Let's try doing the analysis in a Jupyter notebook instead of a Python script. We'll visualize the data sets and the linear regression lines. The plots reveal that the data sets are not the same, highlighting the importance of visualization. Jupyter notebooks can be criticized for version control issues, as shown by the diff after a trivial change.

Now, let's try doing something slightly different. Let's repeat this analysis, but instead of doing it in a Python script, let's do it in a Jupyter notebook. We'll do the same thing. We'll just read in these data sets using pandas.read_csv and we'll fit a linear regression model using scikit-learn. But then, instead of just printing out some summary statistics, we will visualize our data sets and we will also visualize the linear regression lines which we will have fit. And because we just printed out the summary statistics and they were the same for all four data sets, we expect the four plots to look almost identical. So, let's go. Ready, set, go.
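A notebook cell along these lines would produce the four plots (again, the file names are illustrative):

```python
# A sketch of the notebook version: plot each data set alongside its fitted line.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

for i, ax in enumerate(axes.flatten(), start=1):
    df = pd.read_csv(f"dataset_{i}.csv")  # hypothetical file names
    model = LinearRegression().fit(df[["x"]].to_numpy(), df["y"])

    # Scatter the raw points and overlay the fitted regression line
    ax.scatter(df["x"], df["y"])
    xs = np.linspace(df["x"].min(), df["x"].max(), 100)
    ax.plot(xs, model.predict(xs.reshape(-1, 1)), color="red")
    ax.set_title(f"Dataset {i}")

plt.show()
```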

Oh, what's going on? Looks like maybe these four data sets aren't that similar after all. If we contrast this with what we saw a second ago, when we just printed out some numbers to the console, now we can tell that the four data sets aren't actually the same. They just happen to have some shared characteristics. But when we were relying on single numbers as summary statistics, we couldn't tell that. Yet it's frustratingly common to see data science workflows in which people will just load in data, fit a model, and then print out a few numbers without ever bothering to visualize anything. So that's the motivating example. I hope it has highlighted the importance of visualizing your data, and Jupyter notebooks are a great way to do that.

But if Jupyter notebooks are so great, why do they sometimes get criticized? Well, I said earlier that they pose a problem when it comes to version control, and for anyone who's not tried doing that before, let's see together what I mean. Let's save the notebook as it is and make a commit: git commit -m "run notebook". Now let's make an absolutely trivial change. Let's just add a little line here saying fig.suptitle("Data Frames"). You know, a really small change. Let's run the cell again. The only thing that's changed is that I've added this title. If this was a Python script and we had just changed one line of code, then if we did git diff, we would see a really small diff. However, this is not a Python script, it's a Jupyter notebook. And so if we save and do git diff, look at what happens.

4. Challenges with Jupyter Notebooks and Solutions

Short description:

We encounter a horrendous diff between notebooks, which makes me want to stop using Jupyter notebooks. However, a specialized tool called nbdime provides a visually pleasing view of the diff, making Jupyter notebooks more desirable. Another challenge is the lack of code quality tools for notebooks. However, a tool called nbQA can convert notebooks to Python scripts, run code quality tools, and reconstruct the notebook. This allows for continuous integration and code quality checks in Jupyter notebooks.

We get this absolutely horrendous, unreadable raw diff. I look at this and I have no idea what's going on. It makes me want to stop using Jupyter notebooks forever. But is all lost? Maybe it's not so much that Jupyter notebooks don't work under version control; maybe it's just that we need a more specialized tool.

And one such tool, which I will present to you today, is called nbdime. The way nbdime works is you call it from the command line, as nbdiff-web, and you will get a URL which you can open up in your browser. Now we get a visually pleasing, easy-to-understand view of the diff between the notebooks. If we look at this, it's absolutely clear that just one line of code has changed. We can also easily compare the diff in the outputs and see that just the title has changed. This is much easier to read compared to what we had a couple of minutes ago. Instead of that absolutely unreadable diff, we now have something visually pleasing which makes me want to use Jupyter Notebooks again. Great.
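For reference, the command-line usage looks roughly like this (the notebook names are illustrative, and running nbdiff-web with no arguments relies on nbdime's git integration, so check the nbdime documentation for your version):

```bash
pip install nbdime                 # if nbdime is not already installed
nbdiff-web                         # inside a git repo: rich diff of modified notebooks against HEAD
nbdiff-web old.ipynb new.ipynb     # or compare two notebook files directly
# either way, nbdime prints a local URL to open in the browser
```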

So it wasn't that Jupyter Notebooks didn't work with version control. It was more that we needed a specialized tool. So that's the first challenge when it comes to Jupyter Notebooks which I brought up earlier. Let's now look at the next one. Because if you're keeping things in version control, then chances are you're not just looking at the diff between versions of your code, you'll also be running continuous integration. If you're used to doing continuous integration on your Python scripts, then likely you'll be used to running a whole suite of linters and formatters on your code, like black, isort, flake8, pyupgrade, mypy, and the list goes on. If you tell someone who's used to doing that that all of a sudden they need to switch over to using Jupyter notebooks, for which they won't have that large suite of tools available, then they might quite rightly feel like crying.

But does that mean that all is lost, or again, does it mean that we just need a more specialized tool? Let's see. We would need a tool which would temporarily convert your notebook to a Python script and store it in a temporary directory, run these code quality tools on it, reconstruct the notebook, and report the tools' output back. And one such tool, which I'll present to you today, is called nbQA. Let's have a little look at how that works. I've prepared a notebook here for you, which produces a pretty plot at the end, taken from the matplotlib gallery, but inside it I've written some deliberately messy code. Let's have a look at what happens when we run nbQA and then some code quality tools on it. You can run nbQA from the command line. In fact, you don't even need your notebook open or a Jupyter instance running. So, let's see what happens. Let's auto-format it using black. Then let's sort the imports using isort. We will then upgrade the syntax using pyupgrade, and then finally we will run flake8, which will not modify our notebook.
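The commands look roughly like this (the notebook name is illustrative; in the nbQA version shown here, the tools only modify the notebook in place because of the pyproject.toml configuration discussed later):

```bash
# Run standard Python code-quality tools directly on a notebook via nbQA
nbqa black my_notebook.ipynb       # auto-format the code cells
nbqa isort my_notebook.ipynb       # sort the imports
nbqa pyupgrade my_notebook.ipynb   # upgrade the syntax
nbqa flake8 my_notebook.ipynb      # report style-guide violations; never modifies the notebook
```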

5. Using pre-commit for Future Code Quality

Short description:

We remove an unused import and use nbdime to view the diff between notebooks. The imports are sorted, the unused import is removed, the outdated constructor is replaced, and the inconsistent indentation is sorted out. To ensure future code quality, we can use the pre-commit tool, which runs code quality checks automatically and blocks commits if they don't pass. By enabling pre-commit in our workspace, we can ensure that our notebooks continue to pass code quality checks. Pre-commit can also be used during continuous integration to validate incoming changes.

It'll just let us know if there are any style guide violations. And, in fact, it tells us that there's an import which is unused: seaborn. So, we can open up our notebook again, remove this unused import, and now, well, let's see what's changed.

To see what's changed, we're going to do git diff, except we're not going to do git diff. I just told you that we should be using nbdime to view the diff between notebooks. So, let's use it. Let's open up this link in our browser, and now, let's see what's changed.

So, first of all, you'll see that the imports have now been sorted thanks to isort. This unused import has been removed, as flagged by flake8. This outdated dictionary constructor has been replaced with a more modern one thanks to pyupgrade. This inconsistent indentation has been sorted out thanks to black, and all of a sudden the code style feels a lot more uniform, and it's going to be easier to keep it at a consistent quality. It's going to be easier to compare diffs when different people have been modifying it, if the style is consistent.

Great, except we don't just want our notebook to be of a certain code quality today. We want to make sure that it stays this way in the future, and a popular way of doing that is via a tool called pre-commit. The way pre-commit works is you need a .pre-commit-config.yaml file in which you specify the repositories which host the code quality tools you want to run on your files. So here I'll be using nbQA. You specify a revision (at the moment I'm putting 0.3.3, but you should always check to see what the latest one is, and probably put that one), and then specify which hooks you want to run. So I'll be running nbqa-black, nbqa-isort, nbqa-pyupgrade and nbqa-flake8. This is exactly what we had earlier, but now I've put it in my pre-commit config file.
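The config file described here would look roughly like the following sketch (the repository URL is nbQA's GitHub home; check its releases page for the latest rev):

```yaml
# .pre-commit-config.yaml: a sketch of the hooks listed above
repos:
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 0.3.3          # use the latest release rather than hard-coding this
    hooks:
      - id: nbqa-black
      - id: nbqa-isort
      - id: nbqa-pyupgrade
      - id: nbqa-flake8
```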

So what will happen now is that if we make a commit which stages a notebook, then pre-commit will run all of these code quality checks automatically, and it will block our commit if they don't all pass. Except, sorry, we need to enable pre-commit in our workspace for that to work. So let's git reset the notebook. Right, now let's add it again. Let's commit. All right, and now you'll see that it has run our code quality tools. I needed to do this twice to get them all passing, and the second time pre-commit let us actually make the commit. So if you use pre-commit, you will make sure that not only do your notebooks pass your code quality checks today, but also that they will continue passing your code quality checks in the future. You can also run pre-commit during your continuous integration, and so you'll make sure that any incoming change to your repository will pass these checks.
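The sequence, roughly, is the following (the notebook name and commit message are illustrative):

```bash
pre-commit install                  # enable the hooks for this repository
git add analysis.ipynb              # stage the notebook
git commit -m "Run notebook"        # pre-commit runs nbqa-black, nbqa-isort,
                                    # nbqa-pyupgrade and nbqa-flake8; the commit
                                    # is blocked until all of them pass
```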

6. Configuring nbQA and Code Quality Tools

Short description:

Is it really as simple as running nbqa black on your notebooks? I've hidden away some complexity in the pyproject.toml file, where you can configure nbQA and other code quality tools. You can let nbQA modify your notebook in place by specifying it in the mutate section. Extra command line arguments can be added in addopts.

Now, is it really this simple? Is it really as simple as just running nbqa black, and then you can use black on your notebooks just as you would normally use black on your Python scripts? I have a confession to make. I've actually hidden away a little bit of complexity from you in the pyproject.toml file. You can configure nbQA entirely within this file. It's the same file you can use to configure your black formatter, so if any of your tools take config files, you can put them here. If you want any code quality tool to modify your notebook in place, you can let nbQA know here in the mutate section. Notice that I haven't put flake8, because flake8 just analyzes our notebook without actually modifying it. And then, if you want to pass any extra command line arguments, you can put them in addopts.
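A sketch of such a pyproject.toml, matching the sections described above (the section names are from nbQA 0.3.x; newer nbQA versions modify notebooks in place by default and no longer need a mutate section, and the flake8 argument shown is purely illustrative):

```toml
[tool.nbqa.mutate]      # let these tools modify the notebook in place
black = 1
isort = 1
pyupgrade = 1
# flake8 is deliberately absent: it only analyzes the notebook

[tool.nbqa.addopts]     # extra command-line arguments per tool (value is illustrative)
flake8 = ["--extend-ignore=E203"]
```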

7. Importance of Jupyter Notebooks and Challenges

Short description:

Jupyter notebooks are crucial for data science because they allow for data visualization; the maintainability challenges can be addressed with nbdime, and code quality tools can be run on notebooks via nbQA.

Great. So, in conclusion, we have seen how Jupyter notebooks play an integral role in data science. This is because they allow you to visualize your data, which helps you understand it in a way that simply printing some summary statistics to your console does not. We also saw that Jupyter notebooks present some challenges when it comes to keeping them maintainable. Namely, that viewing the diff between two notebooks is hard, and we saw how we can address this using nbdime. And also that there's a lack of code quality tools available for Jupyter notebooks; we saw how we can keep our same Python code quality tools but just run them on our Jupyter notebooks via nbQA. I've included links to the homepages of nbdime and nbQA here, as well as to this presentation. That's it from me. Now please go out and write a maintainable Jupyter notebook. Good to have you.

QnA

nbQA Integration and Benefits

Short description:

We've had PyMC3, alibi, sktime, pandas-profiling and NLP Profiler use nbQA as part of their continuous integration workflows. We're looking at introducing a GitHub Action for nbQA. nbdime has a GitHub integration for reviewing pull requests; it's free for open source projects but not for private projects. nbQA is similar to integrating an IDE into a Jupyter Notebook and adding some steroids. Jeremy Howard describes Jupyter Notebooks as an embodiment of the literate programming environment envisioned by Donald Knuth. Any tool that helps us program in a more comfortable and maintainable way within a notebook would be welcome.

So we're going to jump to the questions from our audience. Are you ready? Sure, it's good to be here. Good, good.

Question one. Have you introduced nbQA to the other data scientists in your workplace? How much has it helped their workflow? Sure. So I have only recently spoken about it at work, and so there's limited buy-in there. At the moment most of the buy-in has been in the open source world. We've had PyMC3, alibi, sktime, pandas-profiling and NLP Profiler use it as part of their continuous integration workflows. I suspect that most of the buy-in is probably going to be there. We're looking at introducing a GitHub Action, and hopefully that'll help bring it to more people. Yeah. So, can you elaborate on that? Because you were showing how to use it locally, but you can use it on GitHub in the future? Yeah, sure. So, yeah, in the future there will be a GitHub Action. This is with reference to nbQA specifically. The other tool I showed, nbdime, which, just to clarify, I'm not affiliated with and I'm not a co-author of, has a GitHub integration which you can use to review pull requests on GitHub. There are some libraries such as PyMC3 which use that quite heavily. It's free for open source projects but not for private projects. If you want to use that in your workplace, then you will have to make the case to your employer as to why they should pay for it. Well, I can be very convincing, so that's not a problem.

Next question. Would it be safe to say nbQA is similar to integrating an IDE into a Jupyter Notebook and adding some steroids? I'm surprised no one came up with this before. Amazing work. Oh, well, thank you. That's very kind of you. I would like to think of it that way. I think... I'm trying to think of his name. The guy who did fast.ai, Jeremy Howard, he describes Jupyter Notebooks as being an embodiment of the literate programming environment which was envisioned by Donald Knuth, if I'm not mistaken. And I think it's a pity that a lot of the standard development practices which are available to us when we're programming in Python scripts are not so readily available when we're programming in Jupyter Notebooks. And given some of the benefits that they provide when doing data science, I think that any tool which helps us program in a more comfortable and maintainable way within a notebook, I'd like to think it would be welcome. Okay.

Moving Code from Notebook to Python Package

Short description:

When deciding whether to move code from a Jupyter Notebook to a Python package, consider the long-term need for reproducibility. If the code is part of a report or analysis that needs to be produced consistently over time, it may be better to keep it in the notebook. However, if you want a self-contained solution or if the code is not directly related to data science, migrating it to a Python package might be more appropriate. Jeremy Howard and the Fast.AI team have tools for creating packages from notebooks, although I haven't personally used them yet.

Next question is from our audience member, Dido. Any recommendations on when to move code from a notebook to a Python package? That's a good question. I mean, my main use for notebooks is when I have some report or some analysis that I want not just to be able to produce today, but that I also want to be able to produce one or two months from now, and know that when I try to produce it again it won't suddenly break. So with this use case in mind, I wouldn't typically migrate what I have in a notebook to a Python package. My usual thinking for making a Python package is when I want something somewhat self-contained that isn't part of an analysis or a model, while the kind of work I do in a Jupyter notebook is more to do with pure data science. So I wouldn't typically migrate a notebook to a Python package. However, Jeremy Howard and the fast.ai team do have some way of actually creating a package from a Jupyter notebook. They are very prolific in the number of tools that they put out. So there is a possibility for that. It's just not something I've used yet in my own work.
