Never Have an Unmaintainable Jupyter Notebook Again!


Data visualisation is a fundamental part of Data Science. The talk will start with a practical demonstration (using pandas, scikit-learn, and matplotlib) of how relying on summary statistics and predictions alone can leave you blind to the true nature of your datasets. I will make the point that visualisations are crucial in every step of the Data Science process and therefore that Jupyter Notebooks definitely do belong in Data Science. We will then look at how maintainability is a real challenge for Jupyter Notebooks, especially when trying to keep them under version control with git. Although there exists a plethora of code quality tools for Python scripts (flake8, black, mypy, etc.), most of them don't work on Jupyter Notebooks. To this end I will present nbQA, which allows any standard Python code quality tool to be run on a Jupyter Notebook. Finally, I will demonstrate how to use it within a workflow which lets practitioners keep the interactivity of their Jupyter Notebooks without having to sacrifice their maintainability.


Hello friends. We are here today to talk about Jupyter notebooks and how to keep them maintainable. We will start with a motivating example in which I'll make the case for why you might care about using Jupyter notebooks in the first place. Then I'll address a couple of challenges which people often bring up when trying to keep their Jupyter notebooks maintainable. The first one has to do with version control and anyone who's tried to look at the difference between two notebooks using git diff will know what I'm talking about. It's not easy. The second has to do with continuous integration and more specifically the lack of code quality tools which are available to run on Jupyter notebooks. So then finally I will demonstrate a workflow for keeping your Jupyter notebooks maintainable. Let's dive straight in with our motivating example. I've prepared a pretty standard data science workflow here. Absolutely standard. We'll go through it in a second. Now you might be wondering why I'm showing you an absolutely standard data science workflow and bear with me, there might be a twist at the end. Might. So let's go through it. We start by reading in four CSV files using Pandas read CSV. Pretty standard. Each of these has two columns, X and Y. Pretty standard. So then we'll print out some summary statistics. So we'll print out the mean of X, the mean of Y, the standard deviation of X, the standard deviation of Y, and the correlation between X and Y. We will do this for all four data sets. Still pretty standard. And then using scikit-learn for each of these data sets we will fit a linear regression model, also pretty standard, and we will print out the mean squared error. Also absolutely standard. So where's the twist? Well let's see what happens if we run this using Python. Right, look at that. 
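The workflow just described can be sketched roughly as follows. This is a minimal sketch, not the talk's actual code: it hardcodes dataset I of Anscombe's quartet instead of reading four CSVs with pandas, and uses `np.polyfit` in place of scikit-learn's `LinearRegression` (for a single feature the two give the same least-squares line):

```python
import numpy as np

# Dataset I of Anscombe's quartet, standing in for one of the four CSVs
# (each has two columns, x and y).
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

# Summary statistics printed in the demo
print(f"mean(x)={x.mean():.2f}  mean(y)={y.mean():.2f}")
print(f"std(x)={x.std(ddof=1):.2f}  std(y)={y.std(ddof=1):.2f}")
print(f"corr(x, y)={np.corrcoef(x, y)[0, 1]:.2f}")

# Fit a straight line and report the mean squared error
slope, intercept = np.polyfit(x, y, 1)
mse = np.mean((y - (slope * x + intercept)) ** 2)
print(f"y = {intercept:.2f} + {slope:.2f}*x   MSE={mse:.2f}")
```

All four of the quartet's datasets produce (to two decimal places) the same means, standard deviations, correlation, and fitted line, which is exactly the trap the demo sets up.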
If we look at what's been printed on the console, we'll see that the mean of X is the same for all four data sets, but so is the mean of Y, the standard deviation of X, the standard deviation of Y, the correlation between X and Y, and the mean squared error from having fit a linear regression model is also almost identical. So if we look at this, we can tell that the four data sets must be pretty similar. That's what these summary statistics are telling us. Now let's try doing something slightly different. Let's repeat this analysis, but instead of doing it in a Python script, let's do it in a Jupyter notebook. We'll do the same thing. We'll just read in these data sets using Pandas read CSV, and we'll fit a linear regression model using scikit-learn, but then instead of just printing out some summary statistics, we will visualize our data sets, and we will also visualize the linear regression lines which we will have fit. And because we just printed out the summary statistics and they were the same for all four data sets, we expect the four plots to look almost identical. So let's go. Ready, set, go. Oh, what's going on? Looks like maybe these four data sets aren't that similar after all. However, if we contrast this to what we saw a second ago when we just printed out some numbers to the console, now we can tell that the four data sets aren't actually the same. They just happen to have some shared characteristics. But when we were just relying on single numbers as summary statistics, we couldn't tell that. Yet it's frustratingly common to see data science workflows in which people will just load in data, fit a model, and then print out a few numbers without ever bothering to visualize it. So that's the motivating example. I hope this motivating example has highlighted the importance of visualizing your data, and Jupyter Notebooks are a great way to do that. But if Jupyter Notebooks are so great, why do they sometimes get criticized? 
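A hedged sketch of the plotting step (matplotlib, as in the talk). For brevity this plots the same stand-in dataset in all four panels; the demo of course plots the four different datasets:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np

# Stand-in data (Anscombe dataset I); the talk reads four CSVs with x and y.
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax in axes.flat:
    ax.scatter(x, y)                     # the raw points
    slope, intercept = np.polyfit(x, y, 1)
    xs = np.linspace(x.min(), x.max(), 100)
    ax.plot(xs, intercept + slope * xs)  # the fitted regression line
fig.savefig("quartet.png")
```

Plotting the real quartet this way immediately shows four visibly different shapes (a line with noise, a curve, an outlier-driven line, and a vertical cluster) despite identical summary statistics.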
Well, I said earlier that they pose a problem when it comes to version control. And for anyone who's not tried doing that before, let's see together what I mean. Let's save the notebook as it is, and let's make a commit: git commit -m 'run notebook'. Now let's make an absolutely trivial change. Let's just add a little line here saying fig.suptitle('data frames'). Really small change. Let's run the cell again. The only thing that's changed is that I've added this title. If this was a Python script, and we had just changed one line of code, then if we did git diff, we would see a really small diff. However, this is not a Python script. It's a Jupyter Notebook. And so if we save and do git diff, look at what happens. We get this absolutely horrendous, unreadable raw image diff. I look at this and I have no idea what's going on. It makes me want to stop using Jupyter Notebooks forever. However, all is not lost, because maybe it's not so much that Jupyter Notebooks don't work under version control. Maybe it's just that we need a more specialized tool. And one such tool, which I will present to you today, is called nbdime. The way nbdime works is you call it from the command line as nbdiff-web, and you will... Let me just allow that. And then you will get a URL, which you can open up in your browser. And now we get a visually pleasing, easy-to-understand view of the diff between the notebooks. Now if we look at this, it's absolutely clear that just one line of code has changed. We can also easily compare the diff in the outputs and see that just the title has changed. This is much easier to read compared to what we had a couple of minutes ago. Instead of that absolutely unreadable diff, now we have something visually pleasing, which makes me want to use Jupyter Notebooks again. Great. So, it wasn't that Jupyter Notebooks didn't work with version control. It was more that we needed a specialized tool.
So that's the first challenge when it comes to Jupyter Notebooks, which I brought up earlier. Let's now look at the next one, because if you're keeping things in version control, then chances are you're not just looking at the diff between versions of your code, you'll also be running continuous integration. If you're used to doing continuous integration on your Python scripts, then likely you will be used to running a whole suite of linters and formatters on your code, like black, isort, flake8, pyupgrade, mypy, the list goes on. And if you tell someone who's used to doing that that all of a sudden they need to switch over to using Jupyter Notebooks, for which they won't have available that large suite of tools, then they might quite rightly feel like crying. But does that mean that all is lost, or again, does it mean that we just need a more specialized tool? Let's see. You would need a tool which would temporarily convert your notebook to a Python script and store it in a temporary directory, run these code quality tools on it, reconstruct the notebook, and report the tools' output. And one such tool, which I'll present to you today, is called nbqa. Let's have a little look at how that works. I've prepared a notebook here for you, which produces a pretty plot at the end, which is taken from the matplotlib gallery. But inside it, I've written some purposefully distorted code. Let's have a look at what happens when we run nbqa and then some code quality tools on it. You can run nbqa from the command line. In fact, you don't even need your notebook open or to have a Jupyter instance running. So let's see what happens. Let's auto-format it using black. Then let's sort the imports using isort. We will then upgrade the syntax using pyupgrade. And then finally, we will run flake8, which will not modify our notebook. It'll just let us know if there are any style guide violations. And in fact, it tells us that there's an import which is unused, seaborn.
So we can open up our notebook again, remove this unused import. And now, well, let's see what's changed. To see what's changed, we're going to do git diff, except we're not going to do git diff. I just told you that we should be using nbdime to view the diff between notebooks. So let's use it. Let's open up this link in our browser. And now, let's see what's changed. So first of all, you'll see that the imports have now been sorted thanks to isort. This unused import has been removed thanks to flake8. This outdated constructor of this dictionary has been replaced with a more modern one thanks to pyupgrade. This inconsistent indentation has been sorted out thanks to black. And all of a sudden, the code style feels a lot more uniform, and it's going to be easier to keep this at a consistent quality. It's going to be easier to compare diffs when different people have been modifying it, if the style is consistent. Great. But we don't just want our notebook to be of a certain code quality today. We want to make sure that it stays this way in the future. And a popular way of doing that is via a tool called pre-commit. The way pre-commit works is you need a .pre-commit-config.yaml file in which you specify the repositories which host the code quality tools which you want to run on your files. So here I'll be using nbqa. You specify a revision. At the moment I'm putting 0.3.3, but you should always check to see what the latest one is and probably put that one. And then specify which hooks you want to run. So I'll be running nbqa-black, nbqa-isort, nbqa-pyupgrade, and nbqa-flake8. So this is exactly what we had earlier, but now I've put it in my pre-commit file. So what will happen now is that if we make a commit which stages a notebook, then pre-commit will run all of these code quality checks automatically and it'll block our commit if they don't all pass. We need to enable pre-commit in our workspace for that to work. So let's git reset the notebook. Right, now let's add it again.
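The configuration just described might look like this (a sketch; the rev matches the 0.3.3 mentioned in the talk, but you should check the nbQA releases page for the latest tag before copying it):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 0.3.3  # check for the latest release before pinning
    hooks:
      - id: nbqa-black
      - id: nbqa-isort
      - id: nbqa-pyupgrade
      - id: nbqa-flake8
```

After writing this file, running `pre-commit install` once in the repository sets up the git hook so the checks run on every commit.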
Let's commit. All right. And now you'll see that it has run our code quality tools. I needed to do this twice to get them all passing. And the second time, pre-commit let us actually make the commit. So if you use pre-commit, you will make sure that not only your notebooks pass your code quality checks today, but also that they will continue passing your code quality checks in the future. You can also run pre-commit during your continuous integration. And so you'll make sure that any incoming change to your repository will pass these checks. Now, is it really this simple? Is it really as simple as just running nbqa black and then you can use black on your notebooks just as you would normally use black on your Python scripts? I have a confession to make. I've actually hidden away a little bit of complexity from you in the pyproject.toml file. You can configure nbqa entirely within this file. It's the same file you can use to configure your black formatter. So if any of your tools take config files, you can put them here. If you want any code quality tool to modify your notebook in place, you can let nbqa know here in the mutate section. Notice that I haven't put flake8, because flake8 just analyzes our notebook without actually modifying it. And then if you want to pass any extra command line arguments, you can put them here in addopts. Great. So in conclusion, we have seen how Jupyter notebooks play an integral role in data science. This is because they allow you to visualize your data, which helps you understand it in a way that simply printing some summary statistics to your console does not. We also saw that Jupyter notebooks present some challenges when it comes to keeping them maintainable. Namely, that viewing the diff between two notebooks is hard, and we saw how we can address this using nbdime. And also that there's a lack of code quality tools available for Jupyter notebooks.
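The pyproject.toml section described above might look roughly like this. This is a sketch of the nbqa 0.3.x-era configuration discussed in the talk; newer nbqa versions apply formatters in place by default, so the mutate table is no longer needed, and the extra argument shown in addopts is illustrative:

```toml
# pyproject.toml
[tool.nbqa.mutate]
black = 1
isort = 1
pyupgrade = 1
# flake8 is deliberately absent: it only reports, it never modifies the notebook

[tool.nbqa.addopts]
pyupgrade = ["--py36-plus"]  # example of passing extra CLI arguments through
```

Keeping this alongside the black and isort sections of the same pyproject.toml means one file configures the whole tool chain for both scripts and notebooks.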
And we saw how we can keep our same Python code quality tools, but just run them on our Jupyter notebooks via nbqa. I've included links to the homepages of nbdime and nbqa here, as well as to this presentation. That's it from me. Now, please go out and write a maintainable Jupyter notebook. Good to have you. So we're going to jump to the questions from our audience. Are you ready? Sure. It's good to be here. Good, good. Question one. Have you introduced nbqa to the other data scientists in your workplace? And how much has it helped in their workflow? Sure. So I have only recently spoken about it at work, and so there's been limited buy-in there. At the moment, most of the buy-in has been in the open source world. We've had PyMC3, Alibi, sktime, pandas-profiling, and NLP Profiler use it as part of their continuous integration workflows. And I suspect that most of the buy-in is probably going to be there. We're looking at introducing a GitHub action, and hopefully that'll help bring it to more people. So can you elaborate on that? Because you were showing how to use it locally, but you can use it on GitHub in the future? Yeah, sure. So in the future, there will be a GitHub action. So this is with reference to nbqa specifically. The other tool I showed, nbdime, which just to clarify, I'm not affiliated with. I'm not a co-author of that one. nbdime has a GitHub integration, which you can use to review pull requests on GitHub. There are some libraries such as PyMC3, which use that quite heavily. It's free for open source projects, but not for private projects. So if you want to use that in your workplace, then you will have to make the case to your employer as to why they should pay for it. Well, I can be very convincing, so that's not a problem. Next question, would it be safe to say nbqa is similar to integrating an IDE into a Jupyter notebook and adding some steroids?
I'm surprised no one came up with this before. Amazing work. Oh, well, thank you. That's very kind of you. I would like to think of it that way. I'm trying to think of his name, the guy who did fastai: Jeremy Howard. He describes Jupyter notebooks as being an embodiment of the literate programming environment, which was envisioned by Donald Knuth, if I'm not mistaken. And I think it's a pity that a lot of the standard development practices, which are available to us when we're programming in Python scripts, are not so readily available when we're programming in Jupyter notebooks. And given some of the benefits that they provide when doing data science, I think that any tool which helps us program in a more comfortable and maintainable way within a notebook, I'd like to think it would be welcome. Okay. Next question is from our audience member, Dido. Any recommendations on when to move code from notebook to a Python package? That's a good question. My main use for notebooks is when I have some report or some analysis that I want not just to be able to produce today, but also that I want to be able to produce one month, two months from now and know that when I try to produce it again in two months, it won't suddenly break. So with this use case in mind, I wouldn't typically migrate what I have in a notebook to a Python package. My usual thinking for making a Python package is when I want something somewhat self-contained that isn't part of an analysis or some model, while the kind of work I do in a Jupyter notebook is more to do with pure data science. So I wouldn't typically migrate a notebook to a Python script. However, Jeremy Howard and the team do have some way of actually creating a package from a Jupyter notebook. And they are very prolific in the number of tools that they put out. So there is a possibility for that. It's just not something I've used yet in my own work. Next question is from Jordy VD.
I don't know if you're familiar with JavaScript, but do you know any alternatives in JavaScript for Jupyter notebook? I'm not familiar with JavaScript, I'm afraid. Ah, okay. Well then I am. But I'm not familiar with Jupyter notebook as much, so I don't know if I'm going to be answering Jordy's question correctly. But Jordy, I would recommend, if you want to just quickly test code, which basically is what I understand you're doing, right, with Jupyter notebook? This is for you, Marco. Well okay, let me just chime in on that, because it's in the middle of a sentence. So some people do use notebooks just to quickly test code, but I'd like to think that with the advent of more and more maintainability tools for notebooks, they can actually be used like a proper development environment rather than just something to quickly test something in. One of the previous askers said that we are transforming them into an integrated development environment. And yeah, I'd like to think that that's the long-term objective, but I don't think we're quite there yet. Anyway, go ahead and finish up your answer. Yeah, well basically I would tell Jordy to look at CodeSandbox. That's what I'm always using for these scenarios, which is a really nice website. So free advertising for CodeSandbox right here. Then there's a few people, three people typing their questions now, so let's see who finishes typing first. This is more questions than I was expecting. Alexander Schultz says, great talk. I'll have some cool things now to try for my own work. So that's a nice compliment for you. Thank you. Oh, and Jordy already gave feedback that he's using CodeSandbox, and that they're awesome. Nice. Another question from Soren. Excellent way of working with a Jupyter Notebook. Is it usable in SageMaker Notebook? I am not familiar with SageMaker, I'm afraid. If you have time, perhaps could you briefly describe what it is?
I mean, if you have some command line interface available and it's saved in the same format as a Jupyter Notebook, then I presume you could use it there? Well, Soren, I guess there's only one thing you can do if you want to know more about SageMaker and the option to use Jupyter Notebooks with it. You'll have to go into Marco's speaker room, which is going to be starting in a few minutes. So then you can continue the discussion about Jupyter Notebooks there, Soren. I guess that's all the time we have right now for Q&A. So like I said before, Marco will be going to his speaker room right now. If you want to join Marco, just find the link below the player on the website, click on the discussion, the speaker room for Marco, and you can join the conversation. So Marco, thanks a lot for your talk and this lovely Q&A session. Thank you very much.
26 min
02 Jul, 2021
