The Life-Changing Magic of Tidying Up your Test Warnings


Even though we write tests for our web applications, the reality is that bugs still happen. Fortunately, many of them are easily preventable by paying more attention to the warnings our apps emit. However, it's all too easy to sweep them under the rug and never come back until we find a bug in production, by which point hundreds if not thousands of warnings are appearing in our test output. This talk is about how to prevent that situation, and how to get out of it.


Hello everyone, and thank you for joining The Life-Changing Magic of Tidying Up Your Test Warnings. Today we're going to be talking about preventing test warnings, with two goals in mind. The first is to prevent bugs. This is the most important one. The second is to improve developer experience.

A little bit about myself. My name is Victor Cordova. I work at TravelPerk, a Barcelona-based startup. We are building the world's best business travel platform, so if you're interested, please feel free to join us.

All right. So let's start by asking: what are test warnings for? Test warnings are essentially messages, created by the developers of third-party libraries or other technologies, that give us clues about what to avoid. For example, we definitely want to avoid bugs. We want to avoid performance issues. We want to avoid security concerns, among many others. And this is just a small sample: there are also accessibility issues, deprecations, and so on.

Now, the thing about warnings is that they tend to accumulate over time, and it's worth asking why that is. The first reason is that they're easy to ignore. Test warnings are just text generated on your local machine or on some server, and that text by itself doesn't do anything. The second reason is that they don't make your CI fail. As developers, we all pay much more attention to the red color that pops up whenever something fails. And finally, they're usually not a priority.

We live in a complex world. We have product tasks, technical tasks. So warnings can easily go to the end of this list. Now, it's important to ask why do we even care? Honestly, I ask myself that.

So what happens if we ignore warnings? Let me give you one small example of what can go wrong. This is a small book-inventory application where each book has a title, a registration date, and a condition. Let's imagine I fill it in right now: fair, good, and terrible. Now I try the sorting functionality, and we'll see that everything except the condition gets sorted. Why is this concerning? Because your test run might very easily still be all green, which is really not a reflection of what's happening. React, which is just one example, will give you a warning that every element in a list must have a unique key. That's why we need to pay attention to these warnings. There's also the developer experience side of things. If you're trying to do TDD, or trying to debug an issue, nobody wants to wade through pages of warnings. It's annoying, and it makes the important output hard to find. So developer experience suffers too.
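The bug described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual app from the talk: a list rendered without a stable `key`, so React may reuse row state (such as an uncontrolled input's value) when the list is re-sorted, and warns about the missing key.

```jsx
// Sketch of the missing-key bug. Rendering list items without a stable
// key triggers React's warning:
//   "Each child in a list should have a unique 'key' prop."
// Worse, React may match old and new rows by position, so the
// uncontrolled <input> below can keep its old value after sorting.
function BookList({ books }) {
  return (
    <ul>
      {books.map((book) => (
        <li>
          {book.title} <input defaultValue={book.condition} />
        </li>
      ))}
    </ul>
  );
}

// The fix is to give each row a stable identity, e.g.:
//   <li key={book.id}>…</li>
```

With a stable `key`, React reconciles each row with the same underlying data across re-renders, so the sorted list stays consistent with the row state.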

Now, we can ask ourselves what we can do about it. This is a very common issue in large, legacy applications, and it sometimes feels like we really can't do anything. But we're engineers, so let's use some automation. I created a very small library called JS Reporter Log Validator. It allows you to define rules for your warnings so that your team doesn't create new ones.

The first feature is that you can add validations for certain patterns. As you will see, each pattern isn't necessarily a single fixed string, because warnings sometimes have a dynamic part. You can also set a maximum, which is basically saying: OK, we know we already have this number of warnings of this type, but I don't want any more.

The second feature is a fail-safe for unavoidable warnings. We sometimes install third-party libraries that generate messages we don't want, but there's nothing we can do about them for now, so we can just ignore them. There's also an option to fail if an unknown warning is found. So again, we have this registry of allowed warnings, and if something else comes up, say you create a pull request bumping a library to a new version, your build will fail and you'll know. If you want it to pass, you add the warning to the config, but at least you'll be explicit about it.

And finally, let's say you actually start fixing your warnings. You decide a particular warning has to go, and somebody fixes it. You then want its entry to disappear from the config, so it can't creep back in. This last setting forces you to keep the config up to date.
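The features above could map onto a config roughly like the following. This is a hypothetical sketch: the option names (`failuresAllowed`, `ignore`, `failIfUnknownWarningsFound`, `failIfCountBelowMax`) are illustrative, not the library's actual schema, so check its README for the real field names.

```javascript
// Hypothetical config sketch for a warning-validation reporter.
// All option names here are illustrative assumptions.
module.exports = {
  // Known warnings: a pattern (warnings may have a dynamic part,
  // so a substring or regex-like pattern rather than an exact string)
  // plus the maximum count we currently tolerate.
  failuresAllowed: [
    { pattern: 'should have a unique "key" prop', max: 12 },
    { pattern: 'componentWillReceiveProps has been renamed', max: 3 },
  ],
  // Fail-safe: warnings from third-party code we cannot fix right now.
  ignore: ['Warning emitted by some dependency we cannot change yet'],
  // Fail the build when a warning not listed above shows up,
  // e.g. after bumping a library version in a pull request.
  failIfUnknownWarningsFound: true,
  // Fail when a warning's actual count drops below its max,
  // forcing the team to lower the max and keep the config current.
  failIfCountBelowMax: true,
};
```

The design idea is a ratchet: counts can only go down, unknown warnings are loud failures, and every tolerated warning is an explicit, reviewable line in the config.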

This is an example of what it looks like. It gives you a nice summary of the expected behavior, what failed, and what you can do about it. Just a small caveat: this library is in its early stages, so feel free to play around with it and to give some feedback. But it's one example of how you can use automation to fix this issue.

Now, as I mentioned before, projects usually already have a large number of warnings. So what do we do about the existing ones? A tool like the one I presented will prevent new warnings from being added, but not fix the backlog. I would suggest a very simple analysis and distribution of work. An 80/20 analysis will probably give you very good results, because in my experience test warnings tend to concentrate in a small subset of files. If you identify which files are actually giving you trouble, you can prioritize and start fixing those.
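The 80/20 analysis can be as simple as counting captured warnings per file and sorting. A minimal sketch, with a made-up sample of warning records (in practice you would parse them out of your test runner's output):

```javascript
// Hypothetical sample of warnings captured from a test run.
const warnings = [
  { file: 'src/BookList.jsx', message: 'Each child in a list should have a unique "key" prop.' },
  { file: 'src/BookList.jsx', message: 'Each child in a list should have a unique "key" prop.' },
  { file: 'src/BookList.jsx', message: 'Cannot update a component while rendering a different component.' },
  { file: 'src/utils/date.js', message: 'Deprecation warning from a date library.' },
];

// Count warnings per file and sort descending, so the top entries
// show where fixing effort pays off most (the 80/20 analysis).
function warningsByFile(warnings) {
  const counts = new Map();
  for (const w of warnings) {
    counts.set(w.file, (counts.get(w.file) || 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

console.log(warningsByFile(warnings));
// [ [ 'src/BookList.jsx', 3 ], [ 'src/utils/date.js', 1 ] ]
```

The output immediately shows which handful of files generate most of the noise, which is exactly the prioritized list you need to distribute the cleanup work.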

Because, again, we have to prioritize, and we should prioritize based on risk and effort. A warning that can actually cause bugs, like the one I showed, should come first. A deprecation warning from a library you're already moving away from, say you're migrating from Moment.js to Day.js, is really not important at the moment.

And finally, of course, we need to organize and distribute the work. This is especially important across different teams when you're dealing with a large application, so that you can actually solve it piece by piece.

So, a small summary, in one line each: establish an anti-warning culture; use automation to your advantage; and address the existing warnings, because fixing them is worth it. That's everything from me. Thank you very much for listening.

8 min
15 Jun, 2021
