Even though we write tests for our web applications, the reality is that bugs still happen. Fortunately, many of these are easily preventable by paying more attention to the warnings from our apps. However, it's easy to sweep them under the rug and never come back until we find a bug in production, by which point hundreds if not thousands of warnings have piled up in our test output. This talk is about how to prevent this situation and how to get out of it.
The Life-Changing Magic of Tidying Up your Test Warnings
AI Generated Video Summary
Today's Talk focuses on preventing test warnings in software development. Test warnings are often ignored and can lead to bugs, performance issues, and security concerns. The speaker introduces a library called jsreporter log validator that automates the process of adding rules to prevent new warnings and fixing existing ones. The library provides a summary of expected behavior, failures, and actions to take. Overall, the Talk emphasizes the importance of paying attention to test warnings and using automation to improve developer experience and prevent issues in large and legacy applications.
1. Introduction to Test Warnings
Today we're going to be talking about preventing test warnings, with two goals in mind: preventing bugs and improving developer experience. Test warnings are messages created by developers to avoid bugs, performance issues, security concerns, and more. Warnings tend to accumulate because they're easy to ignore, don't make CI fail, and are often not a priority. Ignoring warnings can have consequences, as I'll demonstrate with a small application example.
Hello, everyone, and thank you for joining The Life-Changing Magic of Tidying Up Your Test Warnings. Today we're going to be talking about preventing test warnings, with two goals in mind. The first one is to prevent bugs, which is the most important, and the second one is to improve developer experience.
A little bit about myself. My name is Victor Cordova. I work at TravelPerk, a Barcelona start-up. We're building the world's best business travel platform. If you're interested, please feel free to join us.
All right. So let's start by asking what are test warnings for? Test warnings are essentially messages created by developers of third-party libraries or other technologies that give us clues about what to avoid. For example, we want to definitely avoid bugs. We want to avoid performance issues. We want to avoid security concerns, amongst many others. This is just a very small sample. We also have accessibility issues, deprecations, and so on.
Now, the thing about warnings is that they tend to accumulate over time, and it's worth asking why this is the case. The first reason is that they're pretty easy to ignore. Essentially, test warnings are just text being generated either on your local machine or on another server, and this text, by itself, doesn't do anything. The second reason is that they don't make your CI fail. As developers, we all know we pay much more attention to the red color that pops up whenever something fails. And finally, they usually aren't a priority. We live in a complex world, with product tasks and technical tasks, so warnings can easily slip to the end of the list.
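Since warnings are just text that never turns the build red, one common remedy is to intercept them in the test setup and turn them into failures. The sketch below is illustrative, not from the talk: the function name and callback shape are my own, and in a Jest-style setup this logic would typically live in a shared setup file.

```javascript
// Sketch: turn console warnings into recorded violations so a test run
// can be made to fail on them. Names here are illustrative.
function failOnConsoleWarnings(consoleLike, onViolation) {
  const original = consoleLike.warn;
  consoleLike.warn = (...args) => {
    original.apply(consoleLike, args); // still print the warning
    onViolation(new Error(`Unexpected warning: ${args.join(" ")}`));
  };
  // Return a function that restores the original console.warn.
  return () => {
    consoleLike.warn = original;
  };
}

// Minimal demonstration without a test runner:
const violations = [];
const restore = failOnConsoleWarnings(console, (err) => violations.push(err));
console.warn('Each child in a list should have a unique "key" prop.');
restore();
console.log(violations.length); // 1 captured warning
```

In a real suite, the `onViolation` callback would throw or mark the test as failed, so the warning gets the same attention as a red build.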
Now, it's important to ask why do we even care, honestly. I ask myself that. So what happens if we ignore warnings? I'm going to give you a very small sample of what can happen. This is a small application with a book inventory where we have the title, the registration date, and the condition of the book. So let's imagine I'm going to fill this right now.
2. Preventing Test Warnings
Missing React keys can silently break functionality, such as sorting, while the test run stays green. React warns that every element in a list must have a unique key, and ignoring warnings like this hurts both correctness and developer experience: if you're trying to do TDD or debug an issue, a wall of warnings makes it hard to find what matters. This is a very common problem in large and legacy applications, but it can be addressed with automation. The speaker's small library, jsreporter log validator, lets you add rules for warning patterns (including ones with a dynamic part), set a maximum count per pattern, tolerate unavoidable warnings from third-party libraries, and fail the build if an unknown warning is found.
I'm going to put fair, good, and terrible. So now I'm going to try to use this sorting functionality. And we'll see that everything but the condition is sorted.
Now, why is this concerning? Because this might very easily be your output in the test run: everything is green, which is really not a reflection of what's happening. React, which is just one example, will give you a warning that says every element must have a unique key. That's why we need to pay attention to these warnings. This is the developer experience side of things. If you're trying to do TDD, or trying to debug an issue, nobody wants to see this. It's quite annoying, and it's difficult to find the important stuff. So the developer experience is affected.
Now, we can ask ourselves what to do about it. It's a very common issue in large applications and legacy applications, and it sometimes feels like we really can't do anything. But we're engineers, so let's use some automation. I created this very small library called jsreporter log validator. It allows you to add different rules for your warnings so that your team doesn't create new ones. The first feature is that you can add validations for certain patterns. As you will see, it's not a single string for each pattern; sometimes they have a dynamic part. You can also set a maximum, so you're basically saying: okay, we know we have this number of warnings of this type, but I don't want any more. The second feature is a fail-safe for unavoidable warnings. We sometimes install third-party libraries that generate messages we don't want, but at times we cannot do anything about it, so we can just ignore them for now. There is also an option to fail if an unknown warning is found.
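The rules described above (patterns with dynamic parts, a maximum per pattern, an ignore list for unavoidable warnings, and fail-on-unknown) could be sketched roughly like this. To be clear, this is a hypothetical reimplementation of the idea, not jsreporter log validator's actual API; the config shape and function name are my own.

```javascript
// Hypothetical sketch of the rule-matching logic described in the talk.
// Not the real jsreporter-log-validator API.
function validateWarnings(messages, config) {
  const failures = [];
  const counts = config.patterns.map(() => 0);

  for (const msg of messages) {
    const i = config.patterns.findIndex((p) => p.regex.test(msg));
    if (i >= 0) {
      counts[i] += 1; // known warning: count it against its maximum
    } else if (config.ignore.some((p) => p.test(msg))) {
      // Unavoidable third-party warning: tolerated for now.
    } else if (config.failOnUnknown) {
      failures.push(`Unknown warning: ${msg}`);
    }
  }

  config.patterns.forEach((p, i) => {
    if (counts[i] > p.max) {
      failures.push(`${p.regex} seen ${counts[i]} times (max ${p.max})`);
    }
  });
  return failures;
}

const failures = validateWarnings(
  [
    'Each child in a list should have a unique "key" prop.',
    "Warning: componentWillMount has been renamed",
    "some-lib: charset is deprecated",
  ],
  {
    patterns: [
      { regex: /unique "key" prop/, max: 0 }, // none of these allowed
      { regex: /componentWillMount/, max: 5 }, // capped: don't add more
    ],
    ignore: [/some-lib:/], // unavoidable third-party noise
    failOnUnknown: true,
  }
);
console.log(failures); // the "key" warning exceeded its max of 0
```

Using regular expressions rather than exact strings is what handles the "dynamic part" of a warning, such as a component name embedded in the message.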
3. Using Automation to Fix Warnings
We have a registry of allowed warnings that can be updated with new versions of libraries. When fixing warnings, you can ensure they no longer occur. The library provides a summary of expected behavior, failures, and actions to take. Feel free to experiment and provide feedback on this automation solution.
So again, we have this registry of allowed warnings. If something else comes up, let's say you create a pull request updating a library to a new version, then your build will fail, and if you want it to pass you have to add the new warning to this config. At least you will be explicit about it. And finally, let's say you actually start fixing your warnings. You say, okay, this warning, "You cannot change router", I don't want it anymore, and I want to fix it. So somebody fixes it. But then you want it to disappear from the config; you don't want to allow it anymore. This last setting allows you to keep the config up to date. This is an example of what it looks like: it gives you a nice summary of the expected behavior, what failed, and what you can do about it. Just a small caveat: this library is in its early stages, so feel free to play around with it and to give some feedback. But this is just an example of how you can use automation to fix this issue.
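The "keep the config up to date" idea above can be sketched as a check that runs alongside validation: once somebody fixes a warning, the observed count drops below the configured maximum, and the build can point that out so the cap gets tightened. Again, this is a hypothetical illustration of the behavior the talk describes, not the library's real output or API.

```javascript
// Hypothetical sketch: flag rules whose maximum is now higher than the
// number of warnings actually observed, so the config can be tightened.
function staleMaximums(observedCounts, rules) {
  const suggestions = [];
  rules.forEach((rule, i) => {
    if (observedCounts[i] < rule.max) {
      suggestions.push(
        `"${rule.pattern}": only ${observedCounts[i]} left; lower max from ${rule.max}`
      );
    }
  });
  return suggestions;
}

const suggestions = staleMaximums(
  [0, 5], // warnings actually seen in this test run, per rule
  [
    { pattern: "unique key prop", max: 3 }, // somebody fixed these
    { pattern: "componentWillMount", max: 5 },
  ]
);
console.log(suggestions); // suggests lowering "unique key prop" from 3 to 0
```

Ratcheting the maximum down after every fix is what prevents the warning count from silently creeping back up.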