Overtesting is a common problem with unit test suites: having too many tests that break often and consume a large amount of time to keep green. Here's a fresh perspective on why that happens and how you can avoid it.
Overtesting: why it happens and how to avoid it
TestJS Summit 2022
Hello. This is a talk about overtesting. You can probably guess what it is from the name, but it's a simple technique you can use to improve your test suites and your testing proficiency. Really, it's a model for thinking about your testing. The talk is aimed at developers, but it's also useful for QA engineers and QA teams who want to discuss testing strategies and ideas with their developers.

So, the key question behind overtesting, and this might be really obvious, is: how much time do you spend maintaining your test suites versus your application code? For a lot of the clients I've worked with, the problem isn't under-testing. They're not testing too few of their use cases; they're actually testing a lot. What they end up doing is spending most of their time, up to 75% of their time or more in my observation, in their test code: making the code changes to their application to add new features or fix bugs, then having to go back and fix all the broken tests everywhere.

75% is too much. I love tests, I'm writing tests left, right, and center, but I don't want to spend my life in the tests. I want to be at least in a situation where I'm spending an equal amount of time on both, and maybe even less: maybe I can spend less time on my test suites while still maintaining 100% coverage, for example. I'm not going to suggest what that number should be, but if you're in this 75% territory, and you can think about where you are personally and on your team, you probably want to be moving it down to improve your productivity.

So, here's a simple question. The top screenshot shows a GitHub workflow running on a pull request; I'm sure a lot of you are familiar with this setup. The red X shows that CI, continuous integration, has failed. Like I said, I don't see teams that don't have CI. I think most people are writing tests.
The problem isn't under-testing, it's overtesting. So, to me, this question is key: how many automated tests do you need for CI to fail? The answer is just one. You need one test. Ideally, whatever change you make, if there's an issue with that piece of code you've written, just one test should fail, because that's all you need to raise the flag and stop your pull request from being merged. Now, this is an ideal scenario; I don't think you'd ever get down to exactly one. But think about the times this does happen: how many tests are failing for you? Are you in the scenario where multiple test suites are failing, hundreds of tests are failing, because of a simple change you made? That's the scenario where you start spending that 75% of your time fixing your tests.

So, overtesting, very simply: you have too many tests telling you the same thing. I'm going to show a very quick example. There are plenty of examples of this, but this is one I see a lot, where people are doing scenario-based testing. They'll set up the test and then write a single expectation covering the entire response or payload. In this example the code under test just calls fetch, and the test checks the method, credentials, headers, and body all at once. What I can do is split this into three different tests. The test on the left here is the key one: that's the body, probably the thing we're most interested in and the thing in our application code that will change the most. The two tests on the right will change less often, so I shouldn't expect them to break. I'm not going to be changing the method or the headers often, so those should just remain as they are; they're less brittle now. The key here is using expect.objectContaining. It's definitely your friend, and you can make more use of it.
Rather than writing these huge expectations in your tests, break them down into functional units. That's it; that's really the idea. It's about noticing how much time you're spending in your test code. That's an observation, and once you start thinking about it, how does it make you feel? Are you frustrated by it? Rather than just being stuck in that moment of fixing the test and getting the build working, stop and think about how you can be more productive, and then figure out ways to improve things.

I don't want to prescribe too many specific fixes here, but the second edition of Mastering React Test-Driven Development has just been released. It isn't just about TDD; it's about good unit testing practices, practices that have helped me in my career. I definitely recommend you check it out if you're interested in all the different ways of writing tests that won't take up all your time.

So, to conclude: observation. Think about the time you spend in your test suites and how they make you feel, and about what's serving you well. You should be happy with your test suites; they should be helping you out. And have an open mind to new ideas. Don't shut ideas down because you've read blog posts saying this is a terrible thing to do, this is how you should write tests, don't write tests like this. Just keep an open mind. And always come back to the question: how easy are your tests to work with? How much time are you spending in them? How do you feel when you're working in them? That's it. Thank you very much for listening. Let me know what questions and thoughts you have.