Transcription
Hi, thanks for joining me. My name is Mark Wubben, and in my spare time I maintain a Node.js test runner called AVA. Maybe you've heard of it. As foreshadowed by the title of this talk, I'm not really here to talk about AVA, but you should totally check it out. Now, in my day job, I work as principal product engineer at Monolith, which is a financial service provider in the cryptocurrency space in the UK and Europe. I'm not really here to talk about that either, but of course, you should totally check it out. This is not a talk about test runners, nor a talk about cryptocurrencies or how I write tests in the office. Instead, it's a talk about what I think we should strive for in our profession and the role that software testing can play. My job at Monolith and my hobby of maintaining a test runner give me what I hope is an interesting perspective on this.

So to start us off, let's look at an engineering project delivered in the Netherlands back in the 90s, costing nearly half a billion euros. It was a success. Two million people rely on it, and yet it's rarely been used. This is the Maeslant storm surge barrier. It protects Rotterdam and the surrounding area from storm surges. If I zoom out a bit on the map, you start to see all the towns around it. I grew up somewhere north of that. Rotterdam is down here.

Now, of course, the Dutch are somewhat famous for holding the sea at bay. Typically we built dikes, or levees as they're also known, as a wall to keep the water out, and the land around this waterway is protected by dikes. But in the 80s, the Dutch determined that the dikes were not high enough. So the obvious solution is you make them higher, right? But to increase the height of a dike, you need to widen its base, and that's hard to do when you have centuries-old towns built next to the dikes. Legacy code, if you will. Relocating those towns would have cost a fortune and taken decades, so a more creative solution was required. Elsewhere in the country we have the Afsluitdijk, which separates the North Sea from what is now a lake but used to be known as the Zuiderzee, the Southern Sea. Because it separates a sea from a lake, it's pretty easy to widen it and make it higher, which is a project that is underway right now. And there was one other problem: back in the 80s, Rotterdam Harbor was the world's largest seaport. I think it's still top five, definitely top ten. You can't quite close that off, because, well, where are all the containers going to go?

So just like how with npm we built the world's largest package registry, the Dutch built one of the world's largest movable structures. There are two gates. Let's see if I can play this. Here we go. There are two gates that can be floated into the waterway and lowered, protecting the hinterland from storm surges. Each gate is 22 meters high and 210 meters wide, backed by 237-meter-long trusses resting on the world's largest ball joints, 10 meters in diameter, for a combined weight of nearly 15,000 tons. And this is all controlled by a computer, because you can't have anxious operators close off a busy port just because a storm is brewing. So the humans have been replaced by 200,000 lines of C++ code. And the test suite is 250,000 lines. But this is not your average piece of code. The system was designed using formal methods. You can find a 20-year-old paper on that, and it's only going to cost you 40 euros.
Now, I'm not going to recommend that for your next React component, but it makes me pretty confident that my family will keep their feet dry. So my apologies for this diversion into real-world engineering, but I think there are some insights we can take away from this barrier. For instance, it doesn't actually fully close, which you can kind of make out here. There's a small gap between both arms, because you don't really want them crashing into each other; who knows what kind of damage that would do. But, you know, it makes you think about achieving 100% code coverage, right? They don't even care about keeping all the water out. They just have to keep almost all the water out.

As a profession, and I'm guilty of this myself, we can be terribly focused on code: new APIs, new frameworks, fretting about tech debt. This can be fun, it keeps us busy, and it makes it look like we're doing stuff. At Monolith, I'm a product engineer, and instead of talking more about engineering, I think we should talk a bit about product. We build products to serve our customers. For Monolith, that's individuals. For the storm surge barrier, that's two million individuals and businesses and a good chunk of the Dutch economy. And like building these barriers, building products is a multidisciplinary effort: design, marketing, research, support, operations (both technical operations like DevOps and company operations), compliance, whether it's GDPR or, in our case, certain financial regulations, quality assurance. That's a lot.

So what do we as software developers contribute to all this? Endlessly refining code or figuring out how to test the edges of edge cases can be a fun challenge, but does it serve the customer? Does it deliver value? Building great products requires iteration, reflection, and making trade-offs. You have to determine your constraints, like not moving an entire village. You have to make assumptions, test them, learn, make more assumptions, and so on and so forth. And this is required of everybody, not just developers. It's a high-wire balancing act. And luckily, we can often afford to make mistakes in public. We just call them bugs.

Testing assumptions doesn't always require you to write code. In fact, if you assume that writing code is the most expensive part of the process, then we should test as many assumptions as possible with as few lines of code as possible. At other times, it's most effective to write a bunch of spaghetti code, put it live, learn from what happens, and then change it again. Speed over quality, so you can learn more quickly and deliver more value faster. And with apologies to my Italian coworkers for insinuating that spaghetti does not have quality. Now, I can hear you cry: what about tech debt? I know, building products is not easy. It requires extensive collaboration between people who have radically different ways of defining and solving problems. As long as everybody is aligned on delivering value to the customer, you'll stand a good chance of succeeding.

But of course, there are things we can do as software developers to help with this process. And yes, that does involve testing. Because the problem with rapidly writing software is that at some point, you're done. If not with the project, then with that service or those UI components. And when you're done, you move on. And then you have to go back and make changes. And you don't know if you've broken anything.
So, that storm surge barrier, they test it every year, just to make sure the mechanism still works. Summertime, do some maintenance, no storms coming. Then late September, float it in, see if you can close it. Which sounds easy enough, right? But you've got to be pretty confident that it will also float back out, because you can't block the seaport for days. That would be very expensive. And probably all that software that drives it also receives the occasional update. So, good thing they have a whole process for verifying its correctness.

Now, I've long thought of code as being a reflection of how you understand a problem. The clearer your understanding, the clearer your code. And as the problem changes, so must the code. The iteration that is required to build good products is what causes the problem to change, and therefore the code to change. Good tests also reflect the problem, as well as the constraints that are imposed on the code. But most importantly, good tests provide confidence. Confidence that the problem is solved by your code. Confidence that when the problem changes and you're tasked with changing the code, you don't accidentally break things. We write tests so that we can have the confidence to iterate without breaking anything. To deliver value.

So, who does the testing in your organization? Dedicated QA teams can be fantastic, but you should avoid a culture where you chuck it over the fence. Software development teams should have confidence in their code, and that's going to require them to write their own tests. Of course, given that you're attending this conference, you're probably not the chuck-it-over-the-fence kind. Now, if I can make a confession: I don't like writing tests. I don't think I'll ever find it fun. Spending days writing tests for a new service, even when it's a critically important authentication system, can be tedious. It can feel like it's slowing you down, so that you iterate less and deliver less value.

So, my advice, then, is to focus on the confidence. Not the methodologies, whether you're test-driven or behavior-driven. Not the assertion libraries and APIs. Because really, how many abstractions can you handle, how many frameworks can you learn? Use that energy for your actual code and avoid unnecessary abstractions in your tests. Don't follow methodologies because they are best practice. Instead, do what makes sense for you and your team. Are you building better products? Are you wasting time? The code I tend to write runs in the backend, and within that context I've found that integration tests are good if they reflect the problem that you're solving. If you can't use an integration test but you want to test a critical area of the code, then use a unit test. Constraints often show up in input validation, and that's where you really want to test edge cases. But most importantly: write code, write tests, be confident, and build awesome products. Thank you. And I hope to chat with you in the Q&A.

Hi, Mark. Hi. Thank you so much for joining us. Yeah. And magically with a different room. Magically different room. Yes. From all over the place. So, what is your main job responsibility? I mean, I would say building the better product. Building the better product. That's kind of why I gave the talk the way I did. Yeah, I guess before I switched positions, when I was more of a software developer, that was my main responsibility as well.
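To make the talk's closing advice about input validation and edge cases a bit more concrete, here is a minimal sketch using AVA, the test runner mentioned at the start. parseTransferAmount is a made-up example for illustration, not code from the talk.

import test from 'ava';

// Hypothetical validator, purely for illustration: parse a decimal string into an
// integer amount of cents and reject anything that isn't a positive, finite amount.
function parseTransferAmount(input: string): number {
  const value = Number(input);
  if (!Number.isFinite(value) || value <= 0) {
    throw new RangeError(`Invalid amount: ${input}`);
  }
  return Math.round(value * 100);
}

test('accepts a plain decimal amount', t => {
  t.is(parseTransferAmount('12.50'), 1250);
});

test('rejects the edge cases around the constraint', t => {
  for (const input of ['0', '-1', '', 'NaN', 'Infinity', 'twelve']) {
    t.throws(() => parseTransferAmount(input), {instanceOf: RangeError});
  }
});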
But now I'm not quite sure what to answer to that question, actually. I'll have to think about that. Just like I have to think about iteration, reflection, and making trade-offs. It's a really great quote. I like it.

And Juvel Dagnar asked: when is spaghetti code a cost too high for confidence? Do you have any opinions on that? Well, is that the right question, I suppose? Again, it's a trade-off between the commitments, maybe, that you're making with the code. The more commitments you make to customers, the more you need to have confidence. So, if the code is moving money around and you're not sure it's really going to the right place, then that would be a big problem. If the code is sending push notifications, but half the time it doesn't, maybe that's not such a big problem.

Yeah, I guess the use case always plays a big role in what your priorities should be. But traditionally, if I remember back to university and everything, everybody told me: no, spaghetti code is bad, don't do that. But that's what you get told when you're learning stuff, because otherwise it's maybe too tempting to just keep going and never make it better. Also, when you're learning stuff, you have to try and write better systems from the start. Otherwise, you might never learn how to do that; you're always just writing spaghetti code. Yeah. But I think when you're building something, often the problem is not how good your code is. The problem is: well, what do the users actually need out of this? And so that's what you actually have to learn, because it doesn't matter if you have really good code but your product doesn't resonate. Yeah, if the product doesn't solve the problem for the user... Yeah. I guess you have to learn how to properly build things to be able to see when it's okay to build things that are not very good, and where it's allowed to write code that is not up to all the... What's the word? That's not up to quality standards.

And Juvel Dagnar actually wrote a follow-up question: if I have confidence in my tests, but they are not built consistently, doesn't it reduce confidence? If the tests are not consistent? If I have confidence in my tests, but they are not built consistently, doesn't it reduce confidence? I remember situations where we had tests that were flaky, or in some projects the tests weren't run regularly. Maybe he means that. Or maybe the tests themselves are not always run. Yeah. I mean, two sides, maybe. If you're changing code, or if anything changes, then you need to make sure you're running your tests. If you accidentally disable your tests, then it's untested, because even though the testing code exists, it never runs. But sometimes everything else changes. Your code doesn't change, but the platform it runs on changes. The Node version changes. Let's say, back in the olden days before Docker, you would install Node on a machine, and that's where you would run your code. So if you upgrade that Node version, you haven't changed the code you deploy, but you still have to make sure you have tested with that newer Node version. And even now, if you use Docker images, you might not run your tests against the final build output, so you still want to make sure all of those things are the same. Yeah. I remember... Hopefully that answers it. Hopefully. And if not, he can join the speaker room later on and maybe talk to you about that directly. Yeah.
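A hedged sketch of that last point, running the tests against the build output and the Node version you actually deploy; the ../dist import path and formatAmount are illustrative assumptions, not actual project code.

import test from 'ava';

// Illustrative only: import from the compiled output rather than the sources, so the
// suite exercises the same artefact that gets deployed. If the build step or the
// runtime changes, this is where it should break first.
import {formatAmount} from '../dist/lib/format.js';

test('the built output behaves like the sources did', t => {
  t.is(formatAmount(1250), '12.50');
});

test('the suite ran on the Node version we deploy with', t => {
  // process.version looks like 'v20.11.0'; compare the major version to what ops ships.
  const major = Number(process.version.slice(1).split('.')[0]);
  t.true(major >= 18, `expected Node 18 or later, ran on ${process.version}`);
});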
And Nikolaj Advolodkin, I'm really sorry about my pronunciation of names, asked: I'd like to understand how you balance how many tests to write to give you that confidence. Again, it's hard to give you a rule, because it depends on where you're not confident. For some things, if you use TypeScript and you're calling an API and it's type checked, you can be fairly confident that those arguments are fine, that you're using it in the correct way. So you might not have to test all of that stuff. But other cases...

Yeah, I guess it depends on the use case. For my side projects, I rarely have high test coverage, but when I'm working on systems that deal with sensitive information or something, I definitely want to have a lot of tests. Otherwise I personally don't feel confident in the code. So it depends. Yeah. Maybe the question to go with, again, is: what is the worst thing that could happen if the code doesn't work as advertised? So again, if it's money stuff, taking money from the wrong person or sending money too early, you really want to test that. In the UK, a couple of weeks ago, they accidentally deleted a whole bunch of crime records too early. They what? Yeah. Records expire after... Certain records they can't use after X number of years, because the suspect wasn't convicted and so forth. So there's code that deletes those records, but it deleted way too many records. I'm sure that's a really hard system to manage and hard to test. But yeah, that's an area where you really want to make sure you're not deleting the wrong thing. Yeah. I like that question, what is the worst thing that can happen, as a measurement for my confidence in the tests.

The other thing to keep in mind is that your time has a cost. I'm going to assume you work in a business, as an employee or a contractor; your time has a cost. And if you're an employee, you don't really have to be super concerned about wasting your boss's money, not to an extreme, but sometimes the worst that could happen isn't very expensive and isn't very likely, and so spending a lot of time preventing it might not be worth it. But it's not easy to think that through either, because sometimes it might not cost a lot of money, but it might take a lot of time to clean up. So then it still costs a lot of money. Yeah. You should consider the cost of what it takes to build the tests and run the tests regularly, but you also should consider the cost of what happens when you don't test your code.

So we have a few more questions. Jacek asked: what about starting with spaghetti code and a set of end-to-end tests to still keep you in check? I'm not saying don't write tests until you're ready to write good code. Do whatever you're comfortable with. Okay. Astar, do you want to come back? Oh, sorry. Yeah. No, no, no. Please elaborate if you want to. If not, you can also join in the speaker room later on and talk more about this. So Astar100 asked: how can a testing team help foster a developer culture that discourages throwing features over the wall? I haven't had the privilege to work with a testing team, so it's hard to answer that from experience. I think it might be helping other people in the organization care about testing, making it easier for them to write tests, and changing the culture so that people feel like testing is part of their responsibility and feel like they have support to do that. Yes.
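On the TypeScript point earlier in that answer, a small hedged sketch: once a call is type checked, the suite doesn't need to re-prove the shape of the arguments, only the values and behaviour the types can't express. All names below are invented for illustration.

import test from 'ava';

// Made-up example: the compiler already guarantees callers pass the right fields with
// the right types, so there's no need for tests that pass a string where a number goes.
interface TransferRequest {
  from: string;
  to: string;
  amountInCents: number;
}

async function submitTransfer(request: TransferRequest): Promise<void> {
  // A value-level constraint the type system cannot express: amounts must be positive.
  if (request.amountInCents <= 0) {
    throw new RangeError('amount must be positive');
  }
  // ...would hand the request to the payments API here.
}

// submitTransfer({from: 'alice', to: 'bob'});
//   ^ never needs a test: the compiler rejects the missing amountInCents field.

test('what the types cannot check still deserves a test', async t => {
  await t.throwsAsync(submitTransfer({from: 'alice', to: 'bob', amountInCents: -100}), {
    instanceOf: RangeError,
  });
});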
Yes, I guess if the other people care, then there's no throwing over the wall. We have time for one last question. Testibyte wants to know: what is your definition of integration tests? If you're not mocking everything in your unit test, is that an integration test? I'm not sure I'm too precious about those definitions. What I do when testing a service is I'll spin up a database, but then I mock the third-party services. So the code I'm testing is all running, and it's using a database and storing things, and I can observe those effects. And I can control the data that goes in, either from a request I'm making in my test or from the stubbed-out service that it talks to. It's not an end-to-end integration test, because I'm not using a live service, but I think it's good enough to get confidence in the one service that I am testing. But you have to get all the mocks right. If you model it differently than what happens in production, then it's still going to be wrong.

Thank you so much for joining us. Have a great day, Mark. And you can join him in the speaker room. Yes, thank you for having me, and everybody, enjoy the conference.
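For anyone who wants to see what that kind of service test can look like, here is a hedged sketch in AVA: a real, throwaway database, a local HTTP server standing in for the third-party service, and assertions on the observable effects. The imported helpers (createApp, connectTestDatabase) and their APIs are invented for illustration; they are not code from the talk or from Monolith.

import test from 'ava';
import http from 'node:http';
import type {AddressInfo} from 'node:net';
// Invented helpers standing in for the service under test and its database.
import {createApp, type App} from '../dist/app.js';
import {connectTestDatabase, type Database} from '../dist/test-helpers.js';

let app: App;
let db: Database;
let thirdParty: http.Server;

test.before(async () => {
  // Stub the third-party service with a local HTTP server whose responses we control.
  thirdParty = http.createServer((_request, response) => {
    response.writeHead(200, {'content-type': 'application/json'});
    response.end(JSON.stringify({status: 'accepted'}));
  });
  await new Promise<void>(resolve => thirdParty.listen(0, () => resolve()));
  const {port} = thirdParty.address() as AddressInfo;

  db = await connectTestDatabase();  // a real, disposable database, not a mock
  app = await createApp({db, thirdPartyUrl: `http://127.0.0.1:${port}`});
});

test.after.always(async () => {
  await app.close();
  await db.close();
  thirdParty.close();
});

test('a submitted transfer is persisted', async t => {
  // Drive the service through its public interface and observe the effect in the database.
  await app.submitTransfer({from: 'alice', to: 'bob', amountInCents: 1250});
  const rows = await db.query('select * from transfers where amount_in_cents = 1250');
  t.is(rows.length, 1);
});

The caveat from the answer still applies: the stubbed third-party server only gives confidence if its behaviour matches what the real service does in production.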