Tests That Help You Find Defects Faster


This talk is about common mistakes people make when writing tests.


Mixing multiple concerns inside tests is tempting because it can feel like painting the whole picture. However, it obfuscates the root cause when a test fails. Setup methods are great, but when developers are too focused on keeping their tests DRY, they can easily lead to test interdependence. Therefore, some principles we have learned for building our software need to be unlearned when it comes to testing.


The talk highlights further aspects, such as bloated tests that make it hard to figure out what they are about, and the proper use of assertions to get better error messages.

Especially if you don't work with TDD, it can be easy to come up with a test that looks good but stands in your way when it fails.


The talk will have a look at the four scenarios outlined above, explain why it makes sense to think about them, and give actionable suggestions on how to improve tests.

21 min
19 Nov, 2021

AI Generated Video Summary

This talk covers tests that help find defects faster, focusing on test case assertions, improving test failure context, test code structure, and the dangers of extracting code in tests. It emphasizes the importance of small tests, test isolation, and using TDD. The benefits of TDD and testable automation are discussed, along with setting up an engineering workflow and the use of mocking. Overall, the talk provides valuable insights into writing effective tests and ensuring code quality.

1. Introduction to the Talk

Short description:

Welcome to my talk on tests that help you find defects faster. I'm Philip, CTO at Oution, and I've learned from my own mistakes in writing tests. I'll start by discussing what this talk won't cover, such as naming tests and the number of assertions in a test case. Both Rspec and BDD styles are equally good, it's a matter of personal preference.

Hey there, and welcome to my talk, tests that help you find defects faster. My name is Philip, but everyone just calls me Phil, so you can do this as well. I live in a town called Potsdam in Germany, right next to Berlin. But instead of hipsters, we do have castles. I'm also CTO at a company called Oution, and I've been in technical leadership roles for the past years. And I've mentored a couple of less experienced developers along the way.

And while doing so, I learned about the different mistakes different people make, about some of the troubles people put themselves through when they write tests for software. And since I enjoy writing tests a lot — I'm a huge fan of TDD, because it helps me organize myself and work in small steps — I thought this was a nice opportunity to share some of that knowledge. Obviously, I've made most of those mistakes myself in the past years, often enough to kind of dissect them and look into what's really important and what's not. Which brings me to my first point. I'd like to start this talk not with some topic, but with the topics that this talk won't be about. The first thing is naming tests. There are a couple of different styles for how you can write test names and test descriptions. Just to give you an example, I'm going to run those here so you see how they look in a test runner. There is, for instance, one style called RSpec, where essentially you read through all the describes down to the it, and together they form a whole sentence — for instance, here, "a user can be identified". This is one way of doing it. There is also a different style, BDD, Behavior Driven Development, where you see the keyword "should" a lot. Here I've used "user" merely as a name for a group, and then the test name is "should be possible to identify a user". Now, the important point here is that both are equally good. One isn't better than the other; it's just a matter of personal taste, what works for you. This is also why I don't want to make this an issue here. In this talk I'm going to use the BDD style with "should" a lot, but I don't think it's the better one. If you prefer to write your tests differently, that is absolutely fine — this is really not what this talk is about.

2. The Importance of Test Case Assertions

Short description:

This talk is not about arbitrary rules for the number of assertions in a test case. There are pros and cons to using one or multiple assertions. It's important to find an approach that works for you. The examples used are intentionally simple to illustrate the main points.

The second thing this talk isn't about is certain arbitrary rules about how many assertions should be in one test case. There are lint rules out there that say every test should only have one assertion in it, and I don't think that's necessarily always correct. For instance, these two are exactly the same. The first one uses one assertion, where we want to assert that a user object has a certain structure, so we can use matchObject to match against all those properties; if something is missing, it fails. But we could write the exact same test with two assertions, where we check the properties individually. There are pros and cons to either of these approaches, but I wouldn't say one is particularly better or worse. So this is also not something I'd like to talk about — figure out something that works for you and go with that. What this talk is hopefully also not about is nitpicking my examples. I've chosen deliberately simple examples to get the general idea across. They are obviously not from the real world — you probably wouldn't find them exactly like that in a real-world scenario, and sometimes they might even contradict certain rules that I state, but only to get the point I'm currently talking about across better.

3. General Test Structure and Use of Assertions

Short description:

Let's dive right in! The first part focuses on the general structure of tests and common pitfalls. Each test should cover one use case. Split tests that combine too much. Add a third test for the connection. Even in TDD, avoid cramming multiple use cases into one test. The second thing is the use of assertions. Express everything as a Boolean expression. Check if a user property has the correct name.

Okay, with the playing field leveled, let's dive right in. Essentially, I've structured this talk into two separate parts. The first part focuses on the general structure of tests — how to organize everything. The second part is about how to really write tests, and two common pitfalls people fall into.

So, let's start with the first thing that I see happen a lot — and let me just point my test runner to that. While I don't think there needs to be only one assertion in a test, what I do think is that each test should cover exactly one use case. This is a mistake that's super easy to make.

So, for instance, here the test description — and really just focus on the test description here — is "it should be possible to read from and write to the database". Now, we see this test fails. If I asked you, which part is it — the reading part or the writing part? — you probably couldn't tell off the top of your head. And also, if you're working on your software and this test starts going red again, you need to figure out which part it is that you just broke.

My general rule of thumb here is that whenever you write the word "and" — A-N-D — into a test description, just make it two tests. You're already combining too much in there that shouldn't be combined. So what can we do about this? We can split it up — let me just repoint my test runner again. Now we've got two tests. One is "it should be possible to read from the database"; the other is "it should be possible to write to the database". Now, if the second one fails, we know, okay, it's definitely the writing part that's broken, not the reading part. That's nice because, already from the test failure, we can see what's going on.

However, there's one small improvement left to be made here. What happens, for instance, if both fail? Here we can see that the keyword "and" isn't the only thing you need to look out for when spotting multiple use cases. If you think about it: what do both reading from and writing to the database need? A connection to the database. So if both fail, it could be separate issues, or it could just be that the connection isn't working. The last improvement I would make is to add a third test that just checks for the connection. If the connection test fails, and the reading, and the writing, then okay — it's definitely the connection. There's definitely an issue there, and it's very likely that it also affects the reading and writing aspects, while this isn't as clear if you don't have that spec.

And some of you might now be thinking that if you do TDD, this structure comes really naturally, because if you start building this interface, you would definitely start with ensuring that there is a connection to the database. This would be the first test you write, and then you would add other tests after the fact. However, even if you do TDD, sometimes you get carried away when you want to get something done, and this is when you start cramming multiple use cases into one test — which will bite you in the ass down the line.

The second thing I'd like to talk about when it comes to general test structure is the use of assertions. The pitfall you can get trapped in here is that we can essentially express everything as a Boolean expression — make it a true or false check. I've started here — let me just skip that second test for now — by checking whether a certain property on a user has the correct name.

4. Improving Test Failure Context

Short description:

Let me just point my test runner to this test to see it fail. The current test checks for the correct thing, but it lacks context when it fails. By using assertions with property checks, we can provide more context to understand why the test failed and what changes might have caused the error.

Let me just point my test runner to this so we can see it fail. Now, that test is fine — it checks for the correct thing. It just does it in a way that, if it fails, it doesn't really tell us what's going on. You can see the test now just tells us it expected true, but it received false. So what does that mean? This forces us to open up the editor and navigate to the test file to figure out what's going on. However, we can do better. We can, for instance, use the toHaveProperty assertion and make our test look like this. Now we're expecting that the user object has a property called name with the value John. And now the error message looks slightly different. First of all, it tells us that it's looking at a certain path called name. Then it tells us that it expected the value John, but it got Jane. What this gives us is context. It doesn't just tell us that the test — which hopefully has a nice name — fails, but also some context as to why it failed. And this, in turn, helps you understand faster what you just did and how it might relate to this error, so that you can figure out what change is causing it.

5. Improving Test Code Structure

Short description:

You want to ensure that your error message is included in the error output. Jest offers an assertion that throws a function and expects a specific error message. It provides more context to work with your code and fix errors. Jest DOM and testing library are useful additions to Jest, allowing you to check for various UI states and attributes. Instead of using complex Boolean conditions, choose the assertion that best fits your needs. The first half covered the general structure of tests, assertions, and working with one use case at a time. Let's now examine some test code. Extracting interactions into methods can make the test structure cleaner and easier to understand. Be cautious of unnecessary interactions and assertions, especially when copying tests. Writing tests from scratch and avoiding copy-pasting can prevent bloated test cases.

You just want to make sure that some part of that error message is in there, because errors also add some standard prefix — for instance "Error:" — and you just want to make sure your message is included. We can do this with an indexOf check, where we don't want it to be minus one, because minus one would mean it's not in there. However, again, the error message "expected not minus one" doesn't help us a lot.

Again, luckily for us, Jest — the testing framework I'm using here — offers an assertion where we just pass a function that we expect to throw with a certain error message. Now the error output looks slightly different. First of all, we see what we expected and what we received instead: "custom message". And it also points us directly to where the error was thrown. This can, once again, give you more context to work with your code and then, hopefully, fix that error.

Probably a lot of you are working with the DOM, and some of you might be working with Testing Library. There are super nice additions to Jest — jest-dom is one. If you're working with Testing Library, it gives you a bunch of custom assertions to check for accessibility attributes and UI states: enabled, disabled, focused, all that kind of stuff. It all falls into the same category. My general suggestion would be: pick the assertion that is closest to what you actually want to do — whether you want to check for a property, an error, or a certain UI state — and go with that, instead of building some form of Boolean condition and then checking for that to be either true or false, because that won't give you much.

Okay, so this essentially concludes the first half: the general structure of tests. We talked about assertions and test descriptions and, especially, only working with one use case at a time. Now let's look at some test code. For instance, this test is perfectly working. However, there's a lot going on in here. Essentially, this test just checks that if we change an input, our onChange handler is called with the correct property. But we have a lot of setup and lots of interaction going on down here. The question I would ask is: is all of this interaction necessary? It could be. And if it is necessary, then instead of just spelling it out here, I would go ahead and extract it, for instance, into a method — call it simulateChange, which takes a value — and then I can just move this down there and call simulateChange instead. Now the whole Arrange, Act, Assert structure of that test became much cleaner. And this block has a name: we want to simulate the change, because that's what it was all about. By extracting it into a method and giving that method a proper name, we make this clearer to the reader. However, I see much more. For instance, there's an onKeyDown function defined here. It's been passed along, but we never asserted on it. So my question would be: how important is it? This can happen especially if you copy-paste tests a lot, because this might just have been important in another test that you copied from and you never took it out. By repeating this, you can bloat your test cases one after the other, which is why personally I always write out tests from the start. I try not to copy and paste at all, because I'm lazy.

6. Importance of Small Tests

Short description:

My laziness forces me to write minimal code and extract into methods. Any change that keeps the test green, except removing the assertion, is valid. Keep tests as small as possible.

So my laziness will force me to write the least amount of code, and also to start extracting into methods as soon as I repeat the same structure for the second or third time. I use my laziness as a forcing function for nicer tests, essentially. And given that — let's assume this was written with red-green-refactor, so the test failed in the beginning, then we implemented something to make it green — we're now in the refactor phase anyway. So essentially, any change we can make to this test that keeps it green — except maybe removing the assertion — is a valid change. Here, for instance, I could say: let's just remove the onKeyDown and see what happens. Yeah, it's still green. We can also remove the initial value here, because I don't think that's important either — let's figure out whether it was or not. Nope, doesn't seem so. And just to prove it to you: if I remove the onChange here, the test still types the text, but now it should fail. Yep, now the test fails. So that apparently was a change I shouldn't be making. Okay, so with that in mind, the lesson to learn here is: keep tests as small as possible.

7. The Dangers of Extracting Code in Tests

Short description:

By extracting code that is present in multiple tests, we may inadvertently couple the tests and introduce shared mutable state. This can lead to hard-to-debug errors, as changing the order of the tests or isolating them can cause failures. It is important to avoid these scenarios by keeping tests concise, focusing on one use case per test, using appropriate assertions, and prioritizing test isolation over completely dry code.

However — and this brings us to the last point — one thing I did by extracting this is that, if this part of the code were present in multiple tests, I would be adhering to the DRY principle. But DRY can be dangerous in tests, and let me show you why.

So let's look at this last example. Essentially, these two tests: one checks that the onClick property isn't called when the button is disabled, and the other checks that it is called when the button is not disabled. These tests look a lot alike. If we look at the renderer, we see that this one passes the disabled prop and this one doesn't; clicking the button is exactly the same code. However, that's already minimal, so maybe it doesn't make sense to extract it. The assertion is also slightly different, because one has a "not" in it and one doesn't. But the onClick handler that we define here — this line of code — is the same in both test cases. So you could be tempted to keep this test DRY, move it out of the tests, move it up here, save — and since we follow red-green-refactor, if the tests are green, everything is fine, and the tests stay green. So this change must be a valid change. However, we made one big mistake: we just coupled those two tests, and we coupled them by extracting this, because this is something that is mutable. There we have it again — shared mutable state, you know the line, is the root of all evil. In this case, by extracting the onClick handler, we essentially coupled both tests to it, and even worse, the order of the tests is all of a sudden important. Because if I move this test below this one and look what happens — it starts to fail. It doesn't fail because the code doesn't work. It starts to fail because this onClick handler starts counting: when the first test runs, the onClick handler is called for the first time and records it. Then, when the second test runs, its assertion isn't true anymore, because the handler has already been called — in the other test.
So we coupled our tests, and they are not isolated anymore. These bugs are really, really nasty to debug, because if I single out this test, it works again. And this is one of the most frustrating things: I look at a test failure, I put an .only somewhere, and then the test starts working, and I'm like, damn. Because then I really have to dig into the code and figure out: okay, what's going on? Where is the shared state? How can I make it not shared and take this apart?

So my general suggestion is: a certain wetness of tests, I think, is good, just to avoid these kinds of scenarios. And this almost concludes my talk. Just to recap, here are the four important points. First: try to stick to one use case per test. This is really important. One use case does not mean one assertion — just one use case. Second: try to use assertions that match what you really want to express, so that if an error happens, the error message delivers as much context as possible for the particular thing you were testing. Third: when you write your tests, maybe don't copy and paste too much. Keep the tests concise, and keep them to only what they really need to perform their work. Should you need to dig into the test code, this helps you not be distracted by everything else that is there but does nothing. The last point: don't try to be too DRY with your tests, because test isolation is more important than completely DRY code. Every test needs to work on its own and shouldn't rely on something shared and global, because that comes with a lot of danger and hard-to-debug errors. That's it.

QnA

Poll Results and UI Test Exceptions

Short description:

Thank you for listening. Let's discuss the poll results. People who use TDD most of the time were winning, but some had trouble. Questions from the audience: Mark asks about exceptions to not using the 'end' keyword in UI tests. Philip responds that it depends on the situation and mentions the importance of testability in the application.

Thank you very much for listening and see you in the QA. Bye-bye.

Hey, Filip. How are you doing? Thanks for the great talk. Thank you very much. I enjoyed giving it or recording it. Awesome. Awesome.

I think we could first look at the results of the poll question that you brought to the table. Actually, when I was voting, I thought there was one answer missing, which would be: I like to do TDD, but I don't do it most of the time. So I think that one was missing — but what do you think about the results? I was impressed that, in the beginning, the people using it most of the time were winning, but now the people who tried and had some trouble are winning. What are your thoughts on that? I'm still impressed that "use it most of the time" is the second option in the poll. Me too. I absolutely did not expect that, even though I did expect the "tried and had some trouble", because that's usually the answer I get if I ask people whether they work with TDD or not. Yeah, but it seems a good sign, right? Absolutely. But maybe at TestJS Summit, people are a bit biased. That's true. That's a good point.

We have some questions from the audience. I'll bring up the first one, from Mark, regarding "don't use the keyword and": what about scenario-driven tests? He's talking about UI tests where the setup takes a considerable amount of time. "We tried to split up scenarios, but ended up with very long-running tests. Thus, are there exceptions to the denial of 'and'?" I just want to give my comment before you answer: to me, it sounds like there are some bad smells in the testing code — like when there's no testability in the application to allow you to create the state in an easier way, when you're talking about UI tests and you depend on the UI for doing everything. But I would love to hear your answer. Yeah. In the end, I'd also go with a solid "it depends", obviously. None of my rules are "you can only do it this way, and if you do it differently, you're doing it wrong", of course. But I get your sentiment a lot.

Test Isolation and TDD Usage

Short description:

If your current system does not allow you to isolate individual use cases, focus on turning it into a system that can. Having any test is better than no test. Use TDD always, even for small features, as it helps you understand the code and ensures you're on the safe side. Defining common unmutable constants in the scribe block depends on the situation. Extract constants if they are used in multiple tests, otherwise, keep them in line with the test. When the code is completely unknown, spike in with a test to experiment and understand the code before starting over.

So I would also ask: if your current system does not allow you to isolate individual use cases, then maybe that is the issue you should focus on — try to turn it into a system that actually allows you to do this. Because in the end, you still end up with the same issues: if you mix a lot of stuff in there, okay, you get a maybe fast-running test, but then it doesn't give you as much if it fails. But if, all options considered, this is your best way of doing it, then of course do it. Having it automated is better than doing it manually, and any test is better than no test, if you ask me.

Exactly, yeah. But yeah, I think you brought a good point. We don't have to blame the tests always. Sometimes there's missing testability in the application to allow us to write tests that are more isolated and that are faster to execute. That's great.

We have a question from Elias. Do you think you we should use TDD always or just for some features? That is a good question. Personally, I'd go with always. Because my rule of thumb is like, if it's a super small, you know, people sometimes ask me, like, if it's a super small feature, and I know exactly what it's supposed to do, then I don't need to do TDD, to which I would answer. And if it's super small and super easy, then writing a test for it also doesn't take no time. So, you know, you can just do it. And you'll be on the safe side anyways. If it's, you know, larger, and while writing the test, you figure out that, you know, you have, you know, you still need to understand parts of it. Then again, the test, you know, writing the test upfront has just helped you understand that you know, you're missing some information or you need to, you know, you know, prototype a little more to figure out what you actually want to achieve. So I just go with always because I think it just helps you so much all the time. Yeah, that's that's a good take for sure.

We have a question from Elias: do you think we should use TDD always, or just for some features? That is a good question. Personally, I'd go with always. My rule of thumb is: people sometimes ask me, if it's a super small feature and I know exactly what it's supposed to do, do I still need TDD? To which I would answer: if it's super small and super easy, then writing a test for it hardly takes any time either, so you can just do it, and you'll be on the safe side anyway. If it's larger, and while writing the test you figure out that you still need to understand parts of it, then again, writing the test upfront has helped you realize that you're missing some information, or that you need to prototype a little more to figure out what you actually want to achieve. So I just go with always, because I think it helps you so much, all the time. Yeah, that's a good take for sure.

And we have a question from Chris Christiana: do you think it is bad practice to define common immutable constants, like button text, in the describe block? Personally — once again, the classic "it depends". If it's a common text or common configuration, for instance a default configuration, and it has a good name — as I said, I use prefixes like "default" or something similar to make it clear that this is something that never changes — then I think this is totally fine. However, if you start extracting constants out into describes, and they are only used in one or two tests while the others don't need them, then why are they on the outside? It might be overcomplicating things. My rule of thumb is: I like to read a test from top to end and understand it. If I reach a point where, like a compiler, I hit a variable and need to look up the tree to figure out what's in it, then I'd rather have it inlined in the test, and all is good. Yeah, exactly. Let me see, another question here: how do you know what to test when the code is completely unknown? That is also a good question. If the code is completely unknown, I just spike in there — without a test. I do an experiment to figure out what I'm dealing with. And then, once I understand the code enough, I throw all of that away and start again with a test.

Benefits of TDD and Testable Automation

Short description:

Throw away what you just did and go back to safety with TDD. Automation work should be testable, so define acceptance criteria upfront. Refactor scripts to have a test runner you're comfortable with. Define test cases by scenarios for end-to-end testing.

And what happens is that 90% of the time I end up with a better solution than the prototype I just wrote. So yeah, the important part really is to throw away everything you just did, and then go back to safety with TDD. That makes total sense.

We have another question from Elias: do you think only the dev team should do TDD, or should automation QA do it too? I can say that, as a QA, I do TDD as well. When I'm writing a test, sometimes I write it in a way where I might need an external function that will do something for me; I write the name of it, and afterwards I implement it. So I could say that this is kind of similar to doing TDD. What do you think?

Absolutely. And as long as the automation work is testable, why not define your acceptance criteria upfront? I've refactored a ton of shell scripts to, I don't know, JavaScript or TypeScript or Python or whatever, just so that I have a test runner that I'm comfortable with. I can do the exact same thing, maybe a little bit slower, but then I can define my test cases by scenarios. It's like: okay, this uploads to the right AWS bucket, for instance; this thing creates the correct configuration file or reads it from somewhere. Over the whole end-to-end process, wherever you can add a test, add a test, because it's going to save you at some point. That's completely true.

Setting up Engineering Workflow and Mocking

Short description:

Andrew asked for advice on setting up an entire engineering workflow for success. Philipp recommended handing out a JS testing course to the team and using a tool for test coverage. He emphasized starting with a low coverage percentage and gradually increasing it while learning to write better tests. Philipp also discussed the importance of mocking only the smallest parts of the system and not mocking everything. He suggested using the application's state and avoiding excessive mocking. Philipp concluded by thanking the audience and expressing his pleasure in being part of the talk.

Yeah. There is another one here that I find interesting, from Andrew: "I'm leading our company to start writing tests. They have not really done any before. Any advice for setting up an entire engineering workflow for success?" It's a hard topic.

Funny enough, I've been in a very similar situation, I think. What helped me is, first of all, I handed out the JS testing course by Kent Beck, no, not Kent Beck, Kent C. Dodds, all those Kents, to the team. When it came to testing and test coverage, we use a tool, I forgot the name of the tool, but essentially what you get is the overall coverage of the repository, and then every PR must have a patch coverage, so coverage of the lines you change, that is at least the coverage of the repository. You start at a very low number, I don't know, 4%. Everyone can manage to write a test that covers at least 4% of the lines. And as you move along, you raise that number, right? But as you raise that number, you also learn how to write better and better tests. So this is a very fulfilling exercise for the team, and you don't just say, okay, 80% test coverage on all files now, from yesterday to tomorrow.
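The low-starting-threshold idea can also be enforced locally, without any external service. As one possible sketch, Jest's built-in `coverageThreshold` option can fail the test run when coverage drops below a floor; the 4% figure is the illustrative starting number from the answer above, to be ratcheted up over time.

```javascript
// jest.config.js
// A sketch of a coverage ratchet using Jest's built-in threshold check.
// Start low so every PR can clear the bar, then raise the number
// as the team gets better at writing tests.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 4, // raise this gradually, e.g. to 10, 25, 50 ...
    },
  },
};
```

This only gates global coverage; per-PR patch coverage, as described above, needs a service that compares the PR diff against the repository baseline.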

Yeah, I think that's great advice.

There's one here that I'm super curious about your opinion on. What's your opinion about mocking? Ah, yeah. As little as possible, as much as needed, I'd say. I personally try to mock only the really, really smallest parts of the system and not just start mocking everything away. Because the issue is, as soon as you mock everything, you're not testing anything anymore. And especially when it comes to integration tests, you don't need to mock so much. This is also something I learned: you sometimes think that you need to mock a lot, but then you actually don't have to. If you have a React or Vue or whatever application, use your Vuex state, use your Redux state and just have everything run through that, because this is how your app works. If you think it's too slow, well, maybe try to make your app faster, because your users will suffer the same issues.

Yeah, yeah. And it's always contextual as well. In some contexts where you depend on an external API, it might make sense to mock it. I think you did a great job explaining it. So I would like to thank you, Philipp, for the great talk and for answering all the questions. Thanks again for being here with us. It was a pleasure.
