Break the Race: Easy Race Condition Detection for React


Race conditions are among the most challenging issues to detect and reproduce. As such, they pose a significant challenge in development, notably in UIs. In this talk, we explore how to detect race conditions by leveraging fuzzing techniques. We walk you through the real problem of race conditions and how they impact user experience. We provide tools and examples demonstrating how to easily detect them in your daily work thanks to tests relying on fuzzing. After this talk, we hope your React code will be free of race conditions, or at least that you will have the right tools to help you.

31 min
08 Dec, 2023

Video Summary and Transcription

Race conditions can be complex to debug and reproduce, causing frustration for users. The speaker discusses examples of race conditions and ways to fix and avoid them. They demonstrate an example of an auto-completion field in React and how to handle race conditions in API calls. The speaker introduces the fast-check framework for property-based testing to address race conditions and improve tests. Randomizing inputs and outputs can help uncover bugs specific to certain scenarios. The speaker also discusses mitigating race conditions in React and handling test overhead and reproducibility.


1. Introduction to Race Condition

Short description:

Today, I will talk about race conditions. Race conditions are complex to debug and reproduce. They happen unexpectedly and can be frustrating for users. I will give examples and discuss ways to fix and avoid race conditions. I'm Nicolas, the author of the fast-check library, and I work at Pigment.

Good afternoon, everyone. Today, I will talk about race conditions. My name is Nicolas, and the idea is to address a problem that is complex to debug and complex to reproduce; this is why I wanted to discuss race conditions. Because basically, race conditions happen to be a bit like that: you never know when they happen and you probably... oh, sorry, that was not supposed to happen. You never know when they happen, and you have to deal with them. But we will focus on race conditions.

Here is an example of a race condition. A few years ago, I was looking for stays in Paris on a famous website (I'm not affiliated with this company at all). At some point, I changed my mind and decided to go to London instead. You can see that I was browsing stays in London; they look pretty nice, actually. But at some point in time, I got some results back for Paris. This is exactly what a race condition is about: you ask for something at some point, it happens a bit late, and you receive the results later. The user is then a bit frustrated to get these results back. In that specific case, it's just bad UX; the user will cope with it and still be able to use the application. But that's not great. I will give some extra examples of that. Today, the aim is to talk about these race conditions and to find some ways together to fix them and keep them from being a source of frustration for users.

So, as I quickly introduced myself before, let's move a bit further. I'm Nicolas. As presented, I'm the author of a library called fast-check. You have the link there, and you can find me on social media. But I will talk a bit about my company. My company is called Pigment.

2. Understanding Race Conditions

Short description:

I'm doing business planning and wanted to talk about race conditions. Race conditions are important to deal with, especially in finance, where accurate figures are crucial. A race condition occurs when the system's behavior depends on the timing of uncontrollable events. In a frontend or React application, user inputs and API calls are potential sources of race conditions. To illustrate, let's revisit the video where I searched for stays in Paris and changed my mind. The API calls for Paris and London did not return results in the expected order, causing a buggy UI.

I'm doing business planning. It's for financial data, and in finance, it's important to have the right figures at the right time, in real time. This is mostly why I wanted to talk about race conditions. At work, I cannot afford any race condition; at least not one that would cause the figures to be wrong, because people will use them to make decisions. So race conditions are very important to deal with, and you have to be able to cope with them in order to avoid problems.

But let's see together what a race condition is. I will take the definition that comes from Wikipedia: a race condition is a condition where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events. What I like in this definition is the sequence of uncontrollable events. If you think a little about your frontend application, or your React application, because we are at a React conference, we have plenty of uncontrollable events. They go from user inputs (we never know when users will type something into the application) to API calls (we never know when the backend will answer). These are potential sources of race conditions.

In order to better understand what a race condition is, let's go back to the video we watched together, when I was looking for stays in Paris and changed my mind. Initially, I was looking for stays in Paris: I typed something to search for stays in Paris, and at this point, I expect the website to make some kind of API call to fetch those stays. At some point in time, it should have received the results for Paris, because that was my initial query. Then I made a second query to look for stays in London; same again, a second call to the API, and then I should have received results. Actually, it didn't happen that way. I never received the results for Paris at the right time; they came a bit later. And that's why we saw that buggy UI.
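To make that timeline concrete, here is a minimal sketch of the failure mode (all names and delays are illustrative, not the website's actual code): two searches fire in sequence, nothing tracks which query is current, and the slower, stale response wins.

```js
// Two searches fired in sequence; nothing guards against out-of-order responses.
let displayed = null;

function search(city, delayMs) {
  // delayMs simulates the backend answering faster for one query than the other
  return new Promise((resolve) => setTimeout(resolve, delayMs)).then(() => {
    displayed = `stays in ${city}`; // last response to arrive wins
    console.log(displayed);
  });
}

search('Paris', 300);  // fired first, resolves last...
search('London', 100); // ...so the fresh London results get overwritten
// Logs "stays in London", then "stays in Paris": the UI ends up stale.
```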

3. Example of Auto-Completion Field in React

Short description:

I will focus on a very simple example of an auto-completion field written in React. The user types a query and the field returns results as they type. I propose a naive implementation using React and two states. The first state is the query, which is used to fill an input. When the user changes the input, a function called updateQuery is called, which sets the state. There is a special case for the empty string.

Because the results for Paris came later, and the code does not account for this possibility, it just breaks. In order to better understand how this works, and how we can face race conditions and test for them, I will focus on a very simple example.

The search example was interesting, but it's a bit complex, so I will take a very simple one. We will consider an auto-completion field written in React. A very simple auto-completion field, basically something like this one: you have a field, you can type something, and you get results as you type. You type one letter, you get results for this letter; you type two letters, you get results for these two letters, et cetera.

I will propose a very naive implementation for that. Everything is in React, and it's made of two states. I will focus on the first one: the query. The user has a query, and this query is used to fill an input. Whenever the user makes changes in the input, I call a function called updateQuery. So let's focus a little on updateQuery. It starts by setting the state, which is quite normal. Then there is a special case for the empty string.

4. Handling API Calls and Testing

Short description:

In this specific case, I reset the suggestions to empty and call the API with the new user query. Once I get the result, I update the suggestions state and render them in the component. To check for race conditions, we can manually test the input or write tests. I wrote a simple test using Jest and React Testing Library. The test stubs the API and filters results based on the user query. It ensures that the result will be 'banana' and renders the component.

Because in that specific case, I don't want to do any API call or anything special; I just reset the suggestions to an empty list. And then there is the interesting case: doing the API call. It's very simple. I have an API called suggestionsFor, and I call it with the new query the user just typed. I await the result, and as soon as I get it, I store it in my suggestions state. As soon as the state is updated, I simply render the suggestions in the component.
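Putting those pieces together, here is a minimal sketch of the naive implementation as described; suggestionsFor is the API name from the talk, while the component name and exact shape are my assumptions:

```jsx
import { useState } from 'react';

// Naive auto-completion field: two states, no protection against races.
function AutoCompleteField({ suggestionsFor }) {
  const [query, setQuery] = useState('');
  const [suggestions, setSuggestions] = useState([]);

  async function updateQuery(newQuery) {
    setQuery(newQuery);
    if (newQuery === '') {
      // Special case: no API call for the empty string
      setSuggestions([]);
      return;
    }
    // Nothing checks that this response still matches what the user
    // has typed by the time it lands
    const results = await suggestionsFor(newQuery);
    setSuggestions(results);
  }

  return (
    <div>
      <input value={query} onChange={(e) => updateQuery(e.target.value)} />
      <ul>
        {suggestions.map((s) => (
          <li key={s}>{s}</li>
        ))}
      </ul>
    </div>
  );
}

export default AutoCompleteField;
```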

So, as the sketch shows, the component is pretty naive: just a field with an input, and you get suggestions as you type. Now that we have this component, and since I was talking about race conditions, the question we may ask ourselves is: how do we make sure we don't have any race condition? One possibility is to play with the input ourselves and check whether it works, but that's not very reliable. Another way is to write tests, so that we make sure it works today but also tomorrow. So let's give it a try. I wrote a test with Jest plus React Testing Library.

The test is pretty simple. It starts by stubbing the API: I stub suggestionsFor, so that any time it receives a query, it just filters a set of all results. It's a mock of the API, so as not to call the real backend. I also hardcoded the user query; I don't want to cope with every possible user query, I just want one of them: 'nan'. Based on this user query plus the array of possible results hardcoded on line three, I know for sure that the result will be 'banana' and only 'banana'. Then, as all my inputs are ready, I render the component.

5. Running the Test and Handling Race Conditions

Short description:

I run the auto completion field with the mocked API and emulate user typing. I ensure the suggestions displayed are as expected and the number of API calls matches the query length. The test passes, but the naive implementation doesn't handle race conditions. I illustrate the problem with a timeline where API calls may resolve out of order.

So basically, I render the auto-completion field with the mocked, or stubbed, API, and I then emulate a user typing in this field. As I want to be as close as possible to reality, I simulate a user typing one letter at a time, with a small delay between keystrokes, just to mimic a real user typing. Now that everything is ready, we can assert and run the test.

So basically, I want to be sure that the suggestions I display to the users are exactly the ones I was expecting, nothing more, nothing less, and that the number of calls to the API is exactly the number of letters in my query; in this case, three. Now that the test is ready, we can launch it. And this time it passes, which is pretty sad, because the implementation I suggested is a very simple, naive one; I honestly haven't done anything to cope with race conditions, so I would be a bit surprised if it worked out of the box.
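As a rough reconstruction of the test just described (the exact slide code isn't reproduced here; AutoCompleteField and suggestionsFor follow the sketch above, and jest-dom matchers are assumed):

```jsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import AutoCompleteField from './AutoCompleteField';

const allResults = ['apple', 'banana', 'orange']; // hardcoded possible results
const userQuery = 'nan';                          // hardcoded user query

test('shows exactly the matching suggestions', async () => {
  // Stub of the API: filter the hardcoded results on the received query
  const suggestionsFor = jest.fn(async (query) =>
    allResults.filter((r) => r.includes(query))
  );
  render(<AutoCompleteField suggestionsFor={suggestionsFor} />);

  // Emulate a real user typing one letter at a time
  await userEvent.type(screen.getByRole('textbox'), userQuery, { delay: 1 });

  // 'nan' can only ever match 'banana'; one API call per letter typed
  expect(await screen.findByText('banana')).toBeInTheDocument();
  expect(suggestionsFor).toHaveBeenCalledTimes(userQuery.length);
});
```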

In order to illustrate the problems we may have in this specific case, let me show the timeline again. In the test, we are trying to put a query into the input field. The user types 'n' to start with, and we fire an API call, because the code fires an API call as soon as we update the query. Then we type the second letter, 'a', and we make another API call for 'na'; then the third letter, and you get the idea. If you remember the first timeline example I showed, the problem we faced at that time was that promises may resolve out of order. So what if we resolve 'na' first? It's a possibility. If we resolve 'na' first, we will get 'banana', because the implementation will say 'banana'.

6. Understanding Order of Promise Resolution

Short description:

If the first query resolves last, the user will see results that don't match the query. This can be compared to typing 'Paris' and getting results for 'London.' It's important to address this issue, as it can easily happen in the real world due to factors like load balancing and caching. Let's explore what we missed in our original test and what we found. We came up with a timeline where changing the order of results caused the problem. To address this, we need to change the order in which promises resolve. Introducing fast-check.

Then if we resolve 'nan' in second place, we will still have 'banana', which is definitely okay. But the problem is that if we resolve the first query, 'n', in last place, we get something a bit strange. The user typed 'nan', but the results they see are not the right ones: the user sees results that don't match the query.

It's a bit like the video: you type Paris and you get results for London, which is obviously not what you want as a user. And with an input behaving that way, you can get something even worse, like results dancing around all the time: any time you type a letter, the list dances. This video was genuinely captured on the official website; once again, I'm not affiliated with this company. But when you play a little with it, you can get this kind of behavior.

Just before I move to the next slide: this kind of problem can easily happen in the real world. You don't control how your load balancing works; maybe one server is hit by the first query and a second server by the second query. Maybe the results for Paris were cached and the ones for London were not. There are many ways one query can come back faster than another, and this is why we need to make sure we don't fall into that case. So let's see what we missed in our original test, and let's see together what we found.

So basically, what we came up with was some kind of timeline, and in this timeline I did something special: I changed the order of the results a little. Instead of resolving 'n', then 'na', then 'nan', I decided, for somewhat arbitrary reasons, that 'n' would resolve last. This is why I got the reported problem when I ran the test manually. So the idea, if we think back about it, is to change a bit the order of the promises, the order in which they resolve. Let me introduce you to fast-check.

7. Using FastCheck Framework for API Calls

Short description:

I'm the author and main contributor of fast-check, a property-based testing framework. Let's see how we can use it. We define a property that takes generators and a lambda function for assertions. We execute the property using the property and assert helpers. We rely on the framework to handle API calls in a deterministic way.

As mentioned at the beginning of the talk, I'm the author and main contributor of fast-check. This is a screenshot from a few weeks ago; downloads have increased a little since then, and I'm pretty happy about that. If there are new people wanting to try it, I'm happy to welcome them. It's a property-based testing framework. I will not go into too many details about property-based testing; I will just give a brief overview.

But let's see how we can use it. So we have our test, the one we saw together at the beginning of this talk, and we will just connect fast-check to it. As fast-check is a property-based testing framework, we need to define a property. A property is just something that takes generators; in my case, it's a scheduler, because I want to schedule things. Based on these generators, we have a lambda function, which is the assertion part, what I actually want to run. In my case, I run the same test as before, but this time I get the s parameter. The s parameter is the scheduler, and I will use it right after.

Now that we have the property, I need to execute it. The framework comes with two helpers, property and assert, which are the runners, and this is the way to connect the framework. Now that the framework is wired in, we can ask it to do something for us. What we want is to rely on the framework whenever there is a call to the API. So I tell the framework: whenever you receive a call to suggestionsFor, you handle that call. What this means is that the framework will hold the call and only resolve it when it decides to, in a deterministic way.

8. Connecting the Framework and Finding Bugs

Short description:

The held calls resolve when I tell the framework to release them, on line 20. To connect the framework, I had to define a property, made of generators and a lambda function, which is the way to execute the test itself. The result includes a counterexample: the framework decided that the order should not be the one we received initially and tried something else, then reported that it doesn't work. It's a property-based testing framework, similar to fuzzing.

And when do the held calls resolve? When I tell the framework to do so, on line 20. There, I tell the framework: now you have everything ready, you can choose what you want to release and when you want to release it. That's what happens on line 20.

So basically, in order to connect the framework, I had to define a property. The property is made of generators and the lambda function, which is the way to execute the test itself. As soon as I get the scheduler, I can schedule some of the calls, and once the calls are scheduled, I can ask the framework to release everything. My test is the same as before: I expect the same result, whatever the order in which the responses come back.
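Assembled, the wiring looks roughly like this; fc.scheduler(), scheduleFunction, and waitAll are fast-check's real API, while the test body is my reconstruction of the slides:

```jsx
import fc from 'fast-check';
import { cleanup, render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import AutoCompleteField from './AutoCompleteField';

test('suggestions are correct whatever the resolution order', async () => {
  await fc.assert(
    fc.asyncProperty(fc.scheduler(), async (s) => {
      cleanup(); // fresh DOM for each run of the property
      const allResults = ['apple', 'banana', 'orange'];
      const userQuery = 'nan';

      // Wrap the stub so the scheduler holds every call to suggestionsFor
      const suggestionsFor = s.scheduleFunction(async (query) =>
        allResults.filter((r) => r.includes(query))
      );
      render(<AutoCompleteField suggestionsFor={suggestionsFor} />);
      await userEvent.type(screen.getByRole('textbox'), userQuery, { delay: 1 });

      // "Line 20": release the held calls in a scheduler-chosen order
      await s.waitAll();

      // Whatever the order, only results for the final query should remain
      const shown = screen.queryAllByRole('listitem').map((li) => li.textContent);
      expect(shown).toEqual(['banana']);
    })
  );
});
```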

Now that we have connected the framework, the question we may ask ourselves is: okay, it's cool, but is it working? Will it find a bug? And the answer is yes. If we look at the output, we get something a bit more complex than the usual result, but it contains a counterexample. The counterexample tells us: we got some tasks, they were scheduled, and the scheduler was asked to execute them. The scheduler decided to first release the promise for 'nan', which is why we started with 'nan'; then it released 'na'; and at the end it released 'n'. But what we received initially was 'n', then 'na', then 'nan'. So the framework decided that the order should not be the one we received initially; it tried something else. And based on this value, the framework also reports that the test fails: instead of having only 'banana', I got extra results, which is exactly what we illustrated with the timeline together. But we can push things even further.

I said it's a property-based testing framework, and the idea is close to fuzzing. For people who know a bit about fuzzing, the idea is to push things further: to ask the computer to think about your test, to think about edge cases for you. You don't want to write the edge cases; you want the computer to find them for you.

9. Improving Test and Handling Uncontrollable Events

Short description:

Today, I will discuss how to improve the test by letting the framework decide the expected results, user query, and generated result. By pre-computing the results and allowing the framework to choose the query, we can handle different scenarios. Additionally, instead of hardcoding the results, we can generate random values. This approach considers the user as an uncontrollable event and tests the code with user typing, queries resolving, and user typing again.

In this test, there is plenty that can be improved to give the framework more room to decide for us what can or cannot happen. First, the expected results: we could simply have computed them. We have everything we need at this point: all the possible results are in an array, and we know the user query in advance, so we can pre-compute the expected result. That one is straightforward.

Then we have the user query. At the moment, we assume the user query will always be 'nan'. But if we change the user query to something else, maybe the code will never fail because we never fall into a race condition; or maybe we will fall into one, but not in a way that breaks the UI. So we want to let the framework decide what the query will be. Instead of hardcoding the query, the idea is to ask the framework for a string, any string. We take this string, call it the user query, and the code works as before.

Then we can also generate the results. Instead of hardcoding them (it's not great to hardcode things; that's the idea behind all this), we can ask for a set of random values; they become our new allResults, and the test keeps working as before. And if you remember the initial definition of a race condition, race conditions were problems of uncontrollable events. I told you about API calls, but there are other uncontrollable events in the browser; basically, the user. We never know when the user will click or type. And guess what: in the completion field, the user may type letters, some queries come back, they type other letters, other queries come back, et cetera. So far we haven't tried that. We don't know if the code will work with user typing, queries resolving, user typing again. So we can change that a little and say: I want the user to type, I want the queries to come back, and I want to interleave all of them together in a single test, as shown in the sketch below. You will be able to take the slides later if you want to read more about it. And there we are.
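A sketch of the fully generalized property, with the query, the result set, and even the interleaving of keystrokes versus responses left to the framework (fast-check v3 style arbitraries; the exact ones used in the talk may differ, and a fully robust version may need s.waitFor instead of waitAll):

```jsx
import fc from 'fast-check';
import { cleanup, render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import AutoCompleteField from './AutoCompleteField';

test('suggestions are correct for any query, data, and interleaving', async () => {
  await fc.assert(
    fc.asyncProperty(
      fc.scheduler(),
      fc.string({ minLength: 1 }),  // framework-chosen user query
      fc.uniqueArray(fc.string()),  // framework-chosen set of all results
      async (s, userQuery, allResults) => {
        cleanup();
        // Pre-compute the expected suggestions from query + results
        const expected = allResults.filter((r) => r.includes(userQuery));

        const suggestionsFor = s.scheduleFunction(async (query) =>
          allResults.filter((r) => r.includes(query))
        );
        render(<AutoCompleteField suggestionsFor={suggestionsFor} />);

        // Schedule the typing itself, so keystrokes and API responses
        // get interleaved in an order chosen by the scheduler
        s.schedule(Promise.resolve('type')).then(() =>
          userEvent.type(screen.getByRole('textbox'), userQuery, { delay: 1 })
        );
        await s.waitAll();

        const shown = screen.queryAllByRole('listitem').map((li) => li.textContent);
        expect(shown).toEqual(expected);
      }
    )
  );
});
```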

10. Updating Test and Uncovering Bugs

Short description:

We update our test by letting the framework decide the valid user query, results, and ordering. The framework tests different scenarios and identifies a failing case. By testing various user inputs, we can uncover bugs specific to certain strings.

So we have updated our test. Now we don't hardcode anything: neither the user query nor the results are hardcoded. We let the framework decide what a valid user query is, what valid results are, and which orderings to try. The framework will just do its job: it will try things and see whether they work or fail. And in this case, there is a failing case. The framework tells me: I typed 'a', then I typed 'b'; then I took the promise for 'ab' and resolved it first, and I resolved the query for 'a' at the end. Given that allResults was just the array containing 'a' (so 'a' is the only possible result) and the user typed 'ab', I got a race condition. This case is even simpler than the one we came up with initially. But the point is that it goes even further, because it can test many more possibilities: any kind of string, any kind of user input. Maybe we have a bug only for a specific string. That's the point.

11. Addressing Race Conditions and Randomizing Inputs

Short description:

Race conditions break user trust in your application. Reordering promises can help fix them, but with hardcoded orderings it's easy to miss the real problem. Randomizing outputs and inputs is a solution. The fast-check library, used internally by Jest and Jasmine, can help identify race conditions in React.

If you have to keep one thing in mind from this talk, it's that race conditions are real. They break the trust that users have in your frontend, and in your application generally speaking. Most of the time it's not that critical to fix them, but if you don't fix them, you will break the trust users have in you and in your application.

The trick that we used is basically reordering. It can be done without any framework: you can just play with promises and resolve them whenever you want, as sketched below. The problem is that you will hardcode something, you may not hardcode the right thing, and you may end up missing the real problem.
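For illustration, a minimal sketch of that framework-free approach, with the resolution order hardcoded by hand (the deferred helper and names are hypothetical, and act/timing details are glossed over):

```jsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import AutoCompleteField from './AutoCompleteField';

// Hand-rolled deferred: a promise whose resolution the test controls
function deferred() {
  let resolve;
  const promise = new Promise((res) => { resolve = res; });
  return { promise, resolve };
}

test('a stale response must not overwrite the fresh one', async () => {
  const first = deferred();  // response for 'n'
  const second = deferred(); // response for 'na'
  const suggestionsFor = jest.fn()
    .mockReturnValueOnce(first.promise)
    .mockReturnValueOnce(second.promise);
  render(<AutoCompleteField suggestionsFor={suggestionsFor} />);
  await userEvent.type(screen.getByRole('textbox'), 'na', { delay: 1 });

  // Hardcoded out-of-order resolution: fresh query first, stale one last
  second.resolve(['banana']);
  first.resolve(['banana', 'nectarine']);

  // With the naive component, the stale list wins and this assertion fails
  expect(await screen.findByText('banana')).toBeInTheDocument();
  expect(screen.queryByText('nectarine')).not.toBeInTheDocument();
});
```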

So the solution I presented is to go for randomization. We can randomize the outputs and the way promises resolve, but we can also go further and randomize the inputs themselves. That's the point of this talk, and I think that's mostly all. Before I leave the mic: the name of the library is fast-check, and it's already used internally by Jest and Jasmine, to give some examples. I also used it and found a race condition in React itself, but that one has never been merged. So that's all. Thank you very much for your talk.

12. Balancing Methodical Approach with Randomization

Short description:

I have a question about randomizing the resolution of promises. How do you balance the need to be methodical with randomization? The framework allows you to seed the algorithm, making tests reproducible. Additionally, there is shrinking logic to simplify failing cases and reduce complexity.

I definitely have some questions. I might very quickly open with one: you advocate for an approach around randomizing the way in which promises resolve. Something about that doesn't sit right with me; I want to be quite methodical in the way that I write code. How do you balance the developer's need, or desire, to remain methodical with basically just randomizing it until it works? I would say it's random in a certain way. You have a way to seed the algorithm: everything generated by the framework is seeded, so any time you run a test, you are able to reproduce everything. You have a seed; if the test fails (I didn't show this much in the presentation), you can reproduce the run just from the seed. In addition to that, there is shrinking logic, which will not give you the first failing case, but will try to reduce the case to something very simple to grasp. Most of the time, as with the 'a' and 'b' example, the initial failing case is probably way more complex, and the framework says: I can get this problem with something even smaller, with fewer promises, fewer values, et cetera. So it's really about reducing the surface for understanding what's wrong.
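In practice, replaying a failure looks something like this; the seed and path values below are illustrative placeholders, not real output:

```js
import fc from 'fast-check';

test('replay the exact failing run', async () => {
  await fc.assert(
    fc.asyncProperty(fc.scheduler(), async (s) => {
      // ...same test body as the scheduled test above...
    }),
    // Copy the seed (and path) printed in the failure report: the same
    // values and scheduling decisions are then regenerated deterministically
    { seed: 1042, path: '25:2:0', endOnFailure: true }
  );
});
```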

Q&A

Mitigating Race Conditions

Short description:

Thank you for the questions! The React race condition I found was related to Suspense and SuspenseList. It caused a locking issue where the SuspenseList would never resolve. To mitigate race conditions, the approach depends on the case. In my company, we use a cache strategy for globally available values and real-time updates. The stability of the output and the information from the backend determine the approach. Although there may be some UI flickering, the goal is to converge to the real version after a certain time.

That's cool. Thank you very much for all the questions, people. Wow, there are now so many questions. Please do keep them coming though. But that was quick.

What was the React race condition that you found? Wow, good question. It was something around React Suspense and SuspenseList. There was some kind of ordering issue: depending on the resolution order of your Suspense boundaries, if you were in a SuspenseList in some kind of forwards mode (I'm not sure that feature has been released yet), you ended up in some kind of deadlock, and the SuspenseList would never resolve at all.

What approaches do you use to mitigate race conditions after you've detected them? In the presentation, I haven't shown how to fix it in this specific example. I think it highly depends on the case. I personally have some cases in my company where we have a cache strategy for values that are available globally in the application and live-update in real time. It really depends on what you want, how stable you want your output to be, and what information you get from the backend, for instance. In my case, I don't have any version number, so I cannot rely on "this is the latest version, this is a previous one." So I may have cases where the UI flickers a little, let's say, but the rule I want in the UI is that, after a given amount of time, it always converges to the real version.
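As one concrete example for the auto-completion field from earlier (a common pattern, not necessarily the approach used at Pigment): track the latest query in a ref and drop responses that arrive for an older one.

```jsx
import { useRef, useState } from 'react';

function AutoCompleteField({ suggestionsFor }) {
  const [query, setQuery] = useState('');
  const [suggestions, setSuggestions] = useState([]);
  const latestQuery = useRef(''); // the freshest thing the user typed

  async function updateQuery(newQuery) {
    setQuery(newQuery);
    latestQuery.current = newQuery;
    if (newQuery === '') {
      setSuggestions([]);
      return;
    }
    const results = await suggestionsFor(newQuery);
    // Ignore this response if the user has typed something newer meanwhile
    if (latestQuery.current === newQuery) {
      setSuggestions(results);
    }
  }

  return (
    <div>
      <input value={query} onChange={(e) => updateQuery(e.target.value)} />
      <ul>
        {suggestions.map((s) => (
          <li key={s}>{s}</li>
        ))}
      </ul>
    </div>
  );
}
```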

Handling Test Overhead and Reproducibility

Short description:

In my case, the UI may flicker a little bit, but it will always converge to the real version. The framework runs the assertion 100 times by default, allowing you to reduce or customize the number of runs. The overhead is limited, as the generation code is kept simple. 100 runs are generally sufficient to find many bugs, but specific edge cases may require more frequent testing. Different runs of a test suite using random strings will use different strings each time, making it easier to detect new bugs.


Interesting. Cool. This is a great question: is there any overhead, or what is the overhead? Can you control it, and how many iterations or permutations should you, or can you, generate? By default, the framework runs the assertion 100 times. The idea is not to run everything, because the scope of possibilities is infinite: we are considering any kind of user query, any kind of result. So the framework says: let's try 100 of them. If it fails, it tries to reduce the case. The overhead is that you run the test 100 times, but you can change the number of runs for your test if you need to. As for the cost per run, it's the cost of generating the values, but normally it should be rather limited, because I try not to put too much complexity in the generation code. The idea is for generation to be as cheap as possible, but it still has some overhead.
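Changing the number of runs is a one-parameter tweak; a sketch using fast-check's numRuns option:

```js
import fc from 'fast-check';

test('run the property more often than the default', async () => {
  await fc.assert(
    fc.asyncProperty(fc.scheduler(), async (s) => {
      // ...same test body as before...
    }),
    { numRuns: 1000 } // default is 100; raise for rare edge cases, lower for speed
  );
});
```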

If the default is 100, is 100 a good number to start with? Does that work in a majority of cases? Yes. Initially, I was a bit surprised when I started doing this. The property-based technique is not related to JavaScript at all; it came from Haskell and other worlds. I was a bit surprised that 100 runs could find anything, but when I gave it a try, 100 runs is actually rather okay for finding many bugs. If it's a very specific edge case, you will probably not hit it in 100 runs, but in that case you have something very special to test, and maybe you want to execute it more often, with more runs. Yeah, sure. So 100 feels about right for many use cases. Yeah, and your CI will run it repeatedly, et cetera, so you will be able to see. I feel like "it depends" is a pretty safe answer for this one. Yeah.

Will different runs of a test suite using random strings use different strings each time, across all the different runs? Sounds like it might be hard to reproduce failures, though. That's a good question; I often get asked this one. By default, and I insist on "by default", the seed changes all the time. Whenever you run the tests, you get a new seed, which means new strings are produced, and possibly, from one run to another, new bugs may be detected. The philosophy behind it is that you don't want any bug in your application.

Stability and False Positives in Testing

Short description:

The framework is designed to find classical edge cases and should not be flaky. However, there may be rare instances where it fails. False positives can occur if the conditions for a race are very rare and depend on specific combinations of factors.

So if it finds a bug, basically, maybe you just want to fix it. As for reproducing it, which might otherwise be hard: a seed is printed in the output, and you can just take it and reproduce the run using the same seed. That's what I assumed would be the case. But doesn't that mean these tests are flaky? What if the error only occurs in one of many orders of returns?

I feel like you kind of answered this in the last one. Yeah, normally. I've used it in production for years at my company. When we thought about it, we worried it might be flaky all the time, but actually it's pretty stable, except if your code does very specific stuff in a very strange way. Most of the time, if it doesn't find the problem immediately, it will just never find any problem. It depends on how confident you are in your code and how many edge cases you put in it. But basically, the framework tries to find classical edge cases: for instance, small values are generated more often; if you generate an array, there will be collisions between values more often than with purely random generation, et cetera. So it's supposed to find bugs quite quickly, and that's the reason why it should normally not be flaky. But I cannot say "not flaky", because it may happen from time to time.

Yeah, it's such a tricky one, isn't it? Because you're not running the test suite under the exact same conditions every time; there are differences. But in theory, they are reasonably realistic cases, so if it fails... yeah. I feel like this is in the same realm of questions, but is there a chance of a test just being a false positive, perhaps because you haven't run it enough times? Yeah, you can have false positives. Basically, you merge something and you feel that it works. For instance, I recently worked on a race condition in a cache. The condition for the race was very, very rare; it depended on a combination of many things.

Testing with vTest Framework

Short description:

When running tests in CI, false positives can be quickly identified and reverted if the test is new. A question was raised about the pronunciation of 'vTest' or 'vitest,' and it was collectively decided to be 'vTest.' The test code shown can be used within existing Jest tests without any framework bindings. The speaker also mentions having bindings for specific frameworks like Jest and Vitest, but they are not necessary when using the assert and property syntax.

The race was a bit tricky to trigger, so it was difficult to find. So yes, I would say you can have false positives. But when you run the tests in CI, you will see the problem quickly, and you can just revert if the test is new. So normally it should be rather quick to spot the problem.

Yeah. Yeah. Here's a question. Yesterday, at TestJS Summit, I made this comment right at the start where I didn't know if it was vTest or vitest. The group collectively came to a decision, and now I can't remember what it is. Does it run well with vTest? See, I would have defaulted to vitest. And then, because it's VT, isn't it? It's VT, so it has to be vTest. Thank you. Yeah, definitely.

There is no binding to any framework needed. The test I've shown you just sits inside an existing Jest test; we use a classic Jest test, and the fast-check construct lives within it. So you can put the exact same code within Vitest, however you want to pronounce it. I do have bindings for specific frameworks: I have one for Jest, so that you can do something like it.prop. For Vitest, I also have one, but I want to evolve it a little with v1. But basically, you should not need anything: if you are okay with writing the assert and property combination, it will work well. Yeah, cool. Thank you very much for your input there as well.
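For reference, the Jest binding mentioned here looks roughly like this, assuming the @fast-check/jest package; the property shown is a trivial placeholder:

```js
import { test, fc } from '@fast-check/jest';

// `test.prop` wires generators straight into a Jest test,
// replacing the explicit fc.assert / fc.asyncProperty wrapping.
test.prop([fc.string()])('every string contains itself', (text) => {
  expect(text.includes(text)).toBe(true);
});
```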

Framework's Source Location and Q&A

Short description:

Does this framework help locate the source of race conditions? We have a speaker Q&A section for face-to-face conversations. Please provide more information for unanswered questions. We have a short comfort break before continuing.

I think we probably have a quick moment for just this one. So, does this framework help you locate the source of the race conditions? I don't get exactly what we mean by the source of the condition. Actually, I've read it out loud and I don't know what it means either. Do you know what? We are literally at that moment where the time is about to hit zero.

So actually, it's my time to remind you that right by the entrance is the speaker Q&A section, where you can have more face-to-face conversations with Nicolas, both in person and online. You can head over there. And if you are the originator of this question, I would love a little more information, because when I read it initially I thought, yeah, I get that, and now I'm a little unsure too. Let's make sure we answer the questions people ask. There are still so many questions we didn't have time to cover, so please do use the speaker Q&A section.

We have a very short comfort break, where you can pick between sticking it out in here or heading over to the other space. If I can borrow my slides back, that would be wonderful, folks at the back of the room.
