Flaky Test Management with Cypress

This workshop is for Cypress users who want to step up their game against flake in their test suites. Leveraging the Cypress Real World App, we’ll cover the most common causes of flake, code through some examples of how to make tests more flake resistant, and review best practices for detecting and mitigating flake to increase confidence and reliability.

Table of contents:
- Cypress Real World App Overview
- What is Flake?
- Causes of Flake
- Managing Network-related Flake (Activity)
- Managing DOM-related Flake (Activity)
- Flake Detection and Mitigation Best Practices
- Q&A

114 min
23 Nov, 2021

Video Summary and Transcription

This workshop focuses on flaky test management with Cypress. Participants will work with the Cypress Real World App to learn about different types of flake and how to address them. The workshop covers topics such as managing network requests, handling flaky tests, and reducing flakiness through test data management. The use of Cypress Dashboard for flake detection and analysis is also discussed. Overall, the workshop provides practical strategies and best practices for managing flaky tests in software development.

1. Introduction to the Workshop

Short description:

I'm Cecilia Martinez, a technical account manager at Cypress. I'll be talking about flaky test management with Cypress in this workshop. We'll have coding activities and time for questions throughout the workshop. The setup instructions were included in the workshop description, but we'll have time to troubleshoot if needed.

I'm Cecilia Martinez, and I will be your workshop instructor today. I am a technical account manager at Cypress. I have been at Cypress for almost two years now; January will be two years. I started when the company was pretty small — there were about 20 of us when I started — and I was the first person on our support and success team. I've spent essentially the last two years just working with our users, helping them overcome technical challenges, talking to them about their test strategy, understanding how they use Cypress and how they test at their organizations, and then also developing resources and doing some education around how to best leverage Cypress and implement it on their team.

So through that time, I've gathered some learnings which I love to share with the wider community through things like this. So today I'll be talking about flaky test management with Cypress. This is a workshop, so there are some coding elements to it; we'll be doing two coding activities. There are about 26 people here so far, so we ended up with around that many.

I do like to keep things pretty casual. There will be time for questions throughout the workshop, but also feel free to drop them in the chat as they come up; I'll be keeping an eye on it. And then we will have some dedicated Q&A time at the end as well. So if you prefer to turn on your microphone and ask out loud instead of posting a question, we'll have some time for that too. Some general housekeeping: the slide link — I did post it in the Discord, and it's also on the screen. These slides are public, so feel free to save them, leverage them at your convenience, and share them. We'll talk about the setup instructions in just a minute, but they were included in the workshop description. So hopefully y'all were able to follow through with those, but if not, we'll have some time to troubleshoot initially as well.

2. Workshop Logistics

Short description:

We'll have time for questions and breaks during the workshop. Feel free to take breaks when needed. If you have any questions or need assistance, you can ask in the chat or raise your hand in Zoom.

We will have, like I said, some time for questions, and we'll also have some built-in time for breaks. So when we do the activities, I'll say, feel free to take some time now, and we'll come back at X time, or after the hour. That way, if you need to grab some water or whatever it may be, we'll have some built-in time for that as well, because three hours is kind of a long time without any kind of break. But if you do need to step away at any time, obviously, feel free to do so.

Yeah, so any questions on logistics or anything before we get started? Feel free to pop them in the chat, or raise your hand in Zoom if you want to ask by voice. Awesome.

3. Setup Instructions for Cypress Real World App

Short description:

We'll be working with the Cypress Real World App, a full-stack React application that demonstrates Cypress best practices. It's a Venmo-style payment app for testing UI and API. We'll focus on the 'flake demo' branch, which has intentionally flaky behavior. To start, clone or download the branch, run 'yarn install', then 'yarn dev' to start the app. Then run 'yarn cypress open' to open the test runner. Let me know if you have any issues. Raise your hand in Zoom once you're ready. The GitHub repo and branch link are in Discord. Feel free to ask questions or seek assistance. Take a few minutes to set up.

Alright, so let's go ahead and talk about those setup instructions. We'll be working today with the Cypress Real World App. The Cypress Real World App is a full-stack React application created by our developer experience team to demonstrate best practices and use cases for Cypress. It's a Venmo-style payment application where you can send money back and forth to your friends. It's just a codebase and isn't hosted anywhere, but it's meant to show different types of UIs and different user flows that you may need to test. It also comes with a full suite of UI and API tests. Today, we'll be working with the UI tests specifically. The branch we'll be working on is called 'flake demo', which has intentionally flaky behavior for us to troubleshoot. To get started, go to the 'flake demo' branch on GitHub, and either clone it or download the zip file. Once you have it, run 'yarn install' to install the dependencies. Then run 'yarn dev' to start the application on your system. In a new terminal window, run 'yarn cypress open' to open the Cypress test runner. Let me know if you have any issues with that. Once you're up and running, raise your hand in Zoom so I can get a sense of where everyone is at. The link to the GitHub repo and the 'flake demo' branch is posted in the Discord chat. If you have any questions or need assistance, feel free to ask in the chat or raise your hand in Zoom. I'll give everyone a few minutes to get set up.

4. Guided Setup for Running Cypress Tests with Yarn

Short description:

If you have to use NPM, you may run into some technical issues, but Yarn is preferred. Let me know if you have any issues with that. Once the installation is complete, run yarn dev to start the application on your system. Then run yarn cypress open to open the Cypress test runner. Raise your hand in Zoom once you're done. The link is posted in the Discord. I'll leave the instructions up for a few minutes to allow everyone to get up and running. If you have any questions or issues, I can assist you. Make sure to check the system requirements for Cypress and the Real World App repository for installation instructions. If you're using NPM, refer to the package.json scripts for the commands. You'll need to start the API and React concurrently. If you're using Yarn, install it globally to leverage it in the project. Feel free to ask questions and share your versions in the Discord. Start the Cypress Real World App in Visual Studio Code and run yarn cypress open in a new terminal.

If you have to use NPM, you may run into some technical issues, but Yarn is preferred, so let me know if you do have any issues with that. Run yarn install, and then once the installation is complete, run yarn dev. This will start up the application on your system, and you'll see it on localhost:3000. Then, in a new terminal window, run yarn cypress open, and this will open the Cypress test runner. And then raise your hand in Zoom once you're done, so that I can get a sense of where everybody's at.

And so the link is posted in the Discord; we will be using the Discord for chat. I posted the general GitHub repo, and I also posted that one specific branch, the flake demo branch. Again, hopefully you were able to get up and running in advance and check that out before joining today, but if not, we'll walk through these steps now. I'm going to leave these instructions up for a few minutes. Again, if you could raise your hands in Zoom once you're up and running — there is an option in Zoom to raise your hand — that way I can get a sense of how many people are ready to go and how many people are still setting up. If you have any questions, I can also share my screen and walk you through the steps. I'll give it about five minutes for us to get up and running.

The slide link is in the chat as well. Getting ready: it may take a minute to install everything the first time. Also, if this is the first time you're doing this — if you weren't able to do it in advance — it will take some time to install Cypress the first time you run yarn cypress open. And if we're able to use Discord for the chat, that's ideal; it's a little bit easier to follow along with the messages and respond there. I'm seeing some activity in the Zoom chat — again, if we could try to post in Discord, it's a little easier to follow along and respond directly. If you're having issues running yarn, node, or Cypress, you could have some security considerations on your machine. That's part of why it's preferable to do this in advance — to address those ahead of time — but we can take a look here. If you are running into issues with yarn dev, make sure that you're running yarn install first, and that everything is completing properly. If you have any errors when you're running yarn install, review those errors and see what could be causing them. Sometimes you're not running the correct Node version, or there's a dependency that's missing. If we go here, to the getting started / installing Cypress docs, this shows the system requirements for leveraging Cypress, including what version of Node you need to have; I'll post this in the chat. Additionally, there are some security considerations — what is it called? No, it's not VPN, it's proxy. If you're using a work computer, for example, on a work network, you can have issues with proxy configuration. The Real World App, I think, prefers Node version 14 or higher. And again, the installation instructions are also on the Real World App repository, so follow that link as well if you have any troubleshooting issues getting started. I can see some more hands raised — that's awesome, I think people are getting set up here. And thanks to everyone who was able to set up in advance; that helps us move things along and get started. I'm going to give it a little bit more time; I want to get to where at least half the people here are up and running. Yeah, OK. So there's a question about using NPM. You would do npm install, and then I think it's npm run dev. Let me take a look at the package.json, in the scripts. This is still leveraging Yarn, so essentially you'd have to do the same start commands that are built in here with NPM. It's really preferred to use Yarn, but you can essentially take a look at these commands and break them down. We need to start the API — concurrently running ts-node on the backend files — and you also need to start the React app. So for all of these commands, you would need to set the environment and concurrently run the NPM version of the start React and start API watch scripts if you wanted to use NPM. Thanks, James, for helping out in the chat, I appreciate that. And again, if you want to use Yarn, you need to install it globally in order to be able to leverage it in the project.
Oh, thanks for sharing in the Discord what versions you're using, Claire, that's super helpful. I'm going to go ahead and reproduce some of these links from the Zoom chat into the Discord as well. And then, while we're going through that, I'm going to go ahead and pull mine up. So this is my Cypress Real World App in Visual Studio Code. I'm going to go ahead and start mine up as well, to show you what you can expect. It does take a minute to get started. Once it loads up, you'll see the Cypress Real World App, and it'll look like an onboarding or welcome flow. OK, here we go. So it'll look like this: it'll show the Real World App, and you'll have this onboarding flow. You can go ahead and close that out — we actually don't need to have it open — but then you'll know that it's running. And then again, in a new terminal, you'll go ahead and run yarn cypress open.

5. Flake Definition and Test Example

Short description:

The workshop will proceed with troubleshooting and downloading. Flake is a common pain point for many attendees. A flaky test can pass and fail across multiple retry attempts without any code changes. We'll be working with the Cypress Real World App, specifically the flake demo branch, which has built-in flaky test cases. One example is the notification spec, which tests the inconsistent response time of an API. Let's run the test and see the results.

And then that will open up, and we can see the Real World App. Cypress open. And that'll go ahead — again, if it's the first time that you're doing it, you'll have to install Cypress — but then that'll open the test runner, which looks like this. So you'll be able to see all of the spec files; they'll be pulled up automatically in the test runner. I pulled it down recently, so it's version 9.0. There was a new release this morning, I think, for 9.1, but 9.0 is good. And then this will also pre-fill with whatever browsers are available on your machine. It comes pre-bundled with Electron; I also have Chrome 96 on my machine, which is why it's showing here. I'm going to be running the tests in Chrome, but feel free — if you have a different browser, or if you want to use the bundled Electron, you can do that as well. It just may look a little bit different, obviously, because it's a different browser.

Still seeing seven hands raised. Has anybody else been able to get up and running? If so, raise your hand in Zoom, or let me know in the chat. I think it would actually have been eight, because I believe Clare said that — oh no, your hand is raised as well. And then, if you're planning on just watching today and don't plan on getting set up, you could also raise your hand, just to let me know, that way I'm not waiting for you to proceed. So again, if you're just here today to watch and you don't plan on coding along, so this doesn't apply to you, if you could also raise your hand just to let me know that you're ready to proceed. Come back here to the instructions. Awesome. All right, great. So we're about half. So I'll go ahead and proceed. Again, we will have some time as we're moving along, where we'll be taking breaks or being available for questions, so we can continue to troubleshoot. And this is being recorded, and all the slides are available to you, so if you want to come back later on and run through it, that is totally fine as well. So let me go ahead and lower some hands here. I don't think I can lower all hands, but feel free to lower your hand as well, and we'll go ahead and move forward. A bit more housekeeping here. Awesome. Okay, great. Okay, so for those of you who do have the code repo pulled down, there's one more thing that we need to do. There's a line that can cause some unexpected behavior in this branch, and I don't want it to interfere with anything that we're doing today. It's in the cypress/support/commands.ts file: we're going to go ahead and delete or comment out line 204. So again, that's the cypress/support/commands.ts file, line 204 — you can either put two slashes in front of it to comment it out, or you can delete it, and then go ahead and save that file. If you all have a minute to do that, let me know if you have any questions. I found that today when I was pulling down the most recent version. And again, we are working on the flake demo branch — that's the flake demo link that was on the slides. We do have to be working on the flake demo branch; this is the branch that has the specifically built-in flaky examples. And if you already pulled down the main branch, just check out the flake demo branch; you don't have to go through the process of pulling it down again, just check it out and use that branch instead. So, awesome. All right. We're going to move forward; if anybody has any questions, just let me know in the chat. And then again, you can also follow along in the slides. The slides are linked in the Discord and also in the Zoom chat, so if you need to refer back to the page with the project setup instructions — if you joined later on — you can go back and reference that page. Let's go ahead and move on. All right, so the first section is going to be me talking about flake. You will be able to continue troubleshooting and downloading things as we move forward. But let's go ahead and start talking about: what is flake? I think the reason that y'all are here today is because you're probably very familiar with flake, and it's probably a big pain point for you.
But I want to define what the technical definition of flake is and talk about what the impacts can be. So a test is considered flaky when it can pass and fail across multiple retry attempts without any code changes. So if a test executes and fails, but then you rerun the test with no other changes — no changes to the environment, no changes to the test code, no changes to the source code — and then it passes, that is what we define as a flaky test. And then there's no need to log into the application; once it pulls up on your screen, you can just close it out. I just want to make sure that it's actually running — just saw that question in the chat. So again, today we're going to be working with the Cypress Real World App; this is, again, a full-stack demonstration app built by our developer experience team. We are going to be working specifically with the flake demo branch, because it has some built-in flaky test cases that we can leverage in order to understand how flake works and how to mitigate it in our test suite. So, one of the test cases that we'll be working with today is in the notification spec, so I can go ahead and actually show you what this looks like running. So in our test code, we have a notification spec, and we're going to be running a specific test here. I have it.only on it; you can see the test is flaky because the API has an inconsistent response time. So we'll go ahead and just run that test and see what happens. So under our UI tests, we pick the notification spec, and because I've put .only after the it block, it's only going to run that one single test; it's not going to run the entire file. So go ahead and let it get up and running. It's the first time I'm running it today, so it might be a little slow to get started. All right. So it's going to go ahead and go through this test case. Okay, cool. And so this time it did pass the first time, which is great.

6. Understanding Flaky Tests and Their Impact

Short description:

Flaky tests can cause longer deployment processes and debugging processes, leading to reduced confidence in the test results. They can also hide underlying issues in the application or test suite. In this specific case, a slow network request caused the test to fail intermittently. Cypress has automatic test retries, which can help mitigate the impact of flaky tests. However, it is important to track and address flakiness across the testing process. If you have any questions or similar experiences, feel free to share them in the discord.

And so this time it did pass the first time, which is great. Let me go ahead and double-check and make sure that I didn't fix anything. OK, so let's go ahead. Notifications looks pretty good. So I'm going to run it a couple of times until we can see the broken action. OK. So what's happening now is Cypress is retrying the test automatically. Cypress has built-in retry functionality, which we'll talk about a little more later. But we did two attempts here. On the first attempt, it failed. On the second attempt, it passed. So with no other changes, the test here was flaky. What happened on the first attempt is that it expected this button to be disabled, and it did not find the element. The second time through, though, it expected the button to be disabled — where is that? It's up here — and it had no problem: it found it, and it was disabled. So this is what we mean when we say that there's a flaky test.

So the reason that this test is flaky is because in our backend source code — again, this is a branch that specifically has some flaky test cases built in — there's an arbitrary delay on the server to simulate inconsistent response times. You have probably run into this in the real world, right, where your API can take a long time sometimes, and other times it's fast. Maybe it's a slow environment. Maybe it's just chugging that day. So what we've done here is essentially added a random setTimeout before sending the response back, to simulate those inconsistent response times. As you saw, Cypress waited up to 4,000 milliseconds — four seconds — for the element to appear before it actually failed the test. What that means is that sometimes this API response takes more than four seconds and sometimes it doesn't. When it takes less than four seconds, the test passes; when it takes more than four seconds, it fails. And that's what's causing our flake in this specific situation. So this is the test code whose behavior we're zeroing in on; there's a rough sketch of both sides below. We're asserting that the like count selector has a count of zero. We're getting the Like button and we're clicking it. That click sends off an API request to our backend saying, hey, we liked this transaction. Then our test code looks for the Like button and asserts that it should be disabled, because when we fire off that request to the backend, the app also disables the button so we can't re-like it. What's happening in this case is that the assertion fails because it takes too long for the API response to complete before our next command fires off. So this may be a use case that you've seen a lot, where there's inconsistency in your DOM related to an inconsistent network. Like we saw, sometimes it can pass, sometimes it can fail. So when you have flaky tests like this, how does it impact your test suite? It can cause a longer deployment process, if you have to re-run tests or restart a CI build after a failure. With Cypress, we have automatic test retries: it retried the test and it passed the second time. That's great — it's better than having to go in and re-run your entire build process or kick off the test run again. It does still take some time, though, to run the test a second time. It can also cause a longer debugging process, because you have to determine whether a failed test is a real failure or flake. In this case, the functionality of the UI was working, right? We fired off the request; eventually the Like button was disabled, and the like showed up. But because of the slow network request, if that test were to fail, we don't actually know if the failure is related to the functionality of that test case, or to something in the environment or the backend. So we have to debug twice: not only what the failure reason is, but whether it's a real failure or flake. And that ultimately results in reduced confidence. Do failures actually represent regressions? Or is it something that's due to the way the test is written, or to our test environment? And then the other part of it is: is flake hiding underlying issues in the application or test suite? In this instance, obviously, this is an engineered situation.
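To make that concrete, here's a minimal sketch of both sides of this — the engineered server delay and the test that trips over it. The route, the getBySelLike data-test helper, and the delay bounds follow the description above but are simplified, so treat the exact names as illustrative:

```ts
// Backend sketch (Express-style): respond after a random delay to simulate
// an inconsistent API. Sometimes under 4s, sometimes over.
//   app.post("/likes/:transactionId", (req, res) => {
//     setTimeout(() => res.sendStatus(200), Math.random() * 5000);
//   });

it("likes a transaction", () => {
  cy.getBySelLike("like-count").should("contain", 0); // starts at zero likes
  cy.getBySelLike("like-button").click(); // fires the POST to the likes endpoint
  // Flaky: Cypress retries this assertion for only 4s by default, so the test
  // fails whenever the random server delay exceeds that window.
  cy.getBySelLike("like-button").should("be.disabled");
});
```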
But if you do have certain areas of your application — where maybe you have an API endpoint that's consistently slow, and it turns out that the way that endpoint was coded, it's just taking longer to respond because it's processing things in a way that isn't quite as performant — then flake is telling you something. If you're consistently just rerunning the tests or restarting the build, and you're not really keeping track of that flake across your suite, then you may not notice those patterns and be able to make better decisions about how to address flake across your testing. Any questions so far about what flake is, or anything we've covered so far? Feel free to drop them in the Discord.

And then also, I'm just super interested — if you want to drop it in the Discord — whether you've had similar experiences with flake, if you've seen anything like what we just saw, where a slow network request causes the DOM to take a long time to update. Maybe just a little plus one in the chat, or let me know what experiences you've had; I'm curious to see what people here think as well. Cypress will note a retry: it will note it in the output that the test was retried, and it'll also note it in your Cypress Dashboard if you're using the Cypress Dashboard. And we're actually going to talk about causes of flake right now, James — so network is definitely one of them, but there are a couple of others as well that we'll talk through. Good questions. So yeah, within your output, it will say that the test was retried; you'll see attempt one, attempt two, attempt three. And again, just to confirm, this is all configurable.

7. Understanding Flake: Types and Causes

Short description:

Flake can be caused by inconsistent functionality or unrelated inconsistencies. There are different types of flake: DOM-related, network-related, and environment-related flake. DOM-related flake occurs when elements are rendered inconsistently or slowly. Network-related flake is caused by inconsistent or slow network responses. Environment-related flake is related to the test environment and can be caused by inaccessible or inaccurate environment variables, differences in machine resources, and inconsistent data.

And again, just to confirm, this is all configurable, which we'll talk about a little bit later on. And then, if you do switch branches, you may need to rerun your install; there can be different modules on different branches. All right.
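Since retries came up: as a reference, a minimal sketch of the per-test retry configuration (Cypress 5+ syntax; the counts are illustrative, and the same retries block can also go in cypress.json globally):

```ts
// Retry up to 2 extra times in `cypress run`, but never in `cypress open`
it(
  "user can like a transaction",
  { retries: { runMode: 2, openMode: 0 } },
  () => {
    // ...test body...
  }
);
```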

So let's talk about flake. At a really, really high level, flake is caused when either the functionality that you're testing is inconsistent — this is actually really helpful flake; if the way the application is coded has logical inconsistencies, for example, that can cause the test to pass once and then fail a second time, that's good flake, because it points to issues in your underlying application — or it can be caused when something completely unrelated to the functionality that you're testing is inconsistent. And that's what we just saw in our example. In our case, that's something we want to try to mitigate as much as possible. So again, if you're seeing flake and you determine that it's because the functionality is inconsistent, that's good flake, because it's pointing to a defect. But if it's something that's really unrelated to the functionality you're testing, that's bad flake, because it's going to reduce confidence in your test suite and make it harder to understand whether that test case is actually failing because of the functionality it's meant to verify.

So there are a couple of different types of flake; I put them into categories. The first is DOM-related flake. This occurs when there is inconsistency with how elements are rendered to the DOM, or how quickly they're rendered. Some of the errors you may see associated with this type of flake are "timed out retrying" because the element is disabled, or "timed out retrying" because the element is detached from the DOM. Now, the root cause of DOM-related flake could be related to network, but it typically is related more to how your DOM handles network and data updates: if it's not rendering as quickly as it should, if it refreshes when new data comes in even after you've already done an input, if it's not handling state changes properly and is disabling elements when it shouldn't. These are all inconsistencies with the DOM that can cause flake. So some examples, like I just talked through: sometimes an element will load within that cy.get timeout, other times it won't. It may get an element, but then when it tries to click it, the element is disabled, because the state hasn't actually updated yet. We see this a lot, in that Cypress is faster than a user would be sometimes, right? The app may say, OK, now this button becomes enabled because all the form elements are filled out, but the application hasn't caught up yet; the state is still pending, the state hasn't recognized all those form inputs yet, so that button is still disabled. And when Cypress goes to get it and tries to click on it, it's still disabled. The DOM could also re-render between the cy.get and an action command, causing a selected element to be detached from the DOM. One of the activities that we're going to do later is actually related to this exact use case, so we're going to dive pretty deep into those detached-from-DOM issues. And this is the one I was talking about, where Cypress will type into an input field, but the application is slow to process those keypress events, and the field value doesn't update completely before clicking submit. We've seen this a lot, especially if you have state changes on each keypress — for example, a model-view binding where the input value is connected to something in your state — and Cypress types really fast and the update hasn't completed yet. So when it goes to submit, the password, for example, is missing the last two characters, and that's what causes the failure. Definitely have seen that one before.

And then we have network-related flake. This is what we were just talking about: when a network request responds inconsistently. It could be an internal API or a third-party service endpoint. This can be related to either incorrect responses or slow responses. So in the first example, we didn't have the correct number of elements — the response we got back from the API was not correct; it didn't have the correct data. The second example is that it timed out, because the request didn't occur, or the request got aborted, or it just didn't occur within that five-second time frame. So: slow API response time, like in our example. This happens a lot with third-party providers. One of the things that we recommend, if you're using a third-party authentication provider, for example, is to actually stub out that behavior, so that your tests aren't dependent on it. In our documentation, we actually have a testing strategies section that covers the pattern of programmatically authenticating with Auth0, Amazon Cognito, Okta, Google Authentication, GraphQL. These are some of the most common authentication providers, with patterns for how to bypass them in your tests so that you're not dependent on them, because they can definitely cause some flake. And then a microservice endpoint — this is something that I see a lot, too — if you're using a microservice endpoint or a Lambda or something that has a cold start, it can take some time to start up the first time; it may fail the first time, and when you retry it, it's warmed up and ready to go. So one of the things that I've seen leveraged here is actually firing off a cy.request to that endpoint in advance — or even better, as part of your CI build process, before the test run even starts — sending off the request and validating the endpoint, ensuring that it's up and running before you even fire off your test run (there's a sketch of that below). That's another thing that I've seen cause network-related flake as well. And last, but definitely not least, is environment-related flake. These are things related to the test environment — the place where you're actually running your tests. Sometimes you can have inaccessible or inaccurate environment variables in your environment; maybe you have something different locally versus in CI. If you're running tests across different-sized machines, and the resources are different across machines, then the tests may go a lot faster on one machine and a lot slower on another. You can have a failed dependency install in the environment, so when you actually build your application, something fails to install, or the machine doesn't have all the dependencies it needs to run properly. And a really, really big one: you can also see inconsistent data across your environment.
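For that cold-start point, here's a rough sketch of warming an endpoint up front — the URL is hypothetical, and in CI you might do the equivalent with curl before the run starts:

```ts
// In a support file or suite-level hook: hit the slow service once so a
// cold-started endpoint (e.g. a Lambda) is warm before any test runs.
before(() => {
  cy.request("GET", "https://api.example.com/health") // hypothetical endpoint
    .its("status")
    .should("eq", 200); // don't proceed until the service answers
});
```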

8. Managing Flake and Test Writing Best Practices

Short description:

If your tests are dependent on certain values in your test data, like hardcoded values or assertions against the page content, changes in that data can cause failures unrelated to the test itself. For network-related flake, Cypress provides the cy.intercept command to spy on and stub network requests. This command allows you to wait for long requests to complete before proceeding with the test, ensuring that the request is completed and giving it more time if needed. We'll discuss more test writing best practices and managing flake in the upcoming section.

So if your tests are dependent on having certain values in your test data — like if they're hardcoded in, or if you're making assertions against the actual content on the page — and that data changes, that can cause flake and failures that have nothing to do with the test itself.

Okay, awesome. So those are some of the causes of flake. Before we go into managing flake, I have seen some questions in the chat, so I want to speak to those real quick. On situations where Cypress does not attempt a retry: retries are configurable — we'll talk about that a little bit later on — but you can set how many times you want to retry in open mode versus run mode. And then we'll also talk about chaining selectors and assertions, and commands and assertions: we'll talk about the chain of command and how to make that entire chain retryable. If you are seeing that error, try just restarting your server; that happens to me sometimes. Just shutting down yarn dev was enough to resolve it for me earlier. And then, a code example of getting a microservice endpoint to run before the test — I don't know if I have a specific example, we'd have to find one, but essentially you just send a cy.request to the endpoint. Actually, I can show you what that looks like with our API tests. In the Cypress Real World App, we also have a suite of API tests, and we actually run those first. For example, in our CircleCI config, we have our API tests running first. The syntax that we're leveraging for the API tests is similar to what you would do, maybe in your support file, to kick something off before the tests actually start. So let's say we have a slow likes API, right? We just want to make sure it's up. You can do a cy.request to that endpoint, take the response, and assert that you expect the status to equal 200. That's going to fire off a real request to that endpoint. So again, if you have a cold start, or something like a Lambda or a microservice, you can fire off a cy.request and validate that the response is good. If it's not, there are a couple of ways you can repeat or retry that, but essentially, you would not want to proceed with the rest of your tests until you know that that endpoint is up and running. Again, if you don't actually need to leverage it for your tests, then you can also stub it out and bypass it completely, which is what I would recommend — we'll talk a little bit about that in a minute — but this would be the model for firing it off using a cy.request. If you're going to do it in your CI, that's going to be really dependent on what CI provider you use. Some of them have a run-script option where you can run a fetch or an axios call, or install a package to do that; curl, something like that. But that's going to depend on your CI setup. Awesome. All right. So we're going to go ahead and talk about managing flake. It's kind of a long section — is everybody okay to keep going, or do we need a break? If anybody has a strong opinion... All right, let's keep going, and then we'll take a break when we do the first activity. So like I said, the first thing we talked about was network-related flake; that was the example that we looked at. So in order to mitigate network-related flake, here are some test writing best practices. And yes, we're going to take a break right after this section, once we start the activity, in like 10 to 15 minutes. And again, if you need to take a break at any time, please feel free to do so. Okay. So, the test writing best practices.
cy.intercept can spy on and stub all network requests. I just did a talk on network requests for TestJS Summit last week — not sure if you saw it, but the slides are also available at cypress.slides.com, so you'll find them there if you want to dive deeper into this. We'll be using it today as well. cy.intercept is a really powerful command that allows you to spy on and stub all of the network requests that are going in and out of your application. So how can we leverage this to battle network-related flake? We can wait for long requests to complete before proceeding. We can say, hey, we have this API accounts endpoint that's just slow: we have a lot of accounts, it takes a while to get them — it's like a data dump, it's huge. So what we can do is say, that's going to take longer than the typical — I think it's like 6 or 8 seconds — that we give for a network request. So let's go ahead and wait for that request, but give it more time — let's give it up to 30 seconds, because we know it takes a long time. What that will do is give that request more time to complete, and it will also ensure that it's completed before we proceed with the test.
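A sketch of that slow-accounts idea, with illustrative route and selector names (getBySel is a Real World App-style data-test helper):

```ts
// Spy on the slow endpoint, then give this one request up to 30 seconds
// without slowing anything else down.
cy.intercept("GET", "/accounts*").as("getAccounts");
cy.visit("/accounts");
cy.wait("@getAccounts", { timeout: 30000 }); // proceed only once it completes
cy.getBySel("account-name").should("be.visible"); // data is now on the page
```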

9. Managing Network Requests with Stata Intercept

Short description:

To manage inconsistent or unneeded network requests, we can use cy.intercept in Cypress. By declaring an intercept with a URL, method, or route matcher, we can stub out unnecessary requests. This allows us to pass a fake result back from the API request. Once declared, the intercept can be saved and used throughout the test code. We can wait for the request to complete using cy.wait. This helps ensure that critical requests are completed before proceeding with the test code.

That way, we know that the account name will be on the page — or at least it should be; that's what's expected of the DOM behavior that we're testing. I'll get to that in just a second, don't worry.

We can also stub inconsistent or unneeded network requests. This is what I was just talking about: if you don't need it for your test, if you're not testing its functionality, then you can stub it out. That way, it doesn't affect the behavior of your test. And you can use cy.intercept for stubbing as well. So you can either use it to watch a real request, or you can use it to intercept and then stub, which is essentially to pass a fake result back from that API request.

So, this is how we do it: declaring a spy with cy.intercept. You can pass a URL, a method, or a route matcher. If no method is passed, any method type will match. If you watched my talk last week on network requests, this is from that, so I apologize if it's repeated for you. But essentially, these are the options you can use to declare an intercept; we'll be using this a little later on today, and again, you can leverage the slides. I want to point out the second one here, where you can pass a method and then a URL. You can say, please watch this endpoint, and we're going to be looking for a certain method type — GET, POST, PUT, PATCH, DELETE. Once you've declared that intercept, you can save it to use throughout your test code by using .as. In this case, we have our cy.intercept, and we're saying: any request that goes to the search endpoint with a certain query, please save that as "search". And then once we have saved it, we can wait for the request with cy.wait.
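Roughly, those declaration shapes look like this — the endpoints and the query are illustrative:

```ts
cy.intercept("/users*"); // URL only: matches any method
cy.intercept("GET", "/users*"); // method + URL
cy.intercept({
  // route matcher object
  method: "GET",
  url: "/search*",
  query: { q: "hello" }, // only match requests with this query param
}).as("search"); // saved, so the test can cy.wait("@search")
```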

10. Managing Flaky Tests and Fixing Flaky Test

Short description:

To manage flaky tests, you can use cy.intercept with .as and cy.wait to wait for critical network requests to complete. If a problematic network request is unrelated to the functionality being tested, it can be stubbed out. The Cypress Real World App provides three options for logging in: via UI, via API, and via XState. Each option reduces variables and focuses on specific functionality. The next activity involves fixing a flaky test in the UI notification spec file. Run the test in the Cypress Test Runner and note the failing command. Refactor the test to be resistant to flakiness by increasing the default timeout or using cy.intercept with .as and cy.wait to wait for slow post requests. The Cypress Test Runner provides helpful error messages to identify the failing line of code.

So here again, we have our intercept declared. We're saying: any GET request to our users endpoint, please save that as getUsers. Then within our test code, we're going to say cy.wait('@getUsers'). What that's going to do is wait for that request to complete before proceeding with the rest of your test code. So again, if you have a network request that takes a long time, or a network request that's critical to complete in order to have the data in your DOM, then you can leverage the cy.intercept plus .as and then cy.wait pattern to ensure those requests are completing before proceeding. And if you need even more time, you can pass the timeout option to make the wait longer.
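Putting that pattern together — a sketch with an illustrative endpoint; note that cy.wait also yields the interception, so you can assert on the response it captured:

```ts
cy.intercept("GET", "/users*").as("getUsers"); // declare before the request fires
cy.visit("/");
// Wait for the request to complete (with a longer timeout if needed),
// then assert on the response it yielded.
cy.wait("@getUsers", { timeout: 10000 })
  .its("response.statusCode")
  .should("eq", 200);
```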

So again, going back to where we started: if flake is caused by something unrelated to the functionality you're testing being inconsistent, this is bad flake — it's not helpful flake, it gives you no insight into what's happening in your application. So if a problematic network request is unrelated to the functionality you're testing, stub it out. If you're having a lot of pain with a certain network request, and the only reason you're using it is to get to a certain page, or because you need to be logged in to test your application, then don't use it. Just stub it out.

So what we have in the Cypress Real World App — going back to the code, in our custom commands (let me get the real code pulled up; a lot of teams use this pattern) — is three options for logging in. We have login via the UI, which is a login command where we're actually typing in the username, typing in the password, checking the box, and logging in via the UI, the same way a user would. We should have a test that tests our login flow, makes sure the form is validating properly, etc. But then we also have a loginByApi command, where we're just making a POST request to our API endpoint and bypassing the UI. That's a lot faster: we're saying, hey, let's not type in everything — send a POST request to our backend and have it tell us that we're logged in. We also have login via XState. XState is the front-end store that we're using for this application, and when we do this, we're actually just going into our store and telling it that we're logged in. What's that doing? It's calling win.authService.send — so it's just telling our front-end store, hey, we're logged in with this user. We didn't touch the backend; we didn't log in via the UI. So if all we need to do is log in to get to a certain screen so that we can test our date picker, and that's all we care about in that test case, then reduce as many of the other variables as you can, and focus in on that test case and that functionality specifically.
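A rough sketch of the API-login idea — the command name and endpoint mirror the Real World App's approach, but the details here are simplified and the credentials are illustrative:

```ts
// Log in by POSTing straight to the backend instead of driving the form.
Cypress.Commands.add("loginByApi", (username: string, password: string) => {
  return cy.request("POST", `${Cypress.env("apiUrl")}/login`, {
    username,
    password,
  });
});

// Usage: log in once per test without touching the UI
beforeEach(() => {
  cy.loginByApi("Katharina_Bernier", "s3cret");
});
```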

All right. So we are going to do the activity now. This is also going to double as break time. We're going to be working with the UI notification spec file, the test case that we were just looking at. So again, this is going to be under cypress/tests/ui/notifications.spec.ts. It's going to be on line 30, and you'll see it has the word flake — if you just search for "flake" in the file, that's another option too. This is a flaky test, right? We've talked about why; we've identified that it's a flaky test. So, using these instructions, we're going to fix this flaky test. Run the tests in your Cypress Test Runner — I'll go ahead and close this out — by selecting the notification spec. I would recommend adding .only after the it block (see the sketch below), that way you don't have to wait for all the tests to run. So you can add .only on line 30, then go ahead and select notifications and run the test in order to note the command that fails. You may have to rerun it a couple of times, because it's flaky — it's not failing every time — so go ahead and run it until you do get a failure, and note the command that fails. All right, so we got a failure the first time there; go ahead and take a look at the error. The Cypress Test Runner gives you pretty helpful errors: it'll tell you what line of code is failing, which is helpful, and it'll tell you what the error is. It's on line 53: button should be disabled. Then refactor the test so it's resistant to flake, using one of the following strategies — and you can try both and test them out. Option one is to increase the default timeout on the flaky command. I recommend using the docs for this if you need to; there's a timeouts guide in our docs that talks about how you can increase the time to retry. I'll go ahead and post that in the Discord, and in the Zoom chat, too. The second option is going to be using the model that we just talked through: identifying the slow POST request and then waiting for it to occur using cy.intercept with .as and cy.wait. There are a couple of different ways to see the requests: you can see them in the command log, and if you click on them, the information will be output to the console.
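As mentioned above, scoping the runner to this one test is just a matter of .only — the test title here is abbreviated:

```ts
// Only this test runs when the spec executes; remove .only when done
it.only("renders the like count and disables the like button", () => {
  // ...existing test body...
});
```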

11. Managing Flaky Tests: Timeout and Intercept

Short description:

You can have the network tab open while running the test to view the network requests, or check the source code for the flaky code in the backend likes routes. One solution is to increase the default timeout on the flaky command, allowing the test to wait the required time. Another option is to use the cy.intercept with .as and cy.wait pattern to intercept the POST request and wait for it to occur. Declare the intercepts as early as possible in the test code, and wait on the saved intercept after the action that triggers the API call.

You can also have the network tab open while you're running the test, and it will show all the network requests that are taking place. Or, if you're familiar and comfortable with looking at the source code, the code that's causing the flake is in the backend likes routes as well.

Okay, awesome. Go ahead and feel free to lower your hands now as well. Thanks so much for all of you who put them up. Looks like we have almost half. So, I can go ahead and talk through some solutions. And I did see some people posting some solutions in the Discord. That looked good. So, that's exciting. We'll go ahead and talk through the options now. Let me go ahead and get my code pulled up here back in the window.

All right. This is our test code. When I run it, it's flaky. Not great. So, the first option that we talked about was to increase the default timeout on the flaky command. Let me go ahead and get this pulled up. We saw there that the flaky command — the failing command — was the cy.get on the data-test like button, on line 53; we have that really helpful error message from Cypress. So we come here to line 53: how can we increase the default timeout? Timeouts can be configured at the block level — you can do it for a describe block, or a context block if you use context — they can be set at the command level, passed as an option to any individual Cypress command, and you can also increase the default timeout globally in your cypress.json. So if you just have a really slow app and you want to give everything a little extra time, you can do it there, too. So, why would you want to use a timeout and not just an arbitrary cy.wait? Say, for example, with this like button, we go ahead and increase the timeout to 10 seconds — just give it a bunch of time, who knows how slow it's going to be. If instead we were to do cy.wait(10000), it's going to wait 10 seconds every single time, no matter what. So your test is now going to be 10 seconds slower every single time. Whereas if we increase the default timeout, it will only take 10 seconds if it needs 10 seconds. If it needs four seconds, great; if it needs six seconds, great — it's going to use the time that it actually needs. So you're increasing the maximum allowable time, versus saying, please wait this much time every single time. So we go ahead and save that — and again, we're passing this as an option to the command itself, so you can just do a comma and then pass this timeout option with the milliseconds. We save it, Cypress automatically detects the change and reruns the test, and let's take a look at what that looks like. One thing you'll notice, too, is that as it's running, the loading bar is bigger now: it gave us 10 seconds instead of 4 seconds, so the bar is going down slower than it was previously. So, there we go — voilà, passing the first time. So that's one way to do it. Now, it's not necessarily the most elegant way, for a couple of different reasons. Number one: what if one day it's 11 seconds? What if one day it's 12 seconds? Hopefully not, but you'd have to manually update these timeouts. It's also something that you have to maintain at the command level. And what if this network request happens a couple of times in a test, and we want to wait for it multiple times? So this is kind of the quick-fix method. The other option is to use the cy.intercept with .as and then cy.wait pattern to intercept the POST request and wait for it to occur. So what does that look like? We want to declare our intercepts as early in the test code as possible. We actually have some existing intercepts here, if you scroll up, in our beforeEach, because these are ones that we leverage heavily throughout all the tests in this spec file. So you could do it in your beforeEach hook. Just to make it easy, we'll go ahead and continue working within this one it block that we have. But if you see cy.waits thrown throughout the test code, those are defined up in the beforeEach.
So, we'll go ahead and do it here. But you do want to declare it as early as possible, because you need to make sure that it's declared before the network request happens. I've seen people throw them in the middle of their test code — like they'd declare it here and then wait for it there — and it doesn't get registered fast enough. So we'll go ahead and declare our cy.intercept. This is a POST request that's going to the likes endpoint, and it's going to have a transaction ID; we can just wildcard that. And then we want to go ahead and save it with .as. I like to follow the method-plus-endpoint naming, so: postLikes. Sometimes you'll see getUsers, getLikes. I think it's descriptive, and it also lets you know what the pattern is that you're following. So now we can use this postLikes in our test code. This is the action that triggers the API call, and this is the command that's flaky, because it needs that call to have occurred. So here's where we'll want to put the wait in our test code. We'll use the @ symbol — that's how we reference anything saved with .as — and we'll go ahead and put our @postLikes. And again, if you're using VS Code and you have IntelliSense — it's already installed here — this shows you the syntax.
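Here's roughly where both fixes land, sketched with the data-test helper and route described above (not the spec's verbatim code):

```ts
// Option 1: quick fix — give just this query more time (10s instead of 4s)
cy.getBySelLike("like-button", { timeout: 10000 }).should("be.disabled");

// Option 2: declare the intercept early, then wait for the slow POST
cy.intercept("POST", "/likes/*").as("postLikes");
cy.getBySelLike("like-count").should("contain", 0);
cy.getBySelLike("like-button").click(); // fires the POST
cy.wait("@postLikes"); // proceed only after the request completes
cy.getBySelLike("like-button").should("be.disabled"); // no longer flaky
```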

12. Handling Flaky Tests and DOM-Related Flake

Short description:

In this activity, we used the Cy.intercept.as and Cy.wait methods to handle flaky tests. We also discussed instructing Cypress to run tests multiple times and the possibility of adding a test burn-in feature. Moving on, let's explore DOM-related flake and how Cypress addresses it. Cypress has built-in functionalities like query-command-retryability, which retries commands and allows time for DOM slowness before failing the test. By understanding this chain of command, you can leverage it to fight DOM flake by ensuring the chain is re-triable. This helps in re-querying the DOM whenever assertions fail. It's important to avoid doing the same assertion against the same element without re-querying the DOM. Let's continue with the discussion on DOM-related flake.

It gives you an example when you hover over it, right? So you can see this is the pattern that we're using: cy.intercept with .as, and then cy.wait. In that example, we're actually making assertions on it, but in this case, we're just waiting for it to complete — that's all we need. So now, if we go ahead and save that, again, Cypress is going to rerun the test. And just hopping over to the Discord... and again, no longer a flaky test. If we look here, the Cypress Test Runner actually keeps track of all of our intercepts. So we have here: this is the POST request that occurred. If we hover over this, we can see that it matched the cy.intercept spy with an alias of postLikes. We can also see here that we waited for it and then found the alias. So we're able to track this along with the command log as well.

So there's a question about instructing Cypress to run a test a certain number of times, so you can check that your fix isn't just a lucky streak. We'll talk about this a little later, but there is a way to do it yourself using what's called the Module API. If I go back to our lovely docs: the Cypress Module API essentially allows you to orchestrate test runs however you want, like using a node module. So you could essentially say: run the test, run the test, run the test. But the feature you're describing is something we're actually going to be adding; it's in development consideration, on our roadmap for the Cypress Dashboard. It's something we call test burn-in, where the first time you add a new test, it runs it a bunch of times to ensure that it's a quality test and not flaky. We'll talk a little more about how you can leverage retries to ensure it's a good test, but essentially, that's something we've gotten feedback on, and it's a really good idea. Right now, you would have to orchestrate it yourself. But yeah, good questions. Awesome. So did anybody have any other questions about this activity? Do you all feel more comfortable now with the cy.intercept/.as and then cy.wait method? Feel free to throw any questions in the Discord chat. But hopefully you found that helpful. Awesome. Seeing some thumbs up. Nice. All right. So let's talk about DOM-related flake. Cypress has a couple of built-in functionalities to help address flake on the DOM level, so I'm going to walk through those first. These are things the Cypress Test Runner already does under the hood. One of them is query command retry-ability. What Cypress will do is retry a command, and it allows up to four seconds by default; again, as we just saw, you can increase that default timeout. Before failing the test, it gives it some time. It says, hey, just keep retrying, we may have some DOM slowness here, we don't want to fail right away. So let's look at this example. We have an it block that adds two items to a to-do list. We get the new-todo input, we type the first item, we type the second one, and then we make an assertion that the to-do list should have a length of two. A very simplified example. So if we look at what this is: cy.get is a command. It takes a selector, but it's also a command, it's doing something. And .should is an assertion. If the assertion that follows the cy.get command passes, then the command finishes successfully. But Cypress will also do this automatically for you: if the assertion that follows the cy.get command fails, then the cy.get command will re-query the DOM again until the command timeout is reached. So if we get the to-do list items, assert they should have length two, and it comes back with one, we won't just retry the .should; we go back and re-query the DOM for the .get. That's really important for understanding how the chain of commands works, because you can leverage this to fight DOM flake by ensuring the chain is retriable. You want to leverage this pattern as much as you can, so that Cypress re-queries the DOM whenever the assertions fail.
Otherwise, you're just going to be doing the same assertion against the same element over and over, and it's never actually going to be updated. So what could cause that to happen? Again, right now it's waiting. The list had zero items, but Cypress is re-getting the DOM element over and over again, and then finally it passes. So it really is re-querying the DOM with that .get over and over.
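Here's that to-do example as a small sketch (the selectors are illustrative):

```js
it("adds two todos", () => {
  cy.get("[data-test=new-todo]").type("buy milk{enter}");
  cy.get("[data-test=new-todo]").type("walk the dog{enter}");

  // If this assertion fails, Cypress re-runs the cy.get, re-querying
  // the DOM, until it passes or the command timeout is reached.
  cy.get(".todo-list li").should("have.length", 2);
});
```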

13. Managing Chain Selectors and Assertions

Short description:

Only the last query is retried. If you have chained selectors, use a single query command if possible. Alternatively, use a more specific selector, or alternate commands and assertions to ensure you have the correct version of the element before proceeding.

Something to keep in mind: only the last query is retried. So if you have a cy.get and then you chain a .contains after it, or, as I've also seen, cy.get and then .find, or .children, or .parent, or .its, only the last query command will be retried. If you have cy.get, .contains, .should, the cy.get will not be retried; that initial selector always stays the same. The .contains will retry, it keeps re-querying whatever it finds with that text, but the entire chain will not. If you instead leverage cy.contains and pass through both the selector and the text, then it re-queries that entire cy.contains. So whenever you have chained selectors (I see this sometimes with cy.get and then .children) it will re-query the children, but it may not re-query that initial get. So you can either use a single query command if possible, like cy.contains with both the selector and the text, or use a really specific selector where you're zeroing in on exactly what you need instead of having to traverse the DOM. Or you can alternate commands and assertions to make sure you have the right version of the element before proceeding and chaining off of it. If you absolutely have to get a list and then get a child, and you can't just zero in on the child element, then what you can do, and this is the example here, is cy.get(selector).should('have.length', 3) before you go up to the parent, or down to a child, which is probably the better example because it's more common. Say we get a list and we didn't have that should-have-length-of-three. We might be grabbing a list where only two of the elements have rendered so far, and again, that get is never going to be re-queried later in the chain. So we want to make sure we have the right version of that element before we proceed. In this case, we have cy.get(selector).should('have.length', 3). Now that we know we have the correct version of that list, it's got three children, or whatever the assertion may be, if we need to assert that it has a certain state or shows certain content, we can go ahead and proceed. Otherwise, if we only have assertions at the end, Cypress will only re-query the last command, and maybe we never had the correct element to start with. So alternate commands and assertions around areas of your application where you have to be very specific. I see it a lot with select dropdowns or forms, where you have really complex components with lots of child components instead of a single parent, maybe dynamically rendered with lots of different data, so it's a little bit hard to get into. You'll want to make sure you have the right version of each selector before you proceed.
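A few hedged sketches of those options (selectors are illustrative):

```js
// Only the last query in a chain is retried: .find() retries along with
// the assertion, but the cy.get() before it does not.
cy.get("[data-test=todo-list]").find("li").should("have.length", 3);

// Safer: assert on the queried element first, so you know you have the
// right version before chaining off of it.
cy.get("[data-test=todo-list] li")
  .should("have.length", 3)   // retried until all three items exist
  .first()
  .should("contain", "Buy milk");

// Or collapse selector and text into a single retriable query.
cy.contains("[data-test=todo-item]", "Buy milk").should("be.visible");
```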

14. Handling Assertions and Best Practices

Short description:

If you have a cy.get and then a .click and then an assertion, and the assertion fails, what's happening is that .click passes through the subject. .click will automatically retry until all chained assertions have passed. Detached-from-DOM errors occur when there is a DOM refresh or state change after the query and before the action. Cypress Test Runner is open source and feedback is welcome via GitHub issues. Whether writing long chains without assertions in between is a problem depends on the application; it can lead to more flakiness and inconsistency. We'll have another coding activity and then discuss best practices, examples, and Q&A.

Yeah, so good question, Blaine. So exactly: if you have a cy.get and then a .click and then an assertion, and the assertion fails, what's happening there? You're chaining an assertion off of .click. Typically you chain an assertion off of a query, but I guess you could do that. What happens is that .click should pass through the subject. I don't know if it'll retry the .click. Let's find out; that's a good question. All of our API methods have documentation about this. So, yes, okay: .click will automatically retry until all chained assertions have passed, and there's also information there about subject management. But I don't believe it'll retry the get. Yes, so it won't. cy.click will automatically wait for the element to reach an actionable state, it requires being chained off a command that yields an element, and it will automatically retry until the chained assertions have passed. But it's going to retry the click, not the get. So you want to ensure that you have the correct element before proceeding with the click. Awesome. Okay, I hope that's helpful. Another thing I wanted to share here, and this is in the slides: we had a webinar, wow, last year already, about using code smells to fix flaky tests in Cypress. In it, Josh went through three different examples of detached-from-DOM errors and how they related back to the source code. I also wanted to share this blog post that Gleb Bahmutov wrote about detached-from-DOM errors and what they can mean in your underlying source code. Detached-from-DOM errors are typically caused when there's a DOM refresh, or a state change that causes a re-render, after you use the get and before you do the action. We have a couple of open GitHub issues with requests and ideas on how to address this, and we've done some additional error handling and better error messaging. But if you think about it, that's essentially what's happening: you get the element, then your front-end application re-renders the DOM, and the element you had previously isn't there anymore. You're trying to click on an element that no longer exists in the DOM; it's been detached. So what we typically need to do is ensure that the element we have will be stable before we action it. We'll take a look at an example in the next activity. And one more thing: we get a lot of feedback around this behavior, and we're always open to feedback via GitHub issues. The Cypress Test Runner is open source, we use GitHub, so if you have any requests about behavior or questions, you can always use that repository to submit feature requests. I think we may have some open ones as well, so you can go in and add comments there too. Then another question: is writing long chains without any assertions in between bad in general? Not necessarily, Claire, it's a really good question. It depends, which is always the answer, but it depends on what you're doing with those chains. One consideration is whether you're doing a lot of actions versus selectors.
Now, if you have repeated selectors in a chain, I would recommend putting assertions in between them, because otherwise only the last one will ever be re-queried. But if you have a lot of actions chained off each other because you just have to do that a few times, it's probably not a best practice, but it's not a bad practice either, if your application can handle it. If you have a lot of flake, a lot of inconsistency, or it's causing performance issues, then maybe don't. It's all going to depend on your application. But if you look at some of the Cypress Real World App tests, which is what we're leveraging for best practices here: we have a .should, then we get the first element, we have another .should, then an .and. So we have some chains, but they're never super, super long, right? Even where we have things like getBySel and should, we have quite a few assertions bundled into those chains. So if you have super long chains, I wouldn't say it's automatically bad, but it could definitely lead to more flake and more inconsistency. All right, so we're going to do another activity; we're also coming up on the second hour here. This is the second coding activity we'll do. The rest of the time will be used to talk through more best practices and examples, and then some Q&A. So let's take some good time here. I have about 15 minutes, but let's maybe take a little bit of a break as well. I'll walk through it first. We're going to be using a different spec file: the new transaction spec, line 259. Let me get that pulled up. We have our new transaction spec, again still under UI, and line 259 is where the instructions are; line 262 is where the test code starts. I would definitely recommend popping a .only on line 262 to run only this one test. So what's happening here? Let's run this test. We'll stop this one, and in our Cypress Test Runner we'll look for the new transaction spec. Because I've added the .only on line 262, it will run only that single test. So let's see what's happening here. And yeah.
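Before we dive into the activity, here's the guard-then-act idea from a moment ago as a small sketch (the selector is illustrative):

```js
// Assert your way to a stable, correct element before acting on it,
// rather than clicking whatever the first cy.get happens to find.
cy.get("[data-test=transaction-item]")
  .should("have.length", 3)   // retried until the list has fully rendered
  .first()
  .should("be.visible")
  .click();
```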

15. Understanding Element Detachment from the DOM

Short description:

We have a long error message that occurs when the element is detached from the DOM. The test intentionally demonstrates an element becoming detached from the DOM. The search request triggers a refresh of the DOM, causing the element to detach. By intercepting that request and waiting for it to complete, we can make the test less flaky. The test code involves the user list search form, which fires off a request whenever there is a change in the text field.

So this is probably going to fail every time; I think this is pretty flaky. So what is happening here? We have a really long error message: we timed out retrying after four seconds, and the .should failed because the element is detached from the DOM. With this long error message we get a "learn more" link that goes directly to a section in our docs about the error. Cypress has a bunch of custom error messages that are meant to point you in the right direction, and this one walks through an entire example of how to fix the test code: query for newly added DOM elements, understand when your application renders, and guard Cypress from running commands until a specific condition is met, usually by writing an assertion or waiting on an XHR. So I wanted to point that out. In this case, the element we're working with is this "Kaylin" list item; that's what's getting detached from the DOM. So what's happening here? Let's take a look at the instructions. The test is intentionally flaky and demonstrates an example of how an element can become detached from the DOM. Let's see what's happening. We interact with the search field, and a GET request fires off. Then we get our list items and look for the first one. But once that request completes, and you can see this if you look at the application source code, that's what actually triggers a refresh of our DOM. That is what's detaching the element. We can see this if we inspect, look at the network tab, and rerun the test. It's interesting to watch because it shows the requests in progress and you can see the milliseconds. It was really fast, but what's happening is that search API request, once the response comes back, actually triggers a refresh of our DOM. So, understanding what triggers re-renders of your DOM is really important for debugging these cases. Because we know it's related to a network request, we can leverage the same pattern we used last time. When we interact here, that fires off the request, and then, with no wait time at all, we immediately grab that user list item, then we have the .first(), and then the .should('contain', ...). So we grab the element here, and somewhere between here and the .should, the request completes, the DOM is refreshed, or re-rendered, whichever term you prefer, and the element is detached. By the time we make our assertion, it's been detached from the DOM. Let's take a look at the solution. Again, there are a couple of ways to solve it; if you looked at the source code and saw what was causing it, that would be one. But let's start with what this looks like here. I'll drag my code back over. Again, we're using the cy.intercept/.as/cy.wait pattern. When we looked at the test code, we were able to identify the request being fired off. So this actually isn't being caused by the click; I misspoke earlier. It's actually being caused here.
So when we type in "Kaylin" and hit enter, that's actually what causes her to show up as the first item, but it takes a moment. We were clicking outside, which is the next step, but the request doesn't resolve immediately, and when the response comes back, that's what causes the re-render. So what we've done in the test code, to make it pass and make it less flaky, is intercept that GET request. We don't even have to pass the method; it's matched by default. If you want to be more prescriptive you can, but I just go for the simplest version. It goes to our users endpoint with a search path and a wildcard, aliased as searchUsers, and then we wait for @searchUsers here. Just to dig a little deeper, if you want to investigate why this is happening: this involves our user list search form, the React component we're working with here. We can see that whenever there is a change in the text field, we fire off the user list search handler. So any time we type and there's a change, it fires off that request. It's searching on every single change to that text input, which is fine; we just need to understand that in order to test it. From a user experience perspective, the user doesn't really care that the item is getting detached from the DOM and refreshed every time; they don't notice it. But from a testing perspective, we need to understand it.
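Here's a hedged sketch of that fix. The selectors and endpoint are illustrative, not the exact Real World App code:

```js
// Intercept the search request, then wait for it before touching the
// list, so the re-render happens before we grab the element.
cy.intercept("GET", "/users/search*").as("usersSearch");

cy.get("[data-test=user-list-search-input]").type("Kaylin{enter}");

cy.wait("@usersSearch"); // let the response come back and the DOM settle

cy.get("[data-test*=user-list-item]")
  .first()
  .should("contain", "Kaylin");
```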

16. Handling Detaching Issues and Test Retries

Short description:

Every time there's an onChange in the text field, it fires off the user list search method with the query. A 200-millisecond debounce is what causes the delay. Understanding how this interacts with DOM refreshes is important. The intercept's query property can be used to match a specific request. Test retries can be configured in the Cypress test runner.

So every time there's an onChange in this text field, it fires off this user list search method with the query, and that query is, obviously, whatever's in the input. This is defined in our transaction create container. We can see here that a debounce has been added, and the debounce delays by 200 milliseconds before sending off that payload to our back end with the query. So the fact that this is debounced is actually what's causing the delay. But again, we can solve for that. A couple of other things to keep in mind, beyond understanding how the DOM gets refreshed: if we were, say, using XState, and instead of a function within the component firing off the request we were updating our state or leveraging our global store, we could actually follow along with what's been dispatched and wait for the dispatch to complete before proceeding. In this case, though, we just waited for the HTTP request to come back before proceeding. Can you target the user's first name within the cy.intercept? Yes, you absolutely could. That's a really, really good question. With cy.intercept you can do a lot of things. In one of the examples we looked at in the slides, let me find it here, for network requests: yeah, we're actually using a query, right? Query is one of the properties in the route matcher object that you can use to specify the request. So if you really wanted to get granular and say, yes, we want to wait for this particular request, which is actually a really good best practice, because there could be different requests firing off to that search endpoint and we want to make sure we're waiting for the right one, then you can leverage the query property on that object and pass through a variable: whatever username you're using in the test, in this case Kaylin, or whatever the expected term is. So yeah, really great question, really great idea. And Lars, that's a... oh, sorry, that's an unrelated question, about making the test case more readable. Yes. We're talking about command logging in the Cypress command log. That is correct: essentially, if you're using methods that are defined outside of the test case, or you're pulling them in, you may not get as helpful information in the command log unless you're leveraging something like logging customizations. Anyway, sorry, got a little distracted there. So that was how to handle some detaching issues, and I really wanted to show the code so you understand how to follow that trace into the source code and see what's causing those re-renders. How was that? Any questions on that activity? Hopefully you're feeling much more comfortable now with the intercept-as-wait pattern for handling a lot of the flake you may be experiencing. So that's when the flake can be addressed with test code. Sometimes it can't, and so we have to talk about how to address all the different types of flake. Awesome. Yeah, thanks for that example, that's perfect. Yep, you can pass that through directly as part of the path as well.
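As a sketch of that route-matcher idea; note the query-string key q is an assumption here, so check what your app actually sends:

```js
// Use the route matcher's query property to wait for one specific
// search request rather than any request to the endpoint.
const searchTerm = "Kaylin";

cy.intercept({
  method: "GET",
  pathname: "/users/search",
  query: { q: searchTerm }, // key name is illustrative
}).as("searchForUser");

cy.get("[data-test=user-list-search-input]").type(searchTerm);
cy.wait("@searchForUser");
```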
Yeah. Okay, so now back to flake. We talked about network-related flake earlier, but we also have environmental flake: flake related to things outside the test code itself, sometimes even outside the application. I always say we're at the mercy of our source code and our test environments. We're not testing perfect things, and we're not testing in perfect environments. So there are a couple of strategies you can take to mitigate issues that originate outside of what you're writing in your test code. The first one is test retries. Test retries is a function of the Cypress Test Runner. For those who may not be familiar: Cypress is the Cypress Test Runner, the Cypress app, which is free and open source and always will be. That's what we've been using this entire time. On top of that, we also have the Cypress Dashboard. The Dashboard is essentially a SaaS product that lets you record your test results to a location where you can see them and get analytics. We'll talk about some of those functionalities, but I wanted to note that a lot of what we're discussing lives in the Test Runner itself. Test retries is configured in the Test Runner; you don't have to use the Dashboard for it, but the Dashboard can be helpful because it shows you those attempts. You can configure retries simply, by setting a single number for how many times you want it to retry, or by mode, by passing an object to the retries property with runMode and openMode. Sometimes when you're running locally, you don't want it to retry. I actually turned it on because I wanted to leverage it for this workshop. But in run mode, which is typically CI, you want it to retry to get that information. So, let me close all this out and make it nice and clean. That's going to be in your cypress.json; here I have runMode and openMode. This is one I upped for this workshop so you could see how it works.
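In cypress.json, that configuration looks roughly like this (the counts are just examples):

```json
{
  "retries": {
    "runMode": 2,
    "openMode": 1
  }
}
```

Setting openMode to 0 is common, so local interactive runs fail fast while CI runs retry.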

17. Cypress Dashboard and Flake Management

Short description:

In the Cypress dashboard, you can view screenshots, videos, and information for each test attempt. The Cypress Dashboard automatically detects flaky test cases, flagging them and providing flaky test analytics. Test retries are used to identify flaky tests and determine their severity. The flakiness of the test suite can be analyzed overall or by branch. The Cypress Dashboard also provides alerts for flaky tests and offers flake management information. Test burn-in, a roadmap feature, will automatically retry new tests multiple times to ensure their reliability. Orchestrating tests and considering factors like environment, machine resources, and hosting can help manage flakiness. The Cypress documentation provides further guidance on these topics.

And then in the Cypress Dashboard, you can see the screenshots, videos, and information for each attempt. Just to show you what that looks like: with the Cypress Real World App, we use the Cypress Dashboard to record all the results of our test runs, and we make those public. From the GitHub repository, if you click on the Cypress tests badge, it takes you to our public dashboard, where you can see everything I'm about to show. So this is the branch we're using today; it has a lot of built-in flake in it. If we click into a failed test case, for example, we can see that it was attempted three times, that it had the same error all three times, and that this error caused 24 failed test results. And we can see the screenshots and the video from each attempt. That shows how test retries can be leveraged. It will also show you flaky tests. As we said earlier, a flaky test is one that fails but then passes. The Cypress Dashboard automatically detects these flaky cases, where on a subsequent retry the test ultimately passed, and flags them. It will say that this test is flaky, tell you how many flaky tests there are in each run, and collect all those flaky test cases together in the flaky tests analytics. That's all based on test retries: it identifies flaky tests, and flake severity, based on the number of times a test is retried. That's the only indicator we leverage to define a flaky test case, so you have to use test retries, enabled in the Cypress Test Runner. You can see artifacts, and you can also see some historical information. And again, if you want to take a look at the Cypress Real World App, all of this is public. In our flaky test analytics, we have an overall flakiness of 18 percent across the entire suite. What's more helpful, though, is to look at it by branch. In the develop branch, we only have 3 percent flake. But if we go to our demo branch and deselect develop, we see that a lot of the flake is coming from this branch, which instead has a 16 percent flake rate. And if we look at the flaky test cases, we'll recognize them from the ones we used today: "user A likes a transaction of user B". We can see the runs it was flaky on, the errors that caused it, which were the like button and the transaction item, and the test code history, the last time it was changed. Just some information about those flaky cases and their severity. I mentioned before that more things are coming with flake management. You do get alerts back to GitHub pull requests, and also in Slack, whenever you have flaky tests. We don't want you to just say, "OK, well, it passed eventually, I guess we're good," because, as we talked about, flake can be an indicator of issues with the way the tests are written, or with the underlying application or environment. It's helpful to see patterns across your entire test suite of what's causing the flake. So, for example, say we're seeing here that, hey, you know what:
a lot of these are coming from the transaction feed, notification, and new transaction specs, and a lot of them are related to things like data not populating. Maybe we have a flaky API endpoint, or maybe we have a debounce method. Obviously those are contrived examples, but they could point to underlying causes of flake across your whole suite, not just on the individual test level. The other thing I mentioned was test burn-in, which is coming; it's on the roadmap. Essentially, whenever new tests are added to your test suite, it would identify them and automatically retry them multiple times. It's supposed to be a configurable number, so five or ten or whatever it may be, and it checks for flake. That way you can battle-test new cases before introducing them into your test suite, to see if they're actually good tests. I'm not sure if that would cover changed tests, which was the earlier question: if I make a change, how can I ensure it's a good change? That's really interesting feedback, so definitely make sure to get that to our team. Again, you can also orchestrate some of this yourself. We have some really good blog posts around orchestrating tests, like running changed specs first, or running the modified areas of your application first. You also get CI recommendations in the Dashboard. But I wanted to point out this section in our documentation. CI, obviously: there's a lot of variability in testing an application in CI, right? There's the environment, there are the machine resources, there's where the application itself is hosted. Are you building the application in CI every single time based on the code changes? Are you running the tests on the same machine that's executing the application? A lot goes into it, all super variable. But we do have this section in our documentation, and it can give you some indication that if you're having flake or performance issues in CI, it could result from machine requirements. We also have example information in our Cypress documentation about how we execute the Real World App and the machine sizes we use.
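Until test burn-in ships, one rough way to orchestrate a burn-in yourself is with the Module API mentioned earlier. This is a sketch; the spec path and run count are illustrative:

```js
// run-burn-in.js: run one spec several times and report failures.
const cypress = require("cypress");

async function burnIn(spec, times) {
  for (let run = 1; run <= times; run++) {
    const results = await cypress.run({ spec });
    const failed = results.totalFailed > 0;
    console.log(`Run ${run}/${times}: ${failed ? "FAILED" : "passed"}`);
  }
}

burnIn("cypress/tests/ui/new-transaction.spec.ts", 5);
```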

18. Test Data Management and Flake Reduction

Short description:

When dealing with environment-related flake, it's recommended to consult the documentation and utilize the Cypress dashboard for machine recommendations. Test data management plays a crucial role in reducing flakiness. Tests should be independent and idempotent, allowing for parallelization. Seeded data and test-specific endpoints can ensure consistent and reliable data. Leveraging API or front-end state management, as well as network stubbing and fixtures, can further enhance test data management. Feel free to ask any questions or seek clarification.

So if you're having environment-related flake, something that's only happening in CI, or tests slowing down and crashing in CI, I definitely recommend checking out that section of our documentation. I also wanted to point out that the Cypress Dashboard will give you recommendations on how many machines to run your test suite against, to maximize the time savings and ensure your specs are split across the right number of machines, so runs stay quick and you get those resource savings.

Okay. So the next thing I want to talk about is test data management; that was one of the causes of environmental flake. This goes back to that really big bullet point from earlier: when your tests are flaky for anything other than the functionality that you are testing, that is bad flake. A lot of what I see is failing or flaky tests related to test data management. Test data management refers to what data you're using to test, how it's updated, how it's maintained, and how you reference that data in your test code. I've seen everything from hard-coding "check for this content," to using fake data passed through variables and generated every time, to using fixtures or static data and seeding a test database. There are a lot of different strategies you can take. But here are some general best practices for test data management to reduce environment-related flake. Tests should be independent of each other: you shouldn't set up data in one test that you're going to use in a different test. Tests should be, I think the word is idempotent, I can never say it, but essentially you should be able to put a .only in front of any it block and have it run and pass. They should not depend on each other. That's a good practice. Yeah, idempotent. I don't know how to say it. It will also help you when you go to parallelize, because Cypress parallelizes by spec file. If you have tests that interfere with each other running on parallel machines, I've seen that cause issues too when test data management isn't good. And when possible, use seeded data. With the Cypress Real World App (and on my slides.com profile I have a presentation around test data management in the Cypress Real World App, which I'd definitely recommend checking out), you may have noticed there's a db:seed command that runs in the beforeEach hook. That's literally re-seeding the database in the beforeEach block, so we always have clean, fresh data every single time. The other thing we're doing is leveraging an endpoint specifically for test data. Let me pull a test case back up. (Zero means no retries, by the way, to answer the question there.) So if we come back to one of our test cases here, under UI, this is a new transaction. We're getting a user and getting transactions from our database. We have a cy.database custom command, where we grab a user and a username from an endpoint specifically designed to give us test data. That essentially guarantees we always get a value that exists in our database. Instead of saying, wow, I really hope that username exists, or I really hope that password exists, we're able to say: hey, test data endpoint, give me a username so I can log in. And we know for a fact it's going to work, because we just got it from that endpoint in our back end. Another option is to use your API or front-end state management to set up and clean up data for tests. As we talked about with those three custom commands: we had login via the UI, login via the API, and login via XState.
We also have a logout via XState and a switch user via XState. So if we don't care about our back end, because we've already tested it elsewhere, and we just need to switch contexts in order to test something in our UI, we can use our front-end state management to set up that data for that it block, without worrying about anything else affecting it. And again, when possible, leverage network stubbing and fixtures when you don't need to hit your real API. By all means, for your critical paths, if you want full end-to-end coverage, run through everything, hit your back end, and ensure those end-to-end tests work. But for other areas of your test suite that aren't dependent on that: login is the low-hanging fruit here, right? Some applications make you log in for every single test because everything is behind a login. You only need to test your login flow once. You don't need to test it in every single it block. So test it once, yes, with that full end-to-end coverage. But for other areas of your application where you're only testing certain feature areas, just log in via the API, or even better, via state management on the front end. That way you're not dependent on those.
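As a hedged sketch of that seeded-data flow: cy.database and loginByApi are the Real World App's custom commands, but the exact arguments and the password here are illustrative, not guaranteed to match the app's seed data:

```js
// Grab a real user from the test-data endpoint, then log in
// programmatically instead of through the UI.
cy.database("find", "users").then((user) => {
  cy.loginByApi(user.username, "s3cret"); // password is illustrative

  cy.visit("/");
  cy.get("[data-test=sidenav-username]").should("contain", user.username);
});
```

Because the username comes straight from the database, the test never depends on a hard-coded value that might not exist in a freshly seeded environment.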

Any questions on test data management, or anything I showed about flake management with test retries or the Dashboard? Let me hop back over to the Discord here. Okay, awesome. All right. So, I said we probably wouldn't take up the full three hours today, and we did pretty well on time. I think the previous run of this workshop took about two hours and 15 minutes or so; I wanted to make sure we had enough. I could choose between two or three hours, and I wanted to leave some time for general questions around Cypress, flake, what it's like in Atlanta this time of year, but probably mostly Cypress-related. Anything I can help answer or look up in the docs, feel free to use this time for. If you don't have any questions for me, please feel free to drop off if you like. The slides, like I said, are available, and the recording will be made available as well. Question here: what's the better way to add test data to an environment, an API request, or the database via SQL queries? It depends on how your data is set up. You could do both.

19. Database Access and Test Data Management

Short description:

You can have an API endpoint to seed the database, or tap into your underlying node process with cy.task. The Cypress Real World App uses cy.task and a local JSON database managed with lowdb for test data management. The app has a script to generate data based on predefined trends and business logic. The beforeEach hook uses a db:seed task to request the seed command from the test data endpoint. The endpoint, specific to test data, accepts the post request and executes a database command or function. The test data is then written to the database and can be queried for use in tests. The Real World App provides custom commands for database operations, including filter and find. The data can be fetched from the endpoint and passed back into the test code. Aliasing return values from cy.task may not be supported; the documentation can be consulted for clarification. Measures are in place to prevent data seed processes from running on a live database.

You can have an API endpoint that you use to seed the database, where that endpoint ends up running a SQL query to do the seeding. Or you can tap into your underlying node process with cy.task. It depends on where your database lives. If your database is on the same machine and you can use cy.task to run a SQL query to seed it, then you can definitely do that. But if your database is hosted somewhere else and you can't access it directly from the same process you're running Cypress in, then you typically need to use an API request.

So with the Cypress Real World App, we're using cy.task. I can dig into that a little more here. Give me one second; I'll pull up the slide I was talking about earlier. Okay: test data management, exploring the Cypress Real World App. There you go. So there's a local JSON database managed with lowdb. Even if you don't have a JSON flat-file database, if you have a database running on the same machine you're testing on, you could use the same approach. The database is seeded when the app starts and in between each test, based on the business logic of the application. We actually have a script that generates the data based on the kinds of data we need: for example, if we need transactions between friends, or a transaction with a certain dollar amount during a certain date range. So there's no need to create business logic within the tests, because the right kinds of users and transactions have already been created. If you have a really complex data model, that could be difficult to do with a script, but the goal is to keep the test code clear and not set up data within the test code itself. What we do is run the db:seed task in the beforeEach hook. That makes a request to our test data endpoint for the seed command. The endpoint, which is created only for test data in our back end's test-data routes file, accepts the post request, calls a database command or function, and sends a status back. The seed database function is in our database file; it actually does a readFileSync, so it's a node command being executed, and then we just do db.setState with the test seed. The test seed comes from the database seed JSON file generated when the application started up, and we write that to our database. So we can then use that test data throughout our tests, and we can query our database to get specific data to leverage in them. We have a database custom command, just a general command, and the database operation lets you do two different things: a filter and a find. Filter essentially gives you data results based on the lodash filter function, and find is based on the lodash find function. What this looks like in our tests is: we query the database, fetch the data, and pass it back into our test code. We go to that endpoint, pass through the entity, whether it's users, transactions, likes, bank accounts, whatever it may be, wait for that to come back, and use it in the test code itself. So that's how we do it in the Cypress Real World App. Again, it depends on whether you have access to the database, to seed it directly with a task or a SQL query, or whether you want to do an API request. Can you use an alias to store a return value from a cy.task? That's a good question. I don't think so, but let's check the documentation, as always.
Okay, so .as() requires being chained off of a previous command, and typically I see it used with routes or variables. So let's take a look at cy.task. Requirements: cy.task will only run assertions you've chained once, and it will not retry; it can time out. Hmm, it doesn't say in the documentation. That's a really good question; I can find out and post that as a follow-up. I don't believe so, just because typically I see .as() being used for either routes or variables, and I'm not sure about cy.task. I'll have to dig in and find out for you. Next question: what safeguards do you put in place to make sure those data seed processes don't run on a live database?
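For reference, here's the shape of a cy.task-based seed as a hedged sketch. The helper module and its behavior are assumptions, not the Real World App's exact code:

```js
// cypress/plugins/index.js (pre-Cypress-10 plugins file)
const { seedDatabase } = require("../../scripts/seed"); // hypothetical helper

module.exports = (on, config) => {
  on("task", {
    "db:seed": async () => {
      await seedDatabase(); // reset the local test database
      return null;          // a task must return a value; null is fine
    },
  });
};

// In a spec file:
// beforeEach(() => cy.task("db:seed"));
```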

20. Environment Variables, Timeout, and Plugins

Short description:

In a test environment, use environment variables to set the database URL based on the environment. Avoid making changes to production databases. Cypress has some built-in run timeout behavior, but no simple default timeout option. Plugins can be used to customize the command log and add symbols and emojis. Hitting a test-data-only endpoint won't mutate the database. Thank you all for your time and questions. Follow me on Twitter at ceciliacreates.

Yeah, so you definitely only want that to run in a test environment or against a test database. A lot of times you can just use environment variables to detect what kind of environment you're in. Obviously, in this case we're running these commands locally, on the machine we're running the tests on. This is just a local database; it's not production, and it's not even touching staging or dev. But typically I see people use different environment variables to set, for example, the database URL, meaning which host it's on, depending on what environment they're running against. And I would say you really never want to make changes to a production database; even if you're testing against production, that's really dangerous. So typically you'll point at your staging database, dev database, or testing database, whichever location that is, and you can set that host URL in your cypress.json. You can also have different Cypress config files based on the environment you're on. So you can say: we're on staging today, use this cypress.json with all of our staging variables; or we're testing on dev today, use the dev cypress.json with those config values in it. But yeah, just don't run the seed against production; I'd hate to see that happen.
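A minimal sketch of picking configuration per environment in the plugins file; the environment names and URLs are made up for illustration:

```js
// cypress/plugins/index.js: resolve settings per environment at startup.
module.exports = (on, config) => {
  // e.g. cypress run --env ENVIRONMENT=staging
  const environment = config.env.ENVIRONMENT || "dev";

  const hosts = {
    dev: "http://localhost:3000",
    staging: "https://staging.example.com",
  };

  config.baseUrl = hosts[environment]; // never point this at production
  return config;
};
```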

Okay: "we recently faced a big pipeline blocker from a test stuck in an infinite loop." Yeah, so, as far as a timeout on execution: Cypress has some built-in timeout behavior where a run will ultimately stop, right? I think that may be configurable; I'd have to check the documentation. There's a default where, I think it's around 30 minutes, if a run goes longer than that it will just crash, and I believe that's configurable. Let's see. The cypress run command line docs have all the options, so there's the config file option, but I don't see a timeout among the CLI options. If it does exist and is configurable, you'd probably do it in your cypress.json. So no, there isn't a simple default timeout option for Cypress runs. Database queries: is there a particular plugin? It's really going to depend on what database you're using. cy.task lets you tap into the underlying node process, and from there you can write node code however you like. There are some plugins as well; there are a lot of plugins, so definitely check out the plugins page. All of our plugins are tagged as either official, if they're from Cypress; verified, if they've been reviewed; or community. There are also some experimental ones on there, which is something to keep in mind when deciding whether to introduce one into a production application. If it's a supported plugin, fine; if it's experimental, maybe think twice. And the cy.log question, about the symbols and pictures, right? You probably noticed that we have "authenticating" with a little lock and the username. That's part of the Patterns and Practices material I posted as a response to the cy.log question. Let me pull that up. It was also a webinar; I didn't do it, the DX team did, called Patterns and Practices, and one of the things covered in that training is customizing the command log. It walks through how to add those messages, so it says "authenticating" with the username, and we actually put little emojis in there. Nice, thanks for the link there. Okay: "can someone change the environment variable?" Yeah, that's true too. If you're hitting a certain endpoint just for test data and grabbing values from it, it wouldn't actually be able to mutate anything in your database. Awesome. Any other questions? Feel free to come off mute if that's easier, or keep popping them in the chat. Awesome, all right. Great, well, thank you all so much for your time today, for the great questions, and for participating. I hope this was helpful and that you have some good strategies to take back to your team for mitigating the flake you're experiencing. Again, I'm Cecilia, a technical account manager at Cypress; I work with all of our users that leverage the Dashboard, and I also do things like this. You can follow me on Twitter at ceciliacreates. It's just my first name and the word creates.
It's usually a good place to keep up to date; I like to post a lot of Cypress talks, resources, and things that I see. But yeah, thank you so much again. Feel free to leverage the slides and share them with your teams, and hopefully we'll have the recording for this soon. Awesome, great, thanks everyone. Cheers, enjoy the rest of your day, and for those of you celebrating this week, have a nice holiday.

Watch more workshops on topic

React Summit 2023React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Featured Workshop
React Testing Library is a great framework for React component tests because there are a lot of questions it answers for you, so you don’t need to worry about those questions. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents- The different kinds of React application tests, and where component tests fit in- A mental model for thinking about the inputs and outputs of the components you test- Options for selecting DOM elements to verify and interact with them- The value of mocks and why they shouldn’t be avoided- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites- Familiarity with building applications with React- Basic experience writing automated tests with Jest or another unit testing framework- You do not need any experience with React Testing Library- Machine setup: Node LTS, Yarn
TestJS Summit 2022TestJS Summit 2022
146 min
How to Start With Cypress
Featured WorkshopFree
The web has evolved. Finally, testing has also. Cypress is a modern testing tool that answers the testing needs of modern web applications. It has been gaining a lot of traction in the last couple of years, gaining worldwide popularity. If you have been waiting to learn Cypress, wait no more! Filip Hric will guide you through the first steps on how to start using Cypress and set up a project on your own. The good news is, learning Cypress is incredibly easy. You'll write your first test in no time, and then you'll discover how to write a full end-to-end test for a modern web application. You'll learn the core concepts like retry-ability. Discover how to work and interact with your application and learn how to combine API and UI tests. Throughout this whole workshop, we will write code and do practical exercises. You will leave with a hands-on experience that you can translate to your own project.
React Summit 2022React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
WorkshopFree
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites- iOS/Android: MacOS Catalina or newer- Android only: Linux- Install before the workshop
TestJS Summit 2023TestJS Summit 2023
48 min
API Testing with Postman Workshop
WorkshopFree
In the ever-evolving landscape of software development, ensuring the reliability and functionality of APIs has become paramount. "API Testing with Postman" is a comprehensive workshop designed to equip participants with the knowledge and skills needed to excel in API testing using Postman, a powerful tool widely adopted by professionals in the field. This workshop delves into the fundamentals of API testing, progresses to advanced testing techniques, and explores automation, performance testing, and multi-protocol support, providing attendees with a holistic understanding of API testing with Postman.
1. Welcome to Postman- Explaining the Postman User Interface (UI)2. Workspace and Collections Collaboration- Understanding Workspaces and their role in collaboration- Exploring the concept of Collections for organizing and executing API requests3. Introduction to API Testing- Covering the basics of API testing and its significance4. Variable Management- Managing environment, global, and collection variables- Utilizing scripting snippets for dynamic data5. Building Testing Workflows- Creating effective testing workflows for comprehensive testing- Utilizing the Collection Runner for test execution- Introduction to Postbot for automated testing6. Advanced Testing- Contract Testing for ensuring API contracts- Using Mock Servers for effective testing- Maximizing productivity with Collection/Workspace templates- Integration Testing and Regression Testing strategies7. Automation with Postman- Leveraging the Postman CLI for automation- Scheduled Runs for regular testing- Integrating Postman into CI/CD pipelines8. Performance Testing- Demonstrating performance testing capabilities (showing the desktop client)- Synchronizing tests with VS Code for streamlined development9. Exploring Advanced Features - Working with Multiple Protocols: GraphQL, gRPC, and more
Join us for this workshop to unlock the full potential of Postman for API testing, streamline your testing processes, and enhance the quality and reliability of your software. Whether you're a beginner or an experienced tester, this workshop will equip you with the skills needed to excel in API testing with Postman.
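As a preview of the scripting and variable-management topics above, here is a small Postman test script sketch (written in a request's Tests tab); the response shape and the userId variable name are assumptions:

```js
// Postman test script sketch; the response shape and variable name are assumptions
pm.test('status code is 200', () => {
  pm.response.to.have.status(200);
});

pm.test('response contains an id', () => {
  const body = pm.response.json();
  pm.expect(body).to.have.property('id');
});

// Store a value so later requests in the collection run can reuse it
pm.collectionVariables.set('userId', pm.response.json().id);
```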
TestJS Summit - January, 2021
173 min
Testing Web Applications Using Cypress
Workshop (Free)
This workshop will teach you the basics of writing useful end-to-end tests using Cypress Test Runner.
We will cover writing tests that exercise every application feature, structuring tests, intercepting network requests, and setting up backend data.
Anyone who knows JavaScript programming language and has NPM installed would be able to follow along.
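As one example of the backend-data setup the workshop covers, tests often seed data through an API call before each run; a minimal sketch, assuming a hypothetical seeding endpoint:

```js
// Sketch: seeding backend data before each test via cy.request;
// the endpoint and payload are assumptions
beforeEach(() => {
  cy.request('POST', '/api/test-data/users', { name: 'Test User' })
    .its('status')
    .should('eq', 201);
});
```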
TestJS Summit 2023
148 min
Best Practices for Writing and Debugging Cypress Tests
Workshop
You probably know the story. You've created a couple of tests, and since you are using Cypress, you've done this pretty quickly. It seems like nothing is stopping you, but then - a failed test. It wasn't the app, it wasn't an error; the test was... flaky? Well, yes. Test design is important no matter what tool you use, Cypress included. The good news is that Cypress has a couple of tools up its sleeve that can help you out. Join me in my workshop, where I'll guide you away from the valley of anti-patterns into the fields of evergreen, stable tests. We'll talk about common mistakes when writing your tests, as well as how to debug and unveil underlying problems, all with the goal of avoiding flakiness and designing stable tests.
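A classic anti-pattern in this area is the hard-coded wait; below is a sketch contrasting it with waiting on the actual network response (the route and selector are assumptions):

```js
// Flaky: wait a fixed time and hope the data has arrived
cy.visit('/results');
cy.wait(3000);
cy.get('[data-cy=result]').first().click();

// More stable: wait for the actual response, then act
cy.intercept('GET', '/api/results').as('getResults');
cy.visit('/results');
cy.wait('@getResults');
cy.get('[data-cy=result]').first().click();
```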

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

TestJS Summit 2021
33 min
Network Requests with Cypress
Top Content
Whether you're testing your UI or API, Cypress gives you all the tools needed to work with and manage network requests. This intermediate-level talk demonstrates how to use the cy.request and cy.intercept commands to execute, spy on, and stub network requests while testing your application in the browser. Learn how the commands work, as well as use cases for each, including best practices for testing and mocking your network requests.
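A sketch of the two commands in action; the endpoints and fixture name are assumptions:

```js
// cy.request executes a real HTTP call - useful for API tests and data setup
cy.request('GET', '/api/users').its('status').should('eq', 200);

// cy.intercept can spy on a request the app makes...
cy.intercept('GET', '/api/users').as('getUsers');
cy.visit('/users');
cy.wait('@getUsers').its('response.statusCode').should('eq', 200);

// ...or stub it entirely so the test doesn't depend on a live backend:
// cy.intercept('GET', '/api/users', { fixture: 'users.json' });
```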
TestJS Summit 2021
38 min
Testing Pyramid Makes Little Sense, What We Can Use Instead
Top Content
Featured Video
The testing pyramid - the canonical shape of tests that defined what types of tests we need to write to make sure the app works - is... obsolete. In this presentation, Roman Sandler and Gleb Bahmutov argue which testing shape works better for today's web applications.
TestJS Summit 2022
27 min
Full-Circle Testing With Cypress
Cypress has taken the world by storm by bringing an easy-to-use tool for end-to-end testing. Its capabilities have proven useful for creating stable tests for frontend applications. But end-to-end testing is just a small part of testing efforts. What about your API? What about your components? In my talk, I will show you how we can start with end-to-end tests, go deeper with component testing, and then move up to testing our API, completing the full testing circle.
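For the component-testing step, a Cypress component test mounts a component in isolation; a minimal sketch, assuming a React project with Cypress component testing configured and a hypothetical Button component:

```jsx
// Cypress component test sketch; the Button component and its props are assumptions
import Button from './Button';

it('renders the label and reports clicks', () => {
  const onClick = cy.stub().as('onClick');
  cy.mount(<Button label="Save" onClick={onClick} />);
  cy.contains('Save').click();
  cy.get('@onClick').should('have.been.calledOnce');
});
```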
TestJS Summit 2021
31 min
Test Effective Development
Top Content
Developers want to sleep tight knowing they didn't break production. Companies want to be efficient in order to meet their customer needs faster and to gain a competitive advantage sooner. We ALL want to be cost effective... or shall I say... TEST EFFECTIVE! But how do we do that? Does the "unit" and "integration" terminology serve us right? Or is it time for a change? When should we use either strategy to maximize our "test effectiveness"? In this talk, I'll show you a brand-new way to think about cost-effective testing, with new strategies and new testing terms! It's time to go DEEPER!
TestJS Summit 2023
21 min
Everyone Can Easily Write Tests
Let’s take a look at how Playwright can help you get your end-to-end tests written with tools like Codegen, which generates tests from user interactions. Let’s explore UI Mode for a better developer experience, and then go over some tips to make sure you don’t have flaky tests. Then let’s talk about how to get your tests up and running on CI, debugging on CI, and scaling using shards.
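A minimal Playwright test sketch; the URL, form labels, and welcome text are assumptions:

```js
// Playwright test sketch; page URL and locator names are assumptions
import { test, expect } from '@playwright/test';

test('user can log in', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');
  await page.getByRole('button', { name: 'Log in' }).click();
  // Web-first assertions auto-retry, which helps keep tests from being flaky
  await expect(page.getByText('Welcome')).toBeVisible();
});
```

On CI, the suite can be split across machines with Playwright's shard flag, e.g. npx playwright test --shard=1/4.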