Synthetic Monitoring and e2e Testing: 2 Sides of the Same Coin


Despite the emergence of DevOps to unite development, support and SRE factions using common processes, we still face cultural and tooling challenges that create the Dev and SRE silos. Specifically, we often use different tools to achieve similar testing: case in point, validating the user experience in production using Synthetic Monitoring and in development using E2E testing.
By joining forces around common tooling, we can use the same tool for both production monitoring and testing within CI. In this talk, I will discuss how Synthetic Monitoring and E2E Testing are two sides of the same coin. Furthermore, I shall show how production monitoring and development testing can be achieved for JavaScript-based applications using Playwright with Typescript, GitHub Actions and Elastic Synthetics.

Carly Richmond
21 min
15 Nov, 2023

Video Summary and Transcription

The talk discusses the relationship between synthetic monitoring and end-to-end testing, emphasizing the importance of using a common tool set and shifting monitors left as code. Challenges of collaboration and shifting left are addressed, including existing silos, cultural barriers, and different priorities. The process of converting a Playwright test into a monitor is explained, along with wrapping the test as a monitor using a Synthetics project. Running and deploying the monitor are covered, emphasizing the importance of setting parameters and environment variables correctly. The talk concludes with the importance of monitoring, issue resolution, and collaboration across roles.


1. Introduction to Monitoring and Testing

Short description:

I'm Carly Richmond, a Senior Developer Advocate at Elastic. Today, I'll discuss the relationship between synthetic monitoring and end-to-end testing. Using different tools for these is duplicating effort. We'll explore an example of using Playwright, GitHub Actions, and Elastic Synthetics to shift monitors left as code and promote collaboration.

Hi, DevOpsJS! It's great to see you all today. My name is Carly Richmond. I'm a Senior Developer Advocate at Elastic. I'm a front-end engineer. And also, I am a bit of a testing nerd. And today I'm here to talk to you about monitoring, which from a DevOps conference, you probably expected, but I'm also here to talk to you about testing, which might have been a bit of a surprise. I'm here to make the case that synthetic monitoring and end-to-end testing are two sides of the same coin. Very similar constructs. And in fact, using different tools for these is effectively duplicating effort on the dev and op side. We're going to walk through an example of how this can be done for JavaScript web applications, making use of Playwright for JavaScript and TypeScript, GitHub Actions, and also an observability provider, which in this case is Elastic Synthetics, so that we can try and shift our monitors left as code and try and collaborate more together.

2. Challenges of Collaboration and Shifting Left

Short description:

In my experience, as a developer, I've faced challenges due to the lack of collaboration between developers, support, and testers. Shifting left, although a great idea in theory, often leads to a fractured experience. Existing silos, cultural barriers, and different priorities hinder collaboration. Quality becomes an afterthought, and using different toolkits for the same goals creates confusion.

So first, I need to set the scene of my experience because this is something that at the start of my career, I really wish I had. It would have solved several issues that I kind of battled through. So my first developer job, which was 2011, I was a bit of a rarity because I worked on a tiny team. We had to do everything. So we were developers, we were support analysts, we were dealing with outage situations, user acceptance testing, requirements gathering. I pretty much did everything and wore so many different hats.

And then later, it was only really when we ended up offloading things like some testing and some support where I got to work with individuals and actually learn some best practices rather than muddling along as I went. But then when I moved roles, I found that actually that prior experience is very, very rare indeed. In fact, the normal situation, which I still hear from speaking to many DevOps engineers and SREs is that it's a bit more like a game of Rock'em Sock'em Robots when it comes to developers and support kind of almost at loggerheads sometimes with the conflicting ideas. And quite often there's testers maybe in the background, unless we've ended up with the situation where developers are responsible for their own testing and validation.

Now, shifting left is not a new thing. And when I first heard about this, I thought, this is a great idea. This will lead to perhaps a little bit more harmony in that situation. Perhaps, you know, with the emergence of common practices, we can work together. Myself and other devs can learn more about how to build more maintainable applications. You know, maybe everything will be all great. And instead, I've seen something a little bit different happen, where actually in some regards, we've got all these great additional disciplines. If you think about SREs, DevOps engineers, prompt engineers, all these people that are able to dive deep into topics and show off expertise. But we still don't necessarily collaborate together very well. And this can still lead to a very fractured experience. And there's several reasons for that. The first is that, particularly in large organizations, those existing silos still persist. And in fact, DevOps is often adopted out of an existing production management function, meaning that it doesn't really align with developers because they're still having those kinds of cultural mismatches and barriers in place. Our empathy isn't great. We're not really great at trying to understand the other side of it, because SRE activities, ML engineer activities, developer activities, these things are all surprisingly different, to the extent where even the priorities that we have as teams are different and can struggle to align too. Sometimes quality is a bit of an afterthought. For developers, the notion of shifting left means that we've had to learn all sorts of wacky things around security, best practices, error handling, monitoring diagnostics. And actually, we need more help because otherwise we end up just covering over the cracks. But the biggest problem, to be honest, is that we're using very different toolkits, which sometimes makes sense for the role that we're doing.
But when we're using different tools to achieve the same ends, it doesn't really make sense anymore.

3. Challenges of Different Tools for Testing

Short description:

In my last engineering job, we faced challenges due to using different tools for end-to-end testing and synthetic monitoring. This led to different understandings of how the system should work and resulted in duplicate efforts. To address this, we need to come together and use a common tool set, shifting monitors left and using them as code. Today, I'll show you an example of how to achieve this using Playwright for JavaScript and Elastic Synthetics for observability. We'll also discuss the traditional flow, from local development to running monitors in the CI pipeline and deploying to production. The example we'll cover is the login flow, including manual text entry and adding items to the cart.

And in my last engineering job, the one that really got me was the fact that we're using different tools for end-to-end testing and synthetic monitoring. And I couldn't understand why. So before I go into the idea about how similar they are, let's first set the scene for anyone who's new to these terms.

End-to-end testing refers to the ability to use an automation framework, such as Playwright, like we're using today, or perhaps Cypress, or in a prior life when I was building Angular applications, Protractor, which thankfully isn't here anymore, to try and automate the user workflow, those operations they do to interact with their web apps, such as clicks and text entry, in order to try and validate that what we've built corresponds to that specification and will work as they should intend it to. Synthetic monitoring is where we basically, on a periodic schedule, run some form of script to test the availability of our application every few minutes or a few hours potentially. We might think that's something simple, like pinging an HTTP endpoint just to see that it's still alive, but with the emergence of scripted monitors in recent years, that can also mean performing user journeys, automating those same steps that they do to make sure that a user can indeed perform that workflow at that point in time in the system.

So we're automating the same thing. We're automating the user workflow, the clicks, and we're building them in different tools. So while I was building in Cypress in my final developer role, I had colleagues who were SREs that were writing these monitors in Selenium using Apica as a wrapper. And I just don't understand, because that leads to different understandings about how the system is intended to work, because the workflows aren't always necessarily functioning exactly the same way. And it's also leading to duplicate effort, fundamentally, too. So I think we need to bend that notion. I think we need to come together and try and use a common tool set and basically shift your monitors left so that we use monitors as code, which can then double as an end-to-end test specification. And we're going to show an example of how to do that today. I'm going to use Playwright for JavaScript, which is a Microsoft-maintained end-to-end testing library, and I have TypeScript examples here too. We have GitHub Actions so that we can run these tests in CI as part of any kind of pre-deployment or pre-integration checks. And then we need an observability provider because we're monitoring our app. We are going to use, in this case, Elastic Synthetics to wrap our Playwright test and allow that to capture the monitor activity and run it on a schedule against production. All the code and the slides I'm going to walk through today are available in that QR code that you can see off in the corner, and don't panic, it will come back at the end. But before we dive into the example, let's just get right in our head exactly how these elements fit in terms of a traditional flow.

So let's take the example of developers working on a new feature. So what they'll do is build out the feature alongside writing a journey file, a monitor that they can use as an end to end test making use of Playwright. And they will run that as part of their local development practice. Hopefully engaging in test driven development. They'll then push their changes, the monitor and the code changes into source control and undergo any reviews and checks running these monitors as an end to end test within the CI pipeline. Then when we come to deploy our application to keep our monitors in sync with the production app, we need to be able to push them out to where they need to run using API key authentication, which will then run either in your own location or within the location provided by the observability provider. And then in this case, persisting the results down to Elasticsearch and then being able to see the results in a nice shiny dashboard. So the example I'm going to cover today is the login flow that you're seeing flashing in front of you at the moment. So a simple manual text entry of entering username and password to go into an order screen and then start adding items to cart.

4. Converting Playwright Test into Monitor

Short description:

To convert the Playwright test into a monitor, we start by checking the login form and getting the button. We add the credentials to the text input boxes and verify their values. Then we check that the submit button is enabled and ensure that we navigate to the expected page.

So first, we need that Playwright test that we can then convert later into a monitor. You'll see here I've got a very vanilla test up in front of me that is basically going through, starting from line four. We are going to the login page. We're pulling out the login form using the data test ID attribute as part of this selector. This is what I would recommend as a practice, because the last thing you want to do, which I've done in the past, is couple this to CSS classes that you're also using for styling, or to complex CSS structures with nested parents, meaning that as soon as you move something or change something in the view, you break all your tests. So try and avoid doing that. Checking the login form is defined, then going off and getting the button, checking it's disabled, adding in our credentials. So this is just a single test I have. If you want to reuse your credentials, Playwright does have the option to do that by persisting credentials in a JSON file; check out the authentication documentation for that. But we are adding it to the text input boxes for the username, checking the value of the text box is actually what we added. Same thing for the password. We're then checking that the submit button is now enabled, and then going off and making sure that we're navigating to the page as we expect.
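The steps above can be sketched as a Playwright test roughly like the following. The `data-testid` values, URL, and credentials are illustrative assumptions, not the exact code from the talk:

```typescript
import { test, expect } from '@playwright/test';

// A minimal sketch of the manual login test described above.
// Selectors use data-testid so styling changes don't break the test.
test('user can log in manually', async ({ page }) => {
  await page.goto('http://localhost:3000/login');

  // Check the login form is present.
  const loginForm = page.getByTestId('login-form');
  await expect(loginForm).toBeVisible();

  // The submit button should start disabled.
  const submitButton = page.getByTestId('login-submit');
  await expect(submitButton).toBeDisabled();

  // Enter credentials and verify the inputs hold what we typed.
  const username = page.getByTestId('username-input');
  await username.fill('test-user');
  await expect(username).toHaveValue('test-user');

  const password = page.getByTestId('password-input');
  await password.fill('test-password');
  await expect(password).toHaveValue('test-password');

  // The button should now be enabled; submit and check navigation.
  await expect(submitButton).toBeEnabled();
  await submitButton.click();
  await expect(page).toHaveURL(/.*\/orders/);
});
```

This requires a browser and a locally running app, so it is run via `npx playwright test` rather than standalone.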

5. Wrapping Test as Monitor

Short description:

To wrap the test as a monitor, create a new Synthetics project. Use the init wizard to generate a sample project with skeleton code. The project includes example monitors in the journeys folder. The project configuration specifies parameters, such as the URL and environment variables. Playwright options and monitor settings are also configured. Transform the test into a journey by breaking it into smaller steps using the journey and step constructs.

So what we need to do next is wrap this as a monitor. And the way we do that is we create a new Synthetics project. So we'll have a global install, which you can see up here, then use the init wizard as part of the third command to generate that sample project, which will generate the skeleton code. So you see, you have the journeys folder. So these are example monitors that we're going to run as tests to show you how to get started.
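The install and scaffolding commands mentioned above look roughly like this; the project directory name is a placeholder:

```shell
# Install the Elastic Synthetics agent globally, then run the init
# wizard, which scaffolds a sample project with skeleton code
# (journeys folder, lightweight monitors, and project configuration).
npm install -g @elastic/synthetics
npx @elastic/synthetics init my-synthetics-project
```

The wizard prompts for your Kibana URL, API key, and project ID, which end up in the generated project configuration.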

You've then got lightweight heartbeat monitors. So if you're wanting to do things like pings, you can keep them all in the same repository as code; I'm not going to cover those today. And then you have the project configuration with all your settings. So if we move on to that, you'll see here, this is the config file. I am specifying params. So I've got the URL, and you can see that this is localhost. So this is what I would ping under a default scenario. But if we scooch all the way to the bottom, to line 30, you'll see that actually when the environment's production, I change the URL to match. And that means that we don't have the situation where the monitor is going to fail when we push it because it's trying to access a locally running app that doesn't exist.

I'm also passing in the username and password so that I can then use the same values, picking them up from the environment in the monitor, as well as in CI and under local development. So you would use an environment file locally to manage that. Then you have your Playwright options, as per the Playwright documentation, for any specific settings you need. And then the monitor settings for all the monitors in the project, unless you say otherwise. So we're setting the schedule to 10 minutes. We have it running in a UK location, and I haven't set up any locations myself, so this is an Elastic location in the UK. That's all fine. If you set up a private location, that will be listed here. And then I've got the project settings. So I need to know where my deployment is. So this is the cloud deployment details; this could point to a locally running version if that's what you had, then the Kibana space and the project ID.
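Put together, the config file described above looks roughly like this sketch. The hosts, project ID, and environment variable names are placeholders for your own deployment:

```typescript
import type { SyntheticsConfig } from '@elastic/synthetics';

// Sketch of a synthetics.config.ts as described in the talk.
export default (env: string): SyntheticsConfig => {
  const config: SyntheticsConfig = {
    params: {
      // Default to the locally running app; overridden for production below.
      url: 'http://localhost:3000',
      // Credentials come from the environment (an .env file locally, secrets in CI).
      username: process.env.SYNTHETICS_USER,
      password: process.env.SYNTHETICS_PASSWORD,
    },
    // Any Playwright-specific settings, per the Playwright documentation.
    playwrightOptions: {
      ignoreHTTPSErrors: false,
    },
    // Defaults for every monitor in the project, unless overridden per journey.
    monitor: {
      schedule: 10,                  // run every 10 minutes
      locations: ['united_kingdom'], // an Elastic-hosted UK location
      privateLocations: [],          // private locations would be listed here
    },
    // Where to push the monitors: the Kibana endpoint, space, and project ID.
    project: {
      id: 'my-synthetics-project',
      url: 'https://my-deployment.kb.eu-west-1.aws.found.io', // placeholder
      space: 'default',
    },
  };
  // When the environment is production, point at the deployed app instead,
  // so the pushed monitor doesn't fail trying to reach localhost.
  if (env === 'production') {
    config.params = { ...config.params, url: 'https://my-app.netlify.app' };
  }
  return config;
};
```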

So we need to now transform that test that we had before into a journey, which means we want to try and break up into smaller steps, which allows us to basically get more fine-grained information as to what's failing. So the first thing you'll notice, if you look at the top, is that our imports are now different. So we have a journey and a step construct here.

6. Running and Deploying the Monitor

Short description:

Pass the page object, parameters, and configuration to the journey. Override the monitor settings if needed. Set up before and after actions. Split the steps into login page steps and manual login steps. Run the suite locally and push changes to the CI pipeline. Use the node environment as Dev and specify the username and password. Specify a JUnit file for the report output. Run Elastic Synthetics in the journeys folder. Push the monitor as a production monitor. Specify the node environment as production and the API key. Use appropriate security measures. Ensure the parameters and environment variables are set correctly. Use a test account with limited actions. Avoid irreversible state changes.

You'll also see for the journey that actually I'm passing in the page object. This is the Playwright object, and I'm also passing in the parameters and the configuration. I can override the monitor's default settings, so if I want it to run on a different schedule, less often or more often, I can do that. I can set things up and tear things down with before and after, similar to unit tests. And then I have two steps. I have the login page steps, which are exactly the same as what we covered in the Playwright example before. And then I've also got the manual login steps, which have been split out as well.
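Re-expressed as a journey, the earlier test becomes something like the sketch below, split into steps for finer-grained failure reporting. The selectors and parameter names are illustrative assumptions:

```typescript
import { journey, step, monitor, expect } from '@elastic/synthetics';

// The Playwright login test from before, wrapped as a Synthetics journey.
// The page object and the params from the project config are passed in.
journey('Manual login journey', ({ page, params }) => {
  // Override the project-wide monitor defaults for this journey only.
  monitor.use({ schedule: 5 });

  step('Go to the login page', async () => {
    await page.goto(params.url as string);
    // Check the login form is present and the submit button starts disabled.
    expect(await page.getByTestId('login-form').isVisible()).toBe(true);
    expect(await page.getByTestId('login-submit').isDisabled()).toBe(true);
  });

  step('Log in manually', async () => {
    // Credentials come from params, which are resolved from the environment.
    await page.getByTestId('username-input').fill(params.username as string);
    await page.getByTestId('password-input').fill(params.password as string);
    const submit = page.getByTestId('login-submit');
    expect(await submit.isEnabled()).toBe(true);
    await submit.click();
    // Make sure we navigated to the page we expect.
    expect(page.url()).toContain('/orders');
  });
});
```

Splitting the workflow into named steps means the monitor can report exactly which step failed, rather than just that the whole journey did.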

And then when I run this locally as a developer, everything's green. I then know I can push my changes. So that means when it comes to the CI pipeline, there's two things when I'm trying to merge things into the main branch on any kind of reviews being complete, there's two things that I want it to do. I want to be able to run this suite, and I want to be able to push the monitors out to production alongside the deployment. So here, specifying the node environment as Dev, basically to pick up the correct configuration, and adding in the username and password so that those parameters can be picked up, I am going to run the suite exactly the same way as I did before with the same command. So what I will also do is specify a JUnit file for the output for the report, which obviously I didn't need to do for local development.

So scooting down to 18, I am, you know, sorting out my installs, I am starting the app running locally. And then what I'm doing here is I am running Elastic Synthetics within the journeys folder itself, so it can pick up the definitions, which is why I've specified the working directory. And then the JUnit reporter output option means that my publish unit test results task underneath is able to pick up that definition and show me my test results. But I also need to push the monitor, so when I deploy my app, which is happening in the background with Netlify whenever this flow passes, I want to be able to push those definitions to run as a production monitor. And the way I do that is specify the node environment's production, specify the API key. This has to be a secret. You don't want to be publishing your keys in your publicly visible GitHub workflows, so make sure you're using an appropriate vault. Making sure it's dependent on tests and doesn't push in the event of a failed test run. Again, running this time within the main test folder rather than the journeys folder. And then the tasks are that we again do run the installs and then run push, and this is generated when you run the generate wizard. And when we do that, we then go into Elastic. We see we have monitors, but we go, oh, you're failing. What's happening there? It needs to be able to pick up the parameters. Unless you specify them in the global parameter settings, it's not going to be able to pick up the environment variables. So you need to be careful of that. This is also a good time to mention that if you're adding credentials in for these tests, you need to make sure that this is a test account running a production that's limited to performing particular actions. Ideally, you don't want it to be changing state in a way that is irreversible.
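The two CI jobs described above can be sketched as a GitHub Actions workflow along these lines. The folder names, secret names, app start command, and results-publishing action are placeholders for your own setup, and the exact JUnit reporter wiring may differ from your version of the Synthetics CLI:

```yaml
# Sketch of the CI flow described in the talk: run the monitors as an
# end-to-end suite, then push them as production monitors only if the
# tests pass.
name: synthetics
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      NODE_ENV: development   # pick up the dev configuration
      SYNTHETICS_USER: ${{ secrets.SYNTHETICS_USER }}
      SYNTHETICS_PASSWORD: ${{ secrets.SYNTHETICS_PASSWORD }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm start &      # start the app locally for the suite to hit
      # Run the journeys as tests, emitting a JUnit report for the
      # publishing step below.
      - run: npx @elastic/synthetics . --reporter=junit > junit-results.xml
        working-directory: journeys
      - uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: junit-results.xml

  push-monitors:
    needs: test               # don't push monitors if the test run failed
    runs-on: ubuntu-latest
    env:
      NODE_ENV: production    # so the config swaps in the production URL
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # The push script is generated by the init wizard; the API key must
      # come from a secret store, never from the workflow file itself.
      - run: npm run push -- --auth "${{ secrets.SYNTHETICS_API_KEY }}"
```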

7. Monitoring, Issue Resolution, and Collaboration

Short description:

Ensure correct role configuration to prevent unauthorized actions. Monitor performance and failures. Use the same definitions for issue communication and resolution. Collaborate across roles and prioritize building user-friendly applications. Utilize common tooling for unified workflows. Feel free to ask questions and access code resources. Stay connected through various platforms.

You don't want it to be able to do things like making payments. So make sure that the roles are configured correctly in your application to make sure that someone is not going to be able to make tons of transactions against a legitimate card rather than a test card or be able to do anything they shouldn't. Because at the end of the day, if these credentials get out, you need to assume a worst-case scenario.

You also are able to see how long these are taking and how many failures you've had over a period of time, and then we can start to do smart things. We can see when it's failing. We can see what step it's failing at. We can see what the trace is. We can see things like how long the steps are taking, which for me is useful, because I used to have a situation where my end-to-end tests would get longer and longer and longer. So the idea of a monitor being able to see if it's starting to run a bit longer means you can then figure out if things are perhaps starting to degrade, and maybe there are optimizations you can make for a more performant experience for the user. That's all super useful.

We can also potentially do smart things like alerts. Now, the hope is that this will catch something before your user does, but that's not always necessarily going to happen. Sometimes a user is going to find an issue, and what we can do is use these same definitions as communication for what a user is encountering. So if we write a specification, or we record a specification, we can then use that to recreate the problem, work on the fix, and then push all the way back through from that starting flow that we covered before.

Because in this particular situation, DevOps for me, given my experience, given those challenges I talked about at the start, it's all about coming together, making sure we collaborate irrespective of whether us as SREs, as DevOps engineers, as developers, irrespective of the challenges and the priorities that we have, we all have one goal and it's to basically ensure that we're building awesome applications that users are happy using. So it's time to come together and if common tooling can help us do that, in this case with end-to-end testing and synthetic monitors being two sides of the same coin, let's do it.

So the next time you are looking at your end-to-end suite or you're looking at your scripted monitors, just pop your head over that wall and ask the question: are they writing the same thing? Are the workflows in sync? And is there actually a way we can write the same definitions using common tools? So thank you very much, DevOpsJS, it has been an absolute pleasure. I will be floating around, so feel free to ask me any questions about Playwright or about synthetic monitoring. Also dive into the code, which you can check out at the QR code listed there, and there's a copy of the slides there as well. If you don't catch me around and you think of something after, I'm on X, I'm on LinkedIn, I'm on Mastodon, just reach out and say hi, and I'll see you next time. Bye!
