Despite the emergence of DevOps to unite development, support and SRE factions using common processes, we still face cultural and tooling challenges that entrench the Dev and Ops silos. Specifically, we often use different tools to achieve similar testing: case in point, validating the user experience in production using Synthetic Monitoring and in development using e2e testing. By joining forces around common tooling, we can use the same tool for both production monitoring and testing within CI. In this talk, I will discuss how Synthetic Monitoring and e2e Testing are two sides of the same coin. Furthermore, I will show how production monitoring and development testing can be achieved using Playwright, GitHub Actions and Elastic Synthetics.
Synthetic Monitoring and e2e Testing: 2 Sides of the Same Coin
AI Generated Video Summary
Carly Richmond discusses the importance of shifting left in testing and monitoring. She emphasizes the need for empathy and a common toolset in the software development process. The talk explores the use of end-to-end testing and synthetic monitoring, showcasing an example with Playwright, GitHub Actions, and Elastic Synthetics. It also covers the configuration, setup, and integration of tests in the CI workflow. The talk concludes with the benefits of monitoring application and test state, and the importance of collaboration in issue recreation.
1. Introduction to Testing and Monitoring
Hi, TestJS Summit. My name is Carly Richmond. I am a senior developer advocate at Elastic. Today, I'll talk about end-to-end testing and synthetic monitoring, showcasing an example that combines an end-to-end testing framework with GitHub Actions for CI and monitoring using Elastic Synthetics.
Hi, TestJS Summit. It is so great to see you all. My name is Carly Richmond. I am a senior developer advocate at Elastic. And I'm here today to talk about testing, which you probably expected from a testing conference, let's be honest. But did you expect me to talk about monitoring as well? Potentially not. So today I'm going to leverage my prior experience as a software engineer to talk about end-to-end testing and synthetic monitoring. I'll talk about how, despite what we might think, these elements are two sides of the same coin. And I'll also showcase an example where we take an end-to-end testing framework, in this case Playwright, combine it with GitHub Actions for CI, and then add monitoring using Elastic Synthetics, to show how we can use the same scripts for both end-to-end testing and synthetic monitoring.
2. Understanding the Challenges
But before all of that, we need to talk about the notion of shifting left, because that's imperative to understanding this argument. My experience with that was really my early idea of why DevOps makes sense, because we were working together. We've got to talk about why this is. The first reason is empathy: we are not great at empathizing with the other sides of this equation. The next is a lack of common priority. And the last problem I saw was that our tool sets are often so disparate that it's very difficult to find any common ground between us. Synthetic monitoring and end-to-end testing are the prime example of that.
But before all of that, we need to talk about the notion of shifting left, because that's imperative to understanding this argument. So for me, shifting left was something that actually made perfect sense: try to pick up defects earlier in the cycle. And it also made sense to me because, I'll let you in on a little secret, I used to be more than just an engineer. In my first ever role as a software engineer, I also had to do everything else. I had to do production management, I had to deal with user issues, and given it was a regulatory system, there was always a rather quick turnaround required for those. We also dealt with deployment, testing, and coordinating user testing. We did everything on that application because of the size of the team and the expertise that we had.

It was only really when we started to hand over some of the support responsibilities to a dedicated production management team that I realised these activities are more commonly done by different groups. But my experience with that was really my early idea of why DevOps makes sense, because we were working together. They set up the ticketing system, for example. I worked with them to document all the knowledge, get it out of my head, so that people could react to issues and help users where they could. And if they tried to fix an issue and it didn't quite work out, they would call me in to help, and we would update the knowledge article with the additional information. But I also got to learn from them, because we talked about testing and about how we could make the application easier to monitor, all things that as a developer you don't get a lot of exposure to, and I was really fortunate for that. I thought it was normal until I moved to my first web dev role and realised that it's actually a very rare experience.
And even speaking to DevOps engineers and SREs now, I see that the relationship, even despite the emergence of DevOps, is more akin to a game of rock, paper, scissors. And we've got to talk about why this is.

The first reason is empathy. We are not great at empathizing with the other sides within this equation. Developers are not empathizing well and collaborating with testers, who they feel might be throwing back features they thought were perfect. Production management are regularly receiving new features built in small increments, which means they are often overloaded with features they don't know much about. And everyone is left feeling like the other side doesn't really get what their role is and what they are trying to do.

The other thing is a lack of common priority. This sign here on the seat, we all pretty much know what it means: it's very clear who this seat is intended for. But if we think about backlog prioritization in the recent agile world, it's not always as clear cut as that. My experience of working with product owners was often that new features were regularly prioritized, while small toil enhancements, additions such as enhanced monitoring capabilities, and even sometimes minor bug fixes were all pushed towards the bottom of the backlog in favour of new features that they could understand.

And the last problem that I saw was that quite often our tool sets are so disparate that it's very difficult to figure out if there's any common ground between us. Synthetic monitoring and end-to-end testing are the prime example of that. So end-to-end tests, for anyone who's not familiar, are basically tests that allow you to validate the user experience.
3. End-to-End Testing and Synthetic Monitoring
This section discusses the use of end-to-end testing and synthetic monitoring. It highlights the need for a common toolset and introduces an example using Playwright, GitHub Actions, and Elastic Synthetics. The example demonstrates how these technologies can be used for local development, CI, and production monitoring. The section concludes with a vanilla Playwright test that sets the stage for integrating Elastic Synthetics.
So an end-to-end test performs actions such as clicks and text entry that a user would do, and allows you to validate that the application as a whole behaves as expected. In my experience of writing these tests as a developer, we ended up using Cypress, and way back when, Protractor for Angular, which thankfully is no longer in existence.
The other thing is synthetic monitoring, which is more from the SRE side of things. Synthetic monitoring refers to the ability to run scripts, on a regular schedule, against a particular application to validate that it's up and alive. This can be as simple as a ping on an endpoint to make sure that a health service is up, or as complicated as simulating user clicks and actions to make sure that the application is responsive for the users who are going to use it. And you've probably guessed the thing that's common between both of these: it is, in fact, the user perspective.
But these things are often built by different audiences: developers and testers, compared to SREs and production management. And they always use different tools as well. In my last engagement at the bank, I used Cypress for end-to-end testing, but my colleagues who were writing monitors were using Apica ZebraTester. The reality is, if we want to come together to build more maintainable, more reliable systems, and also share the load when it comes to writing these automations, we need to use a common tool set that we can all understand.
And I'm going to walk through an example, which you can check out via the QR code and have a nosy through afterwards if you'd like, using Playwright, an end-to-end testing and automation framework maintained by Microsoft; GitHub Actions for CI; and Elastic Synthetics, to show how a combination of these technologies allows us to use the same scripts as end-to-end tests in local development and CI, and then again as a monitor in production.

The way these tools interact is similar to the flow that I have up here. So bringing out my mouse, what you'll see is that we have a project using Elastic Synthetics and TypeScript journey files, which have Playwright interactions within them. These sit alongside vanilla heartbeat monitors, which are specified in YAML. One of the great advantages is that this effectively gives us monitors as code: we have configuration in a repository, instead of having it manually configured in the UI of an observability platform. We can run these locally against a version of the web app to see that everything is running as expected when we build out features. Then we can push them up into source control, raise that PR, and execute them as end-to-end tests to validate, on the potential merge, that everything's working as we expect. Then, when we deploy our app, we take those same journeys, those monitors, those tests, and use API key authentication to push them to the location in which we're going to run them; they then ping the production web app instead, and the results are processed into the observability platform so we can see what's going on.
So let's dig into an example, shall we? This is a vanilla Playwright test that I've written just to show how Playwright works on its own, before we try to integrate Elastic Synthetics. You'll see that I'm using Playwright Test, which I've installed within my web app project, and that I have two tests: one where I'm moving to the order page in this little e-commerce app that I've built, and another where I'm adding an item to the order.

I can use the page object within Playwright to navigate, going to the home page for example, and I can use various locators to pull out the particular elements in the page that I want. For example, here I'm asynchronously pulling out the order button using the getByTestId shorthand. It's important to note that this separates my styling and all those other changes from the logic that pulls out my elements, because in this particular case I'm using the data-testid attribute, which I recommend you do as well if you're not already. Then I can click on that button, check that I've navigated to the order page as expected, and pull back all of the menu item cards to see that I have a few of those.

In my adding-order test, I'm going to the order page instead. I'm pulling out all of the add item buttons off my menu cards, checking that I've got a few of them again, then getting the cart count label and checking, with clicks of individual add item buttons, that this value increments each time. It's a relatively straightforward test. So if we now want to make use of Elastic Synthetics on top, we need to install it.
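As a sketch, a vanilla Playwright test along the lines described above might look like this. The test IDs, routes, and labels are illustrative guesses at the demo app, not the actual repository code:

```typescript
import { test, expect } from '@playwright/test';

// Illustrative test IDs and routes; the real demo app may differ.
test('navigates to the order page', async ({ page }) => {
  await page.goto('/');
  // Locate via data-testid so styling changes don't break the test
  await page.getByTestId('order-button').click();
  await expect(page).toHaveURL(/order/);
  // Expect at least one menu item card on the order page
  await expect(page.getByTestId('menu-item-card')).not.toHaveCount(0);
});

test('adding an item increments the cart count', async ({ page }) => {
  await page.goto('/order');
  const addButtons = page.getByTestId('add-item-button');
  await expect(addButtons).not.toHaveCount(0);
  // Each click on an add button should bump the cart count label
  await addButtons.first().click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');
  await addButtons.nth(1).click();
  await expect(page.getByTestId('cart-count')).toHaveText('2');
});
```

The relative `page.goto('/')` assumes a `baseURL` is set in the Playwright config so the same test can target different environments.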
4. Init Wizard and Synthetics Config
Using a global install allows you to use the init wizard to generate a structure with three elements: the journeys folder for end-to-end tests that double as monitors, lightweight monitors in YAML syntax, and the synthetics config for configuring monitors in Elastic Synthetics.
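As a sketch, one of those lightweight monitors might look like the following. It uses Heartbeat's YAML monitor syntax; the id, name, URL, and schedule here are placeholders, not values from the actual project:

```yaml
# Sketch of a lightweight monitor YAML file generated alongside the journeys
# folder; the endpoint and schedule are placeholder assumptions.
heartbeat.monitors:
  - type: http
    id: shop-health-check
    name: Shop health endpoint
    schedule: '@every 5m'
    urls: ['https://my-ecommerce-app.example.com/api/health']
```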
5. Configuration and Settings
I'm able to specify Playwright options, such as being able to ignore HTTPS errors, and to configure the default settings for monitors. Here I'm setting that each monitor journey will run every ten minutes, using the UK Elastic infrastructure. But if you set up the Elastic Agent on your own infrastructure, you're able to run it from that location instead. Just follow the wizard.
And I'm specifying the params here for localhost by default. But if I scooch all the way to the bottom from line 28, what you'll see is that if the environment is production, I'm changing the parameter to point to my production application. This means it will pick up the right URL depending on the node environment that is passed in.
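That environment switch can be sketched as ordinary TypeScript. The URLs below are placeholders for the actual parameters, not the real deployment:

```typescript
// A minimal sketch of the params switch described above; the production
// URL is a placeholder, not the real application's address.
function appUrl(env: string | undefined): string {
  return env === 'production'
    ? 'https://my-ecommerce-app.example.com' // production target
    : 'http://localhost:3000';               // local dev and CI default
}

console.log(appUrl(process.env.NODE_ENV));
```

Because the decision keys off `NODE_ENV`, the same journeys run against localhost during development and CI, and against production once pushed.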
I'm able to specify Playwright options, such as being able to ignore HTTPS errors. I'm able to configure the default settings for monitors: here I'm setting that each monitor journey will run every ten minutes, and I'm using the UK Elastic infrastructure. But if you set up the Elastic Agent on your own infrastructure, you're able to run it from that location instead. Just follow the wizard. And then I specify the Elastic project settings: the URL for my Elastic deployment, the project ID, and which Kibana space I want these to sit in. That might be useful if you have multiple teams looking after different sets of applications; you can use different Kibana spaces and entitlements to segregate them.
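Pulling those settings together, a `synthetics.config.ts` along these lines would express them. The field names follow the Elastic Synthetics project configuration; the URLs, project ID, and space name are placeholders:

```typescript
import type { SyntheticsConfig } from '@elastic/synthetics';

// Sketch of a synthetics.config.ts; deployment URL, project ID and
// Kibana space are placeholders, not the actual project's values.
export default (env: string): SyntheticsConfig => ({
  params: {
    url: env === 'production'
      ? 'https://my-ecommerce-app.example.com' // production target
      : 'http://localhost:3000',               // local dev / CI target
  },
  playwrightOptions: {
    ignoreHTTPSErrors: true, // e.g. self-signed certs in lower environments
  },
  monitor: {
    schedule: 10,                  // run each journey every ten minutes
    locations: ['united_kingdom'], // Elastic-managed UK infrastructure
  },
  project: {
    url: 'https://my-deployment.kb.example.com', // Elastic deployment URL
    id: 'my-ecommerce-monitors',
    space: 'my-team-space', // Kibana space used to segregate teams
  },
});
```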
6. Test Setup and CI Integration
In this section, we discuss the new test setup using Elastic Synthetics and Playwright. We explore the concept of journeys and steps, which capture user behavior and interactions. We also cover the integration of test changes into the CI workflow, including running the end-to-end suite and pushing monitors to the Elastic deployment. It's important to set the correct environment and specify the API key for authentication when pushing monitors.
Moving on, this is the new test. There are a couple of things that have changed here. The first thing to point out is that we are now using Elastic Synthetics, but we're still using Playwright, because we've got this page object sitting in here as well. It's an explicit dependency.
I'm able to override the settings on an individual monitor, so if you wanted to bypass the defaults and have a particular journey that runs more often, perhaps as a regular validation, you're able to do that. I'm able to do setup and teardown for my tests using before and after, similar to the unit tests we're used to writing. And then I'm also able to define the steps.

The thing that's different is that, rather than having individual tests in more of a unit-based format, we're really leaning into the user behavior elements. We have a journey, which wraps the entire individual test, and we have steps, each of which will capture a screenshot of that individual interaction. So if, for example, you're making use of behavior-driven development practices to identify user behaviors and assertions, perhaps using techniques like example mapping, each of those individual points will map to an individual step in your test. Then we're just using Playwright as normal to pull out elements, using locator (though I probably should be using getByTestId instead), and doing the clicks, text entry, and other actions we're used to. And if we're following test-driven development practices and going around that loop, we can run these tests as we're building out features and say, hey, it's working as expected, fantastic.
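A journey built this way might be sketched as follows. The monitor id, schedule override, param name, and test IDs are illustrative assumptions, not the actual demo code:

```typescript
import { journey, step, monitor, expect } from '@elastic/synthetics';

// Sketch of an Elastic Synthetics journey; params.url is supplied by
// synthetics.config.ts, and the test IDs are placeholders.
journey('Add an item to the order', ({ page, params }) => {
  // Override the project defaults for this one monitor
  monitor.use({ id: 'add-item-journey', schedule: 5 });

  step('Go to the order page', async () => {
    await page.goto(`${params.url}/order`);
  });

  step('Add an item and check the cart count', async () => {
    // Plain Playwright interactions inside the step
    await page.getByTestId('add-item-button').first().click();
    expect(await page.getByTestId('cart-count').innerText()).toBe('1');
  });
});
```

Each `step` maps to one user behavior and yields its own screenshot, which is what makes the journey readable as a monitor later.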
The next thing we need to think about is what happens at the CI point, when we go to integrate not just our code changes but those test changes as well. There are two tasks to think about. Here I have a GitHub Actions workflow with two jobs in it: a test job, which runs that end-to-end suite, and a push job, which pushes the monitors to my Elastic deployment. You'll see that I'm setting the node environment to development as an environment variable, to make sure that I'm using the local URL, thinking back to our configuration. Then I'm starting the application and running the Elastic Synthetics command. You need to make sure, both when running it in local development and here, that you run the command from within the journeys folder, which is why I set the working directory here. And we've specified the JUnit reporter, which means I'll receive the published test results within each CI run and see whether my tests are succeeding or failing.
Now, when it comes to pushing, we need to push with the node environment set to production, because if you don't, it will pick up the default, thinking back to that configuration file, and you'll have a situation where you're trying to ping localhost, where the app is not running, and you will get failing monitors. So make sure you set the right environment. You also need to specify your API key for authentication, making sure you store it in an appropriate vault; don't put the plain-text key in your workflow file. Make the push job dependent on the test job so we don't inadvertently push broken monitors. And then, from within the project directory, we just run the push command that was set up by the init wizard.
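Sketched as a workflow, the two jobs might look like this. The job names, paths, app start command, and secret name are assumptions about the example repository, not its actual contents:

```yaml
# Sketch of the two-job GitHub Actions workflow described above;
# paths, commands and secret names are placeholders.
name: e2e-and-monitors
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      NODE_ENV: development   # picks up the localhost URL from the config
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm start &      # start the app locally for the e2e run
      - name: Run journeys as e2e tests
        working-directory: ./journeys   # the command must run from here
        run: npx @elastic/synthetics . --reporter junit

  push:
    needs: test               # never push broken monitors
    runs-on: ubuntu-latest
    env:
      NODE_ENV: production    # so monitors target the production URL
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - name: Push monitors to the Elastic deployment
        working-directory: ./journeys
        run: npx @elastic/synthetics push --auth ${{ secrets.SYNTHETICS_API_KEY }} --yes
```

Keeping the API key in the repository's encrypted secrets, and gating `push` on `needs: test`, covers both of the warnings above.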
7. Monitoring Application and Test State
We can monitor the current state of our application and tests. Green indicates success, and we can track test durations over time. We can set up alerts for test failures and use AI assistance for deeper analysis. We can view failures, trace information, and screenshots of each page. Additionally, we can monitor the duration of each step to identify any degradation.
And when we do that, we end up with this. We've pushed our application in a separate step, and we've now pushed our monitors. You can see each of the cards showing the current state: green means we're A-OK. We can also see how long these tests are taking to run, which is important because end-to-end tests quite often degrade over time; they may take increasingly long to run, and we want to catch that in case there's an issue we need to go back and fix. We can see the duration over time. We can also start to do smart things: maybe we trigger alerts when a particular test fails, or perhaps use AI assistance to get information on what's happening within our data and get more in-depth analysis. But we're also able to see the failures, see exactly what the trace is, and see the screenshots of each page to see what it looked like. And as you can see over here in the corner, I can see the duration of each step, so I can tell if an individual step is starting to degrade as well.
8. Recreating Issues and Collaboration
So this is all great, but we all know we can't catch everything, and sometimes a user is going to ask for help. When that happens, we need to be able to create similar journeys to feed back through the loop again. In an ideal world, we are in fact working in multidiscipline teams, with that product owner, with testers, with developers, with designers, with SREs. Not everyone is going to have Playwright expertise, and sometimes being able to record what the user is doing to recreate an issue can be helpful.
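One option here, as a sketch, is Playwright's built-in recorder, which turns clicks and text entry into script code a teammate without Playwright expertise can produce and hand over; the URL is a placeholder:

```shell
# Launch Playwright's recorder against the app; interactions in the
# opened browser are emitted as test code you can adapt into a journey.
npx playwright codegen https://my-ecommerce-app.example.com
```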
So, we've talked about a lot today. We've talked about my prior experience of DevOps, and how I'm still surprised that we're really struggling to come together. We've talked about the idea of building synthetic monitors and end-to-end tests from the same artifacts, because they're both automating the user perspective. And I've given you an example using Playwright, Elastic Synthetics and GitHub Actions. But what I want to leave you with is that, irrespective of the tools you're using, what's more important is talking to each other and aligning on a tool set that you can all use. So if your observability platform has a different scripting library, try to use that for your end-to-end tests so that you can collaborate together, because ultimately that's the important thing to take away today. And with that, I want to say thank you so much, TestJS Summit. It has been an absolute pleasure. I'll be hanging around if you want to ask any questions. I'll share the slides, and I'll also share the link to the code if you want to dig into it on GitHub and haven't caught the QR code. I will see you all next time.