Automated performance testing can help detect the harmful effects of code changes on application performance. Learn how to use tools like Lighthouse and Core Web Vitals in your CI and set performance thresholds to maintain optimal frontend performance in this session.
Measure and Improve Frontend Performance by Using Test Automation
AI Generated Video Summary
The talk focuses on the importance of testing and gathering information for building good applications. It highlights the use of test automation for performance monitoring and of logging for performance measurement, and it discusses the impact of performance on user engagement and search engine rankings. It shows how Cypress plugins can monitor performance metrics and set thresholds for tests. Overall, the talk emphasizes the value of test automation tools in providing valuable information at a low cost.
1. Introduction to Testing and Information
Hello, everyone at TestJS Summit this time. I'm so glad to be here again. Today, I'm coming home with an interesting discovery on performance. I'm passionate about testing and believe that information is king. Gathering information is important for building good applications. Make it work, make it right, make it fast - the order of building good applications.
Hello, everyone at TestJS Summit this time. I'm so glad to be here again, because TestJS Summit was one of the first conferences I was honored to be a speaker at. It's just like coming home today.
Today, I'm actually coming home here with a little interesting discovery I want to showcase to you when it comes to my testing experience. As you might have guessed already, it's about performance. Well, before that, real quick: my name is Ramona. I'm working as a Developer Advocate at Auth0. Apart from that, I'm a Google Developer Expert in web technologies and a Cypress Ambassador. So, as you might have guessed, I'm at least passionate when it comes to testing, and it shouldn't be a surprise that I'm here to talk about testing today.
So well, especially when it comes to testing, there are some things which are really important. And yeah, there's this little quote, Content is King, which I hope I have enough of for these 20 minutes. It was coined by Steve Ballmer, I guess; he has produced lots of memorable quotes. But well, let's change this quote a little bit. Because if Content is King, then Information is King doubled, or even more. And it's not just because I like to play video games and this is the title of a World of Warcraft quest. But I, well, find it interesting. At least I like those little side quests where, for example in this case, a goblin vendor named Goodskitch wants to have some Krukal skin in exchange for information. So, it's not only in video games that information is really important. Gathering information, having a good grasp on what happens inside of your application, is really important. Not only in gaming, but also when it comes to testing.
So, well, I guess this should be clear most of the time. But why do we need information on our application? Because we want to build good applications, right? You might have stumbled upon this quote by Kent Beck, which says: make it work, make it right, make it fast, in that particular order. Just to dive into it a little bit: make it work is self-explanatory. Try to solve the problem at hand, so just make it work when it comes to your requirements. If you are able to do that, do the second thing: make it right, make it maintainable. Try to mind non-functional requirements, mind clean code, for example, to make it not only work, but work the right way, and work in a way that you'll still want to work with your application, let's say, one year from now.
2. Using Test Automation for Performance Monitoring
And last but not least, make it perform, make it fast, make it that the user is not annoyed by using your application. We can use automation to support us in all of those three points, even performance. Test automation can be used for monitoring performance and gathering information. Logging is the art of keeping a log or a list of events that occur in your computer system, such as problems, errors, or just information in the current operations.
And last but not least, make it perform, make it fast, make it so that the user is not annoyed by using your application. And yes, of course, the first point, make it work, is pretty easily achieved with testing. If you test your application, you know that it's working as intended. The second one, make it right, can be supported by tooling as well; for example, good tests make it easier to write good code. And I don't even want to start on all the linters and static analysis tools you can run, like PHPStan, ESLint, and whatever. So we have good support when it comes to the first two points, make it work and make it right. But what about the third point: make it fast, make it perform, and basically make it feel flawless? Well, making your application perform seems really daunting, because, as I said, you don't have that much tool support yet. And even the step of speeding up your application code-wise is really difficult, because you could ask: who has time for that, who has the capacity to do that? May it be because of pressure from clients, but especially the headspace inside of yourself: do we get the time for improving performance, or maintaining it? Because even gathering all the information you need on your application to monitor performance, doing all those measurements locally, takes lots of time. All of this together seems really daunting. So what if I told you that we don't need to feel lost? We don't need to feel so drained or scared. What if I told you that we can use automation to support us in all of those three points, even performance? Not only making it work, not only making it right, but making it perform. What if I told you that you can use test automation for that? Not learning GitHub Actions just to implement some tools that help us monitor performance, but using everything we already have: the test automation that already exists in our project. Well, I know, I know, it's a little like misusing testing, right?
So well, it's still good, because you can use the knowledge you already have for it. You don't need to learn new tools. You can use your well-known, familiar CI, for example if you don't want to use GitHub Actions, or if a standardized GitHub Action is not enough to fulfill your needs. In my mind, it's interesting to use end-to-end testing for measuring performance, because the complete application stack is there and running, so we can do more with it than we can with unit tests alone. And we are close to the user, who would be annoyed by performance problems, right? But if we want to think about using test automation for monitoring performance and collecting information on it, let's take a step back and look at how we measure or collect information on our test automation to begin with. As this talk is about measuring, let's start with the defaults to shed some light on it.
Many of you probably think of the most common way of getting information from your test automation pipelines: if you want to see why something failed, you take a look at the logs. Logging is the art of keeping a log, a list of events that occur in your computer system, such as problems, errors, or just information on the current operations. In testing, for example, it logs everything that happens inside of your tests. It looks like that.
3. Using Logging for Performance Measurement
CI logging is a vital part of enabling you to measure things. We need to be able to notice the problems we have and make it fast. There are ways to support you in having logging as easy and accessible as possible. Let's see how we can use logging to measure performance values. Performance in web applications depends on factors like page load time, responsiveness, and overall user experience. To achieve a high-performing website, we can optimize by reducing file sizes and minimizing server requests.
CI logging is visible inside of your CI. Even if the presentation differs sometimes, it's handled similarly in many frameworks, no matter if it's Cypress or Playwright. So you can take a look at the test results and, of course, at the errors, if there are any. And some frameworks give you a little more insight into it.
For example, here is the test runner of Cypress, and Playwright has a similar one. All of those frameworks make it clear what happens inside of your tests. Well, you could wonder now: why do I tell you that? Why should you bother, since it's a standard feature, right? You're right. But there's something you need to keep in mind: logging is a vital part of enabling you to measure things. And if you cannot measure something, you can't improve it, right? Because you don't have a messenger to tell you that something's wrong. So, yeah, we need to remember the measuring part. This is crucial. We need to be able to notice the problems we have. Especially the point of making it fast is important in our regard. So we are dependent on metrics; we need the metrics to serve as a base of comparison. And before you worry right now: you don't always need to handle those daunting log outputs by hand.
There are a couple of ways to support you in having this logging as easy and as accessible as possible. For example, you could use a custom reporter to showcase it a little better, and there are many more you could take a look at. There are plenty of plugins that extend the log output inside of your CI, a Cypress plugin in my case, whose name I will add later on because I seemingly have forgotten it. So you could think about printing the DevTools console output or the requests to make it even easier. Well, so far so good. These are the logging defaults. So let's see how we can use them in detail to misuse test automation to measure performance values. And before we do that, of course this talk is about performance, but I guess we should have a common understanding of what performance is in this regard.
So performance, in web applications, refers to the speed and efficiency with which a website or web application functions and delivers content to the user. It can depend on various factors like page load time, responsiveness, and overall user experience: basically everything that might make you feel triggered, or let's say annoyed, anything that doesn't feel like a flawless user experience in which you can do your job. And our dream of a high-performing website is that it loads quickly and allows users to access information without delays or interruptions. It's just flawless, right? To get to that point, we can do some optimizing, which could involve reducing file sizes and minimizing the number of requests made to the server.
4. Performance Testing and Monitoring
All of those little things add up pretty quickly. And yeah, if we don't take a look at this, we will have higher bounce rates, lower user engagement, and a negative impact on search engine rankings.
So performance testing, which is basically the type of testing conducted to evaluate the speed, responsiveness, stability, and scalability of an application or website, comes in a couple of flavors. I'm thinking about things like load testing, where you apply the normally expected user load to determine response times; spike testing, to see how your application handles sudden load spikes; and stress testing, where you try to push your website beyond its normal capacity. These are aspects of performance testing you could utilize to get a bigger picture of your application's behaviour.
And I guess when it comes to debugging locally, many of you have maybe already tried Lighthouse out, but were you aware that you could automate it? You could use a GitHub Action, which I guess is even predefined, but you could also use your end-to-end tests to keep track of it. There's a plugin called cypress-audit which provides Lighthouse audits but also Pa11y audits, but I don't want to focus on accessibility right now, because that's, I guess, enough for this slot already. So Lighthouse it is. You can use cypress-audit via Cypress tasks, and there's a Playwright alternative too, and there you take a look at the typical Lighthouse thresholds: the categories Lighthouse uses, each with a score between 0 and 100. Performance, which is the important one for us, accessibility, best practices, SEO, and PWA.
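To make this concrete, here is a minimal sketch of how such a Lighthouse audit could be wired into a Cypress project with cypress-audit. The file layout and the threshold numbers here are assumptions for illustration, so double-check them against the plugin's documentation for your Cypress version:

```js
// cypress.config.js: register the Lighthouse task (Cypress 10+ style config)
const { defineConfig } = require("cypress");
const { lighthouse, prepareAudit } = require("@cypress-audit/lighthouse");

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on) {
      // Capture the browser's remote-debugging port so Lighthouse can attach
      on("before:browser:launch", (browser, launchOptions) => {
        prepareAudit(launchOptions);
      });
      on("task", { lighthouse: lighthouse() });
    },
  },
});
```

In a spec you would then import the plugin's commands in the support file and call something like `cy.lighthouse({ performance: 90 })` after `cy.visit()`: the test fails if a category score (0 to 100) drops below its threshold.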
And when it comes to what's good and what's not, we can zoom in on the performance audit here, and we see that everything between 0 and 49 is bad and will probably influence your ranking. Everything between 50 and 89 still has some possibilities for improvement, and everything from 90 upwards is great, as you can see here right now. And for our part, performance, there are some interesting metrics named here, like first contentful paint and largest contentful paint, which are really interesting for us. We will take a closer look, because these are the ones we want to act on. Okay, those two terms were taken from Google's web vitals, which are a set of specific metrics that measure the user experience of a website. They focus on three critical aspects of web performance: loading, like how fast something appears; interactivity, how fast the application reacts to our input; and visual stability. Maybe you have already experienced this on a mobile page: you are scrolling through it, you want to click something, but it jumps. Yeah, this is something we need to take a look at. And as said, those metrics are by Google, and they have implications for the website. So the three Core Web Vitals metrics are: first, the largest contentful paint, which measures how quickly the main content of a web page loads; it should occur within 2.5 seconds of when the page first starts loading. The second one is the first input delay, which measures the time it takes for a web page to become interactive; it should be less than a hundred milliseconds. Last but not least, the cumulative layout shift, which measures the visual stability of a web page by tracking unexpected layout shifts of elements; it should have a score of less than 0.1. There are many more metrics, of course, but we will cover the core ones, the Core Web Vitals, here.
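As a quick sanity check, those three budgets can be expressed in a few lines of plain JavaScript. The "good" bounds (2500 ms, 100 ms, 0.1) are the ones from the talk; the upper "needs improvement" bounds (4000 ms for LCP, 300 ms for FID, 0.25 for CLS) are Google's published values, and the helper name `rateMetric` is made up for illustration:

```javascript
// Core Web Vitals budgets: "good" up to the first bound,
// "needs-improvement" up to the second, "poor" beyond that.
const CWV_THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // largest contentful paint, ms
  fid: { good: 100, poor: 300 },   // first input delay, ms
  cls: { good: 0.1, poor: 0.25 },  // cumulative layout shift, unitless
};

function rateMetric(name, value) {
  const t = CWV_THRESHOLDS[name];
  if (!t) throw new Error(`unknown metric: ${name}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs-improvement";
  return "poor";
}
```

This is exactly the classification a threshold-checking plugin performs for you on every run.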
And there is a Cypress plugin, too, which you can use to monitor the Core Web Vitals through our test automation.
5. Using Cypress Plugins for Performance Monitoring
Use Cypress plugins to monitor performance metrics and set thresholds for your tests. Tools like Lighthouse and cypress-web-vitals can help you monitor website performance and detect failures. Treat your end-to-end testing pipeline as a detective that checks metrics every night. Remember to measure performance, focus on the front-end, and use test automation to capture performance metrics. Test automation tools can be more than just testing tools; they provide valuable information at a low cost.
So use this QR code to find it, and let me showcase it a little bit. You can install it via npm or yarn, or your favorite package manager, and then import the commands so Cypress knows them: the typical Cypress installation workflow when it comes to plugins, as you see here.
Afterwards, you can basically start using it inside of your tests. So I will import it in the commands.js, and then I will use an existing blank test to include the cypress-web-vitals command, with some web vitals config, which here is just the URL I want to take a look at; apart from that, I want to use the normal thresholds. And as you see here, something errors because a threshold has been crossed: the thresholds we saw on the slide before. So yeah, the test fails if we don't meet those. And if you see such an example with some metrics in it, you could even use other thresholds if you want to. For example, I added the first contentful paint, which measures how long it takes until the first element is displayed, and the time to first byte, the time between the request for a resource and when the first byte of the response begins to arrive, which are useful metrics too. And if you take a look at how it is logged, you can see it here in the test runner: you will see the failure visible inside of the log. This is the log, and of course the result is there too. And you can be sure that your test will fail if it doesn't match the thresholds. Okay.
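Put together, a spec using the plugin could look roughly like this; the URL and the exact threshold keys are assumptions for illustration, so verify them against the cypress-web-vitals documentation:

```js
// cypress/support/commands.js -- makes cy.vitals() available
import "cypress-web-vitals";

// cypress/e2e/vitals.cy.js -- the test fails if any budget is exceeded
describe("web vitals", () => {
  it("stays within the performance budgets", () => {
    cy.vitals({
      url: "https://example.com", // hypothetical page under test
      thresholds: { lcp: 2500, fid: 100, cls: 0.1, fcp: 1800, ttfb: 600 },
    });
  });
});
```

Calling `cy.vitals()` without arguments would fall back to the plugin's default thresholds, which is the variant shown first in the demo.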
So these tools, like Lighthouse and cypress-web-vitals, will help you monitor your website's performance and make your tests fail, which is really great. You could think of the pipeline in your end-to-end testing job as a little detective who goes there every night, takes a look at the metrics of your application, and fails visibly if the performance metrics aren't hitting the thresholds. You can see that through logging. And yeah, you have your own Sherlock Cypress, or Sherlock Playwright if you want. It provides you with lots of information at not much of a cost. Great, right? So, if you want to remember four things from my talk which are really important when it comes to monitoring and improving front-end performance by utilizing test automation: always measure performance, because otherwise you have no possibility to improve it. Performance issues are often noticed in the front-end, so focus here first when measuring. You can use your normal E2E test automation to capture most of the performance metrics. And Lighthouse and especially Core Web Vitals can be used easily via Cypress plugins. So, as I said, you can use test automation tools for more than just testing. I hope they can feel like an assistant to you, so feel free to use them, and hopefully they help you a lot. So, what else to say? Thank you for your time and for listening to me.