Performance tests created with most testing tools rely on lab data collected from predefined environments, devices, and network settings. Lab data lets you reproduce performance results repeatably, making it useful for detecting and fixing performance issues early. However, lab data ignores a very important aspect: your users’ real experience of your applications. Field data, collected from real users under various conditions, reflects how your application is used in the real world and tracks the performance your users experience and even the errors they encounter. Real users also exhibit behaviors that cannot all be simulated realistically via test scripts or test cases: they use different devices, network conditions, caching mechanisms, and even system fonts, all of which can affect how a site loads. Therefore, you should complement performance tools that use lab data with field data, or real user monitoring (RUM) solutions, to achieve a holistic approach to web performance testing. One way to do this is by complementing Grafana k6 browser with Grafana Faro.
Holistic Web Performance With Grafana and K6
AI Generated Video Summary
Today's talk focused on achieving a holistic web performance approach using Grafana and K6. The importance of complementing lab data with field data for realistic user experience testing was highlighted. The use of Grafana Faro for collecting performance metrics and integrating with other Grafana OSS products was discussed. Detailed instructions on setting up Grafana Faro with the QuickPizza sample application were provided. The benefits of using K6 for reproducing errors, improving testing, and demonstrating browser testing capabilities were explained.
1. Introduction to Holistic Web Performance Testing
Today, I will share how to achieve a holistic web performance approach with Grafana and K6. Lab data alone is not enough for a realistic user experience. Complementing lab data with field data is crucial for web performance testing.
Hi, everyone. My name is Marie Cruz and I'm a developer advocate here at Grafana Labs. Today, I would like to share with you how you can achieve a holistic web performance approach with tools such as Grafana and K6. But first off, if you want a copy of these slides, feel free to scan the QR code that's displayed here on my screen.
So let's explore the problem statement that I want to focus on in my talk today. Performance testing tools are awesome, but what about real users' experience? For example, performance tests created with the Grafana K6 browser module use lab data collected from predefined environments, devices, and network settings. Lab data allows you to reproduce performance results repeatedly, making it useful for detecting and fixing performance issues earlier on. Lab data, however, doesn't account for a very important thing, which is the real user experience. Real users have different behaviors, which can't be simulated realistically via test scripts or test cases. This is why it's best to complement lab data with field data to achieve a holistic approach to web performance testing. But first, it's important to know the difference between lab and field data to understand why both are essential for web performance testing.
2. Lab Data vs Field Data in Web Performance
Lab data is collected in a controlled setting, simulating user interactions on a website, while field data reflects real-world conditions. Metrics may differ between the two due to factors like query parameters and targeted ads. Lab data is useful for early development and testing, allowing for detection and fixing of performance issues. It provides insights into specific settings without external factors.
Here's a car analogy that can help us illustrate the difference. Imagine that you're testing the speed of a new car model in a controlled environment such as a racetrack. You have a stopwatch, and you measure how quickly the car accelerates, you measure how it brakes, and you measure how it can take turns. You record these measurements consistently with no traffic, perfect road conditions, and there's a very skilled driver behind the wheel.
Now imagine the car's performance on real roads, like a busy city street or a highway. The road conditions will vary, there's going to be unavoidable traffic, and different drivers may handle the car differently. Lab data for web performance is similar to our car's performance in an idealized setting. Lab data is collected in a controlled environment, often with specialized tools or scripts that simulate user interactions on a website, and is useful for detecting performance issues earlier on. On the other hand, field data is similar to our car's performance under real road conditions. Field data reflects how your users access your site from different devices and locations and under different network conditions. These users might also have various browsers and connection speeds, just as real drivers on real roads will have different skills and behaviors.
Now, when you work with tools that use lab data, you might notice that the web performance metrics can differ from those generated by tools that use field data. As an example, when you run a web performance test in a lab setting, you normally just include the base URL of the web application that you're testing. In the real world, however, different users will access a website using different query parameters or different text fragments, which can also impact the web performance metrics. When your users access your website in real life, there will also be different targeted ads depending on each user's browsing history, which can impact the metrics as well. So, now that you understand the difference between lab and field data, when should you use each for web performance? Both types of data are valuable for different purposes. Lab data excels during the early development and testing stages. Because of the controlled environment, performance issues can be detected and fixed earlier on. As an example, if you want to understand your performance metrics in a specific setting, such as on a particular device or a particular screen size, you can set this up in a controlled manner without worrying about other external factors.
3. Grafana Faro: Collecting Performance Metrics
Tools like K6 can simulate different settings for catching pre-production issues. Field data provides an overall picture of website performance. Grafana Faro automatically collects logs, errors, and other metrics. It can be set up with a Grafana Agent instance and integrates with other Grafana OSS products. The easiest way to get started is by signing up for a free Grafana Cloud account.
So, for example, tools such as K6 can be used to simulate all these different settings so that you can catch issues in your pre-production environments. On the other hand, field data provides an overall picture of how your website functions in the wild. Once your website is public, you're going to need ongoing monitoring to detect performance issues in your live environment. And this is where solutions like Grafana Faro can help.
So, what is Grafana Faro? Grafana Faro is a web SDK that you can configure to automatically collect all relevant logs, errors, and other performance metrics for your application. You can set up Grafana Faro by running a Grafana Agent instance and then forwarding the collected telemetry data to other Grafana OSS products, such as Loki for logs and Tempo for traces. But the easiest way to get started with Grafana Faro is by signing up for a free Grafana Cloud account and using the free forever tier plan.
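If you are instrumenting an application yourself, initializing the Faro web SDK is only a few lines. Here is a minimal sketch — the collector URL and app metadata below are placeholders, not values from the talk:

```javascript
// Minimal Faro web SDK initialization — a sketch with placeholder values.
// Install with: npm install @grafana/faro-web-sdk
import { initializeFaro } from '@grafana/faro-web-sdk';

initializeFaro({
  // Placeholder: use the collector URL from your own
  // Frontend Observability app's Web SDK configuration
  url: 'https://faro-collector-example.grafana.net/collect/<app-key>',
  app: {
    name: 'quickpizza',
    version: '1.0.0',
  },
});
```

Once initialized, the SDK automatically captures logs, unhandled errors, and Web Vitals from the browser and ships them to the configured collector endpoint.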
4. Setting up Grafana Faro with QuickPizza
So, when you sign up for Grafana Cloud, there is a product called Grafana Cloud Frontend Observability, which is a hosted service from Grafana that uses the Grafana Faro web SDK under the hood. It contains predefined dashboards, making it easier to get started with Faro.
Now, since I only have a short time for this talk, I'll be demonstrating how you can set up Grafana Faro using the Frontend Observability product, which is included as part of Grafana's free forever plan. To demonstrate how we can set up Grafana Faro, I'll be instrumenting a sample application called QuickPizza. It's available publicly at github.com/grafana/quickpizza, and there's a bunch of detailed information on how to set it up locally.
So now let's jump into the demo. QuickPizza is a very simple application that you can use to generate new and exciting pizza combinations. It can be set up easily using Docker, and we've added a bunch of instructions here that can guide you depending on what sort of tests or what sort of use case you want.
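For reference, spinning up QuickPizza locally is essentially a one-liner with Docker. The image name below follows the repo's README at the time of writing — check github.com/grafana/quickpizza for the current instructions:

```shell
# Run QuickPizza locally on port 3333 (image name may change over time;
# see github.com/grafana/quickpizza for up-to-date setup instructions)
docker run --rm -it -p 3333:3333 ghcr.io/grafana/quickpizza-local:latest
# The app is then available at http://localhost:3333
```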
So I already have the application up and running. And as you can see, it's just a very simple application. There is a pizza please button. If you click that, it will just generate some random and exciting pizza combination. At the same time, there's also an advanced tab and you can set the maximum calories per slice, minimum number of toppings, and the maximum number of toppings. And you can also exclude some different tools.
Now let's imagine that one of your users has gone rogue and typed a very random number into the minimum number of toppings field. When they click pizza please, they still see some pizza recommendations, even though the value we've entered here is clearly not valid. So let's open the developer tools console and see if any errors are being printed.
The first thing I'm going to do is go back to home. When you create your Grafana Cloud account, you're redirected to this landing page, and you should see a Frontend tab here for the Frontend Observability product. I've already created the QuickPizza application here, but you can simply create a new project by clicking create new and providing a name and a CORS allowed origin. This is basically the URL that's allowed to send the frontend data to the Grafana Faro endpoint, so in our case, this is just going to be the localhost port. Once you've done that, if I press edit here, you should see a tab for the web SDK configuration.
The Grafana Faro URL is the URL that I'm talking about. From a QuickPizza perspective, all you need to do is take that URL, export it to an environment variable called QUICKPIZZA_CONF_FARO_URL, and add that to the .env file.
5. Setting up Grafana Faro for QuickPizza
To set up Grafana Faro with QuickPizza, add the Faro URL to an environment variable in the .env file, using the QuickPizza application as an example. Grafana Faro tracks Web Vitals metrics and top errors in real time, providing insights into user experience and identifying issues.
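As a sketch, that .env entry would look something like this — the variable name follows the talk, and the URL is a placeholder (copy the real one from the web SDK configuration tab):

```shell
# .env — QUICKPIZZA_CONF_FARO_URL is the variable named in the talk;
# the URL below is a placeholder, not a real collector endpoint
QUICKPIZZA_CONF_FARO_URL=https://faro-collector-example.grafana.net/collect/<app-key>
```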
So now I'm back on the QuickPizza project as part of my Frontend Observability product. There are a bunch of different tabs here, such as the overview, the different errors, the web vitals tracked over time, and the different sessions that I have done. So let's go back to the overview tab. From a web performance perspective, Grafana Faro uses the Web Vitals metrics to help you understand what's actually happening in your users' real experience. Metrics such as First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, and the other Web Vitals are tracked here in real time, so you can drill down into the different Web Vitals metrics over time as you track your application. If you go to the errors tab, you can see the top errors that your users are experiencing, such as "Unexpected end of JSON input". This was the error message thrown in my console when I provided an arbitrary number in the minimum number of toppings field. Beyond that, you can see which browser and which version the error was found in, so you have a much more investigative view of the different errors your users see across different browser combinations.
6. Reproducing Errors and Improving Testing with K6
To reproduce errors consistently and prevent their recurrence, use a pre-production environment for bug fix verification. Complement lab data with field data to improve testing. K6 is a powerful tool that goes beyond load testing, supporting browser testing and fault injection testing. The Browser module in K6 adds browser automation and end-to-end web testing capabilities, allowing you to simulate browser actions and collect performance metrics.
Now, since we have that error happening in production, we need to be able to reproduce it consistently in our pre-production environment, so that we can verify that whatever bug fix our team implements actually works, and so we can prevent the issue from surfacing again. There might also be other errors happening that you're not aware of, because the controlled tests or lab tests you created as part of your testing strategy weren't able to capture them. These errors can also slow down the performance of your application if not addressed over time.
Now, complementing lab data with field data observations enables this continuous performance testing approach. You can incorporate the feedback you get from field testing to further improve your lab testing. One tool that can help you here is K6. Grafana K6 is a powerful, developer-friendly tool designed and engineered with a focus on load testing, but it boasts capabilities beyond that use case. K6 started off as a load testing tool, but it can now also support browser testing, fault injection testing, and more.
7. Demonstration of Browser Testing with K6
Now, let's jump into a quick demonstration again. Okay, so the first thing that I've done is I've created an empty file called browser.js and then we're going to import the browser module. So to do that, let's quickly add the import statement to use the browser module. And then I'm going to create my default function. So default functions in K6 are basically the code that your virtual user will execute depending on how many virtual users you have. In this case, I didn't really set any virtual user because I just want to see it from a single-user perspective. So let's just create the default function as is.
So, export default function. Once you've created the default function, we need to open up a new page, and this is where the inspiration from Playwright really comes into play, because you'll notice that the methods I'm going to use are quite similar to what Playwright provides, although not 100% identical. So let's create a new page now with browser.newPage(). Once I've opened a new page, let's visit the application, which is running on localhost port 3333. Since the goto method is an asynchronous operation, I need to use the await keyword, so we need to convert this to an asynchronous function. Let me also quickly add page.close(), because this is a mandatory step: if you open a new page, you also need to close it. So this is a very simple first step for our browser test.
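Putting those steps together, a first version of the script might look like the sketch below. It uses the current k6/browser module import — older k6 releases imported it from k6/experimental/browser — and the scenario options are the standard way to tell k6 to launch a browser:

```javascript
import { browser } from 'k6/browser';

export const options = {
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      options: {
        browser: { type: 'chromium' }, // the browser module requires a browser type
      },
    },
  },
};

export default async function () {
  // Open a new page in the browser
  const page = await browser.newPage();
  try {
    // Visit the locally running QuickPizza application
    await page.goto('http://localhost:3333');
  } finally {
    // Closing the page is a mandatory step
    await page.close();
  }
}
```

You would run this with `k6 run browser.js`; k6 then launches a Chromium instance, loads the page, and reports browser-level metrics alongside the usual k6 output.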
8. Mimicking Error Scenario in Browser Testing
The browser test covers the scenario of an error message notification. It mimics typing an invalid value, clicking the pizza please button, and checking if the error message is visible. Using field testing observations as feedback to browser tests is essential.
Note that the error message isn't actually displayed on the application yet, but you can see what the check looks like. If the QuickPizza application suddenly starts showing an error message notification, then at least my browser test will be able to cover that.
OK, so I've added a bunch of other code here. Basically, it mimics the scenario where I type an invalid value, click the pizza please button, and then check whether the error message is actually visible on the page. So let's now try and run that test again. You can see that the test is taking a while, because it's waiting for the error message to appear, which at the moment isn't there. But you get the point in terms of using the observations from your field testing as feedback for your browser tests.
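For reference, the error-scenario version of the test might look like the sketch below. The element selectors are hypothetical — inspect QuickPizza's actual markup and adjust them — and the script assumes the same k6/browser scenario setup as before:

```javascript
import { browser } from 'k6/browser';
import { check } from 'k6';

export const options = {
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      options: { browser: { type: 'chromium' } },
    },
  },
};

export default async function () {
  const page = await browser.newPage();
  try {
    await page.goto('http://localhost:3333');

    // Hypothetical selectors — adjust to match QuickPizza's real markup
    await page.locator('#min-toppings').type('-9999999'); // invalid value
    await page.locator('#pizza-please').click();

    // Check whether an error message notification becomes visible
    const errorVisible = await page.locator('.error-message').isVisible();
    check(errorVisible, {
      'error message shown for invalid input': (v) => v === true,
    });
  } finally {
    await page.close();
  }
}
```

The failing check mirrors the production error Faro surfaced: once the team ships a fix that actually shows an error notification, this test passes and guards against the regression.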