Solving your front-end performance problems can be hard, but identifying where you have performance problems in the first place can be even harder. In this workshop, Abhijeet Prasad, software engineer at Sentry.io, dives deep into UX research, browser performance APIs, and developer tools to help show you the reasons why your Vue applications may be slow. He'll help answer questions like, "What does it mean to have a fast website?" and "How do I know if my performance problem is really a problem?" By walking through different example apps, you'll learn how to use and leverage core web vitals, navigation-timing APIs, and distributed tracing to better understand your performance problems.
A Different Vue into Web Performance
- Intro (1 minute)
- Agenda (2 minutes)
- Motivation (3 minutes)
- Web Application Types (4 minutes)
- Browser Operations (2 minutes)
- Identifying Performance Problems (5 minutes)
- Factors Affecting Frontend Performance (2 minutes)
- Performance and Users (3 minutes)
- Key Takeaways (2 minutes)
- Visual Completeness Example (3 minutes)
- Measuring Performance Data (2 minutes)
- Measuring Asset Size (3 minutes)
- Web Vitals (5 minutes)
- Other Metrics (4 minutes)
- LCP Heuristics (3 minutes)
- Demo: Lighthouse (9 minutes)
- Data Distributions (4 minutes)
- Demo: Performance Timeline API (8 minutes)
- Demo: Sentry (2 minutes)
- Conclusion and Questions (5 minutes)
AI Generated Video Summary
This Workshop on Performance Monitoring covers important topics such as understanding performance and its impact on user experience, measuring asset size and page load performance, analyzing performance data and distributions, using the Performance Timeline API and Performance Observer API, and custom metrics with marks and measures. The workshop also highlights the significance of user perception and performance, the role of Google Lighthouse in performance analysis, and the use of Sentry Performance Monitoring for capturing metrics and real-world insights.
1. Introduction to Performance Monitoring
Hi everyone! I'm Abhijit, a software engineer at Sentry. Today, I want to share some lessons we've learned about performance monitoring and strategies for identifying performance problems in your applications. Feel free to ask questions throughout the session, and we'll have a dedicated Q&A section at the end.
So hi, everyone. Hope you're having a great day. My name is Abhijit. I'm a software engineer at Sentry. Sentry is a tool that helps you monitor the health of your application code. Typically, we've helped you identify the errors and bugs in your application, but more recently, we've moved into performance monitoring, so helping you understand the performance of your applications, whether that's a web server or a web application you built for the browser. And so I want to talk to you a little bit about some of the lessons we've learned rolling out performance monitoring at Sentry, and hopefully give you some strategies for identifying the performance problems in your application. Again, if you have any questions at any time, leave them in the chat, either on the Discord or in the Zoom itself, and I can take a look. But we'll have an explicit section at the end for questions and answers. Sweet.
2. Understanding Performance and Its Importance
We're going to focus on performance when building Vue applications. Understanding your system and measuring performance are important. Performance is an accessibility issue and a competitive advantage. It creates a culture of caring about performance and improves user experience. We'll cover topics at a high level and go into more detail with questions.
And so as a rough agenda, basically we're going to be focusing on performance when you're building Vue applications, which oftentimes run in the browser. So for those of you who are using Vue with Ionic for mobile, or Vue plus Electron, maybe not everything in this talk will apply to you. But hopefully you can take some concepts back to those platforms. And if you want to explore them more, we can chat about it afterwards.
So we're going to focus first on really understanding your system, the app or the website you're building, and its needs, because that's really important when it comes to understanding the performance of your application. We're then going to dive into how we measure things and how we interpret that data, looking at metrics and thinking about lab versus field data, and we'll get into what that means. And then we'll have some small demos, nothing huge. In fact, stuff that you can probably try yourself. We're just going to cover some browser APIs and small tooling. And so all I ask is, if you want to follow along (you don't have to, you can just watch me type as well), have your Chrome DevTools open on a website or web application that you want to start thinking about the performance for.
Sweet. So we're going to start off by asking: we're going to spend a while talking about performance, so why should we care about this? Maybe some of you are starting to think about the performance of your applications, but how can you justify putting the time in to improve performance and work on it? Fundamentally, I like to think of performance on the web as an accessibility issue. It's directly related to user experience. If someone loads a page that you built for your web app, and it's slow, it's unusable, it has unpredictable behavior, it is fundamentally an inaccessible application. It's hard to use. It's not going to be a good experience. And we want to prevent that. We want people to use the stuff we build, and we want people to have a positive experience. Performance is a key part of that. Another really easy point here is that it's obviously a competitive advantage. The example that everybody loves to bring up, if you read any books on this topic, is e-commerce: let's say you're building an e-commerce store, and someone is in checkout. If it takes too long to check out, or it's a bad experience in terms of performance, they will just go to another site and buy from there. So it's obvious that if your site feels fast, loads fast, and is a good experience, people will stay there and use whatever you built. Fundamentally, performance means happy users, which is good for you, the developer who built it, and for them as well. A kind of underrated thing about starting to think about these topics in more detail is that it helps to create a culture of caring about performance, which is important if you're working on this as part of your job, say, as a front-end developer building sites for your company.
You know, if you start paying attention to these topics, you can create a culture of your whole team, your whole company, caring about performance and, you know, that helps improve accessibility, helps probably maybe bring money to your company, only positives here. One thing I do want to note is that we're going to be covering a lot of these topics at a pretty high level because there's a lot to it. There's a lot of kind of academic literature, books written, articles about this subject. And so, we can go into more details as people ask questions. But I'm going to leave this as kind of introduction. So, hopefully, some of you get interested in some of this stuff.
3. Understanding Web App Types and Operations
4. Performance Optimization Considerations
Animations, browser operations, throughput, resource usage, bandwidth, latency, code performance, and how these factors play out on front-end applications versus traditional web servers or databases are all important considerations for optimizing performance. There are many factors outside of your control, such as user hardware, browser, network conditions, and user type. However, the way you write code and design your application can have a significant impact on performance.
I also list animations here. Usually somebody isn't constantly animating something on the page, but a user will interact with something, click a button, and an animation occurs. It's important for that to be performant as well.
And so now we have these operations. Hopefully that's starting to click in your mind. You have your web app type. How can we use both of these to identify performance problems?
And so I'm actually going to take a step back from the front end and think about what is a pretty well researched topic which is like your performance for your back end applications. Let's say you're running a server, like an express server or flask server, what are you typically thinking about? Well, typically you're thinking about throughput and your resource usage. So throughput is kind of the amount of data that's being transferred over a set period of time. And so we can consider that amount of data, we can call that bandwidth, and the amount of time it takes for that data to transfer we can refer to as latency.
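To make those back-end terms concrete, here is a toy sketch (my own, not from the talk) of how throughput falls out of the two quantities the speaker names: the amount of data (bandwidth) and the time it takes to move it (latency). The function name is invented for illustration.

```javascript
// Throughput = data transferred / time elapsed.
// Here we express it in bytes per second from a byte count and a millisecond duration.
function throughputBytesPerSec(bytesTransferred, elapsedMs) {
  if (elapsedMs <= 0) throw new RangeError("elapsed time must be positive");
  return bytesTransferred / (elapsedMs / 1000);
}

// e.g. moving 1 MB in 500 ms works out to 2 MB per second.
```

The same relationship is why reducing either asset size (bandwidth) or round-trip time (latency) improves perceived speed.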
And so if we think about all of these different inputs, there's quite a lot of them, right? For your front-end app performance, I'm going to name some that I thought of, but I'm sure there are many you can name that aren't on here. The user's hardware: the amount of CPU and memory they have access to on their laptop. The user's browser. Those are things that you do not control. They load your site and do whatever they want with it, but you don't control the underlying hardware. The network conditions they have: if they're loading your stuff on a 3G connection, or on a mobile device with a spotty connection, that's very different from a fast wired internet connection, for example. Your app type: we mentioned single-page applications, we mentioned static sites. Whether stuff is cached or not in the browser, and whether stuff is cached or not on the server. So if you're using a CDN to load resources, putting your images in an S3 bucket versus on the server itself, for example. The user type, which we'll go into in more detail; it's important to remember the user type actually has a massive impact on this. And then, of course, there's the developer. The way you write the code, the way you set up your architecture, the way you actually design your application has massive consequences for performance.
5. User Perception and Performance
Performance is tied to user perception and how they interpret loading and actions. Jakob Nielsen's research shows that users perceive 0.1 seconds as instant and 1 second as the limit for flow of thought. Delays up to 1 second are acceptable, but 2-3 seconds become noticeable, and 10 seconds is a barrier to avoid. Understanding users and their usage of your web app is crucial for solving performance problems. Your website is affected by factors beyond your control, such as browsers, user conditions, and dependencies. Performance is relative and depends on user perception and application type. Visual completeness exercises can help evaluate performance.
Cool. And so let's actually talk a little bit about this user stuff. Because at the end of the day, it's really important to note that performance is actually all about people. The two fundamental operations, which was rendering a page, page load, and user interactions are heavily tied to user actions. Someone has to type something into your browser to kind of load the page. Somebody has to make some action to trigger a user interaction. So performance is very tied to how a user feels and how a user perceives what they're using. That does mean, though, that you can technically make a site feel faster even though you don't actually affect the underlying time data. just by looking at user perception and how they're interpreting stuff that's loading, stuff that's happening.
And so, you know, Jakob Nielsen is an expert in usability studies and user research. I heavily recommend his book, Usability Engineering, if you really care about the user side of this stuff. He's done a ton of great research, but the thing I want to point out is his research on response times, which shows that with timing data, it's not that people will notice the difference between half a second and a second, but there are points where people's perception transitions. Around 0.1 seconds is the limit for a user feeling that something is instantaneous. So whether something takes 0.05 seconds or 0.1 seconds, to a user both will feel instant. This is like, I hover over something and it changes color; as long as it happens within around 0.1 seconds, it'll feel instant. Around one second is the limit for a user's flow of thought. People will notice a delay between 0.1 and one second, but as long as it's under one second, they don't mind the delay, because it's understandable, in your head, that you have to pay a cost whenever you do some action. And it's okay to wait a little bit. Like you click to sort a table, for example: you click the top of a column and sort it, and you know that there's probably some computation there running some sorting. Even if you don't know how it works or what it's doing, you probably know that it costs something, so you're fine waiting a little bit. Of course, it'd be great if everything were instantaneous, but that's not really the reality. But people will start to notice if that becomes two or three seconds. And then 10 seconds is the barrier you do not want to cross when building applications, unless it's something really expensive.
Like you're doing like an export, or big operation. And if so, you need to make it very clear that this is going on, and provide some kind of loading indicator, progress bar, anything, so provide some kind of feedback to the user that this is going on. And by paying attention to these times, by paying attention to these timings, you really realize that like, hey, I don't necessarily have to go in and optimize everything, I can change the actual design of my application to feel more performant. And that goes a long way.
So some key takeaways before we move on to actually measuring this stuff and trying it out. I've got three here, and there are more, but these are, I think, the three most important. First, these front-end operations, again, rendering a page and user interactions, are fundamentally user-based. So it's important to understand who your users are and how they're using the web app you've built, because that affects how you look for performance problems and solve them. Second, your website doesn't exist in a vacuum. There are a lot of factors that are out of your control: the browser people are using, the machine they have when they load up your site, the network they have access to, the user conditions, and so on. Even, for example, the services that your site depends on; if you have a back end or a CDN, those are all dependencies that are sometimes just out of your direct control as a front-end developer. So it's important to keep in mind that the performance of your application is affected by all of those dependencies as well. Third, performance is relative. A question people often ask is, what is slow and what is fast? But it's all relative. It's based on how people think about your application, how they're using it, and your application type as well. And I have this little note about visual completeness, because this is an exercise that you can do. I'm going to show off an example webpage, but feel free to look at a page that you built and start thinking about it. So this is a page I'm sure most of you are familiar with: the GitHub profile page. I just screenshotted mine as an example. And so we have all these various components. We have this nice image.
6. Measuring Performance and Asset Size
To determine when a page is visually complete, it depends on user needs and interaction capabilities. Having 70% of the page load quickly can be sufficient. Measuring performance is crucial for understanding the problem space and making comparisons. It helps identify issues, evaluate solutions, and prevent regressions. Performance data on the front end is high cardinality and requires context. Measuring asset size is a simple way to improve performance by reducing bandwidth usage.
We have like a nice sidebar. We have a navigation on the top. We have these pinned repositories component. We have this contributions component.
If we look at this page, how much of this page needs to load before we consider it visually complete? Which means, how much of the page do we load before a user finds it useful? That's hard to answer, right? I feel like a lot of people reach for "it depends." And it really does. It depends on what the user is looking to get out of the page. The way I interact with my profile page versus another person coming onto my profile is different. And then priorities will change depending on what a user wants to get out of the page at a given moment.
So, some people might be saying, oh, 100% of the page being there means that it's visually complete. But that's not often good enough. Sometimes it also depends on whether the user can interact with the page or not. For example, I'm sure we've all had experiences where we load a page and a button doesn't work yet, because it has to load a bunch of stuff before it gets enabled. And so both the ability to interact with the page and to see the page, whatever's in the viewport, is important. And it's important to understand what you think is important for your user. Maybe only having 70% of the page there is good enough, and that's your goal: if 70% of the page renders really fast, then you know that the page is usable, you know that things are there. And that's your indicator. You don't have to worry about every single DOM element being there and interactable.
So again, I know it's confusing. It's like, oh, I have to decide all of this. And as a developer, you really do; you have to build this understanding. But don't worry, there are some heuristics you can use. Plenty of people have put research into this subject. So let's start going into that. Let's start covering how we can measure this stuff, so that we can actually tell if something is slow or fast, and we can start defining what it means to be a performant app. So, measuring stuff (my slide has quantitative and qualitative swapped here, and I'll cover what that distinction means afterwards) is really important because it gives you an indication of your problem space. It also helps you make relative comparisons, so you can start comparing pages, you can start comparing two different apps, you can start comparing yourself with your competitors, for example.
And so that kind of understanding. And it also helps kind of understand possible solutions and how effective they are. So for example, you identify that, hey, it takes a really long time for this page to load, and it's because this metric is really poor. Well, let's say you tried to make a fix, you can kind of look directly at the impact of that metric to know how effective your fix was. And once you start fixing stuff with these metrics, you can also guard against potential regressions. So if you're keeping track of these metrics on a consistent basis, you can then prevent situations where, for example, you ship a new feature on your app, and suddenly your app is super slow. And maybe you don't realize it, because you're just so excited about whatever you built. But then you take a look and you see, oh, my user defined metrics like the web vitals I'm using, they're all down. This is really unfortunate. It's also really important to understand that when you start measuring stuff, fundamentally performance data on the front end is pretty high cardinality. And that's related to remember all of the inputs we talked about, like the network conditions, the hardware, the browser type, the type of user. It means that you can't just put like a single number oftentimes on things. You have to attach a lot of context on what is happening or what are the conditions when I'm recording this specific performance data, because that helps you put things into context. And so remember this cardinality stuff, we'll get back to it.
7. Measuring Asset Size and Page Load Performance
And there's plenty of ways you can do this. I'm not going to go into it for too much detail. We can cover it more in depth if anybody's curious. But oftentimes just measuring your size and being aware of it is really important.
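One of the ways to measure asset size right in the browser is the Resource Timing API: each resource entry carries a `transferSize` (bytes over the wire). Here's a hedged sketch; the helper name is mine, and note that `transferSize` can be 0 for cache hits or cross-origin resources without a `Timing-Allow-Origin` header.

```javascript
// Sum the over-the-wire bytes of a list of resource timing entries.
function totalTransferBytes(entries) {
  return entries.reduce((sum, e) => sum + (e.transferSize || 0), 0);
}

// In a browser you would feed it the page's real entries:
// totalTransferBytes(performance.getEntriesByType("resource"));
```

Logging that total on page load, or asserting a budget on it in CI, is a cheap way to stay aware of how much you're shipping.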
Yeah. So on asset size, yeah, I think this is being recorded, and that'll be shared afterwards. We can also chat about it in more detail on Discord if you have questions. Yeah, no problem. Cool. Let's move on. Okay, you're now measuring your assets. What else can you do? And remember, there's this idea of latency. There's this idea of the page load operation, how long it takes to render a page. And so some industry experts, really smart folks, have spent some time building metrics, some of which are even built into the browser itself, for how they think you should measure page load and page load performance. These are heuristics in a way, but they're pretty good, and they're tried and tested. Some of these are web vitals, which hopefully some of you have heard about: the idea that when you load a page, you can measure the duration of different things on the page according to the things you prioritize. Three of these, CLS, LCP, and FID, are considered Core Web Vitals by the Google Chrome team, which helps develop this stuff. But I included First Paint and First Contentful Paint here, because I feel they're just as important as the other three in understanding the performance of your web applications. Specifically, this is for when you render a page; we'll talk about metrics for user interactions afterwards. And so First Paint and First Contentful Paint, as you can see from the image here, measure the amount of time it takes to actually load something for the first time in the viewport. First Paint is for the first pixel, and First Contentful Paint is for the first content, where content is defined as stuff like images or text blobs. If something renders in the DOM, then these will fire. Usually, First Paint and First Contentful Paint will be the same thing.
8. First Paint and First Contentful Paint
First Paint fires when something is painted on the page, but not enough to be considered content. For example, animations that require expensive computations can cause a discrepancy between first paint and first contentful paint.
And so that's why they're usually bundled together, but the cases where they're not are really interesting. Oftentimes First Paint fires because something is painted on the page, but it's not enough to be considered content, because it's not a full DOM node or a full image or something. And that might be because, I'm trying to think of some examples, but for example, let's say you're firing animations on the page, but computing the animations is really expensive for some reason. So you're calling requestAnimationFrame and doing some computation to run the animation, and that's leading to a big discrepancy between the first pixel paint, your First Paint, and your First Contentful Paint. So usually they're the same and you don't have to worry about it, but there are some interesting scenarios where they're different.
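Both paints surface in the browser as `"paint"` performance entries, so you can compare them directly. The pure helper below is my own sketch; the commented observer usage at the bottom is browser-only.

```javascript
// Pull First Paint and First Contentful Paint out of a list of "paint" entries.
// Either field may be undefined if that paint hasn't happened yet.
function paintTimings(paintEntries) {
  const byName = Object.fromEntries(paintEntries.map(e => [e.name, e.startTime]));
  return {
    firstPaint: byName["first-paint"],
    firstContentfulPaint: byName["first-contentful-paint"],
  };
}

// Browser usage sketch (buffered so you catch paints from before the observer ran):
// new PerformanceObserver(list => console.log(paintTimings(list.getEntries())))
//   .observe({ type: "paint", buffered: true });
```

If the two numbers diverge by a lot, that's exactly the "painted something, but no content yet" scenario described above.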
9. Performance Metrics and Their Significance
Then we have Cumulative Layout Shift, which, unlike a lot of the other stuff we're going to talk about, is not actually a duration; it's just a score. And there's a way to calculate this, with some great documentation on it that you can look up. But it basically keeps track of the extent to which your layout shifted as your page was loading. We've all probably had this experience: we loaded a page up, we were ready to click on a button, and then suddenly the button moves down because something else loaded above it, and we click on the wrong thing. That's a frustrating experience, right? So it's not a duration related to latency, but it is related to the stability and the reliability of your page, which I think is fundamentally part of performance. We want a consistent and reliable experience for our users, right? So try to keep your CLS at zero. There are many strategies to do this: skeleton components, and so on.
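As a rough illustration of how that score comes together: layout-shift performance entries each carry a `value`, and shifts caused by recent user input are excluded. This is a simplified sketch of my own; the real metric additionally groups shifts into session windows (see the web.dev documentation), so treat it as the gist, not the spec.

```javascript
// Simplified CLS: sum the value of layout-shift entries that weren't
// triggered by recent user input (those don't count against you).
function cumulativeLayoutShift(layoutShiftEntries) {
  return layoutShiftEntries
    .filter(e => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// Browser usage sketch:
// new PerformanceObserver(list => console.log(cumulativeLayoutShift(list.getEntries())))
//   .observe({ type: "layout-shift", buffered: true });
```

The shift caused by the user's own click is ignored, which is why `hadRecentInput` matters: only unexpected movement hurts the score.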
Then we've got Largest Contentful Paint. Where First Contentful Paint measures the rendering time of the first content on the page, LCP, based on how the metric is defined, measures the render time of the largest content to appear in the viewport. The viewport is important because it's the part of the screen that the user can see. The largest content can vary; usually it's an image or SVG or a large text blob. They pick the largest element because it's associated with being the most visually defining element, meaning it contributes most to making the page look visually complete and can be considered the main content. Now, LCP isn't perfect. It's a heuristic. The browser has to guess what the largest element is, and sometimes the largest element isn't even the element that you care about. So the browser might be choosing the element correctly, but that's not the element you care about for visual completeness on the page, and we'll go into that.
10. LCP, Timing Elements, and Google Lighthouse
LCP is a heuristic that can pick the wrong elements. For more in-depth analysis, timing the elements you care about is recommended. You can pseudo-time elements in Vue by hooking onto lifecycles. Google Lighthouse is a great tool for running reports on web pages. It can be run standalone or within the Chrome browser. You can run it on any website and generate a report. The web.dev site is a great resource to learn more about web development.
Like there's way more content to be loaded afterwards, right? And so this is a case where LCP is a heuristic, and it just picks the wrong elements. The algorithm will attempt to fix itself: if it notices a larger or more important element, it'll recalculate and redefine the LCP, which is why, if you actually look at the metric itself, there are intermediate and then final LCP values. But it could still choose wrong. And so this is why, even though it's an important metric, a great heuristic, and a great way to get a gut check on how your page is doing, for more in-depth analysis, or if you really are knowledgeable about your user flows and what your users care about on your page, I recommend actually just timing the elements that you care about the most.
And so there's an Element Timing spec, which is in beta, I think, which is really great; it lets you time individual HTML elements, but it doesn't have a lot of browser support. The great thing about building stuff out with a framework like Vue, though, is that you can pseudo-time things yourself by hooking onto the Vue lifecycles, mounted and unmounted and so on, and just start timing the stuff you care about: how long does it take for this component to mount on a page, when does this component mount relative to others? Thinking about those things is really great because it lets you define what your most important elements are, for the page to be usable, to be visually complete, to have the best user experience, and start timing that. And it's not that expensive, right? You just collect the timestamp. And there are ways to do this, and we'll go into that afterwards.
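One low-cost way to do that pseudo-timing is the User Timing API (`performance.mark`/`performance.measure`), which also makes the timings show up in DevTools and in tools that read the performance timeline. The helper below is my own sketch, framework-agnostic; in a Vue 3 component you might call `startComponentTimer` in `onBeforeMount` and `endComponentTimer` in `onMounted`. The mark and measure names are invented.

```javascript
// Start timing a named component: drop a mark at the current time.
function startComponentTimer(name) {
  performance.mark(`${name}:start`);
}

// Finish timing: drop the end mark, record a measure between the two marks,
// and return the measured duration in milliseconds.
function endComponentTimer(name) {
  performance.mark(`${name}:end`);
  performance.measure(`${name}:mount`, `${name}:start`, `${name}:end`);
  const entries = performance.getEntriesByName(`${name}:mount`);
  return entries[entries.length - 1].duration;
}
```

Because measures are named, you can later compare "when did the sidebar mount relative to the main content" straight from the performance timeline, which is exactly the kind of app-specific visual-completeness signal LCP can't give you.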
Cool. So let's actually try to see this stuff in practice by looking at Google Lighthouse, which is a really great tool. You can run it standalone, but it's actually built into the Chrome browser as well. It allows you to run reports on various pages, and you can run this on whatever website you like and take a look, maybe the stuff that you're building. As an example, though, I'm actually going to go to the Vue docs. Let's go to the version 3 docs. What's a nice, pretty complicated instance? How about computed properties? Pretty complicated. It seems like there are ads, headers, complicated components, a lot of text, and various things. No images, but I think that's fine to represent a pretty good page. And so if you have your Chrome DevTools open, you can go to Lighthouse. I'm going to run this for desktop. For now, I'm just going to check all of the categories. It's important to note that you can simulate different things here, but let's generate the report. This is synthetic data, and it's gathering information about the page. You can be doing this yourself at the same time; I really recommend running this on the applications that you care about. A nice thing here is that it gives us a progress bar, even if it doesn't seem to be moving, but at least, you know, that's some feedback, so we know it's doing something. Last time I ran this, it didn't run this long, so now I'm wondering if something's up with the performance of Lighthouse. A great consequence of running demos live. And I'm not sure why it's taking so long. Well, that's unfortunate. So let's show off what this looks like anyway. This is usually what a Lighthouse report will look like. All right, let's cancel and try to run it on mobile.
Maybe that'll fix it. But anyway, Lighthouse is a great thing to run, if it works. I guess it didn't. Oh yeah, you can use web.dev/measure as well. Actually, why don't we give that a try? It's just an example. The web.dev site itself is really great; it's a great resource to learn more about the web in general.
11. Analyzing Performance Data and Distributions
But a thing to note with Lighthouse is that it is one run, on your personal machine, on an up-to-date browser, on probably solid network conditions (if you're a developer, hopefully you have pretty solid internet access), with your persona, whatever user you're signed into, whatever resources you have access to. And as we discussed previously, the permutations of how people interact with your web applications are huge: multiple user types, plenty of different browsers and browser versions, plenty of different user hardware, different kinds of laptops. So even though this one lab test may be pretty good for setting the tone, it doesn't tell the whole story behind your performance. And this is why you should be collecting data not just from synthetic experiments, where you're running stuff on Lighthouse and collecting one-off metrics; you should be collecting data in the field as well. Chrome has a Web Vitals library, for example, that a lot of people use under the hood. It's really great, and I actually recommend checking it out: github.com/GoogleChrome/web-vitals. You can even look to see how something like LCP is calculated. And if you want to use, say, LCP but change the definition slightly for your app, because you know it would measure your performance better, you can do that. You can take this code and adjust it slightly, and you can do that for any of the metrics this Google Chrome Web Vitals library is using. This is actually what Sentry uses for its performance monitoring; it just uses this library under the hood. And if you start collecting this information for every page, you can start creating distributions of data and getting more confident in it. And these distributions matter.
So as an example, the image on the right is a screenshot I took from Sentry, the product I work on, to help show this off. You can see that we collect the average, which is important, but averages don't tell the whole story: the percentiles and the distribution in the histogram also matter. For example, say we're measuring LCP with this Chrome Web Vitals library on every page, as our users are actually using the app out in the field, in the wild, not in some simulated synthetic experiment, but real users in whatever scenarios they're in. The histogram will tell you a lot of stories. If most of the mass sits toward the low end, things are probably more positive than negative. If there are spikes in the outliers, there may be some conditions leading to those spikes. If instead of a unimodal you have a bimodal distribution, meaning two spikes in the distribution, you probably have two common scenarios that users are hitting for some reason, leading to very different performance results on average. By examining your data this way, as a distribution, you can get more confident in it, tell better stories, and start identifying problems, rather than just looking at a hard number and saying, oh, that's good or bad. Even though hard numbers can be gut checks: if your page takes 10 seconds to load, that's problematic and you need to fix it.
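As a tiny illustration of the aggregates behind such a histogram, here's a sketch of computing a percentile (say p75) from a batch of collected LCP samples. This uses the simple nearest-rank method; real monitoring backends compute this over far larger, streamed datasets, so the sample values below are made up.

```javascript
// Sketch: nearest-rank percentile over collected metric samples,
// the kind of aggregate (p50/p75/p95) shown next to a histogram.
function percentile(values, p) {
  if (values.length === 0) return NaN;
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical LCP samples, in milliseconds
const samples = [900, 1200, 1500, 2100, 4800, 10400];
percentile(samples, 75); // → 4800
```

Notice how the p75 of 4800 ms tells a very different story than the average would, which is exactly why the distribution matters.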
Another really cool thing about collecting your data this way in the field, running libraries and collecting it right off the browser, is that you can now bring in all of this cardinality: you can ask, what does performance look like for 3G connections versus regular connections? You filter for that and you can see, oh, I have a ton of people using my application on 3G, or on a mobile browser like Chrome on Android or Safari on iOS; what is the consequence of that? Maybe for typical users on Chrome on a MacBook, the performance is really great.
12. Slicing and Dicing Performance Data
The site loads really fast. LCP is wicked fast. But for users with different conditions, it's poor. Slicing and dicing performance data helps solve their problems specifically. You can send data to any analytic sink or self-host and use histograms to make decisions.
The site loads really fast. LCP is wicked fast. It looks really great. But for the users who don't have access to that kind of hardware, or who are on older browsers, it's really poor. By slicing and dicing your performance data that way, you can focus in on that subsection of users with different conditions and try to solve their problems specifically, and that goes a long way. Yeah, thank you for linking that, I appreciate it: Lucas linked the Chrome Web Vitals library in the Zoom chat. I just showed off Sentry as an example tool, but you can send this data to any analytics sink you want. You can even self-host, say, a Prometheus instance, collect the metrics data yourself, and visualize and render it any way you like. I think it's just important to use things like histograms and collect this data in aggregate so that you can start making decisions on it.
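As a hedged sketch of what slicing and dicing requires at collection time, you can attach context tags to each metric before sending it. The field names and the /metrics endpoint below are hypothetical, and navigator.connection and navigator.deviceMemory are Chromium-only APIs, so they're read defensively.

```javascript
// Sketch: tag each metric with user conditions so it can be sliced
// later (by connection type, browser, device). Field names and the
// '/metrics' endpoint are hypothetical.
function metricContext() {
  const nav = globalThis.navigator ?? {};
  return {
    connection: nav.connection?.effectiveType ?? 'unknown', // '4g', '3g', ... (Chromium-only)
    userAgent: nav.userAgent ?? 'unknown',
    deviceMemory: nav.deviceMemory ?? null, // approximate RAM in GB (Chromium-only)
  };
}

function buildMetricPayload(name, value) {
  // In a real app you'd then do something like:
  //   navigator.sendBeacon('/metrics', JSON.stringify(payload))
  return { name, value, ...metricContext(), ts: Date.now() };
}
```

With tags like these on every sample, filtering "LCP for 3G users on mobile Safari" becomes a simple query on the sink side.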
13. Using the Performance Timeline API
The performance timeline API allows you to see performance-related metrics in your application. You can use the performance singleton API to list all performance entries, including information about script loading, network requests, and timing. You can filter and monitor this data to understand the impact of specific scripts or third-party tools like Google Analytics. By using the performance observer API, you can observe specific performance entries and track them over time. This user-defined code allows you to gather and analyze performance data in your application.
So those are some pretty standard metrics. Let's say you want to start defining this stuff yourself: what can you do? Well, there's the Performance Timeline API. To make this easier, I'm going to walk through the docs, because I feel that's a great way to explain it, and you can see for yourself.
So, the Performance Timeline API; coincidentally, I've searched for this before. These are browser APIs that you can use to see performance-related metrics in your application. I want to show an example, so I'm back in the Vue docs here. I'm going to go into the console and clear it; you can run this on any application you're interested in. We have the global performance object, and if we want to list all of our performance entries, which are part of this Performance Timeline API, we can call performance.getEntries(). We can see all of this great performance-related information, these performance entries; there are 211 of them. A lot of these are resources, which means they're scripts, so we can see the name of each script and a bunch of great information about how long it took to load, how long it took to fetch, and how long the lookup took, so you can actually see the network-related information as well. There's the first paint time, and there's the first contentful paint time; we can see they're basically the same here. We can see that there's an XMLHttpRequest too, so any HTTP request, whether XMLHttpRequest or fetch, shows up; I can see it loads in some CSS and some scripts. All of this you can filter for and grab yourself: you can grab this performance data and this timing, and you can decide how best to use it. This is how you can get the information you see in your Lighthouse reports and in your browser's network tab, through the Performance Timeline API. You can start grabbing these entries, looking at them, and monitoring them.
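For instance, here's a minimal sketch of filtering the timeline for slow resource entries; the 200 ms threshold is an arbitrary example, not a recommendation.

```javascript
// Sketch: filter the Performance Timeline for slow resources
// (scripts, CSS, images, fetch/XHR). The threshold is arbitrary.
function slowResources(thresholdMs = 200) {
  return performance
    .getEntriesByType('resource')
    .filter((entry) => entry.duration > thresholdMs)
    .map((entry) => ({
      name: entry.name,                     // the resource URL
      duration: Math.round(entry.duration), // total load time in ms
      initiatorType: entry.initiatorType,   // 'script', 'link', 'fetch', ...
    }));
}
```

Run in a browser console on a loaded page, this gives you roughly the same picture as sorting the network tab by duration, but as data you can collect programmatically.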
So, for example, let's say you have scripts that you really care about, a couple of third-party scripts that you know will load on every page, and you really care about how long they take to fetch and actually load on the page. You can track those specifically over time; maybe that will help you understand the impact of adding Google Analytics, for example. And the nice thing is, this is user-defined, just code you write, so you can run it in your application and grab all of this yourself. You can see that I called this after the page loaded to see all the performance entries. If there are performance entries you care about specifically, you can use performance observers to observe them. So let's actually write a snippet. There's a PerformanceObserver API you can look up the docs for. It takes in a function that gets called every time something is observed, and that function receives a list object and the observer. I'm not going to go into too much detail, because you can get the idea here, but you can call list.getEntries().
14. Performance Observer API
The Performance Observer API allows you to observe specific types of entries, such as scripts, and gather information related to them. You can process and aggregate the data in any way you want, and even send it to an analytics sink. I recommend reading up on the Performance Timeline API for more details.
And you can see this is the same API as performance.getEntries(): it gives a list of timeline entries, and that list is just an array, so you can do whatever you want with it. You can process the entries however you like, entries.forEach and so on: send them over to some analytics sink you have somewhere, make decisions based on them, aggregate them, anything. And the way you observe is to say, I want to observe a certain type of entry. So for the entry type, let's look at some examples; say I want to observe all of my scripts and get all of the information related to them. That's the PerformanceObserver API. I'll link the Performance Timeline API docs; I really recommend you read up on it and take a look if you're interested.
15. Custom Metrics with Marks and Measures
You can emit arbitrarily defined timestamps with performance.mark and durations with performance.measure. These tools allow you to measure your own metrics beyond the defaults provided by Web Vitals. Start defining your own metrics using the Performance Timeline API, marks and measures, or sets of scripts and CSS files that you care about.
So other than the built-in performance entries, there are also marks and measures. If you want, you can emit arbitrarily defined timestamps using just what the browser gives you: there's performance.mark, and you can see the details for it here. As an example, let's try it; we don't have a huge amount of time, but okay, the reason this failed is that you have to give it a name. I gave it the name "mark", and it emitted the time since the navigation start of the application. Using performance.mark is a way to start emitting your own metrics arbitrarily, on whatever you care about. Let's say your Vue component mounts on a page and you emit a performance.mark; you can collect that later with a performance observer that's listening for marks, and so on and so forth. So there are a lot of tools here to measure your own metrics aside from the Web Vitals defaults we've talked about, like LCP and FID. If you're just getting started, I would begin with LCP and FID, for example, and these Web Vitals: measure those first and get some confidence. Then, as you want to explore how users are using your application and dig into its performance problem space, start defining your own metrics, things that you think are important, using the Performance Timeline API, marks and measures, or, you know, sets of scripts that you especially care about, CSS files that you especially care about, and so on and so forth.
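For example, here's a minimal sketch of timing a component mount with a pair of marks and a measure; the names "mount-start", "mount-end", and "component-mount" are hypothetical, and you'd pick whatever fits your app.

```javascript
// Sketch: time a chunk of work with two custom marks and a measure.
// The mark and measure names here are hypothetical.
performance.mark('mount-start');

// ... the work you want to time, e.g. mounting a Vue component ...

performance.mark('mount-end');
performance.measure('component-mount', 'mount-start', 'mount-end');

// The measure is now a regular timeline entry, so a
// PerformanceObserver listening for 'measure' entries (or a later
// getEntriesByName call) can pick it up.
const [measure] = performance.getEntriesByName('component-mount');
console.log(measure.duration); // elapsed milliseconds between the marks
```

Because marks and measures land in the same timeline as everything else, they flow through the same observers and reporting code you already have for the built-in entries.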
16. Sentry Performance Monitoring and Metrics
Sentry interprets performance data by sending transactions and capturing metrics like FCP and resource loading times. These metrics can be filtered based on tags, allowing you to understand performance for specific user types or conditions. While Lighthouse provides more detailed information through lab data, instrumenting your app with performance observers and using libraries like Sentry or Web Vitals can give you real-world insights. Feel free to reach out on Discord for further assistance or to discuss specific APIs and use cases.
I actually want to show off a little bit at the end; I know we're a little over time, but I want to show what Sentry does. I'm not going to walk through a whole example of adding Sentry to an application, but I can show off what it looks like. So we'll go to the Vue project here, the performance tab; oh, actually, I think it's in my test project. The way Sentry interprets this performance data is by sending transactions. We can see here that only the FCP was captured, because it's a relatively simple app, but we can see a nice waterfall of the information that's collected: the browser-related stuff, how long each resource takes to load, and then the marks and measures that are emitted, which can be user-defined, are also captured here. This helps you put it all into context, and you can even filter based on these different tags. That's how Sentry thinks about performance and tracks it, with an emphasis on being able to filter on tags and understand the cardinality of the data. So you can say, okay, I only want Firefox users, to understand what their performance is and how long their pages take to load; or I only want users from a specific set of IPs; or, it's not listed here now, but users with a specific connection type. If it's a 3G connection, it will be listed here. So that's how Sentry thinks about these problems. But in reality, once you start collecting this information, these vitals, these metrics, it's really up to you: there are a lot of ways to explore, to collect data, and to make decisions on this stuff. And that's pretty much all I have to cover. I'll take a look at Discord to see if there are any questions.
But feel free to also put questions in the Zoom chat; I have some time. I'll wait a couple of minutes in case people need to collect their thoughts, since I know I was just speaking continuously for a long time. Hopefully this serves as a good introduction to these topics so that you can start diving deep into the things you find interesting. Cool, and I'll also be sticking around on the Discord afterwards. I think the recording will be provided at a later time; I don't have full details on the logistics, but I believe it will be shared as well. Great, thank you very much. I'll be on the Discord if anybody has questions or wants to dive deep into specific APIs or specific circumstances, people using this stuff in creative ways. We see some pretty interesting use cases of understanding performance and diving deep into these problems at Sentry, so I'm more than happy to walk through things with you as well. Cool. We'll wait. So we got a question on Discord: does Sentry provide performance metrics similar to Lighthouse? It's actually a great question. It's important to note that Lighthouse is lab data. It's simulated, which means it can set up your app in whatever conditions it wants and measure everything, and that's usually not possible out in the field, where someone just visits your site and you measure what you can. So there's less information you can get out of just instrumenting your app with performance observers and the Web Vitals library, or with Sentry, and trying to grab information. Lighthouse and synthetic data will give you a lot more detail because you have full control over the environment. But it's also important to note that some of these APIs aren't available in every browser.
Some of these APIs don't work the same in every browser; they're in various stages of development or rollout, so something might behave slightly differently in Firefox than in, for example, Chrome. That's why I recommend using a library like the Sentry SDK, which I helped build, or Web Vitals, which I showed off earlier. So yeah, cool, I'm going to end there. Any other questions? I'll type in a proper response. If you have any other questions, please reach out on Discord, in the October 19th web performance channel. I'll be more than happy to help you folks out. Thanks, everybody. I hope you have a great one.