A Different Vue into Web Performance


Solving your front-end performance problems can be hard, but identifying where you have performance problems in the first place can be even harder. In this workshop, Abhijeet Prasad, software engineer at Sentry.io, dives deep into UX research, browser performance APIs, and developer tools to help show you the reasons why your Vue applications may be slow. He'll help answer questions like, "What does it mean to have a fast website?" and "How do I know if my performance problem is really a problem?". By walking through different example apps, you'll be able to learn how to use and leverage core web vitals, navigation-timing APIs, and distributed tracing to better understand your performance problems.


So, hi everyone, hope you're having a great day. My name is Abhijeet. I'm a software engineer at Sentry. Sentry is a tool to help you monitor your application's code health. Typically, we've helped you identify the errors and bugs in your application, but more recently we've moved on to performance monitoring: helping you understand the performance of your applications, whether that's a web server or a web application you built for the browser. So I want to talk to you a little bit about some of the lessons we've learned rolling out performance monitoring at Sentry, and hopefully give you some strategies for identifying the performance problems in your application. If you have any questions at any time, leave them in the chat, either on the Discord or in the Zoom itself, and I can take a look. But we'll have an explicit section at the end for question and answer. Sweet. As a rough agenda, we're going to really be focusing on performance when you're building Vue applications, which are oftentimes in the browser. So for those of you who are using Vue plus Ionic for mobile, or Vue plus Electron, maybe not everything in this talk will apply to you, but hopefully you can take some concepts back to those platforms, and if you want to explore them more, we can chat about it afterwards. We're going to focus first on really understanding your system, the app or the website you're building, and its needs, because that's really important when it comes to understanding the performance of your application. We're going to dive into how we measure things and how we interpret that data, looking at metrics and thinking about lab versus field data, and we'll get into what that means. And then we'll have some small demos, nothing huge.
In fact, stuff that you can probably use yourself. We're just going to cover some browser APIs and small tooling. And all I ask, if you want to follow along (you don't have to, you can just watch me type as well), is to have your Chrome DevTools open on a website or web application that you want to start thinking about the performance of. Sweet. So we're going to start off by thinking: we're going to spend a while talking about performance, so why should we care about this? Maybe some of you are starting to think about the performance of your applications, but how can you justify putting in the time to improve performance and work on it? Fundamentally, I like to think of performance on the web as an accessibility issue. It's directly related to user experience. If someone loads a page that you built for your web app and it's slow, it's unusable, it has unpredictable behavior, it is fundamentally an inaccessible application. It's hard to use. It's not going to be a good experience. And we want to prevent that. We want people to use the stuff we build. We want people to have a positive experience. And so performance is a key part of that. The other really easy point here is that it's obviously a competitive advantage. The example that everybody loves to bring up, if you read any books on this topic, is e-commerce: let's say you're building an e-commerce store, and if it takes too long to check out, or checkout is a bad experience in terms of performance, somebody will just go to another site and buy from there. So it's obvious that if your site feels fast, loads fast, and is a good experience, people will stay there and use whatever you built. Fundamentally, performance means happy users, which is good for them and for the developer who built it.
A kind of underrated thing about starting to think about these topics in more detail is that it helps to create a culture of performance, of caring about this kind of stuff, which is important if you're working on this as part of your job; let's say you're a front-end developer building sites for your company. If you start paying attention to these topics, you can create a culture where your whole team, your whole company, cares about performance, and that helps improve accessibility and probably brings money to your company. Only positives here. One thing I do want to note is that we're going to be covering a lot of these topics at a pretty high level, because there's a lot to it. There's a lot of academic literature, books, and articles written about this subject. We can go into more detail as people ask questions, but I'm going to keep this as an introduction, so hopefully some of you get interested in this stuff. So I think one of the most important things to understand first, when it comes to looking at the performance of your web application or website, is your web application type: what kind of app are you building in the first place? Because that varies. I assume most of you, if you're here, are big fans of Vue, so you're probably building single-page applications: you have an empty document, a big JavaScript bundle, and some CSS. You send that to the browser; that's what happens when somebody loads your page. And then the JavaScript takes care of rendering DOM elements onto the page however you see fit. The other end of this spectrum is not client-driven at all: completely static pages, just HTML with maybe minimal JavaScript for a few interactions here and there. That's typically what you'd see for a marketing site or things along those lines.
But the reality is that modern applications often don't fit into these two buckets. They're not purely static or purely a single-page application, even though some of them are. A lot of them nowadays are hybrid. They'll have some pages that are purely static and some pages that are server-side rendered. If you're using a framework like Next, which I highly recommend people take a look at, pages will be rendered on the server, but then when they're loaded client-side in the browser, the client-side code will run and hydrate, and more things will render on the page. Some apps are a mixture. Let's say I'm using Laravel: maybe some of my pages will just be plain Blade templates, where I'm just rendering the HTML, and for some of my pages, say I'm building an admin portal or something, maybe I want that to be a single-page application, so I'll render just a plain document and a big JavaScript bundle for that. This is really important to understand because each of these has its own trade-offs when it comes to the performance of your app. Just as a quick example, and I don't want to go into too much detail: if you have a static application, it's important to understand that each navigation to a route means that you have to load a new document from the server and render new HTML every single time. This also means that you have to execute any JavaScript again and again. So if you have third-party scripts, like Google Analytics, stuff like that, you pay the cost of instantiating them every single time you navigate to a page. Meanwhile, for a single-page application, you only pay that cost once, and the resulting navigations are all client-driven, so the only cost you deal with is grabbing new data and then letting the JavaScript on your front end take care of the rest.
You update your Vuex state, you pass the data into your Vue components, and it will just re-render. So I think we have a pretty good understanding, hopefully, of the web app types, and hopefully you're now thinking, okay, this is the web app type I have and that I care about for my Vue application. So let's move on to the operations that we care about here, because that's also important. It's not just about what kind of app you have, but what users are doing with it. The two primary operations I identify here are what it takes to render a page, and your user interacting with the page. Now, there are other operations, but these are probably the most important things that you should pay attention to when thinking about the performance of your Vue applications on the front end. The first one, rendering a page, is pretty simple. You navigate to a URL, the browser does all the fancy network stuff, DNS, the TCP handshake, you get the assets back, maybe that's HTML, maybe that's JavaScript, and it loads in the browser. For single-page applications, it's not a hard-refresh page load every time. Like I mentioned before, you have navigations where the client just controls the history of the application. If you're using something like Vue Router, it's just pushing and popping state and updating that way. And this is really important: this is what people identify the most with when thinking about performance on the front end. How long does it take for me to load a page? This is what stuff like SEO cares about, it's what a lot of the metrics that we're going to talk about later care about, and it's probably what you're going to spend a lot of your time optimizing for. But user interactions are still important.
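As an aside on those client-driven navigations: because they aren't full page loads, the browser's built-in navigation timing won't capture them, but you can sketch your own timings with the User Timing API. This is a minimal sketch; the route name and where you call these helpers (for example, from Vue Router's beforeEach/afterEach hooks) are assumptions, not something prescribed by Vue Router itself.

```javascript
// Sketch: timing a client-side (SPA) navigation with performance.mark/measure.
// In a real app you might call markNavigationStart in a router beforeEach hook
// and markNavigationEnd once the new route's components have rendered.
function markNavigationStart(routeName) {
  performance.mark(`nav-start:${routeName}`);
}

function markNavigationEnd(routeName) {
  performance.mark(`nav-end:${routeName}`);
  performance.measure(
    `navigation:${routeName}`,
    `nav-start:${routeName}`,
    `nav-end:${routeName}`
  );
  // Read the measure back out of the performance timeline
  const [measure] = performance.getEntriesByName(`navigation:${routeName}`);
  return measure.duration; // milliseconds spent in the client-side navigation
}

markNavigationStart('profile');
// ...router resolves components, data loads, page renders...
const duration = markNavigationEnd('profile');
console.log(`navigation:profile took ${duration.toFixed(1)}ms`);
```

A nice side effect of using marks and measures is that they show up in the Performance panel of Chrome DevTools, right alongside the browser's own timing data.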
And actually, in web performance as a topic in general, user interactions are not as investigated as the page-rendering stuff, but they're still a concern. When I talk about user interactions, that's things like listening for user events: a keydown, a mousedown, your user filling out a form, a user opening a modal, all of these micro-interactions. It's important to care about their performance. I also list animations here. Usually a user isn't directly animating something on the page, but they will interact with something, click a button, and an animation occurs. It's important for that to be performant as well. So now we have these operations. With that starting to click in your mind, you have your web app type; how can we use both of these to identify performance problems? I'm actually going to take a step back from the front end and think about what is a pretty well-researched topic, which is performance for your back-end applications. Let's say you're running a server, like an Express server or a Flask server. What are you typically thinking about? Well, typically you're thinking about throughput and your resource usage. Throughput is the amount of data that's being transferred over a set period of time. We can call that amount of data bandwidth, and the amount of time it takes for that data to transfer we can refer to as latency. And then of course resource usage: you're running a web server, maybe on a VPS, and you care about memory and CPU and things like that. So let's try to take those concepts, throughput and resource usage, and map them to the front end, because that's what we're talking about right now. If we think about bandwidth, which is the amount of data, that's easy, right?
We're sending data to the browser for it to execute: you've got your HTML, your CSS, your JavaScript, but that's also things like images, videos, and audio. What are some other good examples? JSON, maybe you're just sending JSON blobs, SVGs, anything really, any data that you're sending. And that matters because there's a cost for the server, or a CDN, to serve that up, and for the browser to load all of that in. The browser also has to parse the JavaScript, parse the HTML, parse the CSS, and do stuff with it. So there's a cost to that size. On the other hand, fundamentally interconnected with bandwidth, is latency. That could be your network latency, which is out of the control of your browser or how you build things; it's just what happens in between: the server connection, TCP, all those things. There's latency that comes with every resource you load. And when I say resource, I don't just mean images and videos, I also mean scripts, because oftentimes in modern web applications you aren't just sending one JavaScript file, you're sending chunked bundles. Maybe you're lazy loading things in, you're bundle splitting. All of those have a cost because you have to load them in. And then aside from the resources you're grabbing and the general nature of the network, you also have to think about the latency of your code, which is actual code performance. If you write unperformant code, let's say you write a Vue app and it has a massive amount of re-renders, you have an inefficient algorithm, or you're not instantiating your Vuex state correctly, that has a cost. And all of these things have different solutions. So it's important to identify which kind of performance problem, bandwidth or latency, you care about, and then target that specifically.
It can be hard to just say, I want to fix performance, without knowing what exactly you're targeting. Resource usage, of course, is also a thing. JavaScript has to be parsed and executed, and obviously it'll use resources. Under the hood, it's allocating data, it's moving stuff around. So it has costs in terms of memory and CPU. Looking at these things, though, I want to ask a hypothetical. I'm going to answer it, but I want you to think about it a little: how much of this stuff is actually under your control? When it comes to influencing bandwidth and latency, when it comes to influencing resource usage, how much of it do you control? How much of it is out of your hands, but still something you have to think about? This is really important, because it presents a fundamental difference between optimizing performance in front-end applications and something more traditional, like a web server or a database. There's actually a lot of stuff that is outside of your control; you can't, for example, simply provision a new server and bump up the CPUs. And if we think about all of these different inputs, there are quite a lot of them, right? For your front-end app performance, I'm going to name some that I thought of, but I'm sure there are many you can name that aren't on here. The user's hardware, the amount of CPU and memory they have access to on their laptop, and the user's browser: those are things that you do not control. They load your site and they do whatever they want with it, but you don't control the underlying hardware. Then there are the network conditions they have. If they're loading your stuff on a 3G connection, or on a mobile device with a spotty connection, that's very different from a fast LTE or wired internet connection, for example.
Your app type: we mentioned single-page applications, we mentioned static. Whether stuff is cached or not in the browser, and whether stuff is cached or not on the server; for example, if you're using a CDN to load resources, or putting your images in an S3 bucket versus on the server itself. The user type, which we'll go into in more detail, but it's important to remember that the user type actually has a massive impact on this. And then of course there's the developer: the way you write the code, the way you set up your architecture, the way you actually design your application has massive consequences for performance. Cool. So let's actually talk a little bit about this user stuff, because at the end of the day, it's really important to note that performance is all about people. The two fundamental operations, rendering a page (page load) and user interactions, are heavily tied to user action. Someone has to type something into the browser to load the page. Somebody has to make some action to trigger a user interaction. So performance is very tied to how a user feels and how a user perceives what they're using. That does mean, though, that you can technically make a site feel faster even without changing the underlying timing data, just by paying attention to user perception and how people interpret what's loading and what's happening. Jakob Nielsen is an expert in usability studies and user research, and I heavily recommend his book, Usability Engineering, if you really care about the user side of this stuff. He's done a ton of great research, but the thing I want to point out is his research on response times, which shows that it's not like people will notice the difference between half a second and a second.
But there are points where people make transitions. At around 0.1 seconds, that's the limit for a user feeling that something is instantaneous. So whether something takes 0.05 seconds or 0.1 seconds, to a user both will feel instant. This is like: I hover over something and it changes color. As long as it happens within around 0.1 seconds, it feels instant. Around one second is the limit for keeping a user's flow of thought uninterrupted. People will notice a delay between 0.1 and one second, but as long as it's under one second, they don't mind the delay, because you understand, in your head, that you have to pay a cost whenever you do some action, and it's okay to wait a little bit. Like when you click to sort a table, for example: you click the top of a column and you sort it, and you know that there's probably some kind of computation going on, some sorting being run. Even if you don't know how it works or what it's doing, you probably know that it costs something, so you're fine waiting a little bit. Of course, it'd be great if everything were instantaneous, but that's not really the reality. But people will start to notice if that becomes two or three seconds. And then 10 seconds is the barrier you do not want to cross when building applications, unless it's something really expensive, like an export or a big operation. And if so, you need to make it very clear that this is going on and provide some kind of feedback to the user: a loading indicator, a progress bar, anything. By paying attention to these timings, you realize that, hey, I don't necessarily have to go in and optimize everything. I can change the actual design of my application to feel more performant. And that goes a long way.
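Those three limits are easy to turn into a tiny helper. This is just a sketch based on Nielsen's 0.1s / 1s / 10s thresholds; the bucket names are my own, not anything standardized.

```javascript
// Classify a response time (in ms) against Jakob Nielsen's three limits:
// ~100ms feels instantaneous, ~1s keeps the user's flow of thought,
// ~10s is the limit for keeping the user's attention at all.
function classifyResponseTime(ms) {
  if (ms <= 100) return 'instant';      // no feedback needed
  if (ms <= 1000) return 'noticeable';  // delay is felt, but flow is kept
  if (ms <= 10000) return 'slow';       // show a loading indicator
  return 'abandoned';                   // needs a progress bar + clear feedback
}

console.log(classifyResponseTime(50));    // "instant"
console.log(classifyResponseTime(800));   // "noticeable"
console.log(classifyResponseTime(4000));  // "slow"
console.log(classifyResponseTime(20000)); // "abandoned"
```

You could imagine using something like this to bucket your real timing measurements before reporting them, so you're looking at "how many interactions broke the flow of thought" rather than raw millisecond averages.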
Some key takeaways before we move on to actually measuring this stuff and trying it out. I've got three here; there are more, but these are, I think, the three most important. First, these front-end operations, again, rendering a page and user interactions, are fundamentally user-based, so it's important to understand who your users are and how they're using the web app you built, because that affects how you look for performance problems and solve them. Second, your website doesn't exist in a vacuum. There are a lot of factors that are out of your control: the browser people are using, the machine they have when they load up your site, the network they have access to, the user's conditions, so on and so forth. Even, for example, the services your site depends on: if you have a back end or a CDN, those are all dependencies that are sometimes just out of your direct control as a front-end developer. So it's important to keep in mind that the performance of your application is affected by all of those dependencies as well. And third, who your users are and how they're using your app matters, because it affects what we think is good or bad performance. A question that people often ask is, what is slow, or what is fast? But it's all relative. It's based on how people think about your application, how people are using it, and on your application type as well. And I have this little note about visual completeness, because this is an exercise that you can do. I'm going to show off an example web page, but feel free to look at a page that you built and start thinking about it. So this is a page I'm sure most of you are familiar with: the GitHub profile page. I just screenshotted mine as an example. We have all these various components: we have this nice image, we have a nice sidebar.
We have a navigation bar on the top, we have this pinned-repositories component, we have this contributions component. If we look at this page, how much of it needs to load before we consider it visually complete? Which means: how much of the page do we load before a user finds it useful, before they can actually do what they want? That's hard to answer, right? I feel like a lot of people reach for "it depends," and it really does. It depends on what the user is looking to get out of the page. The way I interact with my profile page versus another person coming onto my profile page is very different. And priorities will change based on what a user wants to get out of a page. Some people might say that 100% of the page being there means it's visually complete, but that's often not good enough, because sometimes it also depends on whether the user can interact with the page or not. For example, I'm sure we've all had experiences where we load a page and a button doesn't work yet, because the page has to load a bunch of stuff before the button gets enabled, right? So both the ability to interact with the page and to see the page, whatever's in the viewport, are important. But it's important to understand what you think is important for your user. Maybe having only 70% of the page here is good enough, and that's your goal: if 70% of the page renders really fast, then you know that the page is usable, you know that things are there, and that's your indicator. You don't have to worry about every single DOM element being there and interactable. So again, I know it's confusing. It's like, oh, I have to decide all of this. And as a developer, you really do; you have to build this understanding. But don't worry, there are some heuristics you can use. Plenty of people have put research into this subject. So let's start going into that.
Let's start covering how we can measure this stuff, so that we can actually tell whether something is slow or fast, and start defining what it means to be a performant app. So, measuring stuff: qualitative versus quantitative. It's actually the opposite of what this slide says: quantitative is not automatically the best, qualitative matters too, but it's fine if stuff is quantitative, and I'll cover what that means afterwards. Measuring stuff is really important because it gives you an indication of your problem space. It also helps you make relative comparisons: you can start comparing pages, comparing two different apps, comparing yourself with your competitors, for example, and build that kind of understanding. It also helps you understand possible solutions and how effective they are. For example, you identify that, hey, it takes a really long time for this page to load, and it's because this metric is really poor. Let's say you try a fix: you can look directly at the impact on that metric to know how effective your fix was. And once you start fixing stuff with these metrics, you can also guard against potential regressions. If you're keeping track of these metrics on a consistent basis, you can prevent situations where, for example, you ship a new feature in your app and suddenly your app is super slow, and maybe you don't realize it because you're just so excited about whatever you built. But then you take a look and see: oh, all my user-centric metrics, like the web vitals I'm using, have all gotten worse. That's really unfortunate. It's also really important to understand that when you start measuring stuff, performance data on the front end is fundamentally pretty high-cardinality. That's related to all of the inputs we talked about: the network conditions, the hardware, the browser type, the type of user. It means that you oftentimes can't just put a single number on things.
You have to attach a lot of context about what is happening, what the conditions are when you record a specific piece of performance data, because that helps you put things into context. So remember this cardinality stuff; we'll get back to it. And remember, we've got bandwidth and latency, and we're going to start measuring stuff. So let's actually start doing some measuring. The easiest thing you can probably start doing today is measuring your asset size, which is how many bytes you're sending over the wire, because that represents the bandwidth. The fewer bytes you send over, the faster it is to load them, and the faster it is to parse the JavaScript and the CSS, for example. And there are plenty of ways you can do this. I'm not going to go into too much detail; we can cover it more in depth if anybody's curious. But oftentimes, just measuring your size and being aware of it is really important. For example, for your JavaScript, there are really great tools out there to measure the gzipped bundle. We actually use this at Sentry to measure the cost of adding new features and weigh the trade-off: hey, can we write something in a way that won't add a huge amount to the JavaScript bundle? And that's pretty quantitative in nature; it's just looking at the kilobytes, for example. It's also really important to analyze the bundle itself and actually check what's included, because maybe you're including unnecessary modules, you're not tree shaking properly, or you're including a bunch of stuff in that main initial bundle that could easily be lazy loaded in another bundle afterwards. Because remember, there's a cost to grabbing that initial bundle, parsing it, and then executing it. And so you can use tools like Webpack Bundle Analyzer, which is actually bundled into the Vue CLI.
You can look up the documentation on that to check out what's in your bundle and say, hey, that looks strange, or, I can't believe that library is that big; maybe I should switch to a different library. And you can do this with CSS, images, video, and audio too. With assets like images, it sometimes comes down to whether you've optimized them or not. One thing people usually don't try to measure at all, and forget about a lot, is web fonts, but it's actually pretty important. It can be really easy to ship unoptimized fonts, or just really big fonts, and you have to pay a cost to load all of that, of course. So starting to measure and think about that can go a long way. And there's other stuff. You can send anything, right? Maybe you're sending JSON blobs; it's really common for static site generators to send additional JSON that the JavaScript side then reads from. Yeah. So on asset size, yeah, I think this is being recorded and that'll be shared afterwards. We can also chat about it in more detail on Discord if you have questions. Yeah, no problem. Cool. Let's move on. Okay, you're now measuring your assets; what else can you do? Remember, there's this idea of latency, this idea of the page-load operation, how long it takes to render a page. Some industry experts, some really smart folks, have spent time building metrics, some of which are even built into the browser itself, for how they think you should measure page load and page-load performance. These are heuristics in a way, but they're pretty good, and they're tried and tested. Some of these are called web vitals, which hopefully some of you have heard about. The idea is that when you load a page, you can measure the duration of different things on the page according to the different things you prioritize.
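As a quick sketch of wiring the analyzer up by hand: a Vue CLI project can add webpack-bundle-analyzer through vue.config.js. The options shown are from the plugin's documented defaults; treat this as one possible setup rather than the only way (Vue CLI can also generate a report for you via `vue-cli-service build --report`).

```javascript
// vue.config.js — sketch of adding webpack-bundle-analyzer to a Vue CLI build.
// Assumes `webpack-bundle-analyzer` is installed as a dev dependency.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  configureWebpack: {
    plugins: [
      // Writes an interactive treemap of your bundle to dist/report.html,
      // so you can see which modules are actually taking up space.
      new BundleAnalyzerPlugin({
        analyzerMode: 'static', // emit an HTML file instead of starting a server
        openAnalyzer: false,    // don't auto-open the report in a browser
      }),
    ],
  },
};
```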
Three of these, CLS, LCP, and FID, are considered Core Web Vitals by the Google Chrome team, which helps develop this stuff. I included first paint and first contentful paint here because I feel like they're just as important as the other three for understanding the performance of your web applications. Specifically, these are for when you render a page; we'll talk about metrics for user interactions afterwards. First paint and first contentful paint, as you can see from the image here, measure the amount of time it takes to actually render something for the first time in the viewport. First paint is for the first pixel, and first contentful paint is for the first content, where content is defined as stuff like images or text blobs. If something renders in the DOM, these things will fire, basically. Usually first paint and first contentful paint will be the same, which is why they're usually bundled together. But the cases where they're not are really interesting, because oftentimes first paint fires because something was painted on the page, but it's not enough to be considered content, because it's not a full DOM node or a full image or something. I'm trying to think of some good examples, but say you're firing animations on the page, and computing the animations is really expensive for some reason. You're calling requestAnimationFrame and doing some computation to run the animation, and that's leading to a big discrepancy between the first pixel paint, your first paint, and your first contentful paint. Usually they're the same and you don't have to worry about it, but there are some interesting scenarios where they're different. Then we have cumulative layout shift, which is not actually a duration like a lot of the stuff we're going to talk about. It's just a score.
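Before getting to CLS, here's a sketch of how you'd actually read first paint and first contentful paint in the browser, via PerformanceObserver and the Paint Timing API. The summarizing helper is pure, so it works on any array of paint-entry-shaped objects; the observer wiring only runs where 'paint' entries are supported (i.e. in a browser), and the mock entries at the bottom just illustrate the shape.

```javascript
// Summarize paint entries into { name: startTime } pairs.
function paintTimes(entries) {
  const times = {};
  for (const entry of entries) {
    // entry.name is 'first-paint' or 'first-contentful-paint';
    // entry.startTime is milliseconds since the navigation started.
    times[entry.name] = entry.startTime;
  }
  return times;
}

// Browser-only wiring: observe paint entries, including ones that fired
// before this script ran (buffered: true).
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('paint')) {
  new PerformanceObserver((list) => {
    console.log(paintTimes(list.getEntries()));
  }).observe({ type: 'paint', buffered: true });
}

// Mock entries with the same shape, for illustration:
const mock = [
  { name: 'first-paint', startTime: 310.2 },
  { name: 'first-contentful-paint', startTime: 310.2 },
];
console.log(paintTimes(mock)); // FP and FCP fire at the same time here
```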
And there's a way to calculate this; there's some great documentation on it that you can look up. But it basically keeps track of the extent to which your layout shifted as your page was loading. We've all probably had this experience: we loaded a page, we were ready to click on a button, and then suddenly the button moved down because something else loaded above it, and we clicked on the wrong thing. That's a frustrating experience. So it's not a duration related to latency, but it is related to the stability and reliability of your page, which I think is fundamentally part of performance. We want a consistent and reliable experience for our users, right? So try to keep your CLS at zero; there are many strategies to do this, skeleton components, so on and so forth. Then we've got largest contentful paint. Instead of measuring the render time of the first content on the page, like first contentful paint does, LCP, based on how the metric is defined, measures the render time of the largest content element to appear in the viewport. The viewport is important because it's just the part of the screen the user can see. And the largest content can vary; maybe it's an image or an SVG or a large text blob. They pick the largest element because it's associated with being the most visually defining element, meaning it contributes the most to making the page look visually complete and can be considered the main content. Now, LCP isn't perfect; it's a heuristic. The browser has to guess what the largest element is. And sometimes the largest element isn't even the element you care about: the browser might be choosing the largest element correctly, but that's not the element that matters for visual completeness on your page. We'll go into that.
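To make the CLS score concrete, here's a simplified sketch of how layout-shift entries accumulate. Note this is the older, purely cumulative definition; the current metric groups shifts into session windows and takes the largest window, so treat this as an illustration of the entry shape rather than the exact formula.

```javascript
// Simplified CLS: each 'layout-shift' entry carries a `value`, and shifts
// that happen right after user input (hadRecentInput) are excluded, because
// shifts the user caused themselves are expected.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((score, entry) => score + entry.value, 0);
}

// In a browser you'd feed this from a PerformanceObserver on 'layout-shift';
// these mock entries just mirror that shape:
const shifts = [
  { value: 0.12, hadRecentInput: false }, // e.g. an ad banner pushed content down
  { value: 0.05, hadRecentInput: true },  // shift right after a click: ignored
  { value: 0.03, hadRecentInput: false },
];
console.log(cumulativeLayoutShift(shifts).toFixed(2)); // "0.15"
```

For reference, the thresholds Google publishes treat roughly 0.1 and below as good, so even a couple of small shifts like these add up to a poor score.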
And then finally, we have first input delay, which measures the response time when users first try to interact with the page: clicking on a button, a link, a dropdown. It's a measure of whether people can actually interact with your page when it loads. If your first input delay is really poor, you know people are having a terrible experience with your app, because they can't interact with it at all. This is something you should definitely focus on: if you have a bad FID, you need to fix it. It's a big red flag. All of the other metrics can be explained away to some degree; FID can't really be explained away in the same way. And those three, CLS, LCP, and FID, are now all part of Google's search ranking signals, so they affect your site's SEO. It is important, and people really do pay attention to it. Some other metrics that aren't core to defining your application's performance, but are useful to think about on the side, are time to first byte, time to interactive, and total blocking time. Time to first byte is how long it takes for the browser to receive the first byte of page content, and it's more of an indicator of how representative the rest of your data is. Sometimes it just takes a long time for the browser and the server to communicate because of network conditions; who knows what's happening. Maybe you have a really long time to first byte, and that results in a long LCP and a bad FID. You look at your metrics and think, oh no, my LCP is doing so badly. But if you look at your time to first byte, you can put it into context and see that, OK, it's somewhat out of your control.
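Time to first byte falls straight out of the Navigation Timing entry: `responseStart` is when the first byte arrived, relative to the start of the navigation. A minimal sketch:

```javascript
// TTFB from a navigation timing entry. For the page's own navigation
// entry, startTime is 0, so this is effectively just responseStart.
function timeToFirstByte(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// Browser-only usage:
if (typeof window !== "undefined") {
  const [nav] = performance.getEntriesByType("navigation");
  if (nav) console.log("TTFB (ms):", timeToFirstByte(nav));
}
```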
It was because the network was behaving poorly, or the initial response was slow, and not because you did something wrong. That's not always the case, though; there are cases where something wasn't designed properly and the time to first byte is horrible. Time to interactive is a hard one, because you can't measure it out in the field; it has to be done in lab conditions. You have to synthetically test your page to get time to interactive. It's really a calculation of how long it takes your page to be ready for interaction. If your main JavaScript thread is tied up running computations, calling third-party scripts, and so on, then when a user tries to interact with the page, the event listener may fire, but it'll take a while for the event loop to get to the point where it can run the work in that listener. It's important to keep in mind, but it's really hard to measure outside of controlled conditions, so I don't pay that much attention to time to interactive. Still a useful tool to have in your toolbox. Then there's total blocking time, which is the time between first contentful paint and time to interactive; but the main thing it represents is the amount of time the main thread was blocked. This is really important because JavaScript is an evented, single-threaded, asynchronous language. That means if the single thread is blocked by some expensive computation or expensive I/O, say it's doing a big operation on the DOM, sending out a bunch of requests, and fulfilling a bunch of promises,
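In the lab, total blocking time is derived from Long Task entries: only the portion of each task beyond 50 ms counts as "blocking". A sketch of that calculation:

```javascript
// TBT: sum the blocking portion (duration - 50ms) of each long task.
function totalBlockingTime(longTasks) {
  return longTasks.reduce(
    (sum, task) => sum + Math.max(0, task.duration - 50),
    0
  );
}

// Browser-only wiring ("longtask" entries need a Chromium browser):
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  const tasks = [];
  new PerformanceObserver((list) => tasks.push(...list.getEntries()))
    .observe({ type: "longtask", buffered: true });
  // Later (e.g. on visibilitychange): console.log(totalBlockingTime(tasks));
}
```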
That can lead to a poor experience, because the main thread is so busy doing that one long task that it can't get to other work, like rendering to the page, sending out requests, and, of course, handling interactions. And then there's parsing cost. This is usually not as bad, because browser engines are really fast; V8, the JavaScript engine Chrome uses, is ridiculously optimized. But there's a cost to parse and execute JavaScript, and a cost to parse CSS and HTML, and it's important to remember that. In my experience it's usually not the bottleneck, but keep it in mind. So, watch out for LCP, because it's a heuristic, as I mentioned, and it might pick an element that's really not important. The example I'm showing, which hopefully all of you can see on screen, is actually from the Sentry site itself. When you load Sentry, it's a single-page application built in React. I know, boo. What can you do? When we load in data, we have this nice loading indicator that essentially says, please wait while we load an obnoxious amount of JavaScript. The browser fires first contentful paint, but then it actually picks this loading indicator as the largest contentful paint and says: OK, the page has rendered its largest content, we're done, we fired LCP, this is your LCP value. Even though that's completely wrong. It's just a loading indicator; there's way more content to be loaded afterwards, right? This is a case where LCP, being a heuristic, just picks the wrong element. The LCP algorithm will attempt to correct itself.
If it notices a larger, more important element, it'll recalculate and redefine the LCP, which is why, if you look at the metric itself, there are intermediate and then final LCP values. But it can still choose wrong. So even though LCP is an important metric, a great heuristic, and a great gut check on how your page is doing, for more in-depth analysis, or if you really know your user flows and what your users care about on your page, I recommend timing the elements you care about most yourself. There's an Element Timing spec, still experimental I think, which is really great: it lets you time individual HTML elements, but it doesn't have a ton of browser support. The great thing about building with a framework like Vue, though, is that you can pseudo-time things yourself by hooking into the Vue lifecycle, mounted, unmounted, and so on, and just start timing the things you care about. How long does it take for this component to mount on a page? When does this component mount relative to others? Thinking about those things is really valuable, because it forces you to define which elements matter most for the page to be usable, to be visually complete, to have the best user experience, and to start timing exactly that. And it's not expensive; you're just collecting timestamps. There are ways to do this, and we'll get into that afterwards. Cool. So let's actually see this stuff in practice by looking at Google Lighthouse, which is a really great tool. You can run it standalone, but it's also built into the Chrome browser, and it lets you run reports on various pages.
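One way to do that lifecycle timing, sketched here as a global Options API mixin: drop a performance mark in `beforeMount` and another in `mounted`, then pair them into a measure. The mixin and mark names are my own; this assumes components have a `name` option set.

```javascript
// A Vue Options API mixin that records how long each component takes
// to mount, as a named performance measure you can observe or export.
const mountTimingMixin = {
  beforeMount() {
    const name = this.$options.name || "anonymous";
    performance.mark(`${name}:before-mount`);
  },
  mounted() {
    const name = this.$options.name || "anonymous";
    performance.mark(`${name}:mounted`);
    // Pairs the two marks into a single duration entry on the timeline.
    performance.measure(
      `${name}:mount`,
      `${name}:before-mount`,
      `${name}:mounted`
    );
  },
};

// Registered app-wide with: app.mixin(mountTimingMixin);
```

From there, a `PerformanceObserver` listening for `measure` entries (or a Sentry-style SDK) can pick these up automatically.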
And so you can run this on whatever website you like; take a look at the things you're building. As an example, though, I'm going to go to the Vue docs, the version 3 docs, and pick a nice, reasonably complicated page. How about computed properties? There are ads, headers, some complicated components, a lot of text, various things. No images, but I think it's a pretty representative page. If you have Chrome DevTools open, you can go to the Lighthouse tab. I'm going to run this for desktop, and for now I'll just select all of the categories. It's important to note that you can simulate different conditions here. Let's generate the report. This is synthetic data; it's gathering information about the page. You can be doing this yourself at the same time; I really recommend running this on the applications you care about. The nice thing is that it gives us a progress bar, even though it doesn't seem to be moving, so at least we know it's doing something. Last time I ran this it didn't take this long, so now I'm wondering if something's up with the performance of Lighthouse itself. A great consequence of running demos live. Well, that's unfortunate. So let me show what this report usually looks like anyway. Let's cancel and try running it on mobile; maybe that'll fix it. Anyway, Lighthouse is a great thing to run, if it works. I guess it didn't. Oh, right, you can use web.dev/measure as well. Actually, yeah, let's give that a try.
As just an example: the web.dev site itself is really great. It's a great resource for learning about the web in general, and I really recommend its collections on performance for deeper dives. It defines a lot of the Web Vitals we've talked about, like largest contentful paint and first input delay, and it talks about optimization: optimizing JavaScript, lazy loading, and so on. I'm not going to go into that in much detail, because I really want to focus on how to identify performance problems rather than how to solve them. But once you've identified your performance problems, there are lots of techniques there to help solve them. So let's go to web.dev/measure, paste in the URL, and run the audit. Sweet. OK, we got the report. Normally if you run this in Chrome with Lighthouse, you'd get it right in DevTools, but here we can see a nice report of how well our site is doing. You can see it has a performance score of 44, which is not the best. I can probably tell you it has something to do with these components and these ads, maybe some third-party scripts; if we look at the network, there's a lot of JavaScript being loaded here. We can take a look, and it'll tell us how the Web Vitals are doing on the page: first contentful paint, total blocking time, time to interactive, LCP. And it gives us some recommendations on how we can improve. This is a great way to gut check: am I doing well? Am I doing poorly? Where should I be heading? One thing to note with Lighthouse, though, is that it's one run, on your personal machine, on an up-to-date browser, on probably solid network conditions.
If you're a developer, you hopefully have pretty solid internet access, and you're testing as your own persona, with whatever resources your account has access to. But as we discussed earlier, the permutations of how people interact with your web application are huge: multiple user types, plenty of different browsers and browser versions, all kinds of user hardware, different laptops. So even though this one lab test may be pretty good for setting the tone, it doesn't tell the whole story behind your performance. This is why you should be collecting data not just from synthetic experiments, running Lighthouse and collecting one-off metrics, but from the field as well. Chrome has a web-vitals library, for example, that a lot of people use under the hood; it's really great, and I recommend checking it out at github.com/GoogleChrome/web-vitals. You can actually look at how something like LCP is calculated, and if you want to use LCP but change the definition slightly for your app, because you know it would measure your performance better, you can take that code and adjust it. You can do that for any of the metrics this web-vitals library covers. This is actually what Sentry uses for its performance monitoring too; it uses this library under the hood. And if you start collecting this information for every page load, you can start creating distributions of data and getting more confident in it. These distributions matter. As an example, on the right, the image shows a screenshot I took from Sentry.
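Field collection with that library is a few lines. A hedged sketch: the callback names below match web-vitals v3 (`onCLS`/`onFID`/`onLCP`; older versions used `getCLS` etc., and newer ones replace FID with INP), and the `/analytics` endpoint and `sendToAnalytics` name are hypothetical.

```javascript
// In the browser you'd wire up the callbacks like this (web-vitals v3):
//   import { onCLS, onFID, onLCP } from "web-vitals";
//   onCLS(sendToAnalytics); onFID(sendToAnalytics); onLCP(sendToAnalytics);

// Each callback receives a metric object with at least { name, value, id }.
function serializeMetric(metric) {
  return JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
  });
}

function sendToAnalytics(metric) {
  const body = serializeMetric(metric);
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else if (typeof fetch !== "undefined") {
    fetch("/analytics", { method: "POST", body, keepalive: true });
  }
}
```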
This is the product I work on, so it helps show this off. You can see that we collect the average, of course, which is important. But averages don't tell the whole story; the percentiles and the distribution in the histogram also matter. For example, say we're measuring LCP with this web-vitals library on every page, as our users are actually using the application out in the field, in the wild; not some simulated synthetic experiment, but real users using our application in the scenarios they're actually in. The histogram will tell you a lot of stories. If the mass is skewed toward the low end, things are probably more positive than negative. If there are spikes among the outliers, there may be specific conditions leading to those spikes. If instead of a unimodal distribution you have a bimodal one, two spikes in the distribution, you probably have two common scenarios that users are hitting, for some reason, that lead to very different performance results. By examining your data this way, as a distribution, you can get more confident in it, tell better stories, and start identifying problems, rather than just looking at a single hard number and calling it good or bad. Though hard numbers can be gut checks: if a page loads in 10 seconds, that's a problem and you need to fix it.
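To make that concrete, here's a small sketch of summarizing field samples with percentiles rather than an average. It uses the nearest-rank definition (other interpolation schemes exist), and the sample values are invented for illustration.

```javascript
// Nearest-rank percentile: p in [0, 100].
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Two clusters like these (say, fast desktop sessions vs. slow 3G
// sessions) produce a bimodal histogram, even though the average
// alone would just look mediocre.
const lcpSamples = [900, 950, 1000, 1100, 4800, 5000, 5200];
const summary = {
  p50: percentile(lcpSamples, 50),
  p75: percentile(lcpSamples, 75),
  p95: percentile(lcpSamples, 95),
};
console.log(summary);
```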
Another really cool thing about collecting your data this way, in the field, running libraries and collecting data straight off the browser, is that you can now slice on all of this cardinality. You can ask: what does performance look like for 3G connections versus regular connections? You filter for that, and you see, oh, I have a ton of people using my application over 3G, or on a mobile browser like Chrome Android or iOS Safari; what is the consequence of that? Maybe for typical users on Chrome on a MacBook, performance is great, the site loads fast, LCP is wicked fast. But for users who don't have access to that kind of hardware, or who are on older browsers, it's really poor. By slicing and dicing your performance data that way, you can focus on the subsection of users with different conditions and try to solve their problems specifically, and that goes a long way. And thank you for linking that, by the way; Lucas linked the Chrome web-vitals library in the Zoom chat. I just showed Sentry as an example tool, but you can send this data to any analytics sink you want. You could even self-host, say, a Prometheus instance, collect the metrics data yourself, and visualize it however you like. The important thing is to use tools like histograms, and to collect this data in aggregate, so that you can start making decisions on it. So those are some pretty standard metrics. Now, say you want to start defining this stuff yourself; what can you do? Well, there's the Performance Timeline API.
And to make this easier, I'm going to walk through the docs, because I feel that's a great way to explain it; you can see for yourself. The Performance Timeline API, coincidentally, I've searched for before. These are browser APIs you can use to see performance-related metrics in your application. Want to see an example? I'm back in the Vue docs. I'll open the console and clear it; you can run this on any application you're interested in. There's the global performance object, and if we want to list all of our performance entries, which are part of this Performance Timeline API, we call performance.getEntries(). And we can see all of this great performance-related information, these performance entries; there are 211 of them. A lot of these are resources, meaning scripts, so we can see the name of each script and a bunch of information about how long it took to load, how long to fetch, how long the DNS lookup took; you can see the network-related information as well. Oh, there's the first paint time, and there's the first contentful paint time; we can see they're basically the same here. We can also see that there's an XMLHttpRequest, so any XHR or fetch requests show up too. I see it loads some CSS, some scripts. And all of this you can filter and grab yourself: you can take this performance data, take these timings, and decide how best to use them. This is basically how you can get the information you see in your Lighthouse reports and in the network tab of your browser.
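For instance, pulling just the script-load timings out of the timeline might look like this sketch; the `scriptTimings` helper name is mine.

```javascript
// Resource entries carry an initiatorType ("script", "css", "link",
// "xmlhttprequest", "fetch", ...) plus network timings like duration.
function scriptTimings(entries) {
  return entries
    .filter((entry) => entry.initiatorType === "script")
    .map((entry) => ({ name: entry.name, duration: entry.duration }));
}

// Browser-only usage, e.g. from the console after the page loads:
if (typeof window !== "undefined") {
  console.table(scriptTimings(performance.getEntriesByType("resource")));
}
```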
Through the Performance Timeline API, you can start grabbing these entries, looking at them, and monitoring them. For example, say there are a couple of third-party scripts that you know will load on every page, and you really care about how long they take to fetch and actually load; you can track those specifically over time. Maybe that helps you understand the impact of adding Google Analytics, for example. And the nice thing is that this is user-defined; it's just code you write, right? So you can run this in your application and grab all of it yourself. Notice that I called this after the page loaded, to see all the performance entries. If there are performance entries you care about specifically, you can use performance observers to observe them as they happen. What's a good example? Let me just write a quick snippet. There's a PerformanceObserver API; you can look up the docs for it. It takes a function that gets called every time something is observed, and that callback receives a list object. I'm not going to go into too much detail, because you can get the idea here. You can call list.getEntries(), and you can see this is the same API shape as performance.getEntries(): a specific list of timeline entries. And entries is just an array, so you can do whatever you want with it, process it any way you like: entries.forEach(...), and then do whatever you want with each entry. Send them to some analytics sink you have somewhere, make decisions based on them, aggregate them.
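Fleshing out the snippet from the talk: an observer whose callback receives each batch of new entries. The `processEntry` function stands in for whatever you'd actually do with an entry.

```javascript
// Placeholder for real handling (aggregate, send to analytics, etc.).
function processEntry(entry) {
  return {
    name: entry.name,
    type: entry.entryType,
    duration: entry.duration,
  };
}

if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  const observer = new PerformanceObserver((list) => {
    // Same shape as performance.getEntries(), scoped to new entries.
    list.getEntries().forEach((entry) => {
      console.log(processEntry(entry));
    });
  });
  // "resource" covers scripts, CSS, images, fetch/XHR loads.
  observer.observe({ type: "resource", buffered: true });
}
```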
And the way you observe is by saying you want to observe a certain entry type. For example, say I want to observe all of my scripts: I observe entries of that type and get all the information related to them. So that's the PerformanceObserver API. I'll link the Performance Timeline API docs; I really recommend reading up on it if you're interested. Beyond the built-in performance entries, there are also marks and measures. If you want, you can emit arbitrary, user-defined timestamps using what the browser gives you. There's performance.mark(), and you can see the details for it here. Let's try it; ah, OK, the reason that failed is that you have to give it a name. Once I gave it a name, it emitted a mark, and this is the time since the navigation start of the application. Using performance.mark() is a way to start emitting your own metrics, arbitrarily, for whatever you care about. Say your Vue component mounts on a page: you emit a performance mark, and you can collect it later with a performance observer that's listening for marks, and so on. So there are a lot of tools here for measuring your own metrics, beyond the defaults you get from Web Vitals like LCP and FID. If you're just getting started, I'd begin with the Web Vitals, LCP and FID and so on; measure those first and build some confidence.
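Marks and measures together look like this; the `"data-load"` names are made up for the example, and the loop just stands in for real work.

```javascript
// A mark records a timestamp (relative to navigation/process start);
// a measure records the duration between two named marks.
performance.mark("data-load:start");

for (let i = 0; i < 1e5; i++) {} // stand-in for the work you care about

performance.mark("data-load:end");
performance.measure("data-load", "data-load:start", "data-load:end");

// The measure is now a timeline entry you can read back (or observe
// with a PerformanceObserver listening for type "measure").
const [measure] = performance.getEntriesByName("data-load");
console.log(`data-load took ${measure.duration} ms`);
```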
And then, as you explore how users are using your application and want to dig into its performance problem space, start defining your own metrics: things you think are important, using the Performance Timeline API's marks and measures, or sets of scripts you especially care about, CSS files you especially care about, and so on. I know we're a little over time, but I want to show off what Sentry does at the end. I'm not going to walk through a whole example of adding Sentry to an application, but I can show what it looks like. Let's go to the Vue project here, the performance tab; oh, actually, I think it's in my test project. The way Sentry interprets this performance data is by sending transactions. We can see here that only the FCP was captured, because it's a relatively simple app. But we get a nice waterfall of the information that's collected: the browser-related events, how long each resource takes to load, and then the marks and measures that were emitted, which can be user-defined, are captured here as well. This helps you put things into context, and you can even filter based on these different tags. This is how Sentry thinks about performance and tracks it, with an emphasis on being able to filter on tags, on understanding the cardinality of your data, so you can say: I only want to understand performance for Firefox users, how long their pages take to load. Or I only want users from a specific set of IPs, or, it's not listed here, but users with a specific connection type.
If it's a 3G connection, it'll be listed here. So that's how Sentry thinks about these problems. But really, once you start collecting this information, these vitals, these metrics, it's up to you; there are a lot of ways to explore, to collect data, and to make decisions on it. And that's pretty much all I have to cover. I'll take a look at Discord to see if there are any questions, but feel free to put them in the Zoom chat as well. I have some time, so I'll wait a couple of minutes in case people need to collect their thoughts. I know I was just speaking continuously for a long time, but hopefully this serves as a good introduction to these topics, so you can start diving deep into the parts you find more or less interesting. Cool. I'll also be sticking around on Discord afterwards. I think the recording will be provided at a later time; I don't have full details on the logistics, but I believe it will be shared. Thank you very much. I'll be on Discord if anybody has questions, wants to dive deep into specific APIs or specific circumstances, or is using this stuff in a creative way. We've seen some pretty interesting use cases of understanding performance and diving deep into these problems at Sentry, so I'm more than happy to walk through things with you as well. Cool. So I got a question on Discord: does Sentry provide metrics for performance similar to Lighthouse? It's actually a great question. It's important to note that Lighthouse is lab data. It's simulated, which means it can set up your app in whatever conditions it wants and measure everything.
And that's usually not possible out in the field, where someone just visits your site and you measure what you can. So there's less information you can get from just instrumenting your app with performance observers and the web-vitals library, or with Sentry. Lighthouse and synthetic data will give you a lot more information, because you have full control over the environment. It's also important to note that some of these APIs aren't available in every browser, and some don't work the same in every browser; they're in various stages of development or rollout. Something might behave slightly differently in Firefox than in, for example, Chrome, which is why I recommend using a library like the Sentry SDK, which I helped build, or web-vitals, which I showed off. So yeah, cool, I'm going to end there. For any other questions, please reach out on Discord, in the October 19th Web Performance channel; I'll be more than happy to help you folks out. Thanks, everybody. I hope you have a great one. See ya.
72 min
19 Oct, 2021
