A Different Vue into Web Performance


Solving your front-end performance problems can be hard, but identifying where you have performance problems in the first place can be even harder. In this workshop, Abhijeet Prasad, software engineer at Sentry.io, dives deep into UX research, browser performance APIs, and developer tools to show you why your Vue applications may be slow. He'll help answer questions like, "What does it mean to have a fast website?" and "How do I know if my performance problem is really a problem?" By walking through different example apps, you'll learn how to leverage Core Web Vitals, navigation-timing APIs, and distributed tracing to better understand your performance problems.

72 min
19 Oct, 2021



AI Generated Video Summary

This Workshop on Performance Monitoring covers important topics such as understanding performance and its impact on user experience, measuring asset size and page load performance, analyzing performance data and distributions, using the Performance Timeline API and Performance Observer API, and custom metrics with marks and measures. The workshop also highlights the significance of user perception and performance, the role of Google Lighthouse in performance analysis, and the use of Sentry Performance Monitoring for capturing metrics and real-world insights.

1. Introduction to Performance Monitoring

Short description:

Hi everyone! I'm Abhijit, a software engineer at Sentry. Today, I want to share some lessons we've learned about performance monitoring and strategies for identifying performance problems in your applications. Feel free to ask questions throughout the session, and we'll have a dedicated Q&A section at the end.

So hi, everyone. Hope you're having a great day. My name is Abhijit. I'm a software engineer at Sentry. Sentry is a tool to help you monitor your application's code health. Historically, we've helped you identify the errors and bugs in your application, but more recently we've moved into performance monitoring — helping you understand the performance of your applications, whether that's a web server or a web application you built for the browser. So I want to talk to you a little bit about some of the lessons we've learned rolling out performance monitoring at Sentry, and hopefully give you some strategies for identifying the performance problems in your application. Again, if you have any questions at any time, leave them in the chat, either on the Discord or in the Zoom itself, and I can take a look. But we'll have an explicit section at the end for questions and answers. Sweet.

2. Understanding Performance and Its Importance

Short description:

We're going to focus on performance when building Vue applications. Understanding your system and measuring performance are important. Performance is an accessibility issue and a competitive advantage. It creates a culture of caring about performance and improves user experience. We'll cover topics at a high level and go into more detail with questions.

And so, as a rough agenda, we're going to be focusing on performance when you're building Vue applications, which most often run in the browser. For those of you using Vue with Ionic for mobile, or Vue plus Electron, maybe not everything in this talk will apply to you, but hopefully you can take some concepts back to those platforms. And if you want to explore them more, we can chat about it afterwards.

So we're going to focus first on really understanding your system — the app or the website you're building and its needs — because that's really important when it comes to understanding the performance of your application. We're then going to dive into how we measure things, because that's important, and how we interpret that data: looking at metrics, thinking about lab versus field data, and we'll get into what that means. And then we'll have some small demos, nothing huge — in fact, stuff that you can probably use yourself. We're just going to cover some browser APIs and small tooling. All I ask is, if you want to follow along (you don't have to, you can just watch me type as well), have your Chrome DevTools open on a website or web application whose performance you want to start thinking about.

Sweet. So we're going to start off by thinking: we're going to spend a while talking about performance — why should we care about this? Maybe some of you are starting to think about the performance of your applications, but how can you justify putting in the time to improve performance and work on it? Fundamentally, I like to think of performance on the web as an accessibility issue. It's directly related to user experience. If someone loads a page that you built for your web app and it's slow, it's unusable, it has unpredictable behavior — it is fundamentally an inaccessible application. It's hard to use. It's not going to be a good experience. And we want to prevent that. We want people to use the stuff we build, and we want people to have a positive experience. Performance is a key part of that. Another really easy point here is that it's obviously a competitive advantage. The example that everybody loves to bring up — if you read any books on this topic, it's the number one thing they go to — is: let's say you're building an e-commerce store. If checkout takes too long, or it's a bad experience in terms of performance, somebody will just go to another site and buy from there. So it's obvious that if your site feels fast, loads fast, and is a good experience, people will stay there and use whatever you built. Fundamentally, performance means happy users, which is good for you, the developer who built it, and for them as well. An underrated thing about starting to think about these topics in more detail is that it helps create a culture of caring about performance, which is important if you're working on this as part of your job — say you're a front-end developer building sites for your company.
You know, if you start paying attention to these topics, you can create a culture of your whole team, your whole company, caring about performance. That helps improve accessibility and probably brings money to your company — only positives here. One thing I do want to note is that we're going to be covering a lot of these topics at a pretty high level, because there's a lot to it. There's a lot of academic literature, books, and articles written about this subject. We can go into more detail as people ask questions, but I'm going to leave this as an introduction, and hopefully some of you get interested in some of this stuff.

3. Understanding Web App Types and Operations

Short description:

Understanding your web application type is crucial for performance monitoring. Modern applications often don't fit into purely static or single page application categories. Hybrid apps, like those using Nuxt or Laravel, have their own trade-offs. Static apps require loading new documents and executing JavaScript for each navigation, while single page apps only pay the cost once. The primary operations to consider are rendering a page and user interactions. Rendering a page involves network stuff and asset loading, while user interactions include listening for events like key/mouse down, form filling, and modal opening. Both aspects are important for optimizing performance.

So, I think one of the most important things to understand first, when it comes to looking at the performance of your web application or website, is your web application type — what kind of app are you building in the first place? Because that varies. I assume most of you, if you're here, are big fans of Vue, and so you're probably building single page applications: you have some empty document, a big JavaScript bundle, and some CSS. You send that to the browser — that's what happens when somebody loads your page — and then the JavaScript takes care of rendering DOM elements onto the page however you see fit.

And the other side of this is not client-driven at all: completely static pages, just HTML with maybe minimal JavaScript for a few interactions here and there. That's typically what you see for a marketing site or things along those lines. But the reality is that modern applications often don't fit into these two buckets. They're not purely static or purely a single page application — even though some of them are. A lot of them nowadays are hybrid: they'll have some pages that are purely static and some pages that are server-side rendered. If you're using a framework like Nuxt, which I highly recommend people take a look at, pages are rendered on the server, but then, when they're loaded client side in the browser, the client-side code will run and hydrate, and more things will render on the page. Some apps are a mixture. Let's say I'm using Laravel: maybe some of my pages are just plain Blade templates, where I'm just rendering HTML, and for some pages — say I'm building an admin portal or something — maybe I want a single page application, so I'll render a plain document and a big JavaScript bundle for that. This is really important to understand, because each of these has its own trade-offs when it comes to the performance of your app. Just as a quick example — I don't want to go into too much detail — if you have a static application, it's important to understand that each navigation to a route means you have to load a new document from the server and render new HTML every single time. This also means that you have to execute any JavaScript again and again. So if you have third-party scripts, like Google Analytics and stuff like that, you have to pay the cost of instantiating them every single time you navigate to a page.
Meanwhile, for a single page application, you really only pay that cost once. Subsequent navigations are all client-driven, so you only pay the cost of grabbing new data and then letting the JavaScript on your front end take care of the rest: you update your Vuex state, you pass the data into your Vue components, and they just re-render. So I think we have a pretty good understanding, hopefully, of the web app types, and hopefully you're now thinking, okay, this is the web app type I have and care about for my Vue application. So let's move on to the operations that we care about here, because that's also important. It's not just about what kind of app you have, but what users are doing with it. The two primary operations I identify here are what it takes to render a page, and the user interacting with the page. Now, there are other operations, but these are probably the most important, and the ones you should pay the most attention to when thinking about the performance of your Vue applications on the front end. The first one is rendering a page. It's pretty simple: you navigate to a URL, you load it, it does all the fancy network stuff — DNS, the TCP handshake — you get the assets back. Maybe that's HTML, maybe that's JavaScript, and it loads in the browser. For single page applications, it's not a hard page refresh every time. Like I mentioned before, you have navigations where the client just controls the history of the application. If you're using something like Vue Router, it's just pushing and popping state and updating that way. And this is really important. This is what people identify the most with when thinking about performance on the front end: how long does it take for me to load a page? This is what stuff like SEO cares about.
It's what a lot of the metrics that we're going to talk about later care about, and it's probably what you're going to spend a lot of your time optimizing for. But user interactions are still important. Actually, I think that in web performance as a topic, they're generally not as well investigated as the page rendering stuff — but they're still a concern. When I talk about user interactions, that's things like listening for user events: a key down, a mouse down, a user fills out a form, a user opens a modal — all of these micro-interactions. It's important to care about their performance.
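One cheap way to get a feel for the cost of those micro-interactions is to time the handler itself. A minimal sketch — `timeHandler` and `reportDuration` are names made up for illustration, not from the workshop; `performance.now()` is the standard high-resolution clock in both browsers and Node:

```javascript
// Wrap an event handler so we can measure how long it blocks the main
// thread. `reportDuration` is a hypothetical callback — swap in
// console.log, or whatever analytics you use.
function timeHandler(name, handler, reportDuration) {
  return function (...args) {
    const start = performance.now();
    try {
      return handler.apply(this, args);
    } finally {
      reportDuration(name, performance.now() - start);
    }
  };
}

// Usage (browser):
// button.addEventListener('click', timeHandler('sort-table', onSortClick, console.log));
```

This only captures synchronous handler time; async work kicked off by the handler needs separate measurement.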

4. Performance Optimization Considerations

Short description:

Animations, throughput, resource usage, bandwidth, latency, code performance, and the factors that distinguish front-end applications from traditional web servers or databases are all important considerations for optimizing performance. There are many factors outside of your control, such as user hardware, browser, network conditions, and user type. However, the way you write code and design your application can have a significant impact on performance.

I also list animations here. Usually a user isn't directly animating something on the page; rather, they interact with something — they click a button and an animation occurs. It's important for that to be performant as well.

And so now we have these operations. Hopefully this is starting to click in your mind: you have your web app type; how can we use both of these to identify performance problems?

And so I'm actually going to take a step back from the front end and think about a pretty well-researched topic, which is performance for your back-end applications. Let's say you're running a server — an Express server or a Flask server. What are you typically thinking about? Well, typically you're thinking about throughput and your resource usage. Throughput is the amount of data being transferred over a set period of time. We can call that amount of data bandwidth, and the amount of time it takes for that data to transfer we can refer to as latency.

And then of course resource usage: you're running a web server, it's maybe connected to a database, and you care about memory and CPU and things like that. So let's try to take those concepts, throughput and resource usage, and map them to the front end, because that's what we're talking about right now. If we think about bandwidth, which is the amount of data, that's easy, right? We're sending data to the browser for it to execute on: you've got your HTML, your CSS, your JavaScript — but also things like images, videos, and audio. What are some other good examples? JSON — maybe you're just sending JSON blobs — SVGs, anything really. Any data that you're sending. And that matters, because there's a cost for the server (or a CDN) to serve that up, and for the browser to then load all of that in: it has to parse the JavaScript, parse the HTML, parse the CSS, and do stuff with it. So there's a cost to that size.

On the other hand, fundamentally interconnected with bandwidth is latency. That could be your network latency, which is out of the control of either the browser or how you build the app — it's just what happens in between: the server connection, TCP, all of those things. There's latency that comes with every resource you load, and by resource I don't just mean images and videos, I also mean scripts, because oftentimes in modern web applications you aren't just sending one JavaScript file. You're sending chunked bundles. Maybe you're lazy loading things in, or you're bundle splitting. All of those have a cost, because you have to load them in.
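The per-resource latency described above is exposed by the browser's Resource Timing API via `performance.getEntriesByType('resource')`. Here's a hedged sketch — `summarizeResources` is a made-up helper, but the entry fields it reads (`initiatorType`, `duration`, `transferSize`) are part of that API:

```javascript
// Group Resource Timing entries by type and total their load time and
// transferred bytes, to see where your latency and bandwidth go.
function summarizeResources(entries) {
  const byType = {};
  for (const e of entries) {
    const t = e.initiatorType || 'other';
    byType[t] = byType[t] || { count: 0, totalMs: 0, totalBytes: 0 };
    byType[t].count += 1;
    byType[t].totalMs += e.duration;
    byType[t].totalBytes += e.transferSize || 0;
  }
  return byType;
}

// In the browser (not run here):
// console.table(summarizeResources(performance.getEntriesByType('resource')));
```

Note that `transferSize` can be 0 for cached or cross-origin resources without a Timing-Allow-Origin header, so treat the byte totals as a lower bound.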

And then, aside from the resources you're grabbing and the general nature of the network, you also have to think about the latency of your code, which is actual code performance. If you write unperformant code — let's say you write a Vue app and it has a massive amount of re-renders, you have an inefficient algorithm, you're not instantiating your Vuex state correctly — that has a cost. And all of these things have different kinds of solutions. So it's important to identify which kind of performance problem, whether bandwidth or latency, you care about, and then target that specifically. It can be hard to just say, "I want to fix performance," without knowing exactly what you're targeting. Resource usage, of course, is also a thing. JavaScript has to be parsed and executed, and obviously it'll use resources: under the hood it's allocating data, it's moving stuff around, so it has costs in terms of memory and CPU. Looking at these things, though, I want to ask a hypothetical — I'm going to answer it, but think about it a little: how much of this stuff is actually under your control? When it comes to influencing bandwidth and latency, when it comes to influencing a user's resource usage, how much do you control, and how much is out of your hands but still something you have to think about? This is really important, because it presents a fundamental difference between optimizing performance on front-end applications and something more traditional, like a web server or a database: there's actually a lot of stuff that is outside of your control. You can't, for example, just provision a new server and bump up the CPUs.

And so if we think about all of these different inputs, there are quite a lot of them, right? For your front-end app performance, I'm going to name some that I think about, but I'm sure there are many you can name that aren't on here. The user's hardware — the amount of CPU and memory they have access to on their laptop — and the user's browser: those are things that you do not control. They load your site and do whatever they want with it, but you don't control the underlying hardware. The network conditions they have: if they're loading your stuff on a 3G connection, or on a mobile device with a spotty connection, that's very different from a very fast LTE or wired internet connection, for example. Your app type — we mentioned single page applications, we mentioned static. Whether stuff is cached or not in the browser, and whether stuff is cached or not on the server — for example, if you're using a CDN to load resources, or putting your images in an S3 bucket versus on the server itself. The user type, which we'll go into in more detail — it's important to remember the user type actually has a massive impact on this. And then, of course, there's the developer. The way you write the code, the way you set up your architecture, the way you actually design your application has massive consequences for performance.
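Many of those uncontrollable inputs can at least be observed and attached as context to whatever you measure. A sketch, with the caveat that `navigator.deviceMemory` and `navigator.connection` are non-standard, Chrome-only APIs (hence the guards), and `environmentSnapshot` is a name invented here:

```javascript
// Snapshot the conditions you don't control — hardware, memory, network —
// so you can attach them as context to any performance data you record.
function environmentSnapshot(nav) {
  return {
    cores: nav.hardwareConcurrency || 'unknown',            // logical CPU cores
    memoryGb: nav.deviceMemory || 'unknown',                // Chrome-only, coarse
    connection: (nav.connection && nav.connection.effectiveType) || 'unknown', // e.g. '4g', '3g'
    userAgent: nav.userAgent,
  };
}

// In the browser: environmentSnapshot(navigator)
```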

5. User Perception and Performance

Short description:

Performance is tied to user perception and how users interpret loading and actions. Jakob Nielsen's research shows that users perceive 0.1 seconds as instant and 1 second as the limit for flow of thought. Delays up to 1 second are acceptable, but 2-3 seconds become noticeable, and 10 seconds is a barrier to avoid. Understanding users and their usage of your web app is crucial for solving performance problems. Your website is affected by factors beyond your control, such as browsers, user conditions, and dependencies. Performance is relative and depends on user perception and application type. Visual completeness exercises can help evaluate performance.

Cool. So let's actually talk a little bit about this user stuff, because at the end of the day, it's really important to note that performance is actually all about people. The two fundamental operations — rendering a page (page load) and user interactions — are heavily tied to user actions. Someone has to type something into their browser to load the page; somebody has to take some action to trigger a user interaction. So performance is very tied to how a user feels and how a user perceives what they're using. That does mean, though, that you can technically make a site feel faster, even without changing the underlying timing data, just by looking at user perception and how users interpret what's loading and what's happening.

Jakob Nielsen is an expert in usability studies and user research. I heavily recommend his book, Usability Engineering, if you really care about the user side of this stuff. He's done a ton of great research, but the thing I want to point out is his research on response times, which shows that it's not like people will notice the difference between half a second and a second — but there are points where perception transitions. Around 0.1 seconds is the limit for a user feeling that something is instantaneous. So whether something takes 0.05 seconds or 0.1 seconds, both will feel instant to a user. This is like: I hover over something and it changes color — as long as it happens in around 0.1 seconds, it'll feel instant. Around one second is the limit for a user's flow of thought. People will notice a delay between 0.1 and one second, but as long as it's under one second, they don't mind the delay, because it's understandable, in your head, that you have to pay a cost whenever you take some action, and it's okay to wait a little bit. You click to sort a table, for example — you click the top of a column to sort it — and you know that there's probably some computation running the sort. Even if you don't know how it works or what it's doing, you know that it costs something, so you're fine waiting a little bit. Of course, it'd be great if everything were instantaneous, but that's not really the reality. People will start to notice if that becomes two or three seconds. And then 10 seconds is the barrier you do not want to cross when building applications, unless it's something really expensive.
Like an export, or some other big operation. If so, you need to make it very clear that this is going on, and provide some kind of feedback to the user — a loading indicator, a progress bar, anything. By paying attention to these timings, you realize that you don't necessarily have to go in and optimize everything; you can change the actual design of your application to feel more performant. And that goes a long way.
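Nielsen's thresholds above can be sketched as a small helper for bucketing measured durations. The bucket names are my own labels, not Nielsen's terminology:

```javascript
// Classify a measured duration against Nielsen's response-time limits.
function perceivedSpeed(ms) {
  if (ms <= 100) return 'instant';     // limit for feeling instantaneous
  if (ms <= 1000) return 'flow';       // delay noticed, but flow of thought intact
  if (ms <= 10000) return 'waiting';   // needs visible progress feedback
  return 'abandoned';                  // users are likely to give up
}
```

You might use this to decide at runtime whether to show a spinner: anything that lands in the 'waiting' bucket deserves a progress indicator.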

So, some key takeaways before we move on to actually measuring this stuff and trying it out. I've got three here — there are more, but these are, I think, the three most important. First, these front-end operations — again, rendering a page and user interactions — are fundamentally user based. It's important to understand who your users are and how they're using the web app that you've built, because that affects how you look for performance problems and solve them. Second, your website doesn't exist in a vacuum. There are a lot of factors out of your control: the browser people are using, the machine they have when they load up your site, the network they have access to, the user conditions, and so on — even, for example, the services your site depends on. If you have a back end or a CDN, those are all dependencies that are sometimes just out of your direct control as a front-end developer, and it's important to keep that in mind: the performance of your application is affected by all of those dependencies as well. Third, who your users are and how they're using your app matters, because it affects what we think is good or bad performance. A question people often ask is, what is slow, or what is fast? But it's all relative. It's based on how people think about your application, how they use it, and on your application type as well. And I have this little note about visual completeness, because this is an exercise you can do. I'm going to show off an example webpage, but feel free to look at a page that you built and start thinking about it. This is a page I'm sure most of you are familiar with: the GitHub profile page. I just screenshotted mine as an example. And so we have all these various components. We have this nice image.

6. Measuring Performance and Asset Size

Short description:

When a page counts as visually complete depends on user needs and interaction capabilities. Having 70% of the page load quickly can be sufficient. Measuring performance is crucial for understanding the problem space and making comparisons. It helps identify issues, evaluate solutions, and prevent regressions. Performance data on the front end is high cardinality and requires context. Measuring asset size is a simple way to improve performance by reducing bandwidth usage.

We have a nice sidebar. We have a navigation on the top. We have this pinned repositories component. We have this contributions component.

If we look at this page, how much of it needs to load before we consider it visually complete — that is, how much of the page do we load before a user finds it useful? That's hard to answer, right? I feel like a lot of people reach for "it depends." And it really does. It depends on what the user is looking to get out of the page. The way I interact with my profile page is different from another person coming onto my profile. And priorities will change based on what a user wants to get out of a page.

So, some people might say that 100% of the page being there means it's visually complete. But that's often not good enough. Sometimes it also depends on whether the user can interact with the page or not. For example, I'm sure we've all had experiences where we load a page and a button doesn't work, because it has to load a bunch of stuff before it gets enabled. So both the ability to interact with the page and the ability to see the page — whatever's in the viewport — are important. And it's important to understand what you think is important for your user. Maybe having only 70% of the page here is good enough, and that's your goal: if 70% of the page renders really fast, then you know the page is usable, you know things are there. That's your indicator — you don't have to worry about every single DOM element being there and interactable.

So again, I know it's confusing — oh, I have to decide all of this. And as a developer, you really do; you have to build this understanding. But don't worry, there are some heuristics you can use. Plenty of people have put research into this subject. So let's start going into that. Let's start covering how we can measure this stuff, so that we can actually tell if something is slow or fast, and we can start defining what it means to be a performant app. So, measuring stuff — and whether those measurements are quantitative or qualitative; we'll cover what that means afterwards. Measuring stuff is really important because it gives you an indication of your problem space. It also helps you make relative comparisons: you can start comparing pages, comparing two different apps, comparing yourself with your competitors, for example.

And that kind of understanding also helps you evaluate possible solutions and how effective they are. For example, you identify that, hey, it takes a really long time for this page to load, and it's because this metric is really poor. Let's say you try a fix: you can look directly at the impact on that metric to know how effective your fix was. And once you start fixing stuff with these metrics, you can also guard against potential regressions. If you're keeping track of these metrics on a consistent basis, you can prevent situations where, for example, you ship a new feature on your app and suddenly your app is super slow. Maybe you don't realize it, because you're just so excited about whatever you built — but then you take a look and see, oh, my user-centric metrics, like the web vitals I'm using, are all down. That's really unfortunate. It's also really important to understand that when you start measuring stuff, performance data on the front end is fundamentally pretty high cardinality. That's related to all of the inputs we talked about: the network conditions, the hardware, the browser type, the type of user. It means you often can't just put a single number on things. You have to attach a lot of context about what's happening and what the conditions are when you record a specific piece of performance data, because that helps you put things into context. So remember this cardinality stuff — we'll get back to it.
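Because the data is high cardinality, a single average can be misleading — distributions and percentiles tell you much more. A rough sketch of a nearest-rank percentile helper (my own, not from the talk; fine for eyeballing, not rigorous statistics):

```javascript
// Nearest-rank percentile: p is in [0, 100], values is an array of samples
// (e.g. page-load durations in ms collected from the field).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// e.g. comparing percentile(loads, 50) and percentile(loads, 95) shows
// whether your slowest users have a very different experience than the median.
```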

And so, remember we've got bandwidth and latency? We're going to start measuring stuff — let's actually do some measuring. The easiest thing, which you can probably start doing today, is measuring your asset size. That's how many bytes you're sending over the wire, because that represents the bandwidth. The fewer bytes you send over, the less time it takes to load them, and the less time it takes to parse the JavaScript and the CSS, for example. And there's plenty of ways you can do this.

7. Measuring Asset Size and Page Load Performance

Short description:

There are various ways to measure asset size, including JavaScript, CSS, images, video, audio, web fonts, and even JSON blobs. It's important to be aware of the size of these assets and optimize them when possible. Tools like Webpack bundle analyzer can help analyze and identify unnecessary modules or large libraries. Measuring asset size can greatly impact performance. Additionally, metrics like First Paint and First Contentful Paint are essential for understanding page load performance.

I'm not going to go into it in too much detail — we can cover it more in depth if anybody's curious. But oftentimes just measuring your size and being aware of it is really important.

So, for example, for your JavaScript, there are really great tools out there to measure the gzipped bundle size. We actually use this at Sentry to measure the cost of adding new features, and the trade-off of, hey, can we write something in a way that won't add a huge amount to the JavaScript bundle? And that's pretty quantitative in nature — it's just looking at the kilobytes, for example.

It's also really important to analyze the bundle itself and actually check what's included. Maybe you're including unnecessary modules, you're not tree shaking properly, or you're keeping a bunch of stuff in that main initial bundle that could easily be lazy loaded in another bundle afterwards. Remember, there's a cost to grab that initial bundle, parse it, and then execute it. So you can use tools like webpack-bundle-analyzer, which is actually bundled into the Vue CLI. You can look up documentation on that to check out what's in your bundle and say, hey, that looks strange, or, I can't believe that library is that big, maybe I should switch to a different one. And you can do this with CSS, with images, video, and audio. Oftentimes with assets like images, it comes down to whether you optimize them or not. One thing people usually don't try to measure at all and often forget about is web fonts, but it's actually pretty important. It can be really easy to send unoptimized or just really big fonts, and you have to pay a cost to load all of that, of course. So starting to measure that and thinking about it can go a long way. And there's other stuff. You can send anything, right? Maybe you're sending JSON blobs. This is really common for static site generators, which send additional JSON that the JavaScript side then reads from.
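One way to wire the analyzer into a Vue CLI project is via `vue.config.js`. This is a sketch, and it assumes the webpack-bundle-analyzer package is installed as a dev dependency:

```javascript
// vue.config.js (sketch): emit a static bundle-analysis report on each build.
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  configureWebpack: {
    plugins: [
      // analyzerMode "static" writes a report.html instead of starting a server
      new BundleAnalyzerPlugin({ analyzerMode: "static", openAnalyzer: false }),
    ],
  },
};
```

Vue CLI also exposes this more directly: `vue-cli-service build --report` generates a similar report alongside your build output.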

Yeah. So on asset size, yeah, I think this is being recorded, and that'll be shared afterwards. We can also chat about it in more detail on Discord if you have questions. Yeah, no problem. Cool. Let's move on. Okay, you're now measuring your assets. What else can you do? Remember, there's this idea of latency, this idea of the page load operation, how long it takes to render a page. And so some industry experts, really smart folks, have spent some time building metrics, some of which are even built into the browser itself, on how they think you should measure page load and page load performance. These are heuristics in a way, but they're pretty good, and they're tried and tested. Some of these are web vitals, which hopefully some of you have heard about. The idea is that when you load a page, you can measure the duration of different things on the page according to what you prioritize. Three of these, CLS, LCP, and FID, are considered core web vitals by the Google Chrome team, which helps develop this stuff. But I included First Paint and First Contentful Paint here because I feel like they're just as important as the other three in understanding the performance of your web applications. Specifically, this is for when you render a page. We'll talk about metrics for user interactions afterwards. And so First Paint and First Contentful Paint, you can see from the image here, measure the amount of time it takes to actually load something for the first time in the viewport. First Paint is for the first pixel, and First Contentful Paint is for the first content, where content is defined as stuff like images or text blobs. If something renders on the DOM, then these things will fire. Usually, First Paint and First Contentful Paint will be the same thing.

8. First Paint and First Contentful Paint

Short description:

First Paint fires when something is painted on the page, but not enough to be considered content. For example, animations that require expensive computations can cause a discrepancy between first paint and first contentful paint.

And that's why they're usually bundled together, but the cases where they're not are really interesting. Oftentimes First Paint fires because something painted on the page, but it's not enough to be considered content because it's not a full DOM node or a full image or something. For example, let's say you're firing animations on the page, but computing the animations is really expensive for some reason. So you're calling requestAnimationFrame and doing some computation to run the animation, and that's leading to a big discrepancy between the first pixel paint, your first paint, and your first contentful paint. So usually they're the same, you don't have to worry about it, but there are some interesting scenarios where they're different.
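Both paint timestamps are exposed in the browser's performance timeline, so you can read them directly. A browser-side sketch; outside a browser the list is simply empty:

```javascript
// Read First Paint / First Contentful Paint from the paint timeline.
// In a browser, entry.name is "first-paint" or "first-contentful-paint";
// in other environments getEntriesByType just returns an empty array.
const paints = performance.getEntriesByType("paint");
for (const entry of paints) {
  console.log(entry.name, entry.startTime.toFixed(1), "ms");
}
```

Comparing the two startTime values is the quickest way to spot the discrepancy scenario described above.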

9. Performance Metrics and Their Significance

Short description:

Cumulative layout shift (CLS) measures how much your layout shifts as your page loads. Keep CLS to zero for a stable and reliable user experience. Largest contentful paint (LCP) measures the render time for the largest content in the viewport. First input delay (FID) measures the response time for user interactions. Time to first byte, time to interactive, and total blocking time are additional metrics to consider. LCP is a heuristic and may pick an irrelevant element. Watch out for it. JavaScript execution has a cost, but is usually not a bottleneck.

Then we have cumulative layout shift, which is not actually a duration, like a lot of the stuff that we're going to talk about, it's just a score. And there's a way to calculate this, some great documentation on it that you can look up. But it basically keeps track of how much the extent of how your layout shifted as your page was loading. And so we've all kind of probably had this experience like we loaded a page up, and we were ready to click on a button, and then suddenly the button moves down because something else loaded above it and we click on the wrong thing. And that's a frustrating experience, right? And so it's not like a duration related to latency, but it is related to kind of the stability and the reliability of your page, which I think is fundamentally part of performance, we want a consistent and reliable experience for our users while they're using it, right? So try to keep your CLS to zero, many strategies to do this, skeleton components, so on and so forth.
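If you want to watch CLS accumulate yourself rather than relying on a library, a browser-side sketch looks like this. The `layout-shift` entry type only exists in Chromium-based browsers, so the observe call is guarded:

```javascript
// Accumulate a CLS score from layout-shift entries.
// Shifts right after user input are excluded, matching how CLS is defined.
let clsScore = 0;
try {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) clsScore += entry.value;
    }
  });
  observer.observe({ type: "layout-shift", buffered: true });
} catch (err) {
  // "layout-shift" is unsupported here (e.g. Firefox, Node); clsScore stays 0
}
```

The `buffered: true` option replays shifts that happened before the observer was registered, which matters because layout shifts cluster around the initial load.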

Then we've got largest contentful paint. Where first contentful paint measures the render time of any content on the page, LCP, based on how the metric is defined, measures the render time for the largest content to appear in the viewport. The viewport is important because it's just the part of the screen that the user can see. The largest content can vary, usually it's an image or SVG or a large text blob, but they pick the largest element because it's considered the most visually defining element, the one that contributes most to the page looking visually complete and can be considered the main content. Now, LCP isn't perfect, it's a heuristic. The browser has to guess what the largest element is, and sometimes the largest element isn't even the element that you care about. So the browser might be choosing the largest element correctly, but that's not the element that matters for visual completeness on the page, and we'll go into that.

And then finally we have first input delay, which measures the response time when users try to interact with the viewport, like clicking on a button or a link or a dropdown. So this is a measure of, are people able to interact with my page properly when it loads? If your first input delay is really poor, you know that people are having a terrible experience with your app, because they can't even interact with it at all. This is something that you should definitely focus on. If you have a bad FID, you need to fix it. This is a big red flag. All of these other things can kind of be explained away; FID can't really be explained in the same way. Three of these, CLS, LCP, and FID, are actually all part of Google's page rank algorithm now. So it affects your site's SEO and all of this, so it is important and people really do pay attention to it. Some other metrics that are important, but not core to defining your application's performance, and that I think are useful to think about on the side, are time to first byte, time to interactive, and total blocking time. Time to first byte is how long it takes for the browser to receive the first byte of page content. And that's more of an indicator of how representative your data is, and of whether there's something you should be looking into. Because sometimes it just takes a long time for the browser and the server to communicate with each other because of network conditions, who knows, maybe something's happening, right? Maybe you have a really long time to first byte, and that'll probably result in a long LCP and a bad FID. And then you look at the metrics and you're like, oh no, my LCP's doing so bad. But if you look at your time to first byte, you can put it into context and see that, okay, that's out of our control a little bit.
It was because the network was behaving poorly, or just because the initial response was behaving weirdly, and not actually because we were doing something bad. But that's not always the case. There are some cases where you didn't design something properly and the time to first byte is horrible. Time to interactive is really hard because you can't measure it out in the field; it has to be done in lab conditions. You have to synthetically test your page to get time to interactive, but it's really a calculation of how long it takes your page to be ready for interaction. So if your main JavaScript thread is tied up a lot, running computations, calling third-party scripts, and so on, then when a user tries to interact with the page, maybe the event listener fires, but it'll take a while for the event loop to get to the point where it can actually do the work you fire in your event listener. It's important to keep this in mind, but oftentimes it's really hard to measure outside of perfect conditions. So yeah, I don't pay attention to time to interactive that much, but it's still a useful tool to have in your toolbox. And then there's total blocking time, which is the time between first contentful paint and time to interactive. The main thing to pay attention to is that it represents the amount of time that the main thread was blocked. And this is really important, because JavaScript is an evented, single-threaded, asynchronous language. That means that if the single thread is blocked by some expensive computation or some expensive IO, let's say it's doing a big operation on the DOM,
or it's sending out a bunch of requests and has to fulfill a bunch of promises, that can lead to a poor experience, because the main thread is so busy doing that one long task that it can't get to other tasks, like rendering stuff onto the page, sending out requests, and, of course, handling interactions. And then there's the cost of the code itself. It's not as bad, because browser engines are really fast, V8, the JavaScript engine that Chrome uses, is ridiculously optimized, but there's a cost to pay to parse and execute JavaScript, and a cost to pay to parse CSS and HTML. It's important to remember that. In my experience it's usually not the bottleneck, but it's still worth keeping in mind. And so watch out for LCP, because it's a heuristic, as I mentioned before, and it might pick an element that's really not important. The example I've shown, which hopefully all of you can see on the screen, is actually from the Sentry site itself. When you load Sentry, it's a single-page application built in React. I know, boo-hoo. What can you do? When we load in data, we have this really nice loading indicator that says, please wait while we load an obnoxious amount of JavaScript. But the browser, you can see that it fires first contentful paint, but it actually picks this loading indicator as the largest contentful paint and says, OK, the page has rendered its largest contentful paint, it's done. We've fired LCP. This is your LCP value. Even though this is completely wrong, it's just a loading indicator.
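Circling back to time to first byte for a moment: you can read it yourself off the navigation entry, which is a browser-side sketch. `responseStart` is the first-byte timestamp relative to the start of the navigation:

```javascript
// Derive time-to-first-byte from the Navigation Timing entry.
// Outside a browser there is no navigation entry, so nav is undefined.
const [nav] = performance.getEntriesByType("navigation");
if (nav) {
  console.log("TTFB:", nav.responseStart.toFixed(1), "ms");
  // How much of that was spent waiting on the server specifically:
  console.log("request -> response:", (nav.responseStart - nav.requestStart).toFixed(1), "ms");
}
```

Logging this next to your LCP numbers is exactly the "put it into context" check described above: a long TTFB often explains a long LCP.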

10. LCP, Timing Elements, and Google Lighthouse

Short description:

LCP is a heuristic that can pick the wrong elements. For more in-depth analysis, timing the elements you care about is recommended. You can pseudo time elements in Vue by hooking onto life cycles. Google Lighthouse is a great tool for running reports on web pages. It can be run standalone or within the Chrome browser. You can run it on any website and generate a report. The WebDev site is a great resource to learn more about web development.

There's way more content to be loaded afterwards, right? And so this is a case where LCP is a heuristic and it just picks the wrong element. The algorithm will attempt to fix itself: if it notices a larger or more important element, it'll recalculate and redefine the LCP, which is why, if you actually look at the metric itself, there are intermediate and then final LCP values. But it could still choose wrong. And so this is why, even though it's an important metric, a great heuristic, and a great way to get a gut check on how your page is doing, for more in-depth analysis, or if you're really knowledgeable about your user flows and what your users care about on your page, I recommend actually just timing the elements that you care about the most.
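That recalculation is visible if you observe the entries yourself: the browser emits one `largest-contentful-paint` entry per candidate, and the last one before the first user input is the final value. A guarded browser-side sketch, since the entry type is Chromium-only:

```javascript
// Track LCP candidates; later entries supersede earlier ones.
let lcpEntry;
try {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    lcpEntry = entries[entries.length - 1]; // keep the most recent candidate
  }).observe({ type: "largest-contentful-paint", buffered: true });
} catch (err) {
  // entry type unsupported in this environment (e.g. Firefox, Node)
}
```

Inspecting `lcpEntry.element` in devtools is a quick way to catch the loading-indicator situation from the screenshot.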

And so there's an element timing spec, which is in beta, I think, that's really great because you can time individual HTML elements, but it doesn't have a lot of browser support yet. The great thing about building stuff with a framework like Vue, though, is that you can pseudo-time things yourself by hooking onto the Vue lifecycles, mounted, unmounted, and so on, and just start timing the stuff that you care about: how long does it take for this component to mount on a page, when does this component mount relative to others? Thinking about those things is really great because it lets you define what your most important elements are for the page to be usable, to be visually complete, to have the best user experience, and start timing that. And it's not that expensive, right? You just collect a timestamp. And there are ways to do this, which we'll go into afterwards.
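As a sketch of that idea, a global mixin (Options API) can stamp a performance mark whenever any component mounts. The `vue:<name>:mounted` mark names are a made-up convention here, not a Vue API:

```javascript
// A Vue global mixin that records a performance mark when any component mounts.
// Mark names like "vue:<name>:mounted" are a made-up convention, not a Vue API.
const mountTimingMixin = {
  mounted() {
    const name = (this.$options && this.$options.name) || "anonymous";
    performance.mark(`vue:${name}:mounted`);
  },
};

// In a Vue 3 app you would register it once: app.mixin(mountTimingMixin);
// afterwards, performance.getEntriesByName("vue:SomeComponent:mounted")
// tells you when that component appeared relative to navigation start.
```

Because marks live in the same performance timeline as the built-in entries, you can compare a component's mount time directly against first contentful paint.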

Cool. And so, let's actually try to see this stuff in practice by looking at Google Lighthouse, which is a really great tool. You can run it standalone, but it's actually built into the Chrome browser as well, and it allows you to run reports on various pages. You can run this on whatever website you like and take a look, maybe the stuff that you're building. As an example, though, I'm actually going to go to the Vue docs. Let's go to the version 3 docs. What's a nice, pretty complicated page? How about computed properties? Pretty complicated. It seems like there are ads, headers, complicated components, a lot of text, and various other things. No images, but I think that's fine to represent a pretty good page. And so if you have your Chrome dev tools open, you can go to Lighthouse. I'm going to run this for desktop. For now, I'm just going to tick all of the categories. It's important to note that you can simulate different things here, but let's generate the report. This is synthetic data, and it's gathering information about the page. You can be doing this yourself at the same time; I really recommend running this on the applications that you care about. A nice thing here is that it gives us a progress bar, so at least, you know, that's some feedback, so we know it's doing something. Last time I ran this, it didn't run this long. So now I'm wondering if something's up with the performance of Lighthouse. A great consequence of running demos live. And I'm not sure why it's taking so long. Uh, Google Chrome. Well, that's unfortunate. So let's actually show off what this looks like anyway. This is usually what a Lighthouse report will look like. All right, let's cancel and try to run it on mobile.
Maybe that'll fix it here. But anyway, Lighthouse is a great thing to run, if it works. I guess it didn't. Oh yeah, you can use web.dev slash measure as well. Actually, yeah, why don't we give that a try. It's just an example. The web.dev site itself is really great; it's a great resource to learn more about just about anything on the web in general.

11. Analyzing Performance Data and Distributions

Short description:

I recommend checking out collections on performance that provide insights into web vitals like largest contentful paint and first input delay. Optimizing JavaScript and lazy loading are important, but I'll focus on identifying performance problems. We can use web.dev/measure and run an audit to get a report on our site's performance. Lighthouse provides recommendations and helps us understand how our web vitals are performing. However, Lighthouse tests are limited to specific conditions, so collecting data in the field is crucial. Chrome's Web Vitals library is a great tool for collecting data and creating distributions to gain more confidence in performance analysis. Histograms can reveal valuable insights, such as positive or negative skewness and spikes in outliers. By examining data distributions, we can identify common scenarios and problems, rather than relying solely on hard numbers. Collecting data in the field allows us to analyze performance for different user types, browsers, and network conditions.

But I really recommend checking out these collections on performance as a deeper dive. In particular, it defines a lot of these web vitals we talked about, like largest contentful paint and first input delay, and it talks about what you can do to optimize: optimizing JavaScript, lazy loading stuff. This is all stuff I'm not going to go into in too much detail, because I really want to focus on how to identify performance problems instead of actually solving them. But once you identify your performance problems, there are lots of techniques to help solve them.

And so actually let's go to web.dev slash measure. Let's copy in the URL, oops. Let's run the audit, and hopefully this will run the audit on the side. Oh, sweet. Okay, so we got the report. Normally if you run this in Chrome with Lighthouse, we'll get it right here, but we can see a nice report of how well our site is doing. And you can see that it has a 44 score for performance, which is not the best, but I can probably tell you that it has something to do with these components and these ads, and maybe some third-party scripts. If we look at the network, there's a lot of JavaScript being loaded here, and we can actually take a look and it'll tell us, hey, this is how your web vitals are doing on your page, your first contentful paint. And we can see some of these numbers: total blocking time, time to interactive, LCP. And it gives us some recommendations on how we can improve. And this is a great way to gut check, like, am I doing well, am I doing poorly? Where should I be heading?

But a thing to note with Lighthouse is that it's one run, on your personal machine, on a probably up-to-date browser, on probably solid network conditions (if you're a developer, hopefully you have pretty solid internet access), as your persona, whatever user you're signed into, with whatever resources you have access to. And as we discussed previously, the permutations of how people interact with your web applications are huge: multiple user types, plenty of different browsers and browser versions, plenty of different user hardware, different kinds of laptops. So even though this one lab test may be pretty good for setting the tone, it doesn't tell the whole story behind your performance. And this is why you should be collecting data not just from synthetic experiments, where you're running stuff in Lighthouse and collecting one-off metrics; you should be collecting data in the field as well. Chrome has a web-vitals library, for example, that a lot of people use under the hood. It's really great, and I actually recommend checking it out: github.com/GoogleChrome/web-vitals. You can even look to see how something like LCP is calculated. And if you want to, say, use LCP but change the definition slightly for your app, because you know what measures your performance best, you can do that. You can take this code and adjust it slightly, and you can do that for any of the metrics that this Google Chrome web-vitals library is using. This is actually what Sentry uses for its performance monitoring too; it just uses this library under the hood. And if you start collecting this information for every page, you can start creating distributions of data and getting more confident in it. And these distributions matter.
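Collecting with that library typically looks something like this sketch. The `/vitals` endpoint is a made-up placeholder, and note the import names changed across web-vitals versions (older releases used `getCLS`/`getLCP`, newer ones `onCLS`/`onLCP`):

```javascript
// Sketch: report each web vital to your own endpoint as users browse.
// import { onCLS, onFID, onLCP } from "web-vitals"; // (getCLS/getFID/getLCP in older versions)

function sendToAnalytics(metric) {
  // web-vitals passes a metric object with { name, value, id, delta, entries }
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unloads better than a plain fetch
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    navigator.sendBeacon("/vitals", body);
  } else if (typeof fetch !== "undefined") {
    fetch("/vitals", { method: "POST", body, keepalive: true });
  }
}

// onCLS(sendToAnalytics);
// onLCP(sendToAnalytics);
```

Each callback may fire more than once per page (LCP candidates, accumulating CLS), which is why the metric carries an `id`: your backend can de-duplicate and keep only the final value per page view.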
So as an example, on the right, the image shown is a screenshot I took from Sentry; this is the product I work on. You can see that we collect, of course, the average, which is important, but averages don't tell the whole story. The percentiles and the distribution in the histogram also matter. So for example, let's say we're measuring LCP. We use this Chrome web-vitals library, and so we're measuring LCP for every page as our users are actually using it out in the field, in the wild, not some simulated synthetic experiment; we're actually seeing real users use our application in the scenarios that they're in. The histogram will tell you a lot of stories. If it's left skewed, you can probably tell, hey, it's more positive than it is negative. If we have some spikes in the outliers, there might be some cases there, some conditions, that are leading to those spikes. If instead of a unimodal you have a bimodal distribution, which means you have two spikes in the distribution, you probably have two really common scenarios that users are hitting for some reason that lead to very different performance results on average. And so, by examining your data this way, in a distribution, you can get more confident in it, tell cooler stories, and start identifying problems that way, rather than just looking at a hard number and saying, oh, that's good or bad. Though hard numbers can be gut checks: if you load a page in 10 seconds, that's problematic, you need to fix it.
Another really cool thing about collecting your data this way in the field, running libraries and collecting it right off the browser, is that you can now involve all of this cardinality stuff. You can identify, okay, what does performance look like for 3G connections versus regular connections? You filter for that and you can see, oh, I have a ton of people using my application over 3G or on a mobile browser, like Chrome on Android or Safari on iOS. What is the consequence of that? Maybe for typical users using Chrome on a MacBook, oh man, the performance is really great.

12. Slicing and Dicing Performance Data

Short description:

The site loads really fast. LCP is wicked fast. But for users with different conditions, it's poor. Slicing and dicing performance data helps solve their problems specifically. You can send data to any analytic sink or self-host and use histograms to make decisions.

The site loads really fast. LCP is wicked fast. It looks really great. But for these kinds of users that maybe aren't, don't have access to that kind of hardware or are on older browsers, it's really poor. And so by kind of slicing and dicing your performance data that way, you can kind of focus in on that subsection of users who have different user conditions and try to solve their problems specifically. And that goes a long way. That kind of helps you build kind of... Yeah, thank you for linking that. I appreciate that. Lucas linked the Chrome Web Vitals library in the Zoom chat. And so that goes like a long way. And I just showed off Sentry as an example tool, but you can send this data to any kind of analytic sink you want. You can even self-host like, I don't know, like a Prometheus instance and collect the metrics data yourself and visualize it and render it anyway. I think it's just important to use stuff like histograms and start aggregate collecting this data so that you can kind of start making decisions on it.
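Once the raw samples are flowing into whatever sink you chose, the aggregation itself is simple. A sketch of the two views discussed above, a percentile and a histogram, over a batch of collected LCP values in milliseconds (the sample numbers are invented):

```javascript
// Nearest-rank percentile over a batch of samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[index];
}

// Bucket samples for a histogram, e.g. 1000 ms wide LCP buckets.
function histogram(values, bucketSize) {
  const counts = new Map();
  for (const v of values) {
    const bucket = Math.floor(v / bucketSize) * bucketSize;
    counts.set(bucket, (counts.get(bucket) || 0) + 1);
  }
  return counts;
}

const lcpSamples = [900, 1200, 1250, 1400, 4800, 5100]; // a bimodal-looking batch
console.log("p75:", percentile(lcpSamples, 75));
console.log(histogram(lcpSamples, 1000)); // two clusters -> two spikes in the histogram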

13. Using the Performance Timeline API

Short description:

The performance timeline API allows you to see performance-related metrics in your application. You can use the performance singleton API to list all performance entries, including information about script loading, network requests, and timing. You can filter and monitor this data to understand the impact of specific scripts or third-party tools like Google Analytics. By using the performance observer API, you can observe specific performance entries and track them over time. This user-defined code allows you to gather and analyze performance data in your application.

So these are kind of some like metrics. Pretty standard. Let's say you wanna start defining this stuff yourself. What can you do? Well, there's the performance timeline API. And to actually make this easier, I'm going to just walk through the docs because I feel like that's a great way to explain it. You can kind of see yourself.

So the performance timeline API, coincidentally, I've searched this before here. These are browser APIs that you can use to see performance-related metrics in your application. So, I want to see an example. I'm back in the Vue docs here. I'm going to go into the console, let's clear this. You can run this on any application that you're interested in. We have the performance singleton API, and if we want to list all of our performance entries, which are part of this performance timeline API, we can do performance.getEntries(). And we can see all of this great performance-related information, these performance entries. There's 211 of them. A lot of these are resource entries, things like scripts, so we can see the name of the script and a bunch of great information about how long it took to load, how long it took to fetch, how long it took to do the DNS lookup. So you can actually see the network-related information as well. There's a lot of those here. Oh, there's the first paint time. There's the first contentful paint time. We can see that they're basically the same here. Scrolling down, we can see that there's an XHR request, so any XHR or fetch requests, we can see it making those here. I see that it loads in some CSS, some scripts. And all of this stuff you can actually filter for and grab yourself. You can grab this performance data, you can grab this timing, and you can decide how best to use it. This is basically how you can get the information that you see in your Lighthouse reports, or in the network tab you usually have in your browser, through the performance timeline API. You can start grabbing these and looking at them and monitoring them.
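For example, pulling out just the script resources from the timeline, along with their durations and transfer sizes, looks like this browser-side sketch (in other environments the list is simply empty):

```javascript
// Filter the resource timeline down to scripts and report timing and size.
const scriptEntries = performance
  .getEntriesByType("resource")
  .filter((entry) => entry.initiatorType === "script");

for (const entry of scriptEntries) {
  console.log(entry.name, Math.round(entry.duration), "ms,", entry.transferSize, "bytes");
}
```

The same pattern works for `initiatorType` values like "css", "img", or "xmlhttprequest" if those are the assets you care about.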

So, for example, let's say you have scripts that you really care about. You have a couple of third-party scripts, for example, that you know will load on every page, and you really care about how long they take to fetch and actually load on the page. You can track those specifically over time. Maybe that'll help you understand the impact of adding Google Analytics, for example. And the nice thing is, this is user-defined. This is just code you write, right? So you can run this in your application and grab all of this yourself. You can see that I call this after the page is loaded to see all the performance entries. If there are performance entries that you care about specifically, you can use performance observers to observe them. So let's actually just write a snippet. There's a PerformanceObserver API you can look up docs for. It takes in a function that gets called every time something is observed, and that function gets a list object. I'm not going to go into too much detail, because you can get the idea here, but you can call list.getEntries().

14. Performance Observer API

Short description:

The performance observer API allows you to observe specific types of entries, such as scripts, and gather information related to them. You can process and aggregate the data in any way you want, and even send it to an analytics sink. I recommend reading up on the performance timeline API for more details.

And you can see this is the same API as performance.getEntries(), so this is a specific list of timeline entries. And this entries value is just an array, so you can do whatever you want with this array. You can process it in any way you like, with entries.forEach, for example. Right? Send them over to some analytics sink you have somewhere, make some decision based on them, aggregate them, anything. And the way you observe is you say, oh, I want to observe a certain type of entry. So for the entry type, let's look at an example. Let's say I want to observe all of my scripts: okay, I'll observe all of my script resources and get all of the information related to them. And so this is the PerformanceObserver API; you can use it here. I'll link the performance timeline API. I really recommend you read up on it and take a look if you're interested.
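Put together, the snippet from the demo comes out something like this sketch. One detail worth noting: scripts show up in the timeline as `resource` entries with `initiatorType === "script"`, so that's the type you observe:

```javascript
// Observe resource entries as they arrive and pick out the scripts.
const handleEntries = (list) => {
  list
    .getEntries()
    .filter((entry) => entry.initiatorType === "script")
    .forEach((entry) => {
      // aggregate, or ship to an analytics sink of your choosing
      console.log(entry.name, entry.duration);
    });
};

// Guarded so this is a no-op where resource timing isn't supported.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes.includes("resource")) {
  new PerformanceObserver(handleEntries).observe({ entryTypes: ["resource"] });
}
```

Unlike calling performance.getEntries() after load, the observer fires as entries arrive, so it also catches resources loaded later, like lazy-loaded chunks.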

15. Custom Metrics with Marks and Measures

Short description:

You can emit arbitrarily defined mark and measure timestamps using performance.mark and performance.measure. These tools allow you to measure your own metrics aside from the defaults provided by Web Vitals. Start defining your own metrics using the performance timeline API, marks and measures, or sets of scripts and CSS files that you care about.

So other than the performance entries, there are also marks and measures. If you want, you can emit arbitrarily defined mark and measure timestamps using just what the browser gives you. There's performance.mark and performance.measure, and you can see the details for those here. As an example, let's try... okay, the reason why this failed is because you have to give it a name. And then you can see that I gave it the name "mark", and it emitted the time since the navigation start of the application. So using performance.mark is a way to start emitting your own metrics arbitrarily for whatever you care about. Let's say your Vue component mounts on a page and you emit a performance.mark; you can collect that later with a performance observer that's listening for marks, and so on. So there are a lot of tools here to measure your own metrics aside from the defaults given by Web Vitals, like LCP and FID. If you're interested, I would start with just LCP and FID, for example, and these kinds of Web Vitals. You measure those first and get some confidence. Then, as you want to explore how users are using your application and dig into its performance problem space, start defining your own metrics, things that you think are important, using the performance timeline API, marks and measures, or sets of scripts you especially care about, CSS files you especially care about, and so on.
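A complete version of that flow, two marks bracketing some work and a measure between them, looks like this sketch. The names are arbitrary, and in modern browsers and Node, performance.measure also returns the resulting entry:

```javascript
// Custom metric via marks and a measure; the names are arbitrary.
performance.mark("data-load-start");

// ... fetch data, render components, etc. ...

performance.mark("data-load-end");
const measureEntry = performance.measure("data-load", "data-load-start", "data-load-end");
console.log("data-load took", measureEntry.duration.toFixed(2), "ms");
```

A PerformanceObserver listening for `entryTypes: ["mark", "measure"]` will pick these up the same way it picks up the built-in entries.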

16. Sentry Performance Monitoring and Metrics

Short description:

Sentry interprets performance data by sending transactions and capturing metrics like FCP and resource loading times. These metrics can be filtered based on tags, allowing you to understand performance for specific user types or conditions. While Lighthouse provides more detailed information through lab data, instrumenting your app with performance observers and using libraries like Sentry or Web Vitals can give you real-world insights. Feel free to reach out on Discord for further assistance or to discuss specific APIs and use cases.

I actually want to show off a little bit at the end, I know we're a little over time, but I want to show off what Sentry does. I'm not going to walk through a whole example of adding Sentry to an application, but I can show off what it looks like. So we'll go to the Vue project here, the performance tab... actually, I think it's in my test project. The way Sentry interprets this performance data is by sending transactions. We can see here that only the FCP was captured, because it's a relatively simple app, but we can see a nice waterfall of the information that's collected: the browser-related stuff, how long it takes for each of the resources to load, and then the marks and measures that are emitted (these can be user-defined) are also captured here. This helps you put things into context, and you can even filter based on these different tags. This is how Sentry thinks about performance and tracking it, with an emphasis on being able to filter on tags and understand the cardinality of the data. So you can say, okay, I only want Firefox users, to understand what their performance is and how long their pages take to load. Or I only want users from a specific set of IPs, or users with a specific connection type; it's not listed here now, but if it's a 3G connection, it'll show up here. And so that's how Sentry thinks about these problems. But in reality, once you start collecting this information, these vitals, these metrics, it's really up to you. There are a lot of ways to explore, to collect data, and to make decisions on this stuff. And so that's pretty much it, all I have to cover. I will take a look at Discord to see if there are any questions.
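The filter-by-tags idea is simpler than it sounds: each metric is recorded together with a bag of tags, and queries slice on those tags. A minimal plain-JS sketch, where the payload shape and tag names are illustrative and not Sentry's actual wire format:

```javascript
// Attach tags to a metric so it can be sliced later.
// The { name, value, tags } shape is invented for this sketch.
function tagMetric(name, value, tags) {
  return { name, value, tags };
}

// Return only the metrics whose tags match every requested key/value,
// e.g. "only Firefox users on a 3G connection".
function filterByTags(metrics, wanted) {
  return metrics.filter((m) =>
    Object.entries(wanted).every(([key, value]) => m.tags[key] === value)
  );
}

const metrics = [
  tagMetric('fcp', 1200, { browser: 'Firefox', connection: '3g' }),
  tagMetric('fcp', 600,  { browser: 'Chrome',  connection: '4g' }),
];

const slowCohort = filterByTags(metrics, { browser: 'Firefox', connection: '3g' });
// slowCohort contains only the Firefox/3G measurement
```

The important design point is cardinality: tags with a bounded set of values (browser, connection type) slice cleanly, while unbounded values (raw URLs, user IDs) blow up the number of distinct series.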
But feel free to also put them in the Zoom chat if anybody has any questions. I have some time. I'll wait a couple of minutes in case people have to collect their thoughts; I know I was just speaking continuously for a long time. But hopefully this serves as a great introduction to these topics, so that you can start diving deep into the things you find more or less interesting. Cool, and I'll also be sticking around on the Discord afterwards. I think the recording will be provided at a later time; I don't have full details on the logistics, but I think that will be shared as well. Cool, yeah, great, thank you very much. I'll be on the Discord if anybody has any questions, or wants to dive deep into specific APIs or specific circumstances, or is using this stuff in a creative way. We see some pretty interesting use cases of understanding performance and diving deep into these problems at Sentry, so I'm more than happy to help walk you through things as well. Cool. We'll wait. So we got a question on Discord asking whether Sentry provides a performance metric similar to Lighthouse. That's actually a great question. It's important to note that Lighthouse is lab data. It's simulated, which means it can set up your app in whatever conditions it wants and measure everything. That's usually not possible out in the field, where someone just visits your site and you measure what you can. So there's less information you can get out of just instrumenting your app with performance observers and the Web Vitals library, or with Sentry, and trying to grab information. Lighthouse and synthetic data will give you a lot more because you have full control over the environment. But it's also important to note that some of these APIs aren't available in every browser.
Some of these APIs don't work the same in every browser; they're in various stages of development or rollout. So something might behave slightly differently in Firefox than in, for example, Chrome, which is why I recommend using a library like the Sentry SDK, which I helped build, or Web Vitals, which I showed off earlier. So yeah, cool, I'm going to end it there. Any other questions? I'll type in a proper response. For any other questions, please reach out on Discord, in the October 19th web performance channel. I'll be more than happy to help you folks out. Thanks, everybody. I hope you have a great one.
