Core Web Vitals - What, Why and How?


Performance can make or break a website, but how can you quantify that? In this session we will look at the Core Web Vitals as a way to measure performance on the web. Specifically, we'll go through the history of web performance measurements, where the new metrics come from and how they are measured.

27 min
15 Jun, 2021

Video Summary and Transcription

This Talk provides an introduction to the core of Vitals and its contribution to Google search. It discusses the evolution of website performance metrics and the need to consider factors beyond the time to first byte. The concept of Core Web Vitals is introduced, consisting of three metrics: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. The upcoming Page Experience signal, launching in May 2021, will combine Core Web Vitals with existing ranking signals. The Talk also addresses challenges in measuring performance and provides insights on layout stability and visual completeness.


1. Introduction to Vitals

Short description:

Hello and welcome to my session about the Core Web Vitals. We'll talk about web performance, the Core Web Vitals, and how they contribute to Google Search. Website performance is about quantifying whether a website is fast and delightful for users. It has evolved over time and continues to evolve as our understanding of web performance changes.

Hello and welcome to my session about the Core Web Vitals: their what, why, and how, more specifically. So this is a testing conference, and I'm always a little humbled to speak at testing conferences because I'm not that much into the testing space anymore. I do write tests when I write my code, but you all are probably more expert here than I am. Nonetheless, testing your website performance is an important thing, and the Core Web Vitals are a tool to accomplish exactly that.

So I think it makes sense to discuss these things. I'll look with you at three different things tonight. First things first, we'll talk about web performance, or what website performance actually is. We'll talk about the Core Web Vitals, and then we'll also talk about how the Core Web Vitals will contribute to Google Search in the form of the Page Experience signal launching in May. So there are some SEO, or search engine optimization, implications from this as well.

So let's start with what website performance actually is. Intuitively, we all know the answer to this question: is a website fast and delightful to use or not? But if you want to compare that between sites, and maybe even between different versions of the same site, it becomes a lot more tricky, because you want something that you can compare and track over time, and intuitive measurements don't really tick that box. The goal is to quantify it, to have some sort of number or metric that tells us whether a website is fast and delightful for a user or not. As we will see in this talk, this has evolved over time and continues to evolve even today, as our understanding of what makes a website fast, performant, and delightful changes, and as the web and the kinds of websites we build change. There won't be an easy answer. That's the spoiler alert. But let's have a look at this.

2. Quantifying Web Page Performance

Short description:

One of the earliest metrics to quantify web page performance is the time to first byte. However, this metric is no longer sufficient to determine if a website is fast and delightful. The website architecture has changed, and bandwidth and connection speeds are not the main bottleneck anymore. A better metric is the overall completeness of the response. For example, a slower website that delivers a more complete response is considered better than a faster website that delivers an incomplete response. Time to first byte is still useful in identifying connection issues, but other factors such as rendering speed should also be considered.

How could we quantify web page performance? One of the earliest metrics has probably been the time to first byte. We would measure how long it takes for the first byte of the response from the server to come back to our computer or device, so that the browser can start parsing and eventually rendering the page.

And historically, this has made a lot of sense. With classical websites, like this example.com case, our browser would make a request, the web server would respond with the HTML, and then the content would be visible in the browser. There are a few factors that we, as website owners and developers, can influence to make sure that this is fast. We make sure that our server is fast, has enough memory, enough capacity, and good network bandwidth. We can also make sure that the server is physically close enough, because it simply takes time for data, as electrical or light impulses, to travel. If I'm here in Switzerland and the server is in Australia, then it might take a while until the request has made its way to Australia and the response comes back. It might be lost on the way and have to be retransmitted. So this can take significantly longer than when the server is, for instance, in my own city. I live near a data center, so if the server is located there, it takes basically no time at all. It's going to be really, really quick, and thus the time to first byte will be a lot shorter than it would be with a server in Australia.
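As an aside, not from the talk itself: in modern browsers you can read the time to first byte from the Navigation Timing API. A minimal sketch, assuming a standards-compliant `PerformanceNavigationTiming` entry (the logging is only a placeholder):

```javascript
// Time to first byte, computed from a Navigation Timing entry.
// `responseStart` marks the arrival of the first response byte;
// `startTime` is when the navigation began.
function timeToFirstByte(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// In a browser, read the page's own navigation entry
// (guarded so the snippet is inert in other environments):
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const [nav] = performance.getEntriesByType('navigation');
  if (nav) console.log('TTFB:', timeToFirstByte(nav).toFixed(1), 'ms');
}
```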

But is this an exhaustive, good metric? Is this all we need to quantify whether a website is fast and delightful? No. And that's partially because website architecture has changed over time, but also because bandwidth and connection speeds are not necessarily the biggest bottleneck anymore. So let's look at two websites. I open both websites on the same machine, at the same physical location, at the same time. Maybe I have two machines next to each other on the same internet connection; it doesn't really matter. I go to A.example.com and B.example.com, and we assume that these are completely separate servers and completely separate web applications. So these requests go out, and A.example.com takes a while. Maybe it's a classical PHP or Java or Python or Ruby program that needs to run on the server. Maybe it is a server-side rendered application that needs to talk to a bunch of backends and APIs and databases to actually fetch the data and then compile the HTML before sending it over the wire; it doesn't really matter. The point is it takes a moment; it doesn't matter how long this moment is. Whereas B.example.com, on the other hand, receives the request and immediately responds, so the time to first byte has arrived, and its HTML says: load this piece of JavaScript. Now, the next second, A.example.com responds with the full HTML. It has done all the things that it needs to do on the server, and my browser shows me the website, whereas with B.example.com, we are only at the stage where we get the app.js back, which then starts running the JavaScript. Once the JavaScript starts running, it discovers, oh, we need to make this bunch of API requests, and these come back, all while the browser still has nothing substantial to show to the user. Now, which of these two websites is better, more delightful, and faster, according to a user looking at both browser windows?
Well, very clearly A.example.com. But if you remember, according to the metric of time to first byte, A.example.com was the slower one: it took longer until we received the first byte of the response, but when we received it, the response was more complete than the other one. So time to first byte on its own is not a sufficient metric these days. It is still somewhat useful, because if you see that your website is slow, but the rendering itself is really fast and we don't have to wait that much until things are being painted, then the connection time and the time it takes for the data to go over the wire and come back is the bottleneck that you need to fix, and you can fix that by using a CDN or something similar.

3. Evaluating Website Speed Metrics

Short description:

Looking only at the time to first byte is not sufficient to determine website speed. Many metrics, such as speed index and First Contentful Paint, are used to evaluate when elements start to appear and how long it takes to reach visual completion.

But the point still stands: that metric is not sufficient. If you just look at time to first byte and say, what? My server responded in 0.1 seconds and the data was all there in 0.5 seconds. How can this be slow? You might miss out. And that's why we have looked at many, many metrics. For instance, speed index, where we try to figure out not just when the website is there or when the network part of it is mostly done, but when things start to pop up and how long it takes over time to near visual completion. Then we looked at First Contentful Paint: how long does it take to get the first bit of content actually visible in the browser window?
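First Contentful Paint can be observed with the Paint Timing API. A hedged sketch, not from the talk; the feature check and logging are illustrative:

```javascript
// Pull First Contentful Paint out of a list of Paint Timing entries.
function firstContentfulPaint(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-contentful-paint');
  return entry ? entry.startTime : undefined;
}

// Browser-only observation (guarded so the snippet is inert where
// 'paint' entries are not supported):
if (typeof PerformanceObserver !== 'undefined' &&
    (PerformanceObserver.supportedEntryTypes || []).includes('paint')) {
  new PerformanceObserver((list) => {
    const fcp = firstContentfulPaint(list.getEntries());
    if (fcp !== undefined) console.log('FCP:', fcp, 'ms');
  }).observe({ type: 'paint', buffered: true });
}
```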

4. Metrics and Interactivity

Short description:

We had tons of metrics to track different aspects of web performance. It's not just about when stuff starts to show up. For example, imagine you have an online pizza shop. The menu loads quickly, but if your clicks to add pizzas to your cart only register seconds later, that's not delightful. We also consider metrics like Time to Interactive.

And then we had tons of other metrics over time to track different aspects. But it's not just about when stuff starts to show up. Imagine you have an online shop for pizzas; you want pizza delivered to your place. It shows up really quickly. The menu is there in no time, in the blink of an eye. Fantastic. But then you're like: I want this pizza, I want this pizza, hello, I want this pizza. You click, click, click, click, click. And after five seconds, suddenly you have a hundred pizzas in your cart. That's not delightful either. So we also looked at other metrics like Time to Interactive: when can I actually start to interact with the content? And a lot of other metrics.

5. Evolving Performance Metrics and Challenges

Short description:

Performance metrics have evolved over time, becoming more complex and difficult to communicate. Measuring performance is challenging, as metrics need to be stable, sensitive, and reflective of the user experience. It's important to find a metric that is comparable over time and stable, rather than too sensitive or rough. Additionally, metrics should be able to be generated in lab settings and gather real user data to account for different devices and connections. As performance understanding changes, metrics may fall out of favor, leading to fluctuations in scores and potential discomfort when reporting to stakeholders. One approach is to focus on obtaining vital signs for the web.

And this has evolved over time in random intervals, it seems. At some point, someone was like, actually, you know what, this metric doesn't really reflect what we looked for or what the users experienced. We came up with this new metric. And then someone else was like, that's a great metric, but it also needs to basically take into account this other aspect, as I said, for instance, interactivity. And thus, we have not only one metric that we look at, but we look at a set of metrics. Which unfortunately, also makes this more complicated, not only to understand, but also to communicate with others.

So if you are communicating with users and you say you have a Lighthouse score of 100 out of 100, that really doesn't mean that much to them. They might say: oh, it's only 80 out of 100 possible points. That still does not really say much, because are these last 20 points really a problem? Or is that just cosmetic? So measuring performance is actually a big challenge, as it turns out. One thing is that we want metrics to be relatively stable, but also fine-grained and sensitive enough to really spot problems when they occur. We had a metric called first meaningful paint that tried to figure out when the meaningful part of the content was showing up, and it was usually very flaky. That means you could measure the same website three times and get three different results without changing anything in the circumstances. That's not really helpful. You want a metric that is comparable over time and more or less stable. It will never be 100% stable, but you don't want to pick one that is too sensitive. You also don't want to pick one that is too rough, like time to first byte, which is a very broad, rough metric that doesn't really reflect things either. And these metrics really need to reflect the actual user experience; I said that already with Time to Interactive, where we needed a way to track interactivity as well. Also, we would like the data to be able to be generated in lab settings, where we can run these things automated and against things that are not necessarily public yet. If we can only gather real user data, that is a little tricky. But it would also be nice to get real user data, to get a feeling for what this actually looks like on real people's machines and connections. Because we, with our high-end computers and good, stable, fast internet connections, might not be the target group of our website, and people on phones, on flaky connections, might have a very, very different experience.
So it would be cool if our metrics could be measured in both contexts.

Another big thing was that as we were changing the metrics, because our understanding of performance changed over time, you might end up working on improving a metric and then finding that this metric fell out of favour and other metrics are now being looked at. And now you're like: ooh, our score was constantly increasing and then it drops. Why? Have we done something wrong? Which also puts you in the uncomfortable position that whoever you're reporting to might ask for these metrics, and if your metrics are now lower than they were when you started the initiative to make things faster, that might not be very comfortable. So we needed to change a few things. And we figured out that one approach would be to basically get vital signs for the web.

6. Introduction to Core Web Vitals

Short description:

Core web vitals are three metrics that help web developers, testers, and SEOs measure and improve user experience in terms of performance. These metrics are already available in various Google tools, such as Lighthouse, Chrome DevTools, PageSpeed Insights, and Google Search Console. The thresholds for judging website performance are based on field data and are updated roughly every year.

These are called Core Web Vitals. But what are they? Well, they are basically three metrics for web developers, testers, and SEOs to look at to figure out what the user experience in terms of performance is on each page, to measure that, and to work reliably on improving it as well.

The data is already in pretty much all the tools that we provide at Google. It's in Lighthouse, which is in the Chrome DevTools. It's in PageSpeed Insights. It's in the Google Search Console Core Web Vitals report. There's WebPageTest as well. And there are also JavaScript APIs in Chrome that you can use to measure these things in a real-user setting.

The thresholds that we use to judge whether websites are doing well, need improvement, or are doing poorly are based on field data that we gathered and analyzed. So these thresholds can be, and already are being, achieved by lots of websites. Even if you don't necessarily hit the targets yet, you can achieve them; that is definitely possible. And to fix the moving-goalposts issue, we will update them roughly every year. We published them in May last year, 2020, and we will probably give an update on them at Google I/O this year as well. After that it will again be a roughly yearly cadence for us to review the thresholds and the metrics.

7. Metrics of Core Web Vitals

Short description:

The core web vitals consist of three metrics: Largest Contentful Paint measures visual completeness, with a good measure being less than 2.5 seconds. The first input delay measures how long it takes for the page to respond to user inputs, with a target of under 100 milliseconds. Cumulative layout shift measures the stability of page content, with a value below 0.1 considered acceptable.

So what are the three metrics that make up the Core Web Vitals? The first one measures visual completeness: how long does it take until I actually see what I care about, the main content? That's measured by the Largest Contentful Paint metric. It's basically the visual loading time that we had measured with other metrics beforehand. A good measure is less than 2.5 seconds. If your main content becomes visible within 4 seconds, that is in the needs-improvement area. Everything that takes longer than 4 seconds has an impact on how users perceive your site, so we would recommend making the website faster.
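The thresholds above can be turned into a tiny classifier, and LCP candidates can be observed via the Largest Contentful Paint API. A sketch, not from the talk; `rateLCP` is a hypothetical helper:

```javascript
// Classify an LCP value against the published thresholds:
// good up to 2.5 s, needs improvement up to 4 s, poor beyond.
function rateLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs improvement';
  return 'poor';
}

// Browser-only: the last largest-contentful-paint entry observed
// before the first user input is the final LCP candidate.
if (typeof PerformanceObserver !== 'undefined' &&
    (PerformanceObserver.supportedEntryTypes || []).includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1];
    console.log('LCP candidate:', latest.startTime, 'ms,', rateLCP(latest.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```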

The second metric, as I hinted at with the pizza shop example, is how long it takes until I can actually interact with the web application. We measure that with First Input Delay. You want to make sure that the page responds to user inputs, and you can accomplish that by using less JavaScript, shifting it off the main thread, or deferring it. You want to be under 100 milliseconds, because that feels immediate to the human brain, whereas a 300 millisecond delay feels slightly sluggish but still relatively instantaneous. Everything that takes longer will eventually cause a feeling of disconnect between the action and the reaction. So that's something you want to keep an eye on as well.
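In the field, First Input Delay comes from the Event Timing API's `first-input` entry: the delay is the gap between the input happening and its handler being able to run. A hedged sketch, not from the talk:

```javascript
// First Input Delay: the gap between the user's first interaction
// and the moment the main thread could start running its handler.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Classify against the published thresholds: good up to 100 ms,
// needs improvement up to 300 ms, poor beyond.
function rateFID(ms) {
  if (ms <= 100) return 'good';
  if (ms <= 300) return 'needs improvement';
  return 'poor';
}

// Browser-only observation of the 'first-input' entry:
if (typeof PerformanceObserver !== 'undefined' &&
    (PerformanceObserver.supportedEntryTypes || []).includes('first-input')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const delay = firstInputDelay(entry);
      console.log('FID:', delay, 'ms,', rateFID(delay));
    }
  }).observe({ type: 'first-input', buffered: true });
}
```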

Last but not least, we also want to make sure that the page content is visually stable. What does that mean? Well, how much does it move around? You probably know this: you're on your phone or on your computer, the website shows you a button, and you want to interact with that button. But before you can click on it, something else moves, a new thing is there, you click on that and you're like: oh no, I didn't want to interact with this. Why did that happen? We measure that with a new metric called Cumulative Layout Shift. It's basically how much of the content shifted, and by how much it shifted. That value should be below 0.1; everything between 0.1 and 0.25 is considered in need of improvement, and everything above that is definitely considered a problem. What are these values? To be fair, I haven't really found a unit to use, because it's not really a percentage. The way you calculate it is: how much of the page is affected by the shift, and by how much does it shift? In this example, we have a website with two halves, a gray half and a green half. After a while, a button pops into the middle of the page, which means the entire lower half shifts. So 50% of the page is affected by the shift. The button, plus the spacing it introduces, takes up roughly 14% of the page, so the content shifts by 14%. We multiply the affected area, 0.5, by the shift distance, 0.14, and that gives us 0.07. That would actually be within the acceptable range. But assume that this button now pops in at the top of the page, so that everything on the page shifts.

8. Core Web Vitals and Page Experience

Short description:

Learn about Core Web Vitals, test different page versions, integrate it into your automated flow, and check for any issues using the Search Console. The upcoming Page Experience signal, launching in May 2021, will combine existing ranking signals with Core Web Vitals measurements. AMP will no longer be required for top stories carousel. Don't worry about the Page Experience update, but ensure there are no issues with Core Web Vitals, mobile friendliness, or safe browsing. Connect with us on Twitter or check out our documentation for more information.

Then 100% of the page would shift. It would still be shifting by 14%, so the score would be 0.14, which is already above the threshold. So you can see that both large amounts of space being taken up after the first rendering, and a shift that moves everything on the page, are taken into account as problematic shifts.
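The worked example above can be reproduced in a few lines. A sketch, not from the talk; `layoutShiftScore` and `rateCLS` are hypothetical helpers mirroring the impact-fraction times distance-fraction calculation:

```javascript
// A single layout shift's score: the fraction of the viewport
// affected, times the distance moved as a fraction of the viewport.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Classify a cumulative score: good up to 0.1,
// needs improvement up to 0.25, poor beyond.
function rateCLS(score) {
  if (score <= 0.1) return 'good';
  if (score <= 0.25) return 'needs improvement';
  return 'poor';
}

// The talk's example: half the page is affected, moving by 14%:
console.log(layoutShiftScore(0.5, 0.14), rateCLS(layoutShiftScore(0.5, 0.14)));
// If instead everything shifts (impact fraction 1.0):
console.log(layoutShiftScore(1.0, 0.14), rateCLS(layoutShiftScore(1.0, 0.14)));

// Browser-only: CLS sums layout-shift entries over the page's
// lifetime, skipping shifts right after user input:
if (typeof PerformanceObserver !== 'undefined' &&
    (PerformanceObserver.supportedEntryTypes || []).includes('layout-shift')) {
  let cls = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) cls += entry.value;
    }
    console.log('CLS so far:', cls.toFixed(3), rateCLS(cls));
  }).observe({ type: 'layout-shift', buffered: true });
}
```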

What can you do with regard to the Core Web Vitals? web.dev/vitals has lots of information. Learn what you want to learn about these metrics: understand what they do, how they are measured, and how you can improve on them. It's very useful to know how this works. Test all your different page versions: if you have a mobile, a desktop, and an AMP version of your content, then test all three, or any combination. Do look into integrating this into your automated flow, because that's really helpful. It also has, or will have, an impact on Google Search.

There will be a new signal called Page Experience. In May 2021, we want to launch this new ranking signal, which is composed from existing ranking signals. We take things like mobile friendliness, safe browsing, HTTPS, and intrusive interstitials, which are already ranking factors, replace the proprietary page speed measurement that we used in ranking with the Core Web Vitals measurements, and combine these into the Page Experience signal. Page speed and mobile user experience are not new ranking factors; they existed beforehand, so it's not something to worry about too much. It is just something to be aware of. Both the page speed measured via the Core Web Vitals and the other factors I just mentioned will form this new Page Experience signal that launches in May.

One of the upsides is that once the Page Experience signal has launched, we know how fast different pages are, and AMP will no longer be a requirement to show up in the Top Stories carousel. If you are a news site and want to have your articles in the Top Stories carousel, once the Page Experience signal has launched, AMP will not be a requirement to show up there anymore.

What can you do about these things? If people in your organization are worried about the Page Experience update, don't be. It is an update, but it's not the biggest we've done. Do check your pages. Make sure that there are no Core Web Vitals issues, no mobile friendliness issues, and no safe browsing issues. You can use Search Console, a free tool we put out there: go to search.google.com/search-console, sign up, get a feeling for how your pages are doing, and use the Core Web Vitals report as well as the Mobile-Friendly Test to figure out where there are areas for improvement, and work on those. If you want to learn more, feel free to ping us on Twitter at @googlesearchc, or ping me at @g33konaut. You can also check out our documentation at developers.google.com/search, which has loads of information.

Q&A

YouTube Channel and Q&A

Short description:

And we also run a YouTube channel with regular office hours on youtube.com slash google search central. Thank you so much for watching and listening. We're going to go into the questions from our audience. The first question is from our guest, Yanni. He's asking about visual completeness within 2 1⁄2 seconds and the first input within 100 milliseconds. He wants to know if it's feasible to have effective input if the main content is not loaded yet and what exactly is measured by the first input metric.

And we also run a YouTube channel with regular office hours at youtube.com/GoogleSearchCentral. With that, I'd like to say thank you so much for watching and listening, and bring on your questions.

I'm really excited to hear what you are up to. Martin, hey, thanks for joining us. How are you doing? Hi there. Yeah, oh, I'm doing pretty well. I mean, all things considered, still doing well, I guess. Yeah, how are you doing? Good. Happy to hear that. Yeah, very well, very well. I can't complain. I mean, it's been a lovely two days here at TestJS Summit. So anything that happens in life, you forget with such nice conference days.

We're going to go into the questions from our audience. So strap yourself into your seat. Okay, the first question is from our guest, Yanni. He's asking: visual completeness within 2.5 seconds, but first input within 100 milliseconds? Is it really feasible to have any sort of effective input if the main content is not loaded yet? Or does the first input metric measure the time between the user clicking and the input registering? He doesn't quite understand what exactly is measured there.

That is a brilliant question. So what problem are we trying to address with FID? The problem is that lots of pages have JavaScript that blocks the main thread, which effectively means that the main thread can't be used for anything other than executing the JavaScript that has been loaded into the document. So the FID 100 milliseconds is looking at main thread activity and how early on we could potentially get the main thread to do stuff. You have to understand that the painting and the layouting and all of that does not happen on the main thread. Actually, with the layouting I'm not even 100% sure, and I'm also not sure if every browser does it the same. But at least the painting part, which has a lot of influence on the Largest Contentful Paint, for instance, happens separately, because it's on the GPU, and I'm pretty sure most browsers are doing layouting on a separate thread as well. But any JavaScript can potentially block the main thread, which is also where interactions and inputs are processed. So you want to get off the main thread as early as possible, or you might want to break work up, or you might want to offload long-running work into JavaScript web workers, for instance, which is, by the way, a heavily underutilized technique. So that's pretty much what we are trying to figure out here: how quickly could your website potentially respond to inputs? Not necessarily from interaction to something happening, but basically: when would you be able to start processing inputs?
If you think about it: you have a website that starts displaying things, maybe it doesn't display everything yet, but you already have a bunch of sections roughly taking up space because the painting just hasn't happened yet, and you want to scroll because you know you want to be somewhere in the middle of the page, and you scroll and nothing happens. That's not necessarily a great experience. But generally, I think it is fine not to worry too much about this metric specifically, and to worry more about the delay between the interaction actually happening and your code responding, which is unfortunately hard to measure automatically, and which is why this metric is potentially going to be improved in the future, because that's where the real problem is.
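To illustrate the web-worker technique Martin mentions, here is a sketch that is not from the talk; `heavySum` is a hypothetical stand-in for expensive work, run in a Worker built from a Blob so the main thread stays free to process inputs:

```javascript
// A stand-in for expensive work that would otherwise block the
// main thread and delay input processing.
function heavySum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// Browser-only: run the same function in a Worker, so clicks and
// scrolls keep being processed while it crunches numbers.
if (typeof Worker !== 'undefined' && typeof Blob !== 'undefined') {
  const source = heavySum.toString() +
    '; onmessage = (e) => postMessage(heavySum(e.data));';
  const worker = new Worker(URL.createObjectURL(new Blob([source])));
  worker.onmessage = (e) => console.log('result from worker:', e.data);
  worker.postMessage(100000000);
}
```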

Understanding CLS and Layout Stability

Short description:

The CLS metric is not time-bound and can be challenging for single-page applications. Layout shifts during navigation can result in high CLS scores, even when there is no visual instability. Feedback on CLS is being collected through a survey, as the metric is due for a major rework. Page shifts caused by privacy/cookie notifications and banners can affect layout stability, so it's important to avoid shifting elements for a good score.

Again, not really easy to express that in the metrics unfortunately yet. Well, you'll get there one day Martin, I know you will. I'm not smart enough for that, that's something that they need to work on, the smart people. Just act, just play the role.

Sure, sure, I'm on it. Next question, and I think that's the last question we have time for, is from Autogibbon: is the CLS metric time-bound? For example, I've seen some websites shift all the time while you're using them, and others spend the first few seconds of loading bouncing stuff around.

It is not time-bound, as far as I'm aware. That's actually one of the biggest complaints about it, because single-page applications currently have a bit of a hard time with CLS: technically, when you navigate from one view to another, you have a huge layout shift, right? Pretty much everything on the page is affected, and it shifts by a lot. So as CLS is measured throughout the page lifetime in the user's browser, you might see high CLS scores when there isn't really visual instability; it's just the way that single-page applications happen to work. If you go to Chrome devs, I think that's the account name, basically the Chrome developers Twitter account, you will find the link to our survey where you can give feedback on problems with CLS, because that metric is definitely in for a major rework: the lack of time bounding on CLS can cause high CLS values where there isn't really a user experience problem. Yeah, it feels like cheating. Yeah, like I said, that's all the time we have for our face-to-face Q&A. Oh, we have one more. A quick one, Martin. The question is from Saf: how does layout shift handle things like privacy and cookie notifications, banners and such? So I had a look at that, and it depends on how it's implemented. If it is implemented outside of the rest of the layout flow, so if you're absolutely positioning things on top without other things shifting, that's not a layout shift. Unfortunately, many solutions on the market are implemented in a way that does shift things around, and then that is a problem. So don't shift things around if you want to have a good score. If you want Martin's approval, don't shift things around. Well, thanks Martin. For the rest of the questions, you're going to have to go to Martin's speaker room. He's going to the spatial chat; click the link below in the timetable and you'll find Martin there.
Martin, thanks. Yes! Love to see you again. Thank you very much. Hopping over to spatial chat. Thanks a lot.

In this talk, Ivan will show when some common performance optimizations backfire – and what we need to do to avoid that.

Workshops on related topic

React Summit 2023React Summit 2023
170 min
React Performance Debugging Masterclass
Top Content
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Building instant-on web applications at scale have been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup-up performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCitp project. How routing works with layout. The demo application will fetch data and present it to the user in an editable form. And finally, how one can use authentication. All of the basic parts for any large-scale applications.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
WorkshopFree
- Introduction- Prerequisites for the workshop- Fetching strategies: fundamentals- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)- Test your build and serve it on Vercel- Future: Server components VS Client components- Workshop easter egg (unrelated to the topic, calling out accessibility)- Wrapping up
React Advanced Conference 2023React Advanced Conference 2023
148 min
React Performance Debugging
Workshop
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Vue.js London 2023Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
WorkshopFree
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, know that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies of loading web fonts without causing big layout shifts.
Table of Contents:What’s CLS and how it’s calculated?How fonts can cause CLS?Font loading strategies for minimizing CLSRecap and conclusion
React Summit 2022React Summit 2022
50 min
High-performance Next.js
Workshop
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performances, resources usage, how to scale it, and how to make the right decisions while writing the application architecture.