High-Speed Web Applications: Beyond the Basics

Knowing how to run performance tests on your web application properly is one thing; putting those metrics to good use is another. Both aspects are crucial to the overall success of your performance optimization efforts. It can be quite an endeavor at times, because it means you need a precise understanding of the ins and outs of both performance data and performance tooling. This talk sheds light on how to overcome this challenge and walks you through the pitfalls and tricks of the trade of Chrome DevTools, providing you with a complete roadmap for performance analysis and optimization.

30 min
20 Jun, 2022

Video Summary and Transcription

This talk covers the latest features in Chrome DevTools, including network tab analysis, performance tab optimization, and user flows. It discusses optimizing HTTP requests with fetch priority to improve loading time. The performance tab provides insights on frame drops, long tasks, and the importance of minimizing total blocking time. The talk also highlights the optimization of page rendering and introduces user flows in Chrome DevTools.

1. Introduction to Chrome DevTools

Short description:

Hello and welcome to my talk, High Speed Web Applications Beyond the Basics. I will cover the latest features in Chrome DevTools, including network tab analysis, performance tab optimization, and user flows. Let's start with the network tab, where you can analyze HTTP requests and use fetch priority to optimize content. In the performance tab, I'll demonstrate optimizations using content visibility and scheduling. Lastly, I'll introduce user flows and pitch the latest tools for measuring runtime performance.

Hello and welcome to my talk, High Speed Web Applications Beyond the Basics, a talk about the latest and greatest features in Chrome DevTools. Let me quickly introduce myself. My name is Michael, Michael Hladky. Very hard to read, write and pronounce, so let's stick with Michael. What I do is consulting, training and workshops in the field of performance optimization, Angular and reactive programming. I also run a company named Push-Based. You can visit it, just click on the link in my slides.

But now let's see what is on the agenda. First of all, I will talk about the network tab. I will show you what you can see there and then look at some of the latest features. One of the cool features shipped in Chrome is Fetch Priority, and I will use it to optimize the largest contentful paint with an image as well as with HTTP requests. Later on, I will show you how to read the performance tab. This is not easy, because there is a lot of information, but I promise that after the talk you will have at least a little more understanding of what you see there and what to look at. To demonstrate some optimizations in the performance tab, I will use content-visibility, a very nice cutting-edge CSS feature, and I will also introduce you to scheduling and chunking of work on the main thread.

At the very end of my talk comes some really, really exciting stuff: user flows. User flows is basically a new tool that is at the moment only available in Chrome Canary, and it enables completely new ways to measure runtime performance in the browser. At the end, I will pitch to you the latest and coolest tools around user flows, how to use them, and also how to integrate that stuff into your CI. With no further pauses, I will jump right into network analysis and the Network tab.

So what you see here is, first of all, that I selected the Network tab, and then a lot of information is present about all the HTTP requests that your application makes. If you have a closer look at the right part of the slide, you see the waterfall diagram. The waterfall diagram is basically a timed bar chart that displays all our HTTP requests: their start, their end, and what phases they are made up of. If you hover over one of those bars, you see the request timing, which shows you the connection time, how much data was transferred, and all the other durations required to receive that data. On this slide you also see a column that tells us the priority of each HTTP request. Some of those requests are more important, have a higher priority, than others, and I want to leverage one of the latest features, Fetch Priority, to demonstrate what you can achieve with priorities in your application.

Without more detail on the Network tab, I will go straight into practice and show you how we can change the requests that are made and how we can improve them. The first thing I want to improve, also visible in the Network tab of course, is the connection time. At the very top of this slide you see an unoptimized version of two HTTP requests to two different domains: an orange block that connects and then a blue block that downloads, another orange block that connects and another blue block that downloads. If we leverage the preconnect hint on our link tags, we basically tell the browser: we will fire requests to those two API endpoints in the future, so why don't you set up the connections right at the start of the application, and then we save the connection time later on.
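
As a rough sketch, such preconnect hints live in the document head and look like this (the origins are placeholders for whatever API endpoints your application will call later):

<head>
  <!-- Hypothetical API origins; warm up DNS, TCP and TLS before the requests are actually made. -->
  <link rel="preconnect" href="https://api.example.com" />
  <link rel="preconnect" href="https://cdn.example.com" crossorigin />
</head>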

2. Optimizing HTTP Requests with Fetch Priority

Short description:

This section discusses the parallelization of connection blocks, the priority of HTTP requests, and the use of fetch priority to optimize the largest contentful paint of an image. The example demonstrates the improvement in load time and the importance of rendering the largest contentful paint element early. The next optimization involves leveraging fetch priority in HTTP requests.

This is demonstrated in the lower part of the picture, where you can see that both connection blocks are now parallelized at the very beginning and the whole chart is a lot shorter.

The next thing, and this is the fancy new cool stuff, is the priority of those HTTP requests. Again, in this chart you see an unoptimized version at the top: some script execution, fetching resource A, fetching resource B, and then rendering.

Of course, rendering an image is more important than executing some script or fetching resources that are used later on. So the first thing we should do is make all the yellow scripting blocks asynchronous and non-blocking. This can be achieved with the defer attribute and the preload or prefetch hints. Deferring a script just means: move that script to the very end of the queue and go on with parsing the HTML. Preloading and prefetching mean fetching data early for parts of the page that are not visible yet: preload fetches resources that are accessed at a later point in time on this very page, while prefetch fetches resources that are used after a navigation.
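
A minimal sketch of those three hints in markup (the file names are made up for illustration):

<!-- defer: download in parallel, execute only after HTML parsing is done -->
<script src="analytics.js" defer></script>

<!-- preload: fetch early, because this page will need it soon -->
<link rel="preload" href="/fonts/hero-font.woff2" as="font" type="font/woff2" crossorigin />

<!-- prefetch: fetch at low priority, because the next navigation will need it -->
<link rel="prefetch" href="/checkout.js" />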

With those three things we can already go far, but there is another really, really fancy and very helpful feature: Fetch Priority. With Fetch Priority we can basically tell the browser which of our HTTP requests have more priority than others, and I want to use it to improve the largest contentful paint of an image. If we look at this code snippet, we see two tags that fetch hero images, and one of those two images is more important than the other. Normally, just by the order of the HTML content, we would first fetch hero image 1 and later on hero image 2. But with the fetchpriority attribute we can tell the browser that the second image, even though it comes later in the document, has higher priority than the first one, and the browser will switch the order of those two HTTP requests and fetch the second one earlier.
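
The slide's exact snippet is not reproduced here, but a minimal sketch of the same idea with plain image tags looks roughly like this:

<!-- Document order alone would fetch this one first. -->
<img src="hero-1.jpg" fetchpriority="low" alt="Secondary hero" />

<!-- fetchpriority="high" tells the browser to fetch the LCP image first, even though it comes later. -->
<img src="hero-2.jpg" fetchpriority="high" alt="Main hero" />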

How would that look in practice? I took ObservableHQ as a demo website, and what we see here is a poster image of a video that will start to play later on. This is definitely the largest contentful paint element, the most important part the user should see at the beginning. By applying some tweaks to the HTML and using these hints, we end up with the following improvement. The first row of this filmstrip shows the default page, and the second row shows the outcome of my optimization. Two things are different. First of all, the whole chart is way shorter now: I basically went from a total of 7 seconds to 4.5 seconds. But the really important and interesting part is that the largest contentful paint now happens at the very beginning: I went from 7 seconds for the largest contentful paint, which you can see at the top, to 2.5 seconds. This is also visible in the detailed diagram at the bottom. You can see that the image is really the first thing visible, and only after that there is some fetching. But the image stays visible and gives a very nice user experience for users who want to watch this video, or at least see a first sneak peek.

The next optimization I want to make is to leverage Fetch Priority in HTTP requests. When you use the fetch API, you can now also give the request a priority, and this is done by simply passing another option, as you see here. Let's see what I did with this technique in practice. If we have a look at the page, we see two different pieces of dynamic content.
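
A minimal sketch of that option on the fetch API (the endpoints are placeholders):

// Critical, above-the-fold data gets a high priority...
const movies = fetch('/api/movies', { priority: 'high' });

// ...while the nice-to-have side-menu items can come later.
const menuItems = fetch('/api/menu-items', { priority: 'low' });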

3. Optimizing Load with Fetch Priority

Short description:

We can optimize the order of HTTP requests using fetch priority, ensuring that critical content is fetched first. This feature allows us to improve the loading time of our web applications.

We see a list of movies and a side menu with a section made up of dynamically fetched menu items. As you can see, those HTTP requests are fired quite late. When I apply fetch priority, as you can see on the next slide, I moved them to the very beginning and was also able to shift the order of those two HTTP requests, so that the images of the movie list are fetched first and only after that the dynamic list in the side menu. Pretty cool, pretty exciting stuff. Everything you saw is possible with this new feature, Fetch Priority.

4. Performance Tab and Long Tasks

Short description:

Next, we'll dive into the performance tab, which provides valuable insights but can be complex to interpret. We'll explore frame drops, long tasks, and the importance of allowing the browser to process user interactions quickly. Long tasks are identified by red areas or red triangles, and the overview at the top shows the frames per second rate. We'll aim to minimize long tasks and total blocking time to improve performance.

What's next? Next is the performance tab. The performance tab is one of the most insightful, but also most complicated, charts to read when it comes to performance tooling. In the next slides, I want to give you a sneak peek at what you can look at and also how to improve it. Let's start with what a frame drop or a long task is. First of all, a user always wants to interact with your page. Interaction means clicking, scrolling, or anything else that happens via mouse or keyboard. One of the most important things is to give the browser the chance to process those interactions as fast as possible. If you look at the chart, you see grey boxes, and those grey boxes are so-called tasks. A task is basically a unit of work that the browser needs to process before it can do anything else, for example react to a user input. We can spot long tasks, tasks that take so long that they block the processing of user input, by the red area or by the small red triangle that you can see at the top right here. Another place where you can spot them is the overview at the very top: there you see red bars and green squiggles, which tell us, A, where our long tasks are and, B, what the frames-per-second rate is; if the frame rate is consistent, you can assume that our tasks are not blocking too much. At the very bottom, number 3, you see an overview, in this case the total time of our long tasks and their blocking time. Total blocking time is one of the most heavily weighted measures in, for example, the Lighthouse score, and we should always try to reduce long tasks, or total blocking time, to a minimum.
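
Outside of the performance tab, you can also surface the same long tasks programmatically. A minimal sketch using the Long Tasks API (supported in Chromium-based browsers):

// Logs every main-thread task that ran longer than 50 milliseconds.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task of', Math.round(entry.duration), 'ms', entry.attribution);
  }
});
observer.observe({ type: 'longtask', buffered: true });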

5. Understanding Single Tasks and Long Tasks

Short description:

Let's now zoom in and understand how a single task looks. The grey box marks the task, and the color indicates the type of work. We see the details of what was scripted, layouted, or painted. Long tasks are marked with a red triangle and indicate overtime. We aim to eliminate tasks longer than 50 milliseconds.

Let's now zoom in a little more and understand what one single task looks like. In this picture we see one task in detail. At the very top is a grey box; the grey box marks the task itself, but we also see the type of work by its color: yellow, purple or green, meaning scripting, layouting or painting. Below all of that you can see the details: what exactly got scripted, layouted or painted. On this slide you see that this task is marked as a long task: we see the overtime area as red hatched lines, and we also see the long-task flag, the red triangle at the top right corner of every grey box, of every task that is marked as a long task. This information is very important for us, because this is what we need to get rid of: everything that is longer than 50 milliseconds. As you can see here, a task of up to 50 milliseconds is okay-ish, and everything beyond 50 milliseconds is basically the overtime of a task.

6. Optimizing Page Re-Layout and Paint

Short description:

With the latest browser features available in Edge and Chrome, we can optimize page re-layout and paint. Lab measurements show significant improvements in paint and layouting times. Field data demonstrates the impact of optimizing rendering time and introduces scheduling and the frame budget to reduce total blocking time. Optimizing total blocking time and input delay is also showcased, along with the exciting new feature of User Flows in Chrome DevTools.

With all that information covered, we jump to page re-layout and paint. This is the purple and green stuff that I want to show you how to optimize, and I want to use one of the latest browser features, content-visibility, available in Edge and also in Chrome. On this slide you can see from CanIUse where it is supported; as I told you already, it is unfortunately only supported in Edge and Chrome, but all other browsers are working heavily to get it shipped.

Now that we understand where you can use it, let's see the potential impact. This is a lab measurement comparing one page in an unoptimized state, one page optimized with content-visibility with all nodes on screen, which means all content is visible within the viewport, and one with all content off screen, which means it sits somewhere below the viewport, not visible to the user at the moment. If we look at the numbers, the top numbers, in green, are paint: we go from 6 milliseconds of paint unoptimized, to 1 millisecond optimized with everything on screen, and, really really nice, to 0.1 milliseconds with everything off screen. That is an interesting, I would say tremendous, impact. Even cooler for layouting, the lower part of this slide: 11 milliseconds of update layer tree and paint unoptimized, 0.5 milliseconds with the optimization and everything on screen, and only 61 microseconds with everything off screen, which is a really interesting number and a dramatic impact.
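
The feature behind these numbers is the content-visibility CSS property. A minimal sketch of how it is typically applied to below-the-fold sections (the class name and the size hint are made up for illustration):

/* Skip layout and paint work for off-screen sections until they come close to the viewport. */
.below-the-fold-section {
  content-visibility: auto;
  /* Reserve an estimated size so skipped sections don't collapse and cause layout shifts. */
  contain-intrinsic-size: auto 600px;
}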

Lab measurements are nice to learn and understand from, but what we are really interested in is field data. So let's see what I achieved in the wild. Optimizing rendering time is the first thing I want to show you. On the next slide we see, again from ObservableHQ, some animation and layouting work at the top. The longest task in this layouting work took 260 milliseconds, of course a long task because it is longer than 50 milliseconds. By applying content-visibility I was able to go down to 15 milliseconds for the same work on the same website. This is a tremendous improvement and really nice to see what is possible with just one or two slight changes in your application.

The next thing I want to introduce to you is scheduling and the frame budget. This is mostly important to get rid of scripting, or at least to reduce the total blocking time it causes. What we see in this slide is scheduling of work and how it can improve Interaction to Next Paint, Time to Interactive and total blocking time. Imagine there is a button click and this click causes some work; instead of executing that work right away, I take this package of work, move it into the next task, into the next grey box, and execute it later in time. In this example I used requestAnimationFrame to do the update, because it was a visual update that changed some pixels, but this is also possible with a lot of other scheduling APIs (a rough code sketch of this idea follows at the end of this section). What is marked here is the scheduling moment in time and the scheduling duration. Let's see what improves, in theory. The hot-pink dashed horizontal line marks the next possible moment when the browser could process user interaction. This is a very nice improvement; it also improved Time to Interactive by a tremendous amount, as you can see from this first bracket, and we reduced total blocking time by 50 milliseconds, because every task below 50 milliseconds is okay and we now made two tasks instead of one. Pretty amazing improvements, and this is just the theory.

In practice, optimizing total blocking time and input delay is the next thing I want to show you, and in this example I demonstrate again the movies application and its bootstrapping. In this diagram we basically see one huge task that processes some JavaScript files and then executes the framework. After some optimizations, on the very left we still have a bit of a big task, because optimizing a webpack bundle and its compilation is not that easy, but everything that is framework land is now optimized. We see a lot of hot-pink dashed vertical lines; those are all separate tasks that, in between, give the browser the opportunity to process user input, and as you can see none of those tasks are long tasks. Pretty, pretty amazing improvements that we could achieve with scheduling and chunking.

The last and most exciting thing I can demonstrate to you is user flows. User flows is one of the fancy new features that Chrome DevTools will ship; at the moment it is only accessible in Chrome Canary, but you should know that there is an open-source library that you can install and use already today and run all that new stuff, fully stable, in your CI or from your CLI. The link is github.com/push-based/user-flow.
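
Before we get to user flows, here is a minimal sketch of the scheduling and chunking idea described above (the handler and helper functions are made up for illustration):

button.addEventListener('click', () => {
  // Keep the input handler itself cheap so the browser stays responsive...
  showSpinner();
  // ...and schedule the expensive visual update into a later frame.
  requestAnimationFrame(() => {
    updateMovieList(); // heavy DOM work now runs outside the input-handling task
  });
});

// Chunking: split a big batch of work across multiple tasks instead of one long task.
function processInChunks(items, chunkSize = 50) {
  items.splice(0, chunkSize).forEach(renderItem);
  if (items.length > 0) {
    // Yield back to the browser between chunks so user input can be processed.
    setTimeout(() => processInChunks(items, chunkSize), 0);
  }
}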

7. User Flows and Chrome Lighthouse

Short description:

What are user flows? Chrome Lighthouse now enables three measurement modes: navigation, time span, and snapshot. The time span measurement mode allows you to record user interactions within a specific duration of time. A user flow report in Chrome DevTools shows multiple steps, such as ordering a coffee online. The report viewer displays the navigation to the coffee cart application and the details of the recorded time span for selecting a coffee.

Please have a look, very interesting. So what are user flows? Chrome has this tool called Lighthouse, and so far Lighthouse was only able to measure bootstrap performance: it measured the moment when you navigated to a page for the first time, and this was always a cold navigation, always, as I said, limited to bootstrap performance only. With version 9, Lighthouse now enables three measurement modes: navigation, time span and snapshot.

Navigation is basically the default Lighthouse measurement that has been present ever since; any measurement until version 9 was a navigation measurement, or navigation mode. The second very cool measurement mode, and for me the most exciting one, is the time span mode, where you start and stop a recording over a duration of time and, within that duration, run some user interactions, for example fully automated with Puppeteer. And at the very end we have snapshot, which is basically a way to take a so-called snapshot of your page at any moment in time. It is very useful to determine accessibility measures and other static audits at a later point in time, not only at navigation.
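
A rough sketch of scripting such a flow with Puppeteer and the Lighthouse user-flow API; import paths and option names have shifted between Lighthouse versions, so treat this as an outline rather than copy-paste code (the URL and selector are placeholders for the coffee cart demo):

import fs from 'node:fs';
import puppeteer from 'puppeteer';
import { startFlow } from 'lighthouse';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const flow = await startFlow(page, { name: 'Order a coffee' });

// Navigation mode: the classic cold-load Lighthouse measurement.
await flow.navigate('https://coffee-cart.example/');

// Time span mode: record a user interaction.
await flow.startTimespan();
await page.click('[data-test="espresso"]');
await flow.endTimespan();

// Snapshot mode: audit the page in its current state.
await flow.snapshot();

fs.writeFileSync('user-flow.report.html', await flow.generateReport());
await browser.close();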

Let's see how this could look in practice. What you see here, and in a second I will show it to you live, is a user flow report: basically a report that looks quite similar to a Lighthouse report but, as you can see, has multiple steps. What I did here is I basically ordered a coffee online. Let me quickly exit the slides and demonstrate how this report looks in real life. I will open up Chrome DevTools and, hopefully, if I don't make a mistake, drag and drop the report directly in here. As you can see, pop. This is the normal Lighthouse report viewer, and it already supports user flows. The first step, let me click on the summary, is a navigation to this coffee cart application. I will open the application for you so you can see it: a very primitive application. I can select a coffee, click on it, enter some user data and then order a coffee. I see the confirmation message at the bottom, and that's what I wanted to record. So let's go back and have a look at this. What I did was a navigation to this page, and from the numbers you can see this is a default Lighthouse score. I can click on it and see the full details: I have all my Web Vitals here, some screenshots, the timings, the treemap and all the diagnostics visible at the bottom. At a later point in time, I recorded the time span of selecting one coffee: from no coffee selected, to hovering over a coffee, and then clicking it to select it. This is recorded here. As you can see, it provides a reduced set of numbers for our recording: we see total blocking time, cumulative layout shift and other metrics.

8. User Flow Insights and Conclusion

Short description:

After selecting a coffee, I wanted to ensure accessibility, SEO and best practices were still met. The numbers show a reduced set of audits, but still provide valuable insights. There are more measurements for checkout and order submission. I recommend checking out the GitHub link for user-flow. Thank you for your time.

And if I scroll further down, we also see all the detailed recordings. The last one is the snapshot after selecting a coffee. So after I selected a coffee, I wanted to make sure: is all the accessibility still given, is SEO still met, are best practices still in place? As you can see, those numbers show a reduced set of audits, but they still give us a lot of insight into what we can do with these new tools.

Of course, there are more measurements: another time span for checkout, another snapshot for checkout, another time span for the order submission and another snapshot for the order submission. Really nice tools. I highly recommend that you check out the link from before, github.com/push-based/user-flow, and you can use it directly in your project, from the CLI, or even in CI.

Let me jump back to the slides and open them full screen. And let me say thanks for your time. This is the very end of this small, dense and brief talk about some of the latest and greatest features. I know it was quite a lot packed into minimal time. So if you have any questions, feel free to shoot me an email at michael.hladky@push-based.io. I'm also on Twitter, and probably more active there than on any other platform. And again, the GitHub link to the latest and coolest feature, user-flow: please check it out. Thanks a lot for your time and see you later. Enjoy the rest of the conference.
