Visualising Front-End Performance Bottlenecks

There are many ways to measure web performance, but the most important thing is to measure what actually matters to users. This talk is about how to measure, analyze, and fix slow-running JavaScript code using browser APIs.

34 min
17 Jun, 2021

Video Summary and Transcription

DAZN, a sports streaming service, faces challenges with low memory and CPU targets. Measuring, analyzing, and fixing performance issues is crucial. Virtualization improves rendering efficiency and performance. The application is now much faster with significantly less jank.

1. Introduction to React and Web Performance

Short description:

Rich, a front-end engineer at DAZN, talks about web performance. DAZN is a live and on-demand sports streaming service available in nine markets worldwide, providing access to sports through smart TVs, game consoles, and set-top boxes. The team faces challenges with low memory and CPU targets, resource competition, and maintaining smooth interactions. Fast websites lead to a better user experience, as shown by data from the Cloudflare Getting Faster white paper.

Now I'll turn it over to Rich.

Hello. Before we get started, I would just like to give a shout-out to the organizers and the fellow speakers at React Summit. It's been a really great conference so far and you're all doing a great job. Today I'll be talking about web performance. My name is Rich. I'm known as richiemccoll on GitHub and Twitter. I'm a front-end engineer and I work at a company called DAZN in London. I'm from Glasgow, Scotland originally, as you can probably tell from the accent. Before I start talking about web performance, I'll give a quick introduction to what we do at DAZN. DAZN is a live and on-demand sports streaming service. We're live in nine markets around the world, providing millions of customers with access to the sports they want to watch. The team I work in is responsible for living-room devices, which break down into three categories: smart TVs, such as Samsung, Toshiba, and Panasonic; games consoles, such as PS4, PS5, and Xbox; and set-top boxes, such as Comcast, Fire TV, and Sky Q. To put performance in the context of what we do at DAZN, the kinds of problems we face are low memory and CPU targets; resource competition on the main thread, for example when playback is running in the background while the user is navigating content; and, at the same time, maintaining smooth 60fps interactions while customers navigate content. And the data doesn't lie: fast websites mean a better user experience. It's what customers prefer, and it's something we should be striving towards. Data from the Cloudflare Getting Faster white paper illustrates this with some useful figures: conversions go down 7% due to just one additional second of load time, and 39% of users stop engaging with a website if images take too long to load.

2. Understanding Performance Measurement and Analysis

Short description:

Speedy software is the difference between smooth integration into your life and reluctance. Today, we'll cover measuring, analyzing, and fixing performance issues. We'll touch on the User Timing API and have a live demo. The measure-analyze-fix cycle involves measuring, analyzing, proposing fixes, and repeating until satisfied. We have a demo application using the SpaceX API to render launches. To profile, we'll use Chrome's Performance panel and focus on the main thread.

And another great quote, from Craig Mod's essay on fast software: to me, speedy software is the difference between an application smoothly integrated into your life and one called upon with great reluctance. And as users of software, we can all relate to this at one point or another.

So today, what we'll be covering is: understanding the measure-analyze-fix method; a brief look at the User Timing API that the browser provides; a live demo measuring and analyzing slow-running JavaScript code; an introduction to the rendering technique of virtualization; a brief word on the performance problems we face at DAZN; and finally, fixing the performance bottleneck we have in the demo.

The measure-analyze-fix cycle is a methodology that I tend to use when doing performance audits. Before we can analyze and fix a problem, we first have to measure. Measuring gives us the baseline we need to analyze the problem. Once we've analyzed the problem, we can propose a fix. Once we've fixed it, we measure again. This cycle repeats until measuring gives us a number we're happy with.

I've got a small demo front-end application. It's available at this GitHub URL, and the setup instructions are in the README, so if you fancy following along, please feel free. I'll switch over to the demo here that I've got running. This application is a small front end that uses the SpaceX API to render a list of launches from newest to oldest. The interaction we'll be profiling is changing the order of launches: if I click this button, I'm now viewing the oldest launches from SpaceX.

So to profile this, we'll open Developer Tools in Chrome and use the Performance panel. We'll come to the CPU option here and select the 6x slowdown. We'll then press record, interact with this feature a few times just to get some data, and stop the recording. Okay, great. There's quite a lot going on here, but what we're really concerned about is this section: the main thread.

3. Understanding Main Thread Tasks

Short description:

The browser performs different tasks to display content. Yellow indicates JavaScript execution, purple represents layout calculation, and green signifies paint or compositing.

And for those who aren't aware of what happens on the main thread, I'll quickly jump over to a different slide and explain the theory behind it. This image is from Paul Lewis's blog post, The Anatomy of a Frame. Essentially, the browser performs a different set of tasks at any given time to get things on the screen. So when we're profiling and we see yellow, we know JavaScript has been executed. Anytime we see purple, we know style or layout has been calculated. And anytime we see green, we know there's some paint or compositing to the screen happening.

4. Understanding Performance Measurement

Short description:

Let's switch back to the demo and analyze the interaction. We can measure performance using the User Timing API provided by the browser. This API allows us to create timestamps and calculate durations. By measuring against the production build, we can get accurate numbers. Let's jump into the demo and create a baseline by creating a timestamp inside the on-click event of the button.

So let's switch back over to the demo, now that we understand a little bit about what the colors mean. If we analyze this interaction, we can see here that we've got a click event. There's some yellow, which means JavaScript is being executed. There's some purple, which means style and layout work. And finally, there's a little bit of green, which means the paint update happens on the screen.

So let's quickly jump into how we measure things. But before we do, I want to touch on an important point: we always want to measure against the production build. The reason is that libraries such as React include development-only code that isn't in the production build, which means the code your users or customers are running isn't the same as what you'd be working with in a development build. So always measure the production build to get a real sense of the numbers.

So with that said, how do we measure? That's the first question we have to answer, and it brings me to the User Timing API. The User Timing API is provided by the browser, and we access it through window.performance, which is an object. There are a few useful methods on window.performance, such as mark. What mark does is create a high-resolution timestamp, and we can associate that timestamp with a name. We can later reference these timestamps using window.performance.measure. In this example, I've created two timestamps, one marking the start and the other marking the end. window.performance.measure then creates a new entry that calculates the duration between the start and the end, and that measure is what we visualize in Developer Tools.
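
In code, the flow described above looks something like this (a minimal sketch; the mark names are illustrative, not taken from the demo):

```javascript
// Sketch of the User Timing flow described above.
// performance.mark records a named high-resolution timestamp;
// performance.measure creates an entry whose duration is the time
// between two marks. DevTools shows these entries in the Timings
// track of the Performance panel.
performance.mark("order-change-start");

// ...the work we want to time happens here (simulated)...
let total = 0;
for (let i = 0; i < 1e6; i++) total += i;

performance.mark("order-change-end");

const entry = performance.measure(
  "order-change",
  "order-change-start",
  "order-change-end"
);
console.log(entry.name, entry.duration); // duration in milliseconds
```

The same calls work in Node (16+) via the global `performance` object, which makes the instrumentation easy to experiment with outside the browser.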

OK, so now we know a little bit about frames and we know how to measure. Let's jump into the demo and start creating a baseline. If we think about the interaction we were profiling: when we click that button, the state changes and the order of the cards updates on the screen. So the first thing we could do is create the first timestamp, the start, inside the on-click event of the button. I'll jump over to VS Code here and we can have a look at the code. This is the demo application locally.

5. Analyzing Transitions and Measuring Performance

Short description:

The latest launch list component renders a list of launches. We mark the start and end of transitions using a custom hook. By comparing the previous and new order, we can identify state changes. Let's analyze and measure this in developer tools.

It's got the latest launch list component, which is the functional component that has the button and renders the list of launches. So in here, inside the onClick, let's mark the start. That's the first thing we have to do. In order to mark the end, we have to use some instrumentation code. This instrumentation code is just a custom hook that stores the value from the previous render inside a ref. I'll copy this custom hook, and the way we'll use it is like so: we have an effect that runs after render and compares the previous order, captured by the custom hook, against the new order prop. If those two values are different, we know the state has changed, and we can mark the end of that transition. So if I copy and paste this and put it here, that should give us something to analyze and measure in Developer Tools. Now let's switch back over to the demo. That's reloaded, so I'll clear this recording, press record again, and change the order of launches a few times.
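
Stripped of the React specifics, the instrumentation amounts to something like this (a framework-free sketch with illustrative names; in the demo this logic lives in an onClick handler plus a useEffect with a usePrevious-style ref):

```javascript
// Framework-free sketch of the instrumentation described above.
// start() corresponds to the button's onClick handler; afterRender()
// corresponds to a useEffect comparing the ref-stored previous order
// against the new order prop.
function createOrderTracker() {
  let previousOrder; // plays the role of the usePrevious ref

  return {
    start() {
      // Mark the start of the interaction in the click handler.
      performance.mark("transition-start");
    },
    afterRender(order) {
      // If the order changed since the last render, the state update
      // has landed, so we can close the measurement.
      if (previousOrder !== undefined && previousOrder !== order) {
        performance.mark("transition-end");
        performance.measure("transition", "transition-start", "transition-end");
      }
      previousOrder = order;
    },
  };
}
```

In the React version, the effect runs after the DOM has been updated, so the measure captures the render and commit work triggered by the click.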

6. Understanding Baseline and Instrumentation

Short description:

In the user timing section, we've added a new measure to understand the baseline. The combination of JavaScript style layout and paint for this interaction is roughly 634 milliseconds, excluding the instrumentation code. The baseline we're working with is around 500 to 550 milliseconds.

Okay, there's a lot going on here. Let's take this one, for example. As you can see, in the User Timing section we've now got a new measure. This is the measure we just added to get an understanding of the baseline. Armed with the frame theory from earlier, we can see that the combination of JavaScript, style, layout, and paint is roughly 634 milliseconds for that interaction. Also, remember that we added instrumentation code, and we want to subtract it from that number. This here is the instrumentation code, around 58 milliseconds. If we remove 58 from 634, we're roughly around 500 to 550 milliseconds. So that's the baseline we're working with.

7. Understanding DOM and Transition

Short description:

During the transition, we change the order of cards and have 102 items in the DOM, causing a lot of work for the browser.

So if we think about what happens during that interaction, we're changing the order of cards, right? The question is: how many cards, or launches, do we have in the DOM? One quick way to find out is to switch to the Elements tab and look for the containing element. This element here has the list of launches. If I click the three dots and choose Store as global variable, I can now access that element and read properties on it, for example childElementCount, which gives me the number of launches. So what we're dealing with here is 102 items in the DOM, which is obviously a lot of work for the browser to do every time we perform that transition.

8. Introduction to Virtualization

Short description:

Virtualization, also known as windowing, is an efficient technique for rendering large lists of content. By only rendering the items that are visible on the screen, virtualization improves performance and provides a smooth user experience.

So how could we mitigate this? How could we fix this problem? That brings me to the technique of virtualization. Virtualization, also known as windowing, is a way to efficiently render a large list of content; personally, I think windowing is the better term for the concept. This image from the web.dev article, which I highly recommend reading, really illustrates it. The user is only looking at a certain number of items on screen at any given time, plus some off-screen items the user doesn't see initially. Say, for example, the user scrolls fast: the window then moves over a different subset of the list. It's an efficient way to render content when you've got a large list.
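
At its core, windowing is just arithmetic over the scroll position. A minimal sketch for a fixed row height (all names illustrative):

```javascript
// Minimal windowing sketch for a fixed row height.
// Given the scroll position, work out which slice of the list actually
// needs DOM nodes; everything outside the range is simply not rendered.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}

// 102 rows of 150px in a 600px viewport, scrolled to 1500px:
// rows 10-13 are on screen, and with an overscan of 2 on each side
// we render rows 8 through 16 instead of all 102.
console.log(visibleRange(1500, 600, 150, 102));
```

Real libraries add variable item sizes, absolute positioning, and scroll handling, but the core idea is exactly this range calculation.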

9. Introduction to Virtualization at the Zone

Short description:

We adopted virtualization to improve the user experience on low-end devices by eliminating lag and jank caused by lazy loading. Virtualization allows us to change the window of what the user sees when scrolling, resulting in smoother interactions.

So before we apply this technique in the demo, I'll take a little detour through the work we were doing at DAZN. We adopted virtualization at DAZN. Before that, we were using a lazy-loading approach to rendering content. What you can see from this GIF is that as you move up and down the content, we just change the window of what the user sees at any given time. The problem before was that as a user navigated up and down the content, we were lazily loading content in, changing the DOM, and adding new elements. On low-end devices that causes significant lag and jank in the user experience, so we wanted to remove it in order to provide a good user experience across all these different devices.

10. Applying Virtualization with Masonic Library

Short description:

Virtualization is a powerful technique that improves rendering efficiency and performance across different devices. It addresses the constraints of TV platforms, such as focus management and hardware variations. By adopting virtualization, we achieved significant reductions in JavaScript execution, style and layout, and paint, resulting in smoother interactions and improved user experience. To apply virtualization to our demo, we'll use the Masonic library, which offers a simple API for rendering cards with customizable width.

Virtualization really helped here. This is just the vertical example. Here is the horizontal example without any animation. As you can see, there's only maybe five items horizontally on screen at any given time. Off-screen items, there's maybe two on either side. If the user's going fast, for example, they have a remote and they hold down right or hold down left, then the rendering is quite efficient because all we're doing is just changing the subset of the view.

It's also worth talking about the constraints on TV and how they differ slightly from the constraints you may face on the mobile web. On TV, one of the big problems is focus management: maintaining consistent focus across different pages. Another is the wide range of hardware specifications: some of these devices have really low memory and CPU, and some are okay. We're also dealing with a wide range of browser engines; I think the oldest browser we support is Safari 6, which is quite old, and that obviously means a lack of modern web standards. At the same time, there's a lack of developer tools, which makes this landscape really tricky to navigate when working on performance. And when we build this front-end application at DAZN, we write it once and run it everywhere across these different devices, so any technique we use, such as virtualization, has to prove effective across all of them.

Some data for the before and after virtualization. I took this profile using the Fire TV with playback and the interaction was going from the top of the screen to the bottom. So I was pressing on the keypad going up and down just to get to the bottom of the content. And what we could see by adopting virtualization were a 34% reduction in JavaScript execution, a 43% reduction in style and layout, and a 59% reduction in paint. Which is quite a significant improvement in terms of user experience. It was a lot less janky, interactions were a lot smoother, and we were able to maintain 60 frames per second in these interactions.

So now that we've spoken about virtualization and what the technique is, why don't we try to apply it to this demo, given that we've got a large list of content? The library I'll use is called Masonic; in the React ecosystem there's a very wide range of virtualization libraries. The reason I chose Masonic is that it's really simple to get started with. It has a render-prop API: we just tell the component what width we want and change the way we render the cards. So I'll copy this and change the way we render the cards. If I move this up here and save, I can jump back over to the demo application on localhost. I'll close the console, clear the recording, press record, and interact with this a little more.
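
For context, the wiring looks roughly like this. Treat it as pseudocode: the component and prop names (`Masonry`, `items`, `columnWidth`, `render`) follow masonic's render-prop API as described in the talk, but may differ between releases, and the SpaceX field name is an assumption, so check the library's docs before copying.

```jsx
// Rough sketch, not verified against a specific masonic version.
import { Masonry } from "masonic";

// The render prop receives the item data for each *visible* cell only.
const LaunchCard = ({ data }) => <article>{data.mission_name}</article>;

const LaunchList = ({ launches }) => (
  <Masonry
    items={launches}     // full list; only visible items get DOM nodes
    columnWidth={300}    // tell the component what width we want
    render={LaunchCard}  // change the way we render the cards
  />
);
```

The appeal here is that swapping in virtualization is a local change: the list data and the card component stay the same, and only the container that maps data to DOM changes.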

11. Measuring Performance and Conclusion

Short description:

The application is now much faster, with significantly less jank. The DOM is updated quickly and efficiently, rendering only the items on screen at any given time. We measured the production build, which took around 46 milliseconds for the interaction. Virtualization has provided significant improvements, and there are further techniques that can be explored. Overall, we discussed the measure, analyze, and fix cycle, the user time API, and the benefits of virtualization for rendering large lists of content.

And actually what I notice is it's actually faster. There's a lot less jank than what we were experiencing before. So let me stop recording.

Okay, let me zoom in on this one. We're now looking at about 97 milliseconds for that interaction, which is quite a drastic improvement from 550 milliseconds. The application still works as expected; it's still just a list of content. The only difference is that if we look at the DOM and inspect the containing element, we're no longer dealing with 102 launches. We're dealing with 1, 2, 3, 4, 5, 6. There are only a handful of items on screen at any given time, but they're rendered efficiently, and the count of DOM nodes stays consistent.

Okay, so that's quite a good improvement. I'm happy with that. The next thing we should do before we finish is measure the production build, just to understand what users may experience. I'll run the build command and then npm run serve, which starts a localhost on port 5000, I believe. I'll switch back over here, go to port 5000, and it loads the application. I'll clear this recording, press record, and interact with it a bit more. The images have taken a little while to load, but we can see from the interaction that the DOM is updating quite fast. If we profile this here, you can see it now takes around 46 milliseconds to do that interaction. Given that we were doing around 550 on a development build, 46 milliseconds is OK. Virtualization offered significant improvements in this case. There are further improvements we could make; I don't really have time to dive into those, but there are further techniques we could use. So we've measured and analyzed, we've got a new baseline of around 46 milliseconds, and we've measured the production build. To conclude: we spoke about the measure-analyze-fix cycle and how we can use it to identify performance problems; we've got the User Timing API, which is a way to measure; and I've introduced the virtualization technique, which is extremely useful for rendering large lists of content. And that's it. I hope this was interesting, and I hope you learned something.

QnA

Questions and Monitoring Performance

Short description:

If you have any questions or comments, please feel free to reach out to me on Twitter and GitHub at richiemccoll. We have a question from G Halpern about the usage of performance.measure with only one parameter. Another question, from Mike, is about premature optimization and how to tackle it. There are JavaScript APIs for monitoring performance on sites, and the tooling falls into two main categories: lab tools and RUM tools.

If you have any questions or comments, please feel free to reach out to me on Twitter and GitHub at richiemccoll. And I hope you enjoy the rest of the conference. Thank you. Stay safe. Goodbye.

Excellent. Excellent. Thank you so much for giving this talk and being here with us today for the questions, and I've seen that a few people are already quite active submitting questions, so let's get right into it.

We do have a question from G Halpern, who's asking: in your demo, you used performance.mark to define different timestamps and then performance.measure to calculate the time in between. But what does performance.measure do when only given one parameter, the name? Well, performance.measure is for a start and an end. So if you only give it one, you aren't really measuring anything, right? You've got to have some sort of start and end point to measure the code. So if you only give it an end, you've got to ask yourself: what is it you're measuring? I mean, okay. That's a fair question.
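
One detail worth adding for readers: the User Timing specification does define the single-argument form. performance.measure(name) measures from the time origin (navigation start in a page) up to now, so it produces a duration, just not one scoped to a particular interaction:

```javascript
// With only a name, performance.measure is still defined by the spec:
// it measures from the time origin (navigationStart in a page, process
// start in Node) up to now. Occasionally useful for "time since load",
// but not for timing a specific interaction.
const sinceStart = performance.measure("since-time-origin");
console.log(sinceStart.duration); // how long the page/process has existed, in ms
```

For interaction timing, the two-mark form used in the demo is the one you want.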

We also got a question from Mike: all the performance hunting you invested in clearly made sense, but what's your opinion on premature optimization? If you believe it's a problem, how do you go about tackling it? I guess premature optimization comes in a few flavors. There's the case where you're trying to optimize code that you haven't measured yet, which is probably one of the big problems. As I've said a few times in this talk, it's all about measuring and getting the data in the first place. Once you've measured and you've got the data, then you can start optimizing. If you're optimizing code before you've measured, how do you know what the improvements were? I think "always measure" is really the key takeaway.

Which brings me to a question of my own. You've done this very specifically in the developer tools, but there's also a JavaScript API. How would you go about monitoring performance on your sites? Are there tools you can use, pre-made solutions, or do you have to build your own? Yeah, good question. Performance tooling falls into two main categories for me: lab tools and RUM tools. Lab tools are things like Lighthouse, the React Profiler, or DevTools, which test performance in a very constrained environment that you control. What I mean by that is you can change one variable, see how it affects things, and keep doing that until you've got something working. That's the lab environment.

RUM Monitoring Tools and Strategies

Short description:

For RUM monitoring, there are tools like SpeedCurve, New Relic, and Sentry. However, browser support for the performance.measure API may vary, so it's important to consider compatibility, especially for older browsers. Finding a strategy that combines lab and RUM monitoring is crucial.

For a RUM environment, which is real user monitoring, there are a few tools you could use. What I've shown here is the Performance API on window, but with that you have to be careful about browser support. For us on TV it's not ideal, given that we support really old browsers and they don't all support the performance.measure API, so that's something to be aware of. You can take that approach if you're only supporting modern browsers. There are other tools like SpeedCurve, which is quite good, a really robust RUM monitoring option, but obviously it comes at a cost. There are similar products that do similar things: New Relic supports front-end performance monitoring, and Sentry does the same. So there are a few options, but really it's lab and RUM monitoring, and you have to find a strategy that involves both sides.
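
Given that browser-support caveat, a small defensive wrapper (illustrative names) keeps User Timing calls from throwing on engines that lack the API, such as very old TV browsers:

```javascript
// Defensive wrappers so User Timing calls don't throw on old engines
// that lack the API (e.g. very old TV browsers).
const canMeasure =
  typeof performance !== "undefined" &&
  typeof performance.mark === "function" &&
  typeof performance.measure === "function";

function markSafe(name) {
  if (canMeasure) performance.mark(name);
}

function measureSafe(name, startMark, endMark) {
  if (!canMeasure) return null;
  try {
    return performance.measure(name, startMark, endMark);
  } catch (err) {
    // Implementations throw if a referenced mark was never set.
    return null;
  }
}
```

Callers can then instrument unconditionally and treat a null result as "no data from this browser", which is exactly what a RUM pipeline has to tolerate anyway.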

Combining Virtualization and SEO Friendliness

Short description:

Is it possible to combine virtualization and SEO friendliness? Pre-rendering or server-side rendering a version with all the content just for Googlebot is one way. It requires testing to avoid pitfalls. Automatic instrumentation can be achieved through the window performance API.

Also, Ojo is asking, is it possible to somehow combine virtualization and SEO friendliness in the page? I want less work on main thread, but Googlebot should see all content on the site. It's a very good question. I don't really have any experience with that and trying to make that work with Google JavaScript crawler. I don't know how robust a JavaScript crawler would be to something like virtualization. Maybe somebody out there already knows the answer to that. That's a good question. It's worth investigating. I think that question might also be pointing in my direction because I actually work in the Google search relations team and I am very familiar with Googlebot and its JavaScript crawling abilities. There's multiple ways. One way would be to dynamic render, which means pre-render or server-side render a version with all the content just for Googlebot. That is acceptable as well. Also if it's off the main thread, we should be able to see at least some of it. Do check it in the testing tools to see if we are seeing your content, and alternatively, you would have to find ways of presenting the content to Googlebot like dynamic rendering, server-side rendering things. It is possible. It just requires a little bit of testing to make sure that there's no pitfalls and surprises. I know that our web worker implementation isn't perfect, but bringing it back to the audience, Layard asks, any suggestion on how to do automatic instrumentation? And I think we kind of covered this, but do you have any other things that you want to say on that topic as well?

Lighthouse, Performance Analysis, and Good Habits

Short description:

Running Lighthouse on a schedule can help identify performance issues and ensure the correct build is shipped. Automating Lighthouse reporting with serverless setup is recommended. To analyze and improve performance, gather data using tools like web page tests. Identify performance problems and propose solutions based on data points. Google provides valuable resources, including the Google Summit and Chrome Summit talks. Building good habits in React workflow can help avoid performance problems in the long term.

So that's something you might want to take a look at. I noticed something recently when I was trying to run Lighthouse on a daily schedule against some of our TV applications. It was hitting the authentication page, the sign-up and sign-in page, and I noticed that some of the performance measures were still present in our Edge environment, and I thought, that's strange. So some of these performance measures actually showed up in Lighthouse, and that was also how I realized the wrong build was being shipped to that environment, so two surprising issues.

That's cool. So you can run Lighthouse on a schedule, automated, as well. Yeah, I'm actually working on a blog post just now about setting up serverless Lighthouse reporting, so you can run it on a schedule and get a little notification that says, hey, your performance score is 98, or whatever. Lighthouse is really good; I've been really impressed with it over the last few releases. Also, your talk was quite hands-on, and that's amazing, but a question from Bri came up, and I had it as well: what additional resources would you recommend for someone getting started in analyzing and improving performance? We've talked a little bit about how to automate this and do it hands-on. What resources would you recommend to someone who's just starting out and trying to understand what they're looking at?

Yeah, I guess the first thing, before you even start trying to fix any performance problems, is getting the data in the first place. It depends on how you control the environment, but there are tools like WebPageTest, which offers bulk testing; that would be useful for gathering data. Once you've got these key data points, start analyzing the data and breaking down where the problems might be coming from and which pages they're on, because you want to be sure you actually have a performance problem in the first place. You don't want to start improving performance if there's no problem. For example, if you're working in a business, you have to make sure this performance problem is impacting the bottom line. Get the data points, work out where the problems are, then take that to the business and say: the performance problem is here, and I propose we do this, this, and this to fix it. That's how you'd approach it from a business point of view. From a personal point of view, it's about getting involved and understanding how to do it. There are lots of great resources; Google is probably the best source for performance material I've found in my career, like the Chrome Dev Summit talks they do each year, where there's always something new to look at, so I'd recommend those. Cool. Also, coming back to avoiding performance problems from the get-go, Ana is asking: are there any good habits you'd recommend someone new to React build into their workflow, to help them maintain good performance in the long term and not run into performance problems in the first place? From a React point of view, you're typically measuring either page load time or interactions.
So page load time you can kind of measure whenever you want, really. If your page loads fast, there's nothing to complain about.
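Measuring interactions like this usually comes down to the browser's User Timing API (performance.mark/measure). The helper below is my own illustration of the idea, not code from the talk, and the mark names are made up:

```javascript
// Hedged sketch: timing a user interaction with the User Timing API.
// The helper name and mark names are illustrative, not from the talk.
function timeInteraction(name, work) {
  performance.mark(`${name}:start`);
  work(); // the interaction handler, e.g. re-rendering a rail of tiles
  performance.mark(`${name}:end`);
  // measure() records an entry that DevTools and analytics can read back
  performance.measure(name, `${name}:start`, `${name}:end`);
  const entries = performance.getEntriesByName(name);
  return entries[entries.length - 1].duration; // milliseconds
}

const ms = timeInteraction('navigate-rail', () => {
  let total = 0; // stand-in for real rendering work
  for (let i = 0; i < 1e6; i++) total += i;
});
console.log(`navigate-rail took ${ms.toFixed(1)}ms`);
```

These measures also show up as labelled spans in the Performance panel of Chrome DevTools, which makes them easy to line up with a profile recording.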

Optimizing Performance in React

Short description:

To optimize performance in React, it's important to understand the interactions and what happens under the hood. Looking deeper into the code is necessary for performance improvements. Always measure the production build for accurate performance analysis.

In terms of working in React and working on performance as you go, I would say think about the interactions that you're building and then just try to understand what's actually happening under the hood. A lot of people say that you shouldn't have to look under the hood, but I actually disagree with that. I think if you're talking about performance, then you do have to dive a bit deeper and understand the interactions and what goes on. And, like I say, always measure a production build. Don't measure the development build, like I've done in the demo. Always measure the production build.

Measuring Performance and Virtualization

Short description:

In terms of measuring app performance on customer TVs, the speaker explains that they currently analyze interactions in a lab environment with various devices. However, they do not measure performance on customer TVs in the production environment due to browser limitations and the risk involved. Another question is raised about the need for virtualization with a small number of elements. The speaker clarifies that virtualization is beneficial for optimizing CPU and memory usage, especially on lower-end devices, and recommends measuring performance to determine the appropriate approach. The section concludes with gratitude for the talk and Q&A, emphasizing the importance of addressing performance issues.

Speaking of production builds, that's an interesting question from Matthias. Do you actually measure the performance of your apps on customer TVs as well? I mean, you want to be in the production environment as much as possible, and that would be customers' TVs, right? Do you do that?

Yeah. No, we don't do that, but it's something that we're looking at more, not from a real user monitoring point of view, but more in a lab. We've got a test device lab with all these different devices, and what we want to do is start analyzing interactions in a lab environment that is also a real device environment. So it's a bit of a hybrid approach, but in production, no. Because, like I say, some of these browsers don't support some of the stuff we're trying to do in modern web performance, like marking and measuring. So it's just a bit too risky for some of these environments, given that you build once and ship.
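Because some legacy TV browsers lack the User Timing API mentioned above, instrumentation of this kind is usually feature-detected so the same bundle can ship everywhere. A minimal sketch of that guard, assuming nothing about Dazn's actual code (safeMark/safeMeasure are hypothetical names):

```javascript
// Hedged sketch: no-op wrappers so instrumented code can still run on
// legacy TV browsers that lack the User Timing API. Names are my own.
const hasUserTiming =
  typeof performance !== 'undefined' &&
  typeof performance.mark === 'function' &&
  typeof performance.measure === 'function';

const safeMark = hasUserTiming ? (name) => performance.mark(name) : () => {};
const safeMeasure = hasUserTiming
  ? (name, start, end) => {
      try {
        performance.measure(name, start, end);
      } catch (e) {
        // some engines throw if a mark is missing; swallow rather than crash
      }
    }
  : () => {};

safeMark('app:start');
safeMark('app:ready');
safeMeasure('app:boot', 'app:start', 'app:ready');
```

On a browser with the API this records a normal measure; on one without it, the calls silently do nothing instead of crashing the app.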

Yeah. Sounds, yeah, fair point. One more question from Spidey, or Spidern: aren't the 100 elements from your demo still a very small number of elements to need virtualization? I feel like we solve most of the mistakes we make with virtualization, until it's not enough. Do you think that's a good practice?

Well, I mean, the demo is really just to illustrate the concept with maybe a hundred elements. It depends, right? If you're thinking about a Moto G4, a really standard average phone, then if you've got a hundred React components on screen at any given time, and you're trying to change the order of them, you're really putting the browser under a lot of strain. Virtualization solves that by only having, say, six items in the DOM, which is actually a lot more stable in terms of CPU and also memory. So really what you want to do is measure. You wouldn't know if a hundred elements is a lot or too little unless you measured. And from what we've seen in the demo, even measuring on a modern laptop with six times slowdown, the browser was going through a lot just to do that update and it was quite janky. So imagine what that would be like on a lower-end device.

I see. All right. Thank you so much, Richie. I think we should head over to the Mentimeter poll that you gave us and see what the audience has said there. So let's check that out real quick. Okay. Also, thank you again. Thanks again so much for, A, the fantastic talk. I think we're not seeing enough performance talks, to be honest, because performance still is a big issue. And also, B, for all the Q&A. I know it has been a lot and I hope that you have a fantastic day. Thanks a lot for coming. Perfect. Thank you, Martin. Enjoy the rest of the conference, everyone. Bye.
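The windowing idea discussed in the Q&A (keeping only a handful of items in the DOM) boils down to a small piece of arithmetic. This is a sketch of the general technique with made-up numbers, not the demo's code; libraries like react-window package the same calculation:

```javascript
// Sketch of the windowing arithmetic behind list virtualization: given the
// scroll offset, render only the rows that intersect the viewport (plus a
// small overscan buffer). Names and numbers are illustrative.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, overscan = 1) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1 + overscan
  );
  return { first, last }; // only these indices get DOM nodes
}

// 100 items, 50px tall, 300px viewport, scrolled to 500px:
const range = visibleRange(500, 300, 50, 100);
console.log(range); // { first: 9, last: 16 } instead of all 100 items
```

In a React component you would render only the items in that range inside a spacer sized to itemCount * itemHeight, so the scrollbar still behaves as if everything were rendered.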

Check out more articles and videos

We constantly publish articles and videos that might spark your interest, skill you up, or help you build a stellar career

React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
Top Content
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at:
- What "rendering" actually is
- How React queues renders and the standard rendering behavior
- How keys and component types are used in rendering
- Techniques for optimizing render performance
- How context usage affects rendering behavior
- How external libraries tie into React rendering
JSNation 2023
29 min
Modern Web Debugging
Few developers enjoy debugging, and debugging can be complex for modern web apps because of the multiple frameworks, languages, and libraries used. But, developer tools have come a long way in making the process easier. In this talk, Jecelyn will dig into the modern state of debugging, improvements in DevTools, and how you can use them to reliably debug your apps.
React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Too much JavaScript is getting you down? New frameworks promising no JavaScript look interesting, but you have an existing React application to maintain. What if Qwik React is your answer for faster application startup and better user experience? Qwik React allows you to easily turn your React application into a collection of islands, which can be SSRed and delay-hydrated, and in some instances have hydration skipped altogether. And all of this in an incremental way without a rewrite.
React Summit 2023
23 min
React Concurrency, Explained
React 18! Concurrent features! You might've already tried the new APIs like useTransition, or you might've just heard of them. But do you know how React 18 achieves the performance wins it brings with itself? In this talk, let's peek under the hood of React 18's performance features:
- How React 18 lowers the time your page stays frozen (aka TBT)
- What exactly happens in the main thread when you run useTransition()
- What's the catch with the improvements (there's no free cake!), and why Vue.js and Preact straight refused to ship anything similar
JSNation 2022
21 min
The Future of Performance Tooling
Top Content
Our understanding of performance & user-experience has heavily evolved over the years. Web Developer Tooling needs to similarly evolve to make sure it is user-centric, actionable and contextual where modern experiences are concerned. In this talk, Addy will walk you through how Chrome and others have been thinking about this problem and what updates they've been making to performance tools to lower the friction for building great experiences on the web.

Workshops on related topic

React Summit 2023
170 min
React Performance Debugging Masterclass
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Advanced Conference 2021
174 min
React, TypeScript, and TDD
Top Content
Featured WorkshopFree
ReactJS is wildly popular and thus wildly supported. TypeScript is increasingly popular, and thus increasingly supported.

The two together? Not as much. Given that they both change quickly, it's hard to find accurate learning materials.

React+TypeScript, with JetBrains IDEs? That three-part combination is the topic of this series. We'll show a little about a lot. Meaning, the key steps to getting productive, in the IDE, for React projects using TypeScript. Along the way we'll show test-driven development and emphasize tips-and-tricks in the IDE.
JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project. How routing works with layout. The demo application will fetch data and present it to the user in an editable form. And finally, how one can use authentication. All of the basic parts for any large-scale applications.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
WorkshopFree
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
React Advanced Conference 2023
148 min
React Performance Debugging
Workshop
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Summit 2022
50 min
High-performance Next.js
Workshop
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performance and resource usage, how to scale it, and how to make the right decisions while writing the application architecture.