Next.js and other wrapping React frameworks provide great power in building larger applications. But with great power comes great performance responsibility - and if you don’t pay attention, it’s easy to add multiple seconds of loading penalty on all of your pages. Eek! Let’s walk through a case study of how a few hours of performance debugging improved both load and parse times for the Centered app by several hundred percent each. We’ll learn not just why those performance problems happen, but how to diagnose and fix them. Hooray, performance! ⚡️
Power Fixing React Performance Woes
AI Generated Video Summary
This Talk discusses various strategies to improve React performance, including lazy loading iframes, analyzing and optimizing bundles, fixing barrel exports and tree shaking, removing dead code, and caching expensive computations. The speaker shares their experience in identifying and addressing performance issues in a real-world application. They also highlight the importance of regularly auditing webpack and bundle analyzers, using tools like Knip to find unused code, and contributing improvements to open source libraries.
1. Introduction to React Performance
Hello, and welcome to Power Fixing React Performance Woes. Web performance is awesome. Modern frameworks like SvelteKit and Nuxt and Next and Remix and Astro make good choices for performance. I'm going to walk you through a series of five improvements I made to the popular centered.app website. The first improvement is addressing 81 iframe embeds.
Hello, and welcome to Power Fixing React Performance Woes, with me, Josh Goldberg. I'm an open source maintainer. I work in the TypeScript ecosystem, and I wrote a book, Learning TypeScript, published by O'Reilly, but we're not here to talk about all that.
We're here to talk about web performance, power fixing things. Web performance is awesome. If you're not convinced, I highly recommend web.dev's Why does speed matter? Summarizing its points: speed is important for retaining your users, who are more likely to stay. It improves conversions, which is good for the money. It's good for your user experience, because people don't like slow web pages, fun fact. And it's an accessibility point, because people with limited hardware and/or bandwidth often can't use, or have trouble using, really bloated, slow web pages. Do not want.
Modern frameworks like SvelteKit and Nuxt and Next and Remix and Astro and all these do make a lot of good choices for you. So if you're using something like, say, Next.js, which we'll see later, it oftentimes is set up to make good performance the built-in, the default, which actually makes it harder to write slow web pages. But not impossible. They don't prevent you from introducing performance regressions. Even if you're doing everything right, it's still possible over time for things to creep in. I'm going to walk you through a series of five improvements I made, only some of which actually touch React code, to the popular centered.app website.
Now, this is from a perfectly good, respectable team. They did nothing wrong, except they just didn't have the time to focus on performance, which meant then that some performance problems did creep into the app, which I was able to help with. Normally, when I tackle a performance issue, it's in four phases. Identification, seeing what's wrong, ideally with something I can measure. Investigation, looking into what the root cause is. Implementation, ideally of a fix. And confirmation that the fix actually fixed the thing that we wanted it to.
The first of these is a real quick one: 81 iframe embeds. I've seen this very rarely, so it was really cool to come across here. When you looked at the centered.app/quotes page prior to the fixes, it would take forever. Look at how slow this was. The root cause, as we'll see soon, was that it had a lot of iframes. But the effect, the symptom, was that it took forever and felt slow.
2. Identifying the Issue with Iframes
And I had a clue because I had seen a lot of tweets show up on a page and take a while before. So just looking through the dev tools, we see a recording here of me confirming that, yes, it is what I suspected that there are a lot of iframes on this page. And fun fact about iframes. We can see here that there are quite a few of them.
Each iframe is like a page within a page. So when you have 84 of them, or 81 of them, that's quite a few pages. When this recording was taken yesterday, it was actually more iframes than when I'd initially done the investigation, a total of 94. So that's quite the slowdown. And they're all showing up at the same time, meaning they're all loading at the same time, which is why the page froze up and took a while to load. Boom.
3. Fixing Iframe Rendering with Lazy Loading
In implementing a fix for rendering multiple iframes, lazy loading was used. Only a subset of the iframes is initially rendered, and more are loaded as the user interacts with the page. Lazy loading improved performance by reducing the initial load time. This approach is not specific to React and can be applied to other web development projects. It's important to optimize apps for performance, even if they were initially well-crafted. Lazy loading is a recommended strategy for rendering large amounts of content, especially when only a portion of it is initially visible to users.
In implementing a fix, I first found where the iframes are rendered, which is this general-use cards component. It's simplified here, but in essence, it loads in card data using a hook, and then for each piece of that data, stored in an array, it would map into this card component, rendering as a child component. And that card component calls to React Twitter Embed, which is a perfectly good popular NPM package that embeds a tweet as an iframe.
That's the standard way to use Twitter's external tweet embedding features, especially since they went all private-only or cost-only for their APIs. So, large numbers, dozens, almost 100 iframes all rendering at once. The strategy that I would often take in a situation like this is lazy loading. This is a simplification of the fix we implemented with lazy loading. First, we .slice() the cards, so that only cards 0 through, in this case, 6 render at once. Then, every time a card loads, on load, we increment an extra counter, saying we can additionally load this many more cards. So, after the first few cards load, we can keep loading more and more.
Now, 6 is an arbitrary number, but it worked well here because that's roughly the maximum number of iframes anyone would see upon first loading the page. In theory we could have based it on the page's viewport or some such, but I didn't have the time; I was just doing this for fun. And, just confirming, much faster for the recording. It's still loading the same number of iframes, it's just waiting to load them. It's being lazy in loading all but the first 6. So, yay, that felt nice. And as we'll see in the remaining four investigations, there's not much React-specific stuff here. But it is general good web principles. So, a few takeaways. One, unoptimized apps are, in my experience, the most fun to investigate, because they might be totally well-crafted; they just haven't had the time to do those low-hanging fruit, those much more straightforward wins for performance. Two, this code was probably totally fine when it was first written. I imagine when the page was first implemented it probably only had 6 or 12 quotes at most, which is not ideal but nowhere near nearly 100 iframes. And lastly, lazy loading is awesome, highly recommended as a strategy. If you have a whole bunch of stuff you want to show and only some of it is initially visible to the users, maybe wait a second or two to render the rest of it. Let's move on.
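A minimal sketch of that windowed lazy-loading idea, pulled out of React so the mechanics are visible (the names here, like createLazyWindow, and the batch size are illustrative, not the app's actual code):

```javascript
// Sketch: only the first `initialCount` cards render; each card's onLoad
// callback reveals a few more, so iframes load in small batches instead
// of all ~100 at once.
function createLazyWindow(totalCards, initialCount = 6, batchSize = 3) {
  let visibleCount = Math.min(initialCount, totalCards);
  return {
    // The subset of cards that should currently be rendered.
    visible: (cards) => cards.slice(0, visibleCount),
    // Wire this to each card's onLoad to reveal the next batch.
    handleCardLoad: () => {
      visibleCount = Math.min(visibleCount + batchSize, totalCards);
    },
    count: () => visibleCount,
  };
}
```

In a React component, visibleCount would live in a useState hook and handleCardLoad would be passed down as each card's onLoad prop; the slicing logic is the same.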
Hidden embedded images. This was fun. So, I did run a performance score, which is the standard DevTools hey, how's the performance? check within the general Lighthouse family of checks for a page. And it came back with a 36 score, which is not ideal; it's in the red. And going down the suggested opportunities for growth, which I'd highly recommend looking into if you ever get a performance score in the red or yellow, the one that first stood out to me was total size: 26,500 KiB, or roughly two dozen megabytes.
4. Analyzing the Bundles and Identifying the Issue
Wow, that's a lot of code being loaded by the page. I used the Webpack Bundle Analyzer to analyze the bundles and chunks of JavaScript in the app. This helped me identify the issue with the Gcal features illustration.js file, which was the biggest part of any chunk by an order of magnitude. It contained base64-encoded images and unused code.
Wow, that's a lot of code, that's a lot of stuff being sent over the wire. Why was so much stuff being loaded by the page? Well, I popped open this great tool called the Webpack Bundle Analyzer. Because Centered is written in Next.js, we could use the really nice, straightforward @next/bundle-analyzer integration, which made it relatively straightforward: require @next/bundle-analyzer and enable it when process.env.ANALYZE is true. In other words, I followed the docs and then ran this command, npm run build. This created a local production build of the app, analyzing the bundles, or the generated chunks of JavaScript.
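Following the @next/bundle-analyzer docs, the wiring looks roughly like this (a sketch; your existing Next.js config is passed through the wrapper):

```javascript
// next.config.js -- run the analyzed build with: ANALYZE=true npm run build
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...the rest of your existing Next.js config
});
```

With ANALYZE set, the production build emits the treemap visualization of every chunk, which is what the next section digs into.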
5. Analyzing the Issue with Gcal Features
And my favorite part of the tool is it comes with a nice visualization. Gcal features illustration.js was the biggest thing on the page, turning the app chunk into a nine megabyte monstrosity. The file contained base64-encoded images, which is not ideal for performance. I deleted the file, resulting in a much improved page chunk and a decrease in Largest Contentful Paint from 17.6 seconds to 13.2 seconds.
And my favorite part of the tool is it comes with a nice visualization. And this visualization showed Gcal features illustration.js being by far the biggest thing on the page, the biggest part of any chunk by an order of magnitude, multiple megabytes. It turned the biggest, most important chunk, the app chunk, into a nine megabyte monstrosity. Huge. I'd never seen anything this big from something checked into the repository. I love it.
This is really cool to me. So I looked at the file and saw that it had a whole bunch of images embedded in it as base64. Now, base64 is a way of encoding an image or some piece of data as a string. And it's totally reasonable to use it for small images. But if you have one that encodes to millions of characters, if it's a multi-megabyte image, base64-encoding it into your SVGs inside your React components is not generally a good idea for performance. It might have been a nice, quick way to prototype a feature. But this is not great for production, because it requires the user to download and load megabytes upon megabytes of JavaScript, with this base64 encoding, to run your page. Not great.
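The overhead is easy to ballpark: base64 turns every 3 bytes into 4 ASCII characters, so an inlined image is roughly a third larger than the binary file, and all of it lands in a JavaScript chunk that has to be downloaded and parsed. A tiny illustration:

```javascript
// Padded base64 output length for a payload of `byteCount` bytes:
// every 3-byte group becomes 4 characters.
function base64Length(byteCount) {
  return 4 * Math.ceil(byteCount / 3);
}

// A 3 MB image inlined as base64 costs ~4 MB of characters in the bundle,
// before any string/parse overhead in the JavaScript engine.
```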
Furthermore, this code was unused. Nowhere in the app actually rendered the Gcal features illustration. So I just deleted the file. No problems. Rerunning npm run build with ANALYZE set to true, we saw a much improved page chunk. Now, I'll go into more improvements on this later, but for now I was pretty pleased with this. Going down from 11 to less than 7.5 megabytes, that's a pretty good improvement in the total size. Yay. And just confirming, I reran Lighthouse in dev tools and saw that, well, Largest Contentful Paint improved from 17.6 seconds, give or take across a few runs, to 13.2. But interestingly, the overall performance score didn't actually improve. And I believe this is because the general performance score is a function of several metrics, including LCP, and LCP can only weigh in so much. So beyond a certain point, LCP is just as bad as it can be. Later on, we will see it improve, I promise. But yeah, still: four and a half seconds, give or take, of improvement in Largest Contentful Paint, or how long it takes to paint the biggest visual thing on the page, is, I think, a nice user improvement. So, bittersweet.
6. Takeaways from Performance Investigation
My takeaways here were one, it's still really fun to performance investigate unoptimized apps. Two, regularly audit your webpack analyzers and bundle analyzers. Three, some metrics may require multiple fixes before improvement. LCP was improved, but the overall performance score was not and that's okay.
My takeaways here were: one, it's still really fun to performance-investigate unoptimized apps. You get weird, wacky chunks like this. Two, similar to how you might regularly want to run through all the pages in your site to see if they're working well, even if they're slow, regularly audit your webpack bundle analyzer output; maybe see if there's some humorously large chunk or bundle somewhere in there. And three, some metrics will take multiple fixes before there's an improvement made. LCP was improved, but the overall performance score was not, and that's okay. As long as user benefit is happening, I'm happy.
7. Issue with Barrel Exports and Tree Shaking
Three giant index.js files are a symptom of barrel exports not being tree-shaken. Barrel exports are a common pattern in JavaScript where an index file exports multiple files. The theory behind tree shaking is that it removes unused code from dependencies before the build. However, in this case, the unused parts of the barrel were not removed.
But okay, let's take another look at that bundle analyzer output. Three giant index.js files. What's going on there? Now, this is a symptom of barrel exports and them not being tree-shaken, two terms we should go into. A barrel export is a common pattern in JavaScript where some file, like an index file, exports a whole bunch of other files. It's convenient so that whoever wants to import those other things can just take them from the one place, that one barrel. And in theory, tree shaking, which is the process of removing unused code from your dependencies before they go into the build, should remove the parts of the barrel that are unused. In this case, it doesn't look like it did.
8. Improving Bundle Performance and ESLint Rule
And just to confirm, only 34 imports were found for the @fortawesome/pro-light-svg-icons package. Importing directly from the individual files instead of the barrel export dramatically improved the bundle. The issue was not with Next.js or barrel imports/exports but with the tooling at the time. An ESLint rule was written to prevent accidental usage of barrel exports. Performance improved: Largest Contentful Paint (LCP), total blocking time, and speed index. Performance is now in the average area.
And just to confirm this, I ran a search: how many times is this @fortawesome/pro-light-svg-icons package imported from? Only 34 times. Now, I've actually used this package before. It's really nice. It's a quick, well-constructed collection of SVG icons of different weights. And they're all pretty fine-tuned for performance; none of them are huge. Which is why only 34 imports from it kind of raised my alarm bells: something funky, something fishy is happening here. 34 is a pretty low number.
So, I tried something out. Because I've seen barrel exports not get tree-shaken before, instead of importing from the barrel export, I tried importing directly from the files that contain the assets. Instead of importing, say, both the abacus and baby icons from the root, the barrel, I imported them from their individual files. And voila! Applying that fix across the 34 pro-light-svg-icons imports dramatically improved the bundle. It reduced my number of giant index.js barrels from three to two. Which meant that that proof of concept, showing what would happen if I no longer used the barrel export, was in fact a significant improvement for the app!

Now, I will note here that Next.js 13.1, which was released after I did this investigation, improved barrel import detection quite a bit for tree shaking. And I believe subsequent versions of Next.js did further work to improve the situation. So the issue here is not Next.js anymore. The issue is certainly not barrel imports or barrel exports. The issue is just that the tooling at the time did not support this use case, and it has since been patched.

But anyway, I wrote an ESLint rule, because it's a good idea to write ESLint rules, or general pieces of automation, that prevent people from doing things you don't want them to do in the future. Although I'd fixed all these imports now, we wanted to make sure that someone wouldn't accidentally introduce a new usage of the barrel exports. This ESLint rule says that any import declaration with a source value pulling from @fortawesome/ anything -svg-icons, without anything after it, would get a context report telling you to use the individual path. You can see in the blog post on my blog, which I'll link later, that I also wrote a fixer to auto-fix any such imports, which was really useful for applying the change automatically across the whole code base.
Rerunning performance: yep, we finally saw an improvement, from 36 to 51. We did improve Largest Contentful Paint (LCP) from 13 to 12, give or take. We also significantly improved total blocking time, which makes me think script parsing was an issue here, and we improved the speed index. So I'm pretty pleased about this. At long last, performance was no longer in the red. It was at least yellow, what they call the average area.
9. Improving Performance and Removing Dead Code
13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways: 1. Make sure your tooling supports tree shaking and barrel exports/imports. 2. Proof of concept larger fixes before investing too much time. 3. Automate good practices to save time and avoid manual enforcement. Unused code is detrimental to readability and build times. An awesome tool called Knip helps identify and remove dead code.
13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways. One, make sure you're tree-shaking large dependencies; again, barrel exports and barrel imports are totally fine, a very valid pattern in many cases. Just make sure your tooling supports them well. Two, it's a good idea, if you're going to make a larger fix such as writing a custom ESLint rule, to proof-of-concept it. Make sure you're not going down a path spending too much time doing something that won't actually yield a lot of benefit. And three, lean on automation. Whenever you can automatically enforce a good practice, do so; that way you don't have to manually enforce it or clean up mistakes or bad uses of it later on.
Cool. Speaking of unused code, this one wasn't so much a performance investigation as a general good practice. Let's say you have a function that's never called. It would be nice to have a tool that tells you: this is dead code, you should delete it. Or let's say a type, an interface, that maybe previously was associated with a function but isn't used anymore. Or maybe you even have a dependency which used to be used and no longer is. It'd be nice to have something that tells you: this is dead, please remove it. And unused code is bad. I want that tell-me-it's-dead feature, because unused code has two major drawbacks. For one, it makes your source files less readable; there's more stuff to parse through when you're trying to understand them. And two, it often causes longer builds. At the very least, dependencies that are unused take up time in your installs, your npm ci or equivalents. And if you're doing some kind of linting and/or building on your source code, unused files take up time to be linted, built, and so on. All that comes together to make development slower, which is bad, because you want your devs working as quickly and efficiently as possible. Fortunately, there's this awesome tool. Look at this ridiculous cow they made, called Knip.
10. Using Knip to Find Unused Code
Knip is a tool that finds unused code in your project. It can be installed as a dev dependency and run with default settings. Configurations are available to analyze specific files. Although it may not find much in every project, it is still beneficial for developer enablement and preventing future issues. Ensure your developers are happy and consider adding useful tooling. Known preventative fixes are worthwhile, and Knip can uncover significant amounts of unused code. Remember, 'Knip it before you ship it!'
Knip does what I want: it finds unused code. So without getting too salesy on it, you can install it as a dev dependency, optionally, and then you can just npx knip and it'll run some defaults and find unused code for you. Now, every project is different, so you can configure it. It's got some nice config settings. For example, this one takes your project's index file as the entry point and then also analyzes all your project files that match src/**/*.ts. We ran it and didn't really find that much, but we did check it in as a CI step, because not all performance fixes directly follow investigations with conclusions and Lighthouse scores. Sometimes you're just doing dev enablement, which is a good goal on its own. You want your devs to be great, and future work avoided is still work avoided. So, a few takeaways here. One, make sure your devs are happy. If there's tooling you want to add that would be useful, see if you can find time to do that. Two, known preventative fixes are definitely worthwhile. I have seen Knip find megabytes upon megabytes in other code bases, so I knew we would likely eventually hit this issue if we didn't add Knip. And three, as the Knip readme says, Knip it before you ship it. Love it. But okay, back to the investigations.
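A knip.json matching that description might look like this (keys per the Knip docs; the exact paths are illustrative):

```json
{
  "entry": ["index.ts"],
  "project": ["src/**/*.ts"]
}
```

Running npx knip then reports unused files, exports, types, and dependencies relative to those globs, which is what makes it easy to wire in as a CI step.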
11. Investigating Performance with Emojis
My favorite one of them all, because it involves emojis and open source; this was a fun one. The page still takes a few seconds to run, which is a little unusual. A full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. toShort appears to be making some rather fancy-looking regular expressions; NS.toShort was taking almost 700 milliseconds. Creating a huge regular expression tends to be slow. We ended up creating a cache: where there was a large amount of work being done, we stored the result in a variable so that the work only needs to be done the first time the function is called. The pause was mostly resolved, and we went up from 51 to 65, almost 15 overall points better. The Largest Contentful Paint dropped from 12 to seven. Total blocking time became green. Speed index improved. We sent this caching improvement to the upstream dependency, the open source library.
My favorite one of them all, because it involves emojis and open source; this was a fun one. Take a look at this recording. In the dev tools, in the performance tab, we reload and trace and measure. We see that even after the page has loaded the title, meaning scripts have loaded in, it still takes a few seconds to run, which is a little unusual even for local dev servers.
And if we look at the profile that was processed, we can see that, while the page is running its scripts, there are a couple of seconds of blank white before we get to the homepage. And if we zoom in there and just look, we see there's a long task; that's that red striping indicating too much is happening. And within that, a full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. And much of that is spent repeatedly in this NS.toShort function.
What's going on here? toShort appears to be making some rather fancy-looking regular expressions. Now in the dev tools, if you click on where the function name is given a blue link, aha! You get taken to where that function is in the source code in your dev tools. And look at this: it is annotated that NS.toShort took almost 700 milliseconds. That's time spent inside that function. 700 milliseconds just in this one function. That's a performance bottleneck if I've ever seen one.
So what's going on here? Now, I actually had a really fun time investigating this with another open source person, a very nice guy named Marvin H. He's been writing a great series of blog posts called Speeding up the JavaScript ecosystem. I've linked to them later on; would highly recommend. Marvin and I hopped on a Zoom call and looked at this NS.toShort. Here's a simplification of its implementation. In essence, it takes in a string and runs a replaceAll utility on the string with a huge regular expression containing all sorts of emojis. Now, creating a huge regular expression tends to be slow if you're dynamically creating it based off a lot of data, which this function was. Again, I'm oversimplifying the investigation, read the blog post if you want more, but what we ended up doing in one or two places was creating a cache: where there was a large amount of work being done, say, creating a huge regular expression, we stored the result in a variable so that the work only needs to be done the first time the function is called. Ooh, great. Made me happy. And just confirming by rerunning performance: the pause was mostly resolved, and wow, look at that. We went up from 51 to 65, almost 15 overall points better. The Largest Contentful Paint dropped from 12 to seven. Total blocking time became green. Speed index improved. This was a happy change for me. So we actually sent this caching improvement upstream to the dependency, the open source library.
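In spirit, the fix looks like this (a heavily simplified sketch with a toy two-emoji map; the real library builds a far larger expression from its full emoji data):

```javascript
// Toy stand-in for the library's emoji-to-shortname data.
const shortnames = { '🙂': ':slight_smile:', '⚡': ':zap:' };

// Cache the expensive-to-build regular expression so it is only
// constructed on the first call, not on every toShort() invocation.
let cachedPattern = null;
function getEmojiPattern() {
  if (cachedPattern === null) {
    cachedPattern = new RegExp(Object.keys(shortnames).join('|'), 'g');
  }
  return cachedPattern;
}

function toShort(text) {
  // String.prototype.replace resets the /g regex's lastIndex, so reuse is safe.
  return text.replace(getEmojiPattern(), (match) => shortnames[match]);
}
```

The behavior is unchanged; only the repeated construction cost disappears, which is exactly what the profiler was flagging.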
12. Conclusion and Key Takeaways
It got merged a few months later. In the meantime, I used patch-package to apply the fix locally so that I could get the changes before they were merged into the upstream repo. Performance good. We found iframes and used lazy loading. We deleted unused files and fixed tree shaking with an ESLint rule. Knip prevented unused code. We cached the result of an expensive computation. Resources are available online. Thank you very much, y'all. Cheers.
It got merged a few months later. In the meantime, I used patch-package to apply the fix locally so that I could get the changes before they were merged into the upstream repo. Marvin and I were very pleased about this.
And that's all I wanted to show with perf investigations. There's a lot more we could dive into. We could dive into React's profiling; there's a great set of dev tools. We could go into React loops and hooks and all these things. But this talk is remote and at half the time it would take to go into those. So let's recap the stuff that we were able to go into.
Performance good. There are a lot of reasons why users should care, and you should care, about performance. We looked at quite a few different investigations. We found a lot of iframes, where lazy loading was the solution. We found hidden embedded images, going into the dev tools to find where the large chunks were visible and then just deleting the unused files. We saw tree shaking not working for barrel exports, which was fixed with an ESLint rule and, later, an updated Next.js version. We saw unused code being prevented in the future with Knip. And we saw my favorite one, the emojis, where we cached the result of an expensive computation.
All these resources are available online. web.dev's Why does speed matter? is a great post. Each of these five investigations has its own post on my blog, and Marvin's blog includes Speeding up the JavaScript ecosystem, part six; parts one through five are quite entertaining, as are parts seven onward. That's all I've got for you. Thank you very much, y'all. Cheers.