Power Fixing React Performance Woes


Next.js and other wrapping React frameworks provide great power in building larger applications. But with great power comes great performance responsibility - and if you don’t pay attention, it’s easy to add multiple seconds of loading penalty on all of your pages. Eek! Let’s walk through a case study of how a few hours of performance debugging improved both load and parse times for the Centered app by several hundred percent each. We’ll learn not just why those performance problems happen, but how to diagnose and fix them. Hooray, performance! ⚡️

22 min
23 Oct, 2023



AI Generated Video Summary

This Talk discusses various strategies to improve React performance, including lazy loading iframes, analyzing and optimizing bundles, fixing barrel exports and tree shaking, removing dead code, and caching expensive computations. The speaker shares their experience in identifying and addressing performance issues in a real-world application. They also highlight the importance of regularly auditing webpack and bundle analyzers, using tools like Knip to find unused code, and contributing improvements to open source libraries.

1. Introduction to React Performance

Short description:

Hello, and welcome to Power Fixing React Performance Woes. Web performance is awesome. Modern frameworks like SvelteKit and Nuxt and Next and Remix and Astro make good choices for performance. I'm going to walk you through a series of five improvements I made to the popular centered.app website. The first improvement is addressing 81 iframe embeds.

Hello, and welcome to Power Fixing React Performance Woes, with me, Josh Goldberg. I'm an open source maintainer. I work in the TypeScript ecosystem, and I wrote a book, Learning TypeScript, published by O'Reilly. But we're not here to talk about all that.

We're here to talk about web performance, power fixing things. Web performance is awesome. If you're not convinced, I highly recommend web.dev's Why Speed Matters. Summarizing its points: speed is important for retaining your users, they're more likely to stay. It improves conversions, which is good for the money. It's good for your user experience, because people don't like slow web pages, fun fact. And it's an accessibility point, because people with limited hardware and/or bandwidth often can't use, or have trouble using, really bloated, slow web pages. Do not want.

Modern frameworks like SvelteKit and Nuxt and Next and Remix and Astro and all these do make a lot of good choices for you. So if you're using something like, say, Next.js, which we'll see later, it oftentimes is set up to make good performance the built-in default, which actually makes it harder to write slow web pages. But not impossible. They don't prevent you from introducing performance regressions. Even if you're doing everything right, it's still possible over time for things to creep in. I'm going to walk you through a series of five improvements I made, only some of which actually touch React code, to the popular centered.app website.

Now, this is from a perfectly good, respectable team. They did nothing wrong, except they just didn't have the time to focus on performance, which meant then that some performance problems did creep into the app, which I was able to help with. Normally, when I tackle a performance issue, it's in four phases. Identification, seeing what's wrong, ideally with something I can measure. Investigation, looking into what the root cause is. Implementation, ideally of a fix. And confirmation that the fix actually fixed the thing that we wanted it to.

The first of these is a real quick one: 81 iframe embeds. I've seen this very rarely, so it was really cool to see it come up here. When you looked at the centered.app/quotes page prior to the fixes, it would take forever. Look at how slow this was. The root cause, as we'll see soon, was that it had a lot of iframes. But the effect, the symptom, was that it took forever and felt slow.

2. Identifying the Issue with Iframes

Short description:

And I had a clue because I had seen a lot of tweets show up on a page and take a while before. So just looking through the dev tools, we see a recording here of me confirming that, yes, it is what I suspected that there are a lot of iframes on this page. And fun fact about iframes. We can see here that there are quite a few of them.

And I had a clue, because I had seen a lot of tweets show up on a page and take a while before. So just looking through the dev tools, we see a recording here of me confirming that, yes, it is what I suspected: there are a lot of iframes on this page. And fun fact about iframes. We can see here that there are quite a few of them. Each iframe is like a page within a page. So when you have 84 of them, or 81 of them, that's quite a few pages. When this recording was taken yesterday, there were actually more iframes than when I'd initially done the investigation: a total of 94. So that's quite the slowdown. And they're all showing up at the same time, meaning they're all loading at the same time, which is why the page froze up and took a while to load. Boom.

3. Fixing Iframe Rendering with Lazy Loading

Short description:

In implementing a fix for rendering multiple iframes, lazy loading was used. Only a subset of the iframes is initially rendered, and more are loaded as the user interacts with the page. Lazy loading improved performance by reducing the initial load time. This approach is not specific to React and can be applied to other web development projects. It's important to optimize apps for performance, even if they were initially well-crafted. Lazy loading is a recommended strategy for rendering large amounts of content, especially when only a portion of it is initially visible to users.

In implementing a fix, I first found where the iframes are rendered, which is this general-use cards component. It's simplified here, but in essence, it loads in card data using a hook, and then for each piece of that data, stored in an array, it would map into this card component, rendering as a child component. And that card component calls to React Twitter Embed, which is a perfectly good popular NPM package that embeds a tweet as an iframe.

That's the standard way to use Twitter's external tweet embedding features, especially since they went all private-only or cost-only for their APIs. So, large numbers, dozens, almost 100 iframes all rendering at once. The strategy that I would often take in a situation like this is lazy loading. This is a simplification of the fix we implemented with lazy loading. First, we .slice() the cards, so that only cards 0 through, in this case, 6 render at once. Then, every time a card loads, on load, we increment an extra counter, saying we can additionally load this many more cards. So after the first few cards load, we can keep loading more and more.
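The shape of that fix, stripped of the JSX and with illustrative names and numbers (the real component presumably keeps the counter in a useState hook), looks roughly like this:

```javascript
// Sketch of the lazy-loading logic, framework-agnostic. The names and the
// increment size are illustrative, not the original source.
const INITIAL_VISIBLE = 6; // roughly one viewport's worth of embeds
const LOAD_INCREMENT = 2;  // how many more cards each onLoad unlocks

function createLazyList(cards) {
  let visibleCount = Math.min(INITIAL_VISIBLE, cards.length);
  return {
    // Only render cards[0..visibleCount); the rest wait their turn.
    visible: () => cards.slice(0, visibleCount),
    // Wire this to each card's onLoad: every loaded card unlocks a few more.
    onCardLoaded: () => {
      visibleCount = Math.min(visibleCount + LOAD_INCREMENT, cards.length);
    },
  };
}
```

In the React version, `visibleCount` would live in state and `onCardLoaded` would call its setter, so each batch of loaded iframes triggers a re-render that reveals the next batch.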

Now, 6 is an arbitrary number, but it worked well here because that's roughly the maximum number of iframes anyone would see upon first loading the page. In theory we could have based it on the page's viewport or some such, but I didn't have the time, I was just doing this for fun. And, just confirming, much faster for recording. It's still loading the same number of iframes, it's just waiting to load. It's being lazy in loading all but the first 6. So, yay, that felt nice. And as we'll see in the remaining four investigations, not much React-specific stuff here. But it is general good web principles. So, a few takeaways. One, unoptimized apps are, in my experience, the most fun to investigate, because they might be totally well-crafted, they just haven't had the time to pick that low-hanging fruit, those much more straightforward wins for performance. Two, this code was probably totally fine when it was first written. I imagine when the page was first implemented it probably only had 6 or 12 quotes at most: not ideal, but nowhere near 100 iframes. And lastly, lazy loading is awesome, highly recommended as a strategy. If you have a whole bunch of stuff you want to show and only some of it is initially visible to the users, maybe wait a second or two to render the rest of it. Let's move on.

Hidden embedded images. This was fun. So, I did run a performance score, which is the standard DevTools "hey, how's the performance?" check within the general Lighthouse family of checks for a page. And it came back with a 36 score, which is not ideal, it's in the red. And going down the suggested opportunities for growth, which I'd highly recommend looking into if you ever get a performance score in the red or yellow, the one that first stood out to me was: total size 26,500 KiB, or roughly two dozen megabytes.

4. Analyzing the Bundles and Identifying the Issue

Short description:

Wow, that's a lot of code being loaded by the page. I used the Webpack Bundle Analyzer to analyze the bundles and chunks of JavaScript in the app. This helped me identify the issue with the Gcal features illustration.js file, which was the biggest part of any chunk by an order of magnitude. It contained base64-encoded images and unused code.

Wow, that's a lot of code, that's a lot of stuff being sent over the wire. Why was so much stuff being loaded by the page? Well, I popped open this great tool called the Webpack Bundle Analyzer. Because the Centered app is written in Next.js, we could use the really nice, straightforward @next/bundle-analyzer integration, which made it relatively straightforward: require @next/bundle-analyzer and enable it when process.env.ANALYZE is true. In other words, I followed the docs and then ran npm run build. This created a local production build of the app, analyzing the bundles, the generated chunks of JavaScript.
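Following the @next/bundle-analyzer docs, that setup is roughly a wrapper in next.config.js; this is a sketch, and your existing config object would be passed through in place of the empty one:

```javascript
// next.config.js — sketch of the @next/bundle-analyzer setup described above.
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config goes here
});
```

Then `ANALYZE=true npm run build` produces the production build along with the analyzer's report.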

5. Analyzing the Issue with Gcal Features

Short description:

And my favorite part of the tool is it comes with a nice visualization. Gcal features illustration.js was the biggest thing on the page, turning the app chunk into a nine megabyte monstrosity. The file contained base64-encoded images, which is not ideal for performance. I deleted the file, resulting in a much improved page chunk and a decrease in Largest Contentful Paint from 17.6 seconds to 13.2 seconds.

And my favorite part of the tool is it comes with a nice visualization. And this visualization showed Gcal features illustration.js being by far the biggest thing on the page, the biggest part of any chunk by an order of magnitude, multiple megabytes. It turned the biggest, most important chunk, the app chunk, into a nine megabyte monstrosity. Huge. I'd never seen anything this big from something checked into the repository. I love it.

This is really cool to me. So I looked at the file and saw that it had a whole bunch of images embedded in it as base64. Now, base64 is a way of encoding an image or some piece of data as a string. And it's totally reasonable to use it for small images. But if you have one that encodes to millions of characters, if it's a multiple-megabyte image, base64 encoding it into your SVGs inside your React components is not generally a good idea for performance. It might have been a nice, quick way to prototype a feature. But this is not great for production, because it requires the user downloading and loading megabytes upon megabytes of JavaScript with this base64 encoding to run your page. Not great.
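To see why this hurts, recall that base64 turns every 3 input bytes into 4 ASCII characters. A quick sketch of the overhead (the 2 MiB figure is illustrative, not the actual size of the file in question):

```javascript
// Base64 encodes every 3 input bytes as 4 output characters, so inlining an
// image as a data URI costs ~33% more bytes than the raw file, and those
// bytes now live inside your JavaScript bundle, adding to parse time.
function base64Length(byteCount) {
  return Math.ceil(byteCount / 3) * 4;
}

const twoMebibyteImage = 2 * 1024 * 1024;
console.log(base64Length(twoMebibyteImage)); // 2796204, roughly 2.7 MiB of characters
```

And unlike a separate image file, that string cannot be fetched lazily or cached independently of the script that contains it.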

Furthermore, this code was unused. Nowhere in the app actually rendered Gcal features illustration. So I just deleted the file. No problems. Rerunning npm run build with ANALYZE true, we saw a much improved page chunk. Now, I'll go into more improvements on this later. But for now I was pretty pleased with this. Going down from 11 to less than 7.5, that's a pretty good improvement in the total size. Yay. And just confirming, I reran Lighthouse dev tools and saw that, well, Largest Contentful Paint improved from 17.6 seconds, give or take across a few runs, to 13.2. But interestingly, the overall performance score didn't actually improve. And I believe this is because the general performance score is a factor of issues including LCP, and LCP can only weigh in so much. So beyond a certain point, LCP is just as bad as it can be. Later on, we will see it improve, I promise. But yeah, still four and a half seconds, give or take, of improvement to Largest Contentful Paint, how long it takes to paint the biggest visual thing on the page, is, I think, a nice user improvement. So, bittersweet.

6. Takeaways from Performance Investigation

Short description:

My takeaways here were one, it's still really fun to performance investigate unoptimized apps. Two, regularly audit your webpack analyzers and bundle analyzers. Three, some metrics may require multiple fixes before improvement. LCP was improved, but the overall performance score was not and that's okay.

My takeaways here were, one, it's still really fun to performance investigate unoptimized apps. You get weird, wacky chunks like this. Two, similar to how you might regularly want to run through all the pages in your site to see if they're working well, regularly audit your webpack bundle analyzer output; maybe see if there's some humorously large chunk or bundle somewhere in there. And three, some metrics will take multiple fixes before there's an improvement made. LCP was improved, but the overall performance score was not, and that's okay. As long as user benefit is happening, I'm happy.

7. Issue with Barrel Exports and Tree Shaking

Short description:

Three giant index.js files are a symptom of barrel exports not being tree-shaken. Barrel exports are a common pattern in JavaScript where an index file exports multiple files. The theory behind tree shaking is that it removes unused code from dependencies before the build. However, in this case, the unused parts of the barrel were not removed.

But okay, let's take another look at that bundle analyzer output. Three giant index.js files. What's going on there? Now, this is a symptom of barrel exports not being tree-shaken, two terms we should go into. A barrel export is a common pattern in JavaScript where some file, like an index file, exports a whole bunch of other files. It's convenient so that whoever wants to import those other things can just take them from the one place, that one barrel. And in theory, tree shaking, which is the process of removing unused code from your dependencies before they go into the build, should remove the parts of the barrel that are unused. In this case, it doesn't look like it did.

8. Improving Bundle Performance and ESLint Rule

Short description:

And just to confirm, only 34 imports were found for the @fortawesome/pro-light-svg-icons package. Importing directly from the individual files instead of the barrel export dramatically improved the bundle. The issue was not with Next.js or barrel imports/exports, but with the tooling at the time. An ESLint rule was written to prevent accidental usage of barrel exports. Performance improved: Largest Contentful Paint (LCP), total blocking time, and speed index. Performance is now in the average area.

And just to confirm this, I ran a search: how many times is this @fortawesome/pro-light-svg-icons package imported from? Only 34 times. Now, I've actually used this package before. It's really nice. It's a quick, well-constructed collection of SVG icons of different weights. And they're all pretty fine-tuned for performance. None of them are huge. Which is why only 34 imports from it kind of raised my alarm bells: something funky, something fishy is happening here. 34 is a pretty low number.

So, I tried something out. Instead of importing from the barrel export, because I've seen barrel exports not get tree-shaken before, I tried importing directly from the files that contain the assets. Instead of importing, say, both the abacus and baby icons from the root, the barrel, I imported them from their individual files. And voila! Applying that fix across the 34 pro-light-svg-icons imports dramatically improved the bundle. It reduced my number of giant index.js barrels from three to two. Which meant that that proof of concept, showing what would happen if I no longer used the barrel export, was in fact a significant improvement for the app! Now, I will note here, Next.js 13.1, which was released after I did this investigation, did improve barrel import detection quite a bit for tree-shaking. And I believe later, subsequent versions of Next.js did further work to improve the situation. So the issue here is not Next.js anymore. The issue is certainly not barrel imports or barrel exports. The issue is just that the tooling at the time did not support this use case, and it has since been patched. But anyway, I wrote an ESLint rule, because it's a good idea to write ESLint rules, or general pieces of automation, that prevent people from doing things you don't want them to do in the future. Although I'd fixed all these imports now, we wanted to make sure that someone wouldn't accidentally introduce a new usage of the barrel exports. This ESLint rule says that any import declaration with a source value matching @fortawesome/anything-svg-icons, without anything after it, would get a context.report telling you to use the individual path. You can see in the blog post on my blog, which I'll link later, that I also wrote a fixer to auto-fix any such imports, which was really useful for applying the change automatically across the whole code base.
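A hedged reconstruction of a rule like that, with an illustrative package pattern and message (the original also shipped an auto-fixer, omitted here):

```javascript
// Sketch of an ESLint rule that flags barrel imports from the icon packages.
// The regex and message are assumptions, not the original rule's source.
const rule = {
  meta: { type: 'suggestion' },
  create(context) {
    return {
      ImportDeclaration(node) {
        // Flag bare package imports like '@fortawesome/pro-light-svg-icons',
        // but allow deep paths like '@fortawesome/pro-light-svg-icons/faAbacus'.
        if (/^@fortawesome\/[\w-]+-svg-icons$/.test(node.source.value)) {
          context.report({
            node,
            message: 'Import icons from their individual file, not the barrel.',
          });
        }
      },
    };
  },
};
```

The rule object plugs into an ESLint plugin's `rules` map; `context.report` is what surfaces the violation (and, with a `fix` callback, is where the auto-fixer would rewrite the import path).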

Rerunning performance: yep, we finally saw an improvement, from 36 to 51. We did improve Largest Contentful Paint (LCP) from 13 to 12, give or take. We also significantly improved total blocking time, which makes me think script parsing was an issue here, and we improved the speed index. So I'm pretty pleased about this. At long last, performance was no longer in the red. It was at least yellow, in what they call the average area.

9. Improving Performance and Removing Dead Code

Short description:

13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways: 1. Make sure your tooling supports tree shaking and barrel exports/imports. 2. Proof of concept larger fixes before investing too much time. 3. Automate good practices to save time and avoid manual enforcement. Unused code is detrimental to readability and build times. An awesome tool called Knip helps identify and remove dead code.

13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways. One, make sure you're tree-shaking large dependencies; again, barrel exports and barrel imports are totally fine, it's a very valid pattern in many cases, just make sure your tooling supports them well. Two, it's a good idea, if you're going to make a larger fix such as writing a custom ESLint rule, to proof-of-concept it first. Make sure you're not going down a path and spending too much time doing something that won't actually yield a lot of benefit. And three, lean on automation. Whenever you can automatically enforce a good practice, do so, that way you don't have to manually enforce it or clean up mistakes or bad uses of it later on.

Cool. Speaking of unused code, this one wasn't so much a performance investigation as a general good practice. Let's say you have a function that's never called. It would be nice to have a tool that tells you this is dead code. You should delete it. Or let's say a type, an interface, that maybe previously was associated with a function but isn't used anymore. Or maybe even you have a dependency which used to be used and no longer is. It would be nice to have something that tells you this is dead. Please remove it. And unused code is bad. I want that tell-me-it's-dead feature, because unused code has two major drawbacks. For one, it makes your source files less readable. There's more stuff to parse through when you're trying to understand them. And two, it often causes longer builds. At the very least, dependencies that are unused take up time in your installs, your npm ci or equivalents. And if you're doing some kind of linting and/or building, etc., on your source code, they take up time to be linted, built, and so on. And all that comes together to cause development to be slower, to make your devs slow down, which is bad because you want your devs working as quickly and efficiently as possible. Fortunately, there's this awesome tool. Look at this ridiculous cow they made. It's called Knip.

10. Using Knip to Find Unused Code

Short description:

Knip is a tool that finds unused code in your project. It can be installed as a dev dependency and run with default settings. Configurations are available to analyze specific files. Although it may not find much in every project, it is still beneficial for developer enablement and preventing future issues. Ensure your developers are happy and consider adding useful tooling. Known preventative fixes are worthwhile, and Knip can uncover significant amounts of unused code. Remember, 'Knip it before you ship it!'

Knip does what I want. It finds unused code. So without getting too salesy on it, you can install it as a dev dependency, optionally, and then you can just npx knip, and it'll run some defaults and find unused code for you. Now every project is different, so you can configure it. It's got some nice config settings. For example, this one takes a look at your project's index file as the entry point and then also analyzes all your project files that are source anything.ts. But we ran it and we didn't really find that much. We did check it in as a CI step, though, because not all performance fixes directly follow investigations with conclusions and Lighthouse scores. Sometimes you're just doing dev enablement, which is a good goal on its own. You want your devs to be great, and future work avoided is still work avoided. So a few takeaways here. One, make sure your devs are happy. If there's tooling that you want to add in that would be useful, see if you can find time to do that. Two, known preventative fixes are definitely worthwhile. I have seen Knip find megabytes upon megabytes in other code bases. So I knew the codebase would likely eventually hit this issue if we didn't add Knip. And three, as the Knip readme says, Knip it before you ship it. Love it. But okay, back to the investigations.
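The configuration described above could look roughly like this knip.json; the entry and project globs are illustrative and would match your project's actual layout:

```json
{
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]
}
```

With that in place, `npx knip` traverses from the entry point and reports any project files, exports, or dependencies nothing reaches.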

11. Investigating Performance with Emojis

Short description:

My favorite one of them all, because it involves emojis and open source, this was a fun one. It still takes a few seconds to run, which is a little unusual. A full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. toShort appears to be making some rather fancy-looking regular expressions. NS.toShort took almost 700 milliseconds. Creating a huge regular expression tends to be slow. We ended up creating a cache, where if there was a large amount of work being done, we stored it in a variable so that the work only needs to be done the first time the function is called. The pause was mostly resolved, and we went up from 51 to 65, almost 15 overall points better. The largest contentful paint dropped from 12 to seven. Total blocking time became green. Speed index improved. We sent this caching improvement to the upstream dependency, the open source library.

My favorite one of them all because it involves emojis and open source, this was a fun one. Take a look at this recording. In the dev tools and the performance tab we reload and trace and measure. We see that even after the page has loaded the title, meaning scripts have loaded in, it still takes a few seconds to run, which is a little unusual for even local dev servers.

And if we look at the profile that was processed we can see that there are, while the page is running its scripts, a couple of seconds of white blank before we get to the homepage. And if we zoom in on there and just look, we see there's a long task, that's that red striping indicating too much is happening. And within that, a full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. And much of that is spent repeatedly in this NS.toShort function.

What's going on here? toShort appears to be making some rather fancy-looking regular expressions. Now in the dev tools, if you click on where the function name is given as a blue link, aha! You get taken to where that function is in source code in your dev tools, and look at this, it is annotated that NS.toShort took almost 700 milliseconds. That's time spent inside that function. 700 milliseconds, just in this one function. That's a performance bottleneck if I've ever seen one.

So what's going on here? Now I actually had a really fun time investigating this with another open source person, a very nice guy named Marvin H. He's been writing a great series of blog posts called Speeding Up the JavaScript Ecosystem. I've linked to them later on, would highly recommend. Marvin and I hopped on a Zoom call and looked at this NS.toShort. Here's a simplification of its implementation. In its essence, it takes in a string and runs a replaceAll utility on the string with a huge regular expression containing all sorts of emojis. Now, creating a huge regular expression tends to be slow if you're dynamically creating it based off a lot of stuff, which this function was. Now again, I'm oversimplifying the investigation, read the blog post if you want more, but what we ended up doing in one or two places was creating a cache, where if there was a large amount of work being done, say, creating a huge regular expression, we stored it in a variable so that the work only needs to be done the first time the function is called. Ooh, great. Made me happy. And just confirming, rerunning performance, the pause was mostly resolved, and wow, look at that. We went up from 51 to 65, almost 15 overall points better. The largest contentful paint dropped from 12 to seven. Total blocking time became green. Speed index improved. This was a happy change for me. So we actually sent this caching as an improvement to the upstream dependency, the open source library.
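A simplified sketch of that caching fix; the tiny emoji map here stands in for the real multi-thousand-entry one, and the names are illustrative:

```javascript
// Stand-in for the plugin's huge emoji-to-shortname map.
const emojiToShortName = { '😄': ':smile:', '🎉': ':tada:' };

// Module-level cache: the expensive regular expression is built at most once,
// on first call, instead of on every toShort() invocation.
let cachedRegex;
function getEmojiRegex() {
  if (!cachedRegex) {
    cachedRegex = new RegExp(Object.keys(emojiToShortName).join('|'), 'gu');
  }
  return cachedRegex;
}

function toShort(text) {
  return text.replace(getEmojiRegex(), (match) => emojiToShortName[match]);
}
```

With thousands of alternation branches, constructing that regex is the expensive part; hoisting it behind the cache means only the first call pays that cost.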

12. Conclusion and Key Takeaways

Short description:

Got merged a few months later. In the meantime, I used patch-package to apply it locally so that I could get the changes before they were merged into the upstream repo. Performance good. We found iframes and used lazy loading. We deleted unused files and fixed tree shaking with an ESLint rule. Knip prevented unused code. Cached the result of an expensive computation. Resources available online. Thank you very much, y'all. Cheers.

Got merged a few months later. In the meantime, I used patch-package to apply the fix locally so that I could get the changes before they were merged into the upstream repo. Marvin and I were very pleased about this.

And that's all that I wanted to show with perf investigations. There's a lot more that we could dive into. We could dive into React's profiling, there's a great set of dev tools. We could go into React loops and hooks and all these things. But this talk is remote and half the length that going into those would take. So let's recap the stuff that we were able to go into.

Performance good. There are a lot of reasons why users should care, and you should care, about performance. We looked at quite a few different investigations. We found a lot of iframes, where lazy loading was the solution. We found hidden embedded images, going into the dev tools to find where the large chunks were visible and then just deleting the unused files. We saw tree shaking not working for barrel exports, which was fixed with an ESLint rule and, later, an updated Next.js version. We saw unused code being prevented in the future with Knip. And we saw my favorite one, the emojis, where we cached the result of an expensive computation.

All these resources are available online. web.dev's Why Speed Matters is a great blog post. Each of these five investigations has its own post on my blog, and Marvin's blog includes Speeding Up the JavaScript Ecosystem, Part 6. Parts 1 through 5 are quite entertaining, as are parts 7 onward. That's all I've got for you. Thank you very much, y'all. Cheers.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

React Advanced Conference 2022React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at: - What "rendering" actually is - How React queues renders and the standard rendering behavior - How keys and component types are used in rendering - Techniques for optimizing render performance - How context usage affects rendering behavior| - How external libraries tie into React rendering
React Summit Remote Edition 2021React Summit Remote Edition 2021
33 min
Building Better Websites with Remix
Remix is a new web framework from the creators of React Router that helps you build better, faster websites through a solid understanding of web fundamentals. Remix takes care of the heavy lifting like server rendering, code splitting, prefetching, and navigation and leaves you with the fun part: building something awesome!
React Advanced Conference 2022React Advanced Conference 2022
30 min
Using useEffect Effectively
Can useEffect affect your codebase negatively? From fetching data to fighting with imperative APIs, side effects are one of the biggest sources of frustration in web app development. And let’s be honest, putting everything in useEffect hooks doesn’t help much. In this talk, we'll demystify the useEffect hook and get a better understanding of when (and when not) to use it, as well as discover how declarative effects can make effect management more maintainable in even the most complex React apps.
React Summit 2023React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Too much JavaScript is getting you down? New frameworks promising no JavaScript look interesting, but you have an existing React application to maintain. What if Qwik React is your answer for faster applications startup and better user experience? Qwik React allows you to easily turn your React application into a collection of islands, which can be SSRed and delayed hydrated, and in some instances, hydration skipped altogether. And all of this in an incremental way without a rewrite.
React Summit 2022React Summit 2022
20 min
Routing in React 18 and Beyond
Concurrent React and Server Components are changing the way we think about routing, rendering, and fetching in web applications. Next.js recently shared part of its vision to help developers adopt these new React features and take advantage of the benefits they unlock.
In this talk, we’ll explore the past, present and future of routing in front-end applications and discuss how new features in React and Next.js can help us architect more performant and feature-rich applications.
React Advanced Conference 2021React Advanced Conference 2021
27 min
(Easier) Interactive Data Visualization in React
If you’re building a dashboard, analytics platform, or any web app where you need to give your users insight into their data, you need beautiful, custom, interactive data visualizations in your React app. But building visualizations by hand with a low-level library like D3 can be a huge headache, involving lots of wheel-reinventing. In this talk, we’ll see how data viz development can get much easier thanks to tools like Plot, a high-level dataviz library for quick, easy charting, and Observable, a reactive dataviz prototyping environment, both from the creator of D3. Through live-coding examples, we’ll explore how React refs let us delegate DOM manipulation for our data visualizations, and how Observable’s embedding functionality lets us easily repurpose community-built visualizations for our own data use cases. By the end of this talk, we’ll know how to get a beautiful, customized, interactive data visualization into our apps in a fraction of the time.

Workshops on related topics

React Summit 2023
170 min
React Performance Debugging Masterclass
Featured Workshop, Free
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
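The workshop note above names useMemo() and memo() as the usual fixes. As a rough, framework-free illustration of the idea behind useMemo() — recompute a value only when its dependency list changes — here is a hypothetical helper (createMemo and shallowEqual are made-up names for this sketch; React's real implementation differs):

```typescript
// Compare two dependency arrays the way React does: by position, with Object.is.
function shallowEqual(a: unknown[], b: unknown[]): boolean {
  return a.length === b.length && a.every((value, i) => Object.is(value, b[i]));
}

// Create a single memoization slot: rerun `compute` only when `deps` change.
function createMemo<T>() {
  let lastDeps: unknown[] | undefined;
  let lastValue!: T;
  return (compute: () => T, deps: unknown[]): T => {
    if (lastDeps === undefined || !shallowEqual(lastDeps, deps)) {
      lastDeps = deps;
      lastValue = compute(); // dependencies changed: recompute
    }
    return lastValue; // dependencies unchanged: reuse the cached value
  };
}

// Usage: the expensive computation runs once while `items` stays the same,
// and both calls return the same cached reference.
const memo = createMemo<number[]>();
let computeCount = 0;
const items = [3, 1, 2];
const sorted1 = memo(() => {
  computeCount++;
  return [...items].sort((a, b) => a - b);
}, [items]);
const sorted2 = memo(() => {
  computeCount++;
  return [...items].sort((a, b) => a - b);
}, [items]);
// sorted1 === sorted2, computeCount === 1
```

The stable reference is the point: in React, that is what lets memo()-wrapped children skip re-rendering.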
React Advanced Conference 2021
132 min
Concurrent Rendering Adventures in React 18
Featured Workshop, Free
With the release of React 18, we finally get the long-awaited concurrent rendering. But how is that going to affect your application? What are the benefits of concurrent rendering in React? What do you need to do to switch to concurrent rendering when you upgrade to React 18? And what if you don’t want or can’t use concurrent rendering yet?
There are some behavior changes you need to be aware of! In this workshop we will cover all of those subjects and more.
Join me with your laptop in this interactive workshop. You will see how easy it is to switch to concurrent rendering in your React application. You will learn all about concurrent rendering, SuspenseList, the startTransition API and more.
React Summit Remote Edition 2021
177 min
React Hooks Tips Only the Pros Know
Featured Workshop
The addition of the hooks API to React was quite a major change. Before hooks, most components had to be class based. Now, with hooks, these are often much simpler functional components. Hooks can be really simple to use. Almost deceptively simple. Because there are still plenty of ways you can mess up with hooks. And it often turns out there are many ways you can improve your components with a better understanding of how each React hook can be used.
You will learn all about the pros and cons of the various hooks. You will learn when to use useState() versus useReducer(). We will look at using useContext() efficiently. You will see when to use useLayoutEffect() and when useEffect() is better.
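One taste of the useState() versus useReducer() question: a reducer is just a pure function, so it can be written and tested entirely outside React. The counter reducer below is a hypothetical example (not taken from the workshop) showing why related state transitions are easier to reason about when they live in one place:

```typescript
// State and actions for a hypothetical counter. A discriminated union on
// `type` lets TypeScript check each branch of the reducer exhaustively.
type CounterState = { count: number; lastAction: string };
type CounterAction =
  | { type: "increment"; by: number }
  | { type: "reset" };

// A pure reducer: (state, action) -> new state, with no React involved.
function counterReducer(state: CounterState, action: CounterAction): CounterState {
  switch (action.type) {
    case "increment":
      return { count: state.count + action.by, lastAction: action.type };
    case "reset":
      return { count: 0, lastAction: action.type };
  }
}

// Outside React, the same transitions can be exercised directly;
// inside a component, useReducer(counterReducer, initial) would drive them.
const initial: CounterState = { count: 0, lastAction: "none" };
const afterIncrement = counterReducer(initial, { type: "increment", by: 5 });
const afterReset = counterReducer(afterIncrement, { type: "reset" });
```

Because every transition funnels through one function, adding a new action or invariant means changing one place rather than hunting down scattered setState calls.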

React Advanced Conference 2021
174 min
React, TypeScript, and TDD
Featured Workshop, Free
ReactJS is wildly popular and thus wildly supported. TypeScript is increasingly popular, and thus increasingly supported.
The two together? Not as much. Given that they both change quickly, it's hard to find accurate learning materials.
React+TypeScript, with JetBrains IDEs? That three-part combination is the topic of this series. We'll show a little about a lot. Meaning, the key steps to getting productive, in the IDE, for React projects using TypeScript. Along the way we'll show test-driven development and emphasize tips-and-tricks in the IDE.

React Advanced Conference 2021
145 min
Web3 Workshop - Building Your First Dapp
Featured Workshop, Free
In this workshop, you'll learn how to build your first full stack dapp on the Ethereum blockchain, reading and writing data to the network, and connecting a front end application to the contract you've deployed. By the end of the workshop, you'll understand how to set up a full stack development environment, run a local node, and interact with any smart contract using React, HardHat, and Ethers.js.

React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Featured Workshop
React Testing Library is a great framework for React component tests because it answers a lot of questions for you, so you don’t need to worry about them. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn