React Performance Debugging


Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).

Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.

Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.

(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)

148 min
31 Oct, 2023



AI Generated Video Summary

This Workshop on React Performance Debugging covers various topics such as analyzing performance with Chrome DevTools and React Profiler, optimizing components to improve performance, and debugging performance issues with tools like Why Did You Render and console.time. It also emphasizes the importance of prioritizing performance and fostering a performance-oriented culture within development teams. The Workshop concludes with insights on monitoring and optimizing React performance, including challenges with third-party libraries and the limitations of the React Fiber compiler.

1. React Performance Debugging Workshop Introduction

Short description:

Welcome to our React Performance Debugging Workshop. We will cover three parts, focusing on Chrome DevTools and React Profiler, a real-world app with complex code, and debugging effects. Let's start by looking at a slow app. When editing a note among many, the app becomes very slow. We can use DevTools to see when the app is fast or slow. The frame rate pane fills with red when typing into the editor, indicating a performance issue.

Welcome, everyone, to our React Performance Debugging Workshop for React Advanced. Thank you so much for coming. I'm excited to see you all here.

Let's get to the actual stuff. So hi, welcome, everyone. I'm Ivan Akulov, I'm a Google developer expert, I'm a web performance engineer, I've been working on performance for five or six years, I've worked with companies like Framer, Google, CNBC, etc., and today, we are going to talk about React Performance Debugging.

Before we get to it, a bit of housekeeping. The workshop lasts three hours, and we'll have a 10-minute break every hour. At the end, we'll have a bit of time for Q&A, so if you have any questions, performance-related in general or about the workshop specifically, feel free to keep them to the end. If you get any questions during the workshop, feel free to shoot them into the chat, and I'll see them and respond. And if you follow this link (let me send this link into the chat), if you follow this link, Reaperf notes, you'll find a Google doc that lets you keep collaborative notes. So if you're the kind of person who tends to keep notes during the workshop, feel free to keep them there. This way you'd keep notes, everybody else would keep notes, and in the end you'd get super detailed notes from everyone. It's like communism, but without the bad bits. Is this an appropriate joke? I hope it's an appropriate joke. Sorry. Anyway. If you follow that link, you'll find the Google doc and you'll also find the GitHub repository. I have it already cloned. Feel free to run yarn and yarn start to work on the same apps that we'll be working on. All right, I'm jumping around too much. With all the housekeeping done, let's start.

So, we'll have three parts in the workshop today. We'll have three examples, three slow interactions that we'll be debugging. In the first one, we'll focus just on Chrome DevTools and React Profiler. In the second interaction, during the second part of the workshop, we'll look at a big real-world app with a lot of code and a lot of complexity. We'll learn about why-did-you-render, and we'll learn about matching hooks that trigger re-renders with code, because this is surprisingly tricky in big apps. In the third part of the workshop, we'll look at debugging effects, we'll look at some React 18 challenges with the profiler, and we'll take a look at a couple more tools, like useTraceUpdates and console.log. To start with the first part, let's take a look at a slow app. If you clone the repository and enter it, you'll find two applications. If you enter the notes application, install the dependencies, and run yarn start, you'll get this cute little app that lets you keep notes written in Markdown. It's a very basic app. I can create a note, I can write some text, I can create more notes, I can create a hundred notes at a time, for some reason (it's a weird notes app). And here is one thing about this app, one particularly slow interaction. Let's say I'm a developer of this app, and a product manager or a user came to me and said: hey, look, we have a complaint. When a user has a lot of notes and they try editing any of them, editing is very, very slow. And so you, as a developer, go to the app, create a few hundred notes, 700 notes, try typing into the editor, and notice that typing into the editor indeed feels slow. And you, folks on the other side of the screen, you can't really see this, right?
Because it's me typing. It's me who feels that I type a letter and then it takes like half a second for the letter to actually appear on the screen. But here's one really neat trick that I like to use to see when the app is fast or slow. If I open DevTools, click the kebab menu, click More Tools, and click Rendering, I can scroll down and check the checkbox that says Frame Rendering Stats. And if I enable this, I see a tiny overlay in the left part of the screen that shows me when the app is idle and when the app is busy doing something. So right now, if I just try moving my mouse over the note buttons, over all these buttons at the top of the notes, you'll see that the app is fast. If I try scrolling down, you'll see that the app is fast. Blue here means the app is idle. Yellow or red means the app is busy doing some work. And the FPS rate shows how frequently the app updates. So I'm doing stuff and the app is quick. But if I open any of the notes and try typing into it, you'll see how the frame rate pane starts filling with red. And the more keys I press at one time, the more it fills with red. This means the app is slow, and the browser is busy doing something while I'm typing into the editor. All right. So we've got a performance issue.

2. Analyzing Performance with Chrome DevTools

Short description:

I type into the editor and notice the app is slow. I open Chrome DevTools, enable CPU throttling, and record a performance trace while typing. The CPU row shows when the app is busy, and I see three spikes of JavaScript activity corresponding to my interactions. I match the CPU row with the app and hover over the spikes to see concrete screenshots. I zoom into a JavaScript spike, switch to the main pane, and analyze the tasks triggered by the keypress event. I focus on the big things, like function calls and their origins in react-dom.development.

I'm typing into the editor, and the app feels slow. Now, whenever I have a performance issue, any kind of performance issue, what I always do is open Chrome DevTools, go to the Performance pane, and enable CPU throttling, because our developer laptops are fast, and our users' laptops and phones are not. So I enable a 4x CPU slowdown and then try typing into the editor while recording a performance trace. So I click record, and I type into the editor. I press one key, and I wait a bit. I press another key, and I wait a bit. I press maybe one more key, and I wait a bit, and then I click stop.
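As a complement to the DevTools recording, you can bracket a suspect handler with the User Timing API (performance.mark / performance.measure); the resulting measure shows up in the Timings track of the same performance trace. A minimal sketch, where the handler name and its body are made up for illustration:

```javascript
// Sketch: bracket a suspect input handler with User Timing marks.
// The resulting measure appears in the Timings track of a Chrome
// performance trace, and is also queryable from code.
function handleEditorInput(event) {
  performance.mark('editor-input-start');

  // ...the app's actual (slow) work would happen here...

  performance.mark('editor-input-end');
  performance.measure('editor-input', 'editor-input-start', 'editor-input-end');
}

handleEditorInput({ data: 'a' });

const [measure] = performance.getEntriesByName('editor-input');
console.log(measure.name);          // → editor-input
console.log(measure.duration >= 0); // → true
```

This does not replace the CPU-row analysis below, but it gives you a named marker to anchor on when the trace is crowded.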

So once I complete that, what I get is a recording of everything that happened in the application while I was interacting with it. And if you're seeing this for the first time, this might feel pretty overwhelming. There are lots of panes here, there's lots of stuff happening, there are a lot of things to look at. But that is all right, because the only parts of the recording that we actually need to look at are the CPU row and the main pane.

So, the first thing I do when I get the performance recording is look at the CPU row. The CPU row shows me when the app was busy. If the CPU row is empty, the app was idle. If the CPU row is filled with some color, the app was busy doing something. Yellow, in this case, means JavaScript. It could also be green or violet, which means painting or style and layout calculations. Gray means some other work, mostly happening inside the browser internals. So I look at the CPU row, and I can see right away when exactly the app was busy. I see that I typed into the editor three times and I get three spikes of JavaScript activity; each of them took maybe one second, maybe two seconds. I don't know yet. We'll zoom in and find out later. And I also got one spike of something unclear at the very beginning. So this is my first step: I record the performance trace, I look at the CPU row, and I find when exactly my app was busy. The next step for me is to find out which of these spikes correspond to my actual interactions in the app. Because it could also be that the app has some timer that fires, or some network response that arrives, and then the app does a bunch of work because of the timer or the network response, not because I typed into the editor, right? So, step one: look at the CPU row. Step two: match what's happening in the CPU row with what's happening in the app. And for this, what I do is hover over the CPU row and look through these concrete screenshots that show me how exactly the app looked at any given moment in time. So let's look at the first CPU spike. Here's the CPU spike, and I'm hovering over it, and nothing really is happening in the app. You see that the text editor over here does not get any new letters. The last two letters in the text editor are FG.
I hope you can see this through the Zoom video compression, but nothing is happening in the app. Now, as I move my mouse further, I get past the first CPU spike, and I see that after the CPU spike, the text editor gets a new letter. This means that I typed, and that typing corresponds to the first CPU spike. Let's see what happens next. I hover over this, and I see another letter getting added. So here it was just F, and here it's F-E. You can see that one more character got added. So this was another time when I typed, apparently. And let's see further: same picture here. The CPU spike ended, and a new letter appeared on the screen. So what this means is that I have four spikes, and out of these four spikes, three correspond to my concrete interaction of typing into the page. So that was the second step. The third step is: I zoom into any of these JavaScript spikes, I switch to the main pane, and I look through the main pane to try to make sense of everything that happened during the CPU spike. All right, let's take a look. Here is the CPU spike. It corresponds to this task (not necessarily a JavaScript task, just a task), which means the browser was busy doing something that took 900 milliseconds. Now, if I go down in the task, I see that the task got triggered by the keypress event. The keypress event triggered the text input event, and the text input event triggered the input event. The input event in turn, if I zoom in, is split into two rectangles, which means it triggered two things. One of them is a function call. Another one is the run microtasks rectangle. By default, whenever I see a task split into smaller tasks, I just pick the largest one and keep going down.
My goal right now is not to understand everything that's happening in this task, but just to form a high-level picture of what exactly the browser is spending most of its time on, right? So I'm just going to focus on the big things. So: keypress, text input, input, run microtasks. Then I get some function call. And then after the function call, I actually start getting rectangles that have a different color and different names. All these rectangles are function calls. This is an anonymous function that was called from react-dom.development. This is a flushSyncCallback function that gets called from react-dom.development. The flushSyncCallback function called performSyncWorkOnRoot, which again was called from react-dom.development; performSyncWorkOnRoot called renderRootSync, which again comes from react-dom.development. And if I keep going down like this, if I keep clicking through all these functions (remember, I'm just forming a high-level understanding, I don't need to dive really deeply into this), if I try to figure out where all these functions come from, I see that they actually all come from react-dom.development, until I get to the next step, where the functions get a different color and come from different files.
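The "pick the largest child and keep going down" heuristic can be sketched as a tiny function over a nested task tree. The tree shape here is a made-up simplification of what the performance pane displays, not the real trace format:

```javascript
// Sketch: walk a (simplified, hypothetical) task tree by always descending
// into the child with the largest duration, collecting the "spine" of
// entries where most of the time goes.
function hottestPath(task) {
  const path = [task.name];
  let current = task;
  while (current.children && current.children.length > 0) {
    current = current.children.reduce((a, b) => (b.duration > a.duration ? b : a));
    path.push(current.name);
  }
  return path;
}

// A toy trace mirroring the recording above: a 900 ms task triggered by a
// keypress, whose time is dominated by the run-microtasks branch.
const task = {
  name: 'Task', duration: 900,
  children: [{
    name: 'Event: keypress', duration: 890,
    children: [
      { name: 'Function call', duration: 100, children: [] },
      { name: 'Run microtasks', duration: 780,
        children: [{ name: 'performSyncWorkOnRoot', duration: 770, children: [] }] },
    ],
  }],
};

console.log(hottestPath(task));
// → ['Task', 'Event: keypress', 'Run microtasks', 'performSyncWorkOnRoot']
```

In DevTools you do this walk by eye, but the principle is the same: ignore the small siblings and follow the widest rectangle down.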


3. Analyzing React Profiler

Short description:

We analyze the performance trace in the Performance pane and see that most of the time is spent running React DOM development code. We switch to the React Profiler pane and record the same interaction using React Profiler. The React Profiler shows the components that get updated in the app and the time each component takes to re-render. The notes list component appears to be the most expensive, taking 10.6 milliseconds to render on its own and 950 milliseconds with its children. We pose a question about the missing time in the notes list component rendering and discuss possible answers.

So, this is pretty easy to see by clicking through all the functions, but it's even easier to see by looking at the function colors, the rectangle colors. The rectangle colors here, when you get some JavaScript calls, do not have any concrete meaning. The fact that this anonymous function is green, or the fact that this flushSyncCallback function is green, does not really mean anything; it's a random color. What matters is that when you have functions colored the same way, all these functions come from the same file. So what I can see right away, as I'm going through these functions, is that the first few functions come from react-dom.development, and what this means is that all the next functions that have the same color likely also come from react-dom.development.

All right, so here's what we've done so far. We got a report from our user, from our product manager, about a slow interaction. We reproduced this slow interaction in the Performance pane. We recorded the interaction and saw three spikes of CPU activity. We zoomed into one of the spikes, and we saw that the spike is triggered by the keypress event, which in turn calls a whole lot of react-dom.development functions. Now, I, as a developer, have no idea what's happening in React DOM. I have no idea what any of these functions mean. I just see a bunch of react-dom.development code, and this feels super complicated. Like, what the heck is happening here? But I actually do not need to know, because here's what I do whenever I record something, look through the performance trace, and see that most of the time in the trace is spent running react-dom.development code. In this case, it's 800 or 900 milliseconds being spent running some React DOM code. What I do is switch from the Performance pane to the React Profiler pane, and I record the same interaction using React Profiler. Now, the Profiler pane is not available by default. It comes with React Developer Tools. So if you're following along and you don't have this pane, this is the extension that you need to install; I just dropped it into the Zoom chat. If you install this extension, reload the page, and close and reopen the DevTools, you'll get the Profiler pane. Well, let's try recording the interaction using the Profiler. I click record, and then try typing into the editor again. And again, you see that the app gets slow. And if I stop the recording in the Profiler, I'll see the following.
So, the React Profiler shows me multiple things. While the Performance pane shows me everything that's happening in the app at the level of concrete functions, React Profiler shows me everything that's happening inside React at the level of components. If I record an interaction using React Profiler, I can see all the components that got updated in the app. And what I see in this case is that when I typed into the editor, there was one re-render that happened. React Profiler calls that a commit. That re-render took 855 milliseconds, with effects taking 56 or 57 more milliseconds. And oops, hold on. This is spoilers. This is spoilers. Let me re-record this. Yeah, done. So pretend this never happened. So it shows me the number of re-renders that happened in the app. It shows me the components that got re-rendered in the app. And it shows me the time every component took to re-render. So let's try to make sense of what's happening here. The first thing I see is that the App component re-rendered. The second thing I see is that the dark mode provider rendered, then the dark mode provider caused the context provider to render. The context provider caused the notes list to render. And then the notes list caused a note button to render, and then another note button, right? And then there was also some primary pane taking nine milliseconds. So the numbers that I have here show me how much time the component took to render on its own versus with all of its children. You can see that the App component on its own took 0.8 milliseconds to render. And "to render" here means just calling the component function, because that's literally what rendering is, right? Re-rendering the App component literally just means calling this App function.
The App component, because it re-rendered, caused the dark mode provider to render, which caused the context provider to render, which caused the notes list to render. And the NotesList component seems to be the most expensive. It's yellow. It took 10.6 milliseconds to render on its own, but together with its children it took 950 milliseconds to render. And now, here comes my first question to you. And this is a trick question. If you know this, good, but if you don't, I'm wondering what your hypothesis would be. So I'm looking through the React Profiler, and I see that the NotesList component seems to be the most expensive one; it took 10 milliseconds to render on its own, or almost a second to render with all its children. But if I look at the children of the NotesList component, I only see two children. I see one note button taking 15 milliseconds to render and another note button taking six milliseconds, which together is about 20. So my question to you is: where did the other 930 milliseconds go? Because together with all its children it's 950, but I can only see two children, and they account for only 20 milliseconds. Does anyone have any idea where the remaining 930 milliseconds went? It's all right if you don't know. I'm curious what your guess would be. Quinqui asks: is it the render function in NotesList? No; that was a good guess, but the render function of NotesList is actually the first number. I can see right away that the render function of NotesList was just 10 milliseconds, which means the children are responsible for the remaining 940. Alexandra asks: was it hooks? Also a good guess, but no, because the hooks actually get recorded. When you have some component, like App for example, or let's open the NotesList component, right?
Here's... oh, no, that's CSS. NotesList/index.jsx, here's the NotesList component. It has, like, one hook, right? And if this hook for some reason took, like, 50 milliseconds, React would still capture it, because React just measures how much time the NotesList function took to render. NotesList calls useState, and if useState took 50 milliseconds, then NotesList would also take 50 milliseconds.
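The point about hooks being included in a component's self time can be sketched with a toy measurement: the profiler times the whole component function call, so anything the function does inside, including its hooks, lands in that number. This is a simplified mock that counts abstract work units instead of wall-clock time (to keep it deterministic), not React's actual profiler code:

```javascript
// Toy model: "profiling" a component function measures the whole call,
// so work done inside hooks is attributed to the component's self time.
let workCounter = 0;

function useExpensiveHook() {
  workCounter += 50; // pretend the hook itself costs 50 units
  return 'state';
}

function NotesListMock() {
  useExpensiveHook();
  workCounter += 10; // the component's own work: 10 units
  return 'rendered';
}

function profile(componentFn) {
  const before = workCounter;
  componentFn();
  return workCounter - before; // everything the call did, hooks included
}

console.log(profile(NotesListMock)); // → 60
```

The 60 units include the hook's 50, which is why a slow hook would show up in the component's own render time rather than as a mysterious gap.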

4. React Profiler Bug and Workaround

Short description:

React Profiler has a bug where it does not render all components when there are too many of them in the profiler. Zooming out or widening the DevTools screen allows more components to be rendered. The React Profiler does not handle this well, unlike the Chrome DevTools Performance pane. The workaround is to either zoom in or render fewer components. By reducing the number of note buttons from 700 to 100, the performance improves significantly.

Tian asks: was it reconciliation? Also a good guess, but no; reconciliation is actually not visible in the React Profiler at all. It's a cost that React Profiler does not really show you. Oh yeah. So Gregers, sorry, Gregers, is that the right way to pronounce it? Yeah, almost perfect. Okay. Sorry for any imperfections. So Gregers says that in his profiler, there are more children in React Profiler, and yes, yes, yes. So what's happening here, and the reason I'm showing this, is that if you encounter it for the first time, it is so puzzling, so confusing. What's happening is that React DevTools has a bug, or maybe it's a design decision, I don't know. But React Profiler has a bug: when there are too many components in the profiler, and React Profiler cannot fit all of them onto the screen, it just does not render them. And this is very annoying, because it is very confusing when you're working with React Profiler for the first time. It could totally mislead you; it definitely misled me the first time I saw this. But if you zoom out, if you make the DevTools screen wider, you'll see that as React Profiler gets more and more space to render the components, more components actually show. So here, I hover over the notes list, and you can see that I get way more NoteButton components, each of them taking one, two, three, maybe five milliseconds. So this is a very annoying thing. I think it's a design decision. I do not like the design decision, but it is a thing. If you ever see white space in React Profiler in the middle of the recording, it does not mean that React was idle or doing some other work.
It just means that React Profiler was not able to fit all the components into the available space. The Chrome DevTools Performance pane handles this fine; the React Profiler, unfortunately, does not. All right, so we've solved the trick. Great job, everyone. We've solved the puzzle, and we've learned that the notes list takes 950 milliseconds to render simply because it renders an absolute ton of note buttons. Just look at this: note button, note button, note button, note button. And if I hover over any of these note buttons, I can see which element it actually corresponds to. So I can see that each of these note buttons is just an item in this list. At this point you might already have some ideas on how to solve this. But when I'm debugging stuff, what I always try to do before actually figuring out the solution is go all the way to the end and figure out why exactly this performance bottleneck is happening. Because sometimes, when I do that, I find a solution that's way simpler and way easier than whatever intermediate solution I could come up with. That said, did anyone think of virtualization? Could you send a plus if you thought of virtualization when I was showing this huge list of notes? Oh yeah. Some people thought of virtualization, right? That's an optimization you could apply here, but we're not going to do that yet. So yeah, there's a question in the chat: what's the workaround for this? The workaround is to either zoom in, or, if you have a good idea of what's causing so many components to render, try to render fewer of them. In this case, I'm rendering 700 note buttons. Let me disable Frame Rendering Stats. I'm rendering around 700 note buttons. So if I render just a hundred note buttons, then React Profiler will have a better chance to fit all of them in. So I'm going to do just that. Let's do that.
So I'm going to clear the recording, delete all notes, create a hundred notes, and re-record the same interaction, but this time with just a hundred notes. And yeah, you see, the issue is gone. I can see all of the note buttons, and I can see that the interaction is now much faster. It takes just 145 milliseconds. It's still slow (a hundred-some milliseconds is still slow), but now it shows all the components.

5. Analyzing Component Rerenders

Short description:

After analyzing the profiler, we discovered that the app is slow due to component updates in the notes list component. We investigated why these components re-render and found that the useState hook and certain function props were causing the re-renders. To optimize this, we used the useCallback hook to memoize the functions. However, despite our efforts, the notes prop still has to change, making further optimization there impossible. As a result, we need to explore optimizations in the components rendered by the NotesList component.

All right, so here's what I've learned so far. After looking through the Performance pane and realizing that all the code there comes from React, I switched to the Profiler, clicked record, recorded the interaction, and realized that the app is slow because of a bunch of component updates: the NotesList component updates, and it causes a whole lot of note buttons to render.

Now, here's the next question. Why exactly is this happening? Why do all the note buttons re-render? To answer this question, here's what I'm going to do. There are multiple ways to answer it; we're going to look at all of them throughout the workshop. But the first, simplest thing that I always do is go to the profiler settings. I click the settings gear, go to the Profiler settings, and make sure the "Record why each component rendered while profiling" checkbox is enabled. If that checkbox is enabled and I re-record the interaction, then, looking at what's happening in the profiler now, every component I hover over gets a section that says "Why did this render?". So let's see why exactly all these components rendered.

So, the App component re-rendered because its hook number 1 changed. Let's see what hook number 1 was. I'm going to go to App/index.jsx, look at my hooks, and see that my hook number 1 was the useState hook. It looks like a hook that contains all the notes. I'm ignoring all the code down there, right? I don't care about it. I just look at the first hook, and the first hook is the one that holds all the notes. And well, when I'm typing into the note editor, the notes obviously update. So I guess this is a useful re-render; we can't really do anything about it. Then I keep going down. I see that the dark mode provider rendered because its children changed, and the children prop always changes every time, so I also can't do much with it. The context provider just never shows why it re-rendered; it just renders every time you re-render it. Then the NotesList component rendered because its notes prop changed, and also because its onNewNoteRequested and onDeleteAllRequested props changed. Emmett asks a great question: is it possible to get the hook name instead of a number? Because it's a little bit confusing. Unfortunately, no. We'll be talking about this in the second part of the workshop. React DevTools cannot give you the hook names, so you need to do a bit of extra jumping back and forth to actually figure out what the hook is. And there are pitfalls. Just now I went and saw that hook number 1 changed and found the matching hook, but you can't really do that in most cases. I'll be showing a real-world app where this doesn't work, and we'll see how to actually match the hook numbers to the actual hooks. But we'll talk about this a little later. So, the NotesList component rendered because its notes, onNewNoteRequested, and onDeleteAllRequested props changed.
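The reason the profiler can only say "hook 1 changed" is that React identifies hooks by call order, not by name. A stripped-down sketch of that bookkeeping (this is a toy dispatcher for illustration, not React's real implementation):

```javascript
// Toy hook dispatcher: hook state lives in an array indexed by call order,
// which is why devtools can only report "hook N changed", never a name.
const hookValues = [];
let hookIndex = 0;

function useStateMock(initial) {
  const i = hookIndex++;
  if (hookValues[i] === undefined) hookValues[i] = initial;
  return [hookValues[i], (v) => { hookValues[i] = v; }];
}

function renderApp() {
  hookIndex = 0; // React resets the cursor before every render
  const [notes] = useStateMock(['first note']); // hook 1
  const [darkMode] = useStateMock(false);       // hook 2
  return { notes, darkMode };
}

renderApp();
hookValues[0] = ['first note', 'second note']; // simulate "hook 1 changed"
console.log(renderApp().notes.length); // → 2
```

Since the array index is all React has, mapping "hook number 1" back to a source line means counting hook calls in the component from the top, which is exactly the jumping back and forth described above.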
This looks more interesting. onNewNoteRequested and onDeleteAllRequested look like functions, right? And it very commonly happens that functions, if they're not memoized, cause some component to re-render. So let's take a look at the NotesList and see if there's anything we can optimize. I'm going to go to the NotesList; here are all the props that it has: notes, onNewNoteRequested, and onDeleteAllRequested. And if I command-click the NotesList, I can see where exactly it is rendered. Let's see: onNewNoteRequested creates a new note, and onDeleteAllRequested deletes all notes. And if I look at which values these props receive, I see that indeed, every time the App component renders, we recreate the saveNotes function, we recreate the createNewNote function, we recreate the deleteAllNotes function. And these new functions, even if nothing changes inside them, get passed into the notes list and cause the notes list to render. So how do we solve this? Tell me: what do we do with these functions to prevent them from changing every time? Yes, we slap a useCallback on them. We wrap them with useCallback. All right, all the annoying React boilerplate, let's do this. createNewNote: I'm going to wrap createNewNote with useCallback. useCallback, yada, yada, yada, function body, array of dependencies. I need to import useCallback from React, and I need to write the array of dependencies. And GitHub Copilot is actually sometimes good at the arrays of dependencies. But if it's not, let's see, what do we have here? We have the getNotes function, which is imported.
We have the setNotes function, which comes from useState, but we don't need to pass it to the array of dependencies, because React guarantees it's the same function on every render. It's not going to change. What else do we have here? setActiveNoteId, that's also a setState function. And note.text? Oh no, that's just created inside the map, yada, yada, yada. So this function looks like it has no dependencies, right? It's just getNotes and setNotes. So I'm just going to pass an empty array of dependencies. And now deleteAllNotes. Let's see: useCallback, yada, yada, yada. deleteNotes is some external, imported function; getNotes, setNotes, setActiveNoteId — so this also has no dependencies. All right, we have memoized our functions. Let's see what that has done to our app. I'm going to reload, click Record, and press the same keyboard button again. And let's see: the NotesList component still re-rendered. The two functions that we've memoized with useCallback don't change anymore, but the notes prop still changes — and it has to change, because it contains all the existing notes. When I'm typing into a note, the notes prop that holds all the notes has to change. There's no way to prevent that. So, well, we've just done a lot of work for nothing. Congrats. What this means is that we can't optimize the NotesList component itself, so we have to go farther down. Let's see: the NotesList component renders a bunch of other components.
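The root cause here can be shown with plain JavaScript: each render creates brand-new function objects, and two functions are never equal by reference, so React sees the props as "changed". A minimal sketch (the `onSave` prop name is hypothetical):

```javascript
// Each call to render() plays the role of one React render of the App
// component: it creates a fresh function object for the callback prop.
function render() {
  return { onSave: () => console.log('saving note') };
}

const firstRender = render();
const secondRender = render();

// The bodies are identical, but the references are not — this is why the
// list keeps re-rendering until the callback is wrapped in useCallback.
console.log(firstRender.onSave === secondRender.onSave); // false
```

useCallback fixes this by returning the same function reference across renders (as long as the dependencies don't change).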

Optimizing Filter Input Component

Short description:

We can optimize the filter input component by using memo. By wrapping the component in memo, we can prevent it from re-rendering when its parent component renders.

We have the filter input component, which re-renders because its parent component rendered. No props changed, no state changed, no hooks changed — it's just that the parent component rendered. Any idea how we solve this? How do we prevent a component from re-rendering when its parent renders? Yes. It's memo. So let's go to the component and slap a memo onto it. It's filter... what was it? FilterInput. FilterInput/index.jsx. So, yeah, basic React boilerplate — oops, no, I don't need that, whatever it was. Import memo from React, wrap the component, reload, record, stop recording. All right, look at this. We have now optimized our filter input component: it's gray, it did not render. Great job, me.
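Under the hood, memo() skips a re-render when a shallow comparison of the old and new props finds no changes. Here's a rough plain-JavaScript sketch of that default comparison — an illustration, not React's actual source:

```javascript
// A sketch of the shallow prop comparison React.memo() performs by default.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  // Object.is compares each prop by reference (or primitive value).
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

console.log(shallowEqual({ filter: 'abc' }, { filter: 'abc' })); // true → skip render
console.log(shallowEqual({ cb: () => {} }, { cb: () => {} }));   // false → re-render
```

This also explains why memo alone wasn't enough for components receiving fresh callback props: a new function reference fails the `Object.is` check every time.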

Optimizing Note Button Components

Short description:

The notes list component renders many note button components, causing performance issues. We explore different solutions to prevent unnecessary re-renders, such as using useCallback or event bubbling. We pass the necessary functions as props to the note button component and wrap it with memo to optimize performance. By implementing these changes, we significantly improve the app's speed and demonstrate the importance of profiling before optimizing.

All right, but that leaves us with the biggest bottleneck. The most expensive part of the NotesList component — like 99% of its children — is all the NoteButton components that the NotesList renders. And if I hover over these note buttons, I see that each of them re-rendered because its onNoteActivated prop changed. That again looks like something you could apply useCallback to, right? Maybe, maybe — I don't know. Maybe the function changes for a good reason.

Well, we have the NoteButton component, and each NoteButton takes just one millisecond to render, but because there are a lot of them, together they take a lot. So let's see if we can prevent the NoteButton component from re-rendering. Actually, now, let's find it in a different way. One way is just to go to the code and find something like NoteButton/index.jsx. But if that component is used in multiple places in the app, it might be very hard to find which place actually renders this component. Where does this render come from? So one thing I like to do to find where a component lives in the code is: I click the component in the profiler, then I switch to the Components pane, and the component stays focused, right? And I look at the source line that shows me the exact place where that component was rendered. So if I copy this and go to NotesList/index.jsx:35... well, it's not the exact place, I guess the source maps are a little off, but here at line 37 is the exact NoteButton component that we're rendering. And here's its onNoteActivated prop. Look, it's an anonymous function again — but it's an anonymous function in a loop. Normally, when we have an anonymous function, we'd take it and wrap it with useCallback. Here, we can't do that, because we can't use hooks in a loop. That would break the rules of hooks: hooks cannot be called conditionally. So my next question to you is: any idea how we could solve this without useCallback? To make this easier, let's see how the onNoteActivated prop is used. Can I open this in a split view? Oh yeah, here, I open it in a separate split view. And here's the NoteButton component: it receives the onNoteActivated prop and just passes it into onClick.
And here we create the onNoteActivated function: it calls some parent function, also called onNoteActivated, and passes an ID into it. So any idea how we could solve this? How could we prevent the onNoteActivated prop from changing? No? Svetlana suggests we can create a handler function at the top level, as a const, that executes onNoteActivated. Richard suggests a static reference, and taking the ID inside the component. So I don't fully follow the first idea, sorry. The second idea — yeah, this is one of the solutions. And maybe, Svetlana, you meant the same thing. But here's one way to solve this. Instead of passing an anonymous function that I'm creating on every render... ooh, Jaggers also suggests event bubbling, and that's actually a really cool trick. You could put an event listener somewhere higher — for example, on this div. Instead of handling the click on each of the notes, you would handle the click at the level of the notes list, and the click event would bubble up. But that poses its own challenge: it's hard to figure out which note button corresponds to which ID, right? So that's a good workaround, but we definitely have easier solutions here. So, Svetlana, to understand your solution: what you're suggesting is to do something like this, right? const handleActivateNote = () => onNoteActivated(id), and then pass this in here. Is this right? All right, but here's the challenge: how do you get the ID? The ID is only available inside the map, right? So if we hoist the function, we don't have any access to the ID. That's the challenge here, yeah.
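Jaggers' event-bubbling idea can be sketched in plain JavaScript: one listener on the list container reads the note ID off the clicked element, so no per-button callbacks are needed. The `data-note-id` attribute and handler name below are assumptions for illustration:

```javascript
// One delegated handler for the whole notes list. In the browser you'd
// attach it once: container.addEventListener('click', handleListClick).
// The event object is mocked here so the sketch runs standalone.
function handleListClick(event) {
  const id = event.target.dataset.noteId; // from data-note-id="…" on the button
  if (id !== undefined) {
    console.log('note activated:', id);
  }
  return id;
}

const clicked = handleListClick({ target: { dataset: { noteId: 'note-42' } } });
console.log(clicked); // 'note-42'
```

The challenge mentioned above shows up in the `event.target.dataset` lookup: you have to encode the ID into the DOM yourself so the single handler can recover it.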
If this function did not need access to the ID, we'd be able to just move it all the way up, wrap it with useCallback, and that would be a great solution. But since it does, what Emad is suggesting (sorry if I mispronounce the name), and what Richard is suggesting, is that we can just pass the function on its own and call it from inside the component. So let's do just that. Instead of creating the function at the NotesList level and passing it down to NoteButton, I'm going to pass onNoteActivated and id as two separate props. Inside NoteButton, I receive the new id prop and the existing onNoteActivated prop, and I call onNoteActivated(id) right here. And one last thing: I'm going to wrap NoteButton with memo, because even if no props change, the component would still re-render by default when its parent renders. So to prevent it from re-rendering, I need to wrap it with memo. All right — ooh, we've got some error. Let's see how this works. I switch to the Profiler pane and — hold on, I opened the note too early. I open the note, I try typing into it while recording, and — look at this! Look at how good this is. We prevented all the note buttons from rendering. That is so awesome. We've solved the first performance issue. Look: if I enable the frame rendering stats now and try typing into the editor, the red chunks, even with 4x CPU throttling, are way, way smaller than they were before. And if I comment out this solution — or just stash it — and, oh, I just stopped the dev server, that's not good — and try typing now: look at how slow this is. It's terribly slow. And if I reapply the solution, the app just gets way, way, way faster. It's still not 60 FPS, and there's still more stuff to optimize.
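The fix we applied boils down to this plain-JavaScript sketch: the parent passes one stable handler plus each item's ID, and the child calls handler(id) itself, so no new function is created per item per render. The names mirror the workshop's components, but the code is a simplified stand-in, not the actual JSX:

```javascript
const activatedIds = [];

// Stable parent-level handler — created once, so memo()'d children
// would see the same reference on every render.
const onNoteActivated = (id) => activatedIds.push(id);

// Stand-in for NoteButton's click handling: it receives the stable
// handler and its own id prop, and combines them itself.
function clickNoteButton({ id, onNoteActivated }) {
  onNoteActivated(id); // equivalent to onClick={() => onNoteActivated(id)}
}

clickNoteButton({ id: 'note-1', onNoteActivated });
clickNoteButton({ id: 'note-2', onNoteActivated });
console.log(activatedIds); // ['note-1', 'note-2']
```

Because `onNoteActivated` and `id` are now stable props, the shallow comparison done by memo() passes, and the buttons skip re-rendering.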
But the goal of this workshop is not to show you the optimizations; the goal is to show you the approach to profiling. And this is my first move for any performance issue that I encounter: I record a performance trace. If I see that most of the work is happening in React, I switch to the React Profiler and record the interaction there, with "Record why each component rendered while profiling" enabled. Then I look through all the components, find the most expensive ones, and try to prevent them from re-rendering in any way I can think of. And if I do this, my app gets much faster.

React Performance Debugging Workshop Q&A

Short description:

Memoization is not free, but in most cases, the cost of re-rendering every component is larger than the cost of memoizing components. It's best to memoize only the components that actually need it. The React Profiler shows components that did not render with an approximate width based on previous renders. We learned how to use Chrome DevTools and React Profiler to debug performance issues. We also have a React performance flow chart available. In the next part, we will switch to the Widgets Editor app and address heavy-app performance and React Strict Mode. If React Profiler gets stuck, try disabling CPU throttling or recording shorter interactions. React Strict Mode does not show components rendering twice in the profiler pane.

So this wraps up the first part of the workshop. Let me answer your questions, and then we'll take a break. Emmett asks: won't this be expensive if you're memoizing a lot of stuff? Generally, no. Memoization is not free, right? It has a CPU cost, because React still has to compare all the dependencies you pass into useMemo. It has a memory cost — and actually, at React Day Berlin there's going to be a talk about the memory cost of useMemo; I'm very much looking forward to it, I'm really curious to learn from it. But in general, in most cases, the cost of re-rendering every component is larger than the cost of memoizing components. And I get asked from time to time: should you just memoize everything? My answer: if you can figure out what exactly you need to memoize — if you can use the React Profiler pane, if you can use Chrome DevTools — then you don't need to memoize everything. Because at any moment, if your app is slow, you can just record a trace, go to the React Profiler, find the most expensive component, and memoize it and only it. That would be the best solution: you'd only memoize the stuff that actually needs memoizing. But if you can't use the React Profiler or the Chrome performance pane, if you feel unfamiliar with them — or if your teammates feel unfamiliar with them — then yeah, probably wrapping everything with memo, useMemo, and useCallback is the best solution until React Forget comes out. Quimqui asks: why is it greyed out? Why is this greyed out, but those components still take up width in the flame chart? So this is just how the React Profiler works.
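The "memoization is not free" point can be illustrated with a sketch of the dependency comparison useMemo performs on every render — a handful of Object.is checks, which is the CPU cost being paid. This is a single-slot toy, not React's implementation:

```javascript
let prevDeps = null;
let cachedValue;

// Toy single-component useMemo: re-runs the factory only when some
// dependency fails an Object.is comparison against the previous deps.
function useMemoSketch(factory, deps) {
  const depsUnchanged =
    prevDeps !== null &&
    deps.length === prevDeps.length &&
    deps.every((dep, i) => Object.is(dep, prevDeps[i]));
  if (!depsUnchanged) {
    cachedValue = factory(); // the expensive work
  }
  prevDeps = deps; // this comparison + bookkeeping is the per-render cost
  return cachedValue;
}

const first = useMemoSketch(() => ({ notes: ['a', 'b'] }), [1, 'x']);
const second = useMemoSketch(() => ({ notes: ['a', 'b'] }), [1, 'x']);
console.log(first === second); // true — deps unchanged, cached object reused
```

The comparison loop runs on every render regardless of whether the cache hits — that's the overhead, and it's usually tiny compared to re-running the factory or re-rendering a subtree.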
It remembers the time each component took to render — I think during its first render, or something like that — and then it uses that approximate width when the component did not render. This is the difference between the Profiler pane and the Chrome performance pane. In the performance pane, if something doesn't happen, it's just not shown. In the Profiler pane, if a component doesn't render, React still shows it with some width that it tries to infer from previous renders, but it explicitly marks that this component did not render. All right, so that was part one of our workshop. We learned how to use Chrome DevTools, and we learned how to use the React Profiler to debug performance issues: to see which components render, why exactly they render, et cetera. And all this process that I went through comes from my head — record stuff in Chrome DevTools, then switch to the React Profiler if it's mostly React, et cetera, et cetera. But for all of you folks — and you might have already seen this, because there's a link in the Google Doc — I have prepared this React performance flow chart, which you can find in the Google Doc that I shared, or at the "React perf flow chart" link. Let me copy the link and send it... I cannot copy that link. Let me type it into the chat. Oh no, let me open the link; now I can copy it and paste it into the chat. Um... And, well, this is more or less my process of debugging slow React interactions. What we've done in the first part of the workshop: we recorded the interaction using DevTools with 4x CPU throttling, and we realised that what's happening in the interaction is mostly React code. And then we went and checked what's happening.
Well, we actually didn't check for renderRootSync — I'll show why that's important in the third part of the workshop — but we realised that what's happening in the app is mostly React rendering. So I opened the React Profiler and recorded the interaction with it, and I tried to find the most expensive components: the ones that render most often or take the most time. It was the note buttons — all the note buttons. Then I used React DevTools' "Record why each component rendered" setting to answer why exactly these components rendered. And then I found how to render them less often. So this was just the part of the flowchart that we used. Let's see how this part applies not to simplistic apps like the one we just looked at, but to real, large, huge apps with a lot of components and a lot of stuff in them. To answer that, let's switch to the second app in the repository, which is called the Widgets Editor. So we're going to cd into the Widgets Editor directory, install dependencies, and run yarn start here as well. KoenGui asks: what do I do if my app is so heavy that the React Profiler gets stuck processing data? Well, I don't have a good answer to this. It's a sad situation to be in. It's unfortunate. The first thing to try: if you do have CPU throttling enabled, maybe disabling it would help. I'm not sure whether CPU throttling only throttles the page or also the extensions; I'm not really sure about that. The second thing that helps is just recording shorter traces: not starting the recording when the app opens, but recording only the slow interaction, only the bit that's slow. And otherwise, if none of these things help, I really don't know what could help apart from a faster machine. So, ask your boss to get you an M3 MacBook Pro.
They just came out, and you have a good case. Just tell your boss: hey boss, I want you to spend a couple thousand dollars on me, for a good reason. Emad asks: does React Strict Mode force each component to render twice? Because I'm a little confused when profiling. The answer is no. React Strict Mode makes your effects run twice, but I don't think it forces components to render twice in the profiler. Okay, actually, hold on, that's a really good question. Let's just check this. In any case, my answer is: I don't think this affects the Profiler pane, because I have never seen React Strict Mode affecting it. Let's check real quick. I'm going to go to notes/index.jsx. Oh look, we're already using Strict Mode in the notes app. And yet every component only shows as rendering once, right? We don't get two NotesLists in the recording. So the short answer is: React Strict Mode will not show you every component rendering twice in the recording. Does it actually render every component twice? I don't think it does — but maybe in some edge cases, maybe when mounting.

Analyzing Real-World App Performance

Short description:

Let's explore a real-world app with performance issues. Changing the data type property causes the app to freeze and triggers unnecessary re-renders. There are tools available to automatically test interactions and measure React performance. The author provides an article on monitoring React performance. By using the performance pane and the React Profiler, we can analyze the app's performance and identify areas for improvement.

But I don't remember that, and it's easily googleable. So let's take a look at a complex, real-world app. This is an open-source app that a client of mine kindly allowed me to use for the workshop — an artificially slowed-down version of it. And here is one performance issue that this app has. I'm going to enable 4x CPU throttling and enable the frame rendering stats again. So, hold on, I need to explain what this app does. The app we have here is a basic widget-editing app. It's like a low-code widget builder: I can drag and drop widgets onto the canvas, I can set properties for these widgets, I can connect widgets to one another with some logic, I can move them around, et cetera. As you see, I have some filter widgets, a table, and stuff like that. Now, this app has a performance issue: if I open any properties pane — for example, the properties pane of this "filter by content" search field — and I try to change the data type from the one I have now... oh wait, I did not enable CPU throttling... from the data type I have now to the very same data type — basically from text to text, so nothing should change in the app, right? I'm not changing anything. But if I do that, my app freezes for a significant chunk of time. You can see it here: I get a bunch of red and also a bunch of yellow. I click the dropdown, select "text", and my app freezes. This is a performance issue, because nothing should happen in the app when I re-select the same text value. And there's also another way to see this performance issue.
So if you go to the React Profiler, open its settings, switch to the General pane, and check "Highlight updates when components render", then every time any component renders, you'll see that component highlighted. For example, if I'm typing here — oof, you can see everything getting re-rendered as I type. This is really slow; this really should not happen, but it's happening, right? And the same thing happens if I select "text" and then click "text" again. I don't change anything in the app, but the whole app re-renders. This is really slow, and this is really inefficient. So let's try to figure out where all these renders come from, what is making the app slow, and how we can make it faster. Alexandra asks: is there some tool to automatically test a bunch of interactions to detect possible performance issues? Yes and no. There is a tool — I believe it's called the INP debugger — that's very good. I've never used it myself, but I know the developer, and I know the developer really cares about performance and the team has huge experience with it. You can point this tool at any website — let's put Reddit in there, it's slow, right? — and it will run a test: it will try clicking basically everything on the screen (okay, no, I'm not going to run it now) and record how long every click takes. That's pretty convenient. It does not tell you what exactly is happening during each click, but it's still convenient for seeing what's slow. Then, another thing you could do to measure interactions, measure parts of the app, and figure out what makes them slow, is to set up some tooling to collect React performance data for you.
And I have an article about this. If you go to my consulting website, go to "Articles and guides", and look for "How to monitor React performance", you'll find this article — I'm going to send both links into the chat — that talks about how to measure React performance in code: how to instrument every interaction, how to collect the data and send it into analytics, and what else to do with it. It's my best up-to-date knowledge on measuring React performance in the wild. So with that in mind, let's dive into our slow interaction. We have a slow interaction: I click "text", and everything re-renders. The first thing I always do is switch to the performance pane and record what's happening in the app. So I enable CPU throttling, click record, pick "text", and stop the recording. Let me close this. In the recording, again, I see a lot of stuff. I see a huge spike of gray at the beginning, which seems to be just profiling overhead — not something I need to care about. I see another huge spike of JavaScript in the middle of the recording, and then a smaller spike at the end. And if I hover over this and look into what's happening, I see that the first spike of JavaScript happens right when I click the "text" value in the dropdown. You can see in the frames that it starts when the dropdown is open, and as it progresses, the screen updates, and I get all the re-renders the React Profiler showed me — happening, happening, happening. And then I get another spike of JavaScript activity, which also apparently re-rendered some components, but this one was smaller and cheaper. So there's a lot of stuff happening here.
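For the "measure React performance in code" idea, here's a minimal sketch using the standard User Timing API (the mark names are made up for this example; the article linked above covers a full production setup):

```javascript
// Wrap the slow interaction in performance marks, then read the measure.
// `performance` is a global in modern Node and in every browser.
performance.mark('change-data-type:start');

// ...the interaction being measured would run here...
for (let i = 0; i < 1e5; i += 1); // placeholder work

performance.mark('change-data-type:end');
performance.measure(
  'change-data-type',
  'change-data-type:start',
  'change-data-type:end'
);

const [measure] = performance.getEntriesByName('change-data-type');
console.log(measure.duration >= 0); // this duration is what you'd send to analytics
```

Measures like this also show up as labeled spans in the Chrome DevTools performance pane, which makes traces of real interactions much easier to read.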
It's a huge recording, so let's zoom into the first spike and try to see what's happening. I get several large tasks. The first one happens on the click, and as I go through it, I can see that most of the code, again, comes from react-dom.development.js — react-dom.development.js everywhere, all in the same color. The second task: I click through it, and again, it's react-dom.development.js. The third task: I click through it, and it's react-dom.development.js again. What about this spike? Let's see. It has two tasks, an "Event: mouseover" and an "Event: mouseout", and if I click through them, it's react-dom.development.js again. Look — it seems like we're spending most of the time in React code. What this means is: I switch to the React Profiler and record what's happening in the app with it. So let's do that.

Analyzing React Profiler Renders

Short description:

I open the properties pane, disable the green flashes, and start recording the interaction. The React Profiler shows 13 re-renders, some more expensive than others. I analyze each render and note the components involved. The first render takes 50ms, rendering the editor header and widgets editor. I take notes, using the flowchart as a guide. I identify the prevalent renderRootSync function and switch to the React Profiler to find the most expensive components. I continue analyzing each render and noting the components involved.

I'm going to open the properties pane again. I'm going to disable these green flashes because they're a little annoying. All right. And I open the — no, please open, don't be buggy — yeah. I open the properties pane again, and I open the dropdown. Now I start recording, I select "text", I stop recording, and here is what I see. Did I stop recording? Hold on. Let's wait a little bit. Let's not stop recording immediately, because remember, there was a period of idleness and then another CPU spike — let's capture all of it together. So I'm going to click record, select "text", wait a little bit, and then stop recording. Yay, we're done. Let's see what the React Profiler shows us.

The first thing I see is that we've got not one re-render this time, but 13 re-renders that happened in the app. Some of these renders were more expensive, some less. The cheaper renders are short in height; the expensive ones are tall. That's quite an easy way to spot the expensive renders. And if I go through each of them, I see all the components that re-rendered during each. So let's go through each of these renders and try to figure out what's happening in each, right? The first render, the expensive one, takes 50 milliseconds and renders two separate components: the editor header, which takes around 10 milliseconds, and the widgets editor, which takes 47 milliseconds. This doesn't quite add up, right? This is something I see sometimes in React DevTools: the per-component numbers don't match the total render duration. I don't know where this comes from; it's probably another React DevTools bug. But I see that my editor header component rendered and my widgets editor component rendered. Let's see the next one. And this is actually the moment when I typically start taking notes, because there's a lot of stuff happening and notes are useful. If you check the flowchart we have right here, you'll see that taking notes is actually suggested in it. So here's what we've done so far. We recorded the interaction using DevTools, we looked at what's happening there, and we noticed that it's mostly React code. I skipped a step — if I was doing it completely, the way I always do it, I would look at which function is prevalent among the React code. And I would realize that it's the renderRootSync function.
And then I would switch to the React Profiler and try to answer: what are the most expensive components, or the components that render most often? I would go through all the recorded renders and take notes. This is something I always do when I'm profiling: I just look at all the renders and I take notes. So let's do that. Oh, "text", hold on, I just...

Let's do that. So we have 13 re-renders. And here's the first one. The first re-render takes 50 milliseconds, and it renders the editor header, which takes nine milliseconds, and the widgets editor, which takes 47 milliseconds. I don't really need to go farther down, to figure out what exactly is rendering inside these components. At this point, I just need to understand: what are the most expensive components, or the components that render most often? So I'm just going to log, for each render, the components that rendered. The second render took 28 milliseconds and rendered the property pane view and the table. Again, the numbers don't quite add up, I don't know why. Property pane view, 28 milliseconds, and the table, which took 20. I'm just going to drop these per-component numbers, because they don't add up anyway. Okay, render three. Oh yeah, here's a tiny bit of green: a transition. We've got some transition happening. I could also switch to the Ranked chart to see the most expensive components — but that shows components on their own, without children, so I rarely use it; it's rarely useful, because I always want to look at components together with their children. Okay, so render three is just some transition. Render four: we have just the widgets editor, and it takes 50 milliseconds. So I write: render four, widgets editor, 50ms. Render five: property pane view again, 30 milliseconds, and the table, 25 milliseconds. I can just copy the previous note and update the numbers: property pane, 30; table, 25. And so I keep going over all the remaining renders like this.

Analyzing Hooks and Custom Hooks in React Profiler

Short description:

We analyze the most expensive component, the widgets editor, which takes almost 100 milliseconds to render. We investigate why it re-renders and find that hooks 5 and 47 changed in the first instance, while only hook number 5 changed in the second. However, when examining the hooks in the component, we only find 13 hooks. This discrepancy is due to hooks inside hooks, which the React Profiler does not recognize. To match the hook numbers shown in the React Profiler to the actual hooks, we can look for custom hooks in the Components pane of React DevTools, where the built-in hooks inside them are listed with their corresponding numbers.

And I keep looking: okay, what are the expensive parts in each render? What's taking the most time? Here, again, we have the editor header in render 11, for example. And so I keep going like this, summarizing all the components that render. Once I've summarized every render, I look at the summary and find the most expensive component — the one that corresponds to the most time spent rendering. And I start debugging it, because it's the most expensive part, so we could save the most time by optimizing it. In this case, the most expensive component is the widgets editor: almost 100 milliseconds. So... alright. Let's take a look at the widgets editor and try to figure out why it re-rendered. I can click the widgets editor component and see all the instances when it rendered — it rendered twice. The first time, it rendered because hooks number 5 and number 47 changed. Hooks 5 and 47. And the second time, it re-rendered because just hook number 5 changed. Alright. So we figured out why the component rendered. Now let's figure out what these hooks are. I'm going to go to the code... no, no, no, not my private notes. Did it open? I need the code. I need the widgets editor, and I need to figure out what hooks number 5 and number 47 are. So let's look at the widgets editor and see all the hooks it has. We have hook number one, hook number two, hook number three... four, five, six, seven, eight, nine, 10, 11, 12, 13. And that's it — that's all the hooks. So here's another question for you. React DevTools tells us that the widgets editor component re-renders because its hooks number 5 and 47 changed. But if you look at the component, at all the hooks it has, you'll only find 13 hooks within it.
So does anyone have an idea of what's going on? Where is hook number 47? What happened to the hook numbers? Why is the React Profiler showing these numbers?

Yes, it is indeed hooks inside hooks. So one annoying challenge with the React Profiler is that when it gives you hook numbers, it does not know anything about custom hooks. What the React Profiler knows is that your component has like 50 or 80 or maybe a hundred built-in React hooks being called. But if these hooks are wrapped in custom hooks, some hooks coming from React Redux, some coming from your own code, some coming from other state managers, the React Profiler has no idea about them. And so when it shows you hook number 5 or 47, it is your job to match these hook numbers to the hooks that actually changed in the code. And there are several ways to do this. The first way, the one I always try first, is to find these hook numbers not in the Profiler pane and not in the code, but in the Components pane of React DevTools. So here's how I do this. If I click the Widgets Editor component (it's already clicked) and I switch to the Components pane, this component gets focused again. And if I look through the hooks section, I would see that the hooks section inside the Components pane does list all my custom hooks. So can I put this side-by-side? Let's try to put this side-by-side. Hold on, I'm going to move the Zoom controls. You can't see them, but I can, and they cover this stuff. So here, look, correct? The Components pane does recognize the custom hooks. It shows me the useWidgetSelection hook. It shows me useDispatch, useSelector, useSelector, useSelector, useSelector, useSelector, useDynamicAppLayout. And then a few built-in hooks: useEffect, useEffect, useEffect, and useCallback. And if you open any of these hooks, you would see numbers next to every built-in React hook that's getting called, even if these hooks happen multiple levels down in the custom hook tree.
The only hook that does not get a number is the context hook. And there's a good reason for it: it's implemented differently. And there's a talk about this from React Summit Amsterdam. I don't remember the talk name, but if you ask me in the chat I could find it. But yeah, if you expand all these hooks, you would see the numbers. And that is one way to match the numbers that the React Profiler shows you to the actual hooks. It is annoying, yes. There is no better way that I know of. It is a little cumbersome, but let's do it. So I'm going to screenshot this and put it on the screen so I can compare it to the Components pane. And let's see, we need to find hook number 5.

Using Why Did You Render for Performance Debugging

Short description:

The React Profiler sometimes shows incorrect hook numbers due to a bug. To overcome this, I use a tool called Why Did You Render. It helps me understand why components render and which hooks and props have changed. By enabling Why Did You Render, I can see the changes in the Widgets Editor component and match them to specific hooks. I can also use the debugger to set breakpoints and determine which selector actually changes. This tool is invaluable for performance debugging and provides more precise information than the React Profiler.

And hook number 5 seems to come from my first useSelector. useSelector, here it is. So this hook has hook number 5. All right. Oh, I saved and it reloaded. That is unfortunate. Let's find where we were again. And then hook number 47, let's find... no, not here, not here. Oh yeah, here it is. Hook number 47. So hook number 47 apparently lives inside useDynamicAppLayout. And apparently it's a useEffect hook. useEffect, useEffect, useEffect... the second-to-last useEffect, hook 47. And if you're looking at this and wondering, okay, but why exactly, how exactly does it change? What's actually happening here? This does not really make much sense to me. I have a good comment for you, which is: it does not make much sense to me either. So one issue, another bug with React DevTools, with the React Profiler, unfortunately, is that while it's mostly precise at showing the hook numbers, sometimes the hook numbers get off. I never had enough time or enough will to go and figure out what exactly is causing this and why exactly the hook numbers get off, but when you have enough hooks in a component, what sometimes happens is React DevTools starts miscounting, and it shows you hook number 47 when it actually means hook number 49 or hook number 40-something. And while this first approach, going to the Components pane and matching the hook numbers, works well in simple apps, it is not foolproof, unfortunately, due to this bug in complex apps, in large apps, because the hook numbers get off. And this is why, apart from doing this, what I always do is I cross-check with another approach to finding the corresponding hooks. I use a tool called Why Did You Render. My question to you, folks: has any one of you used Why Did You Render before, or heard of Why Did You Render?
Could you just send a plus into the chat if you've used or heard of the tool called Why Did You Render? I see one plus and a few minuses. Anyone else? All right, so Why Did You Render is a third-party library that I can plug into my React app, and this library tells me why exactly each of my components renders, including which of the hooks have changed, which of the props have changed, and how exactly these props and hooks have changed. It's available at welldone-software/why-did-you-render on GitHub. And I already have this tool installed in the repository, so now I just need to enable it. So I'm going to go to widgets-editor/index.tsx, and I'm going to uncomment this line that says "uncomment to enable why-did-you-render". And why-did-you-render is configured just like this; it's the boilerplate from the readme. You could read the readme and do the same in your app. But if you do this, and if you go to any component that you're profiling, and if you add three more lines of code, WidgetsEditor.whyDidYouRender = { logOnDifferentValues: true }, what you would get, in addition to everything the React Profiler is showing, is this: why-did-you-render instruments every render of the Widgets Editor, and every time something inside that component changes, why-did-you-render looks at all the values that changed and shows how exactly they changed. And you would see that, look, your hook, your selector has changed, with these values: meanHeight changing from 700 to 100, or children changing from this array to this array, unbiddenPaths changing from this array to this array. It shows you every change that happens in the component that you're trying to inspect. And this is really, really convenient.
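For reference, enabling it looks roughly like this. This is a sketch based on the why-did-you-render readme; the file name and the WidgetsEditor opt-in line are assumptions matching this workshop's repo and may differ in your app:

```javascript
// wdyr.js — must run before anything else imports your React components
import React from "react";
import whyDidYouRender from "@welldone-software/why-did-you-render";

whyDidYouRender(React, {
  // keep the default: only components you explicitly opt in are tracked
  trackAllPureComponents: false,
});

// Then, next to the component you're investigating:
//   WidgetsEditor.whyDidYouRender = { logOnDifferentValues: true };
// logOnDifferentValues also logs renders caused by values that genuinely
// differ, not only "different reference, same value" changes.
```

This is configuration boilerplate: the first snippet runs once at app startup, and the per-component flag is what actually turns the console logs on.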
When I have some component, like the Widgets Editor in this case, that I'm trying to figure out, okay, why exactly is it rendering, oftentimes I would look at the Performance pane, at the Profiler pane, but then I would go and enable why-did-you-render for that component and start inspecting it there. Anyway, let's see how why-did-you-render helps us profile this interaction. So what we've figured out so far: if I have a complex real-world app with some interaction that's slow, what would often happen is I record a performance trace and see not one but multiple things happening. And if all or most of these things happen in React, I go to the React Profiler and record what's happening there, and I again see not one but multiple renders happening. Whenever you profile a complex real-world app, that's something you would see. And so the next step, when I'm profiling a real-world app and realize that most of the time is spent in React code, is I go over every render, find the most expensive components, and then try to figure out why these components rendered. And in this case, I tried to figure that out using the React Profiler on its own. And unfortunately, I stumbled upon an issue: I tried to match hook numbers to the actual hooks, and the result does not make much sense. Hook number 5 is some selector, yes, but hook number 47 is a useEffect. Why would a useEffect change? That does not really make much sense. So, if the result does not make much sense, or if I want to dive deeper into it, what I do is I turn to why-did-you-render, and I look at the logs it shows me when the interaction actually happens. Let's see how this looks.
So, I open the text dropdown, and now I'm going to repeat the same interaction that I've been profiling. I'm going to click the text, and as this happens, I get three logs in my console corresponding to three changes that happened inside the Widgets Editor, and I can see everything that changed during each of them, so let's go through this one by one. The first thing I see is the Widgets Editor re-rendering because one of its hooks changed. This hook is a useSelector hook, and this useSelector returns different objects that are equal by value. This means triple equals returns false, but deep equality returns true. This is common for performance issues, but we are not going to solve it now, because this workshop is about performance debugging. Instead, we're just going to try and figure out which useSelector this is. So another thing I can do is expand the previous value and the next value to see what exactly the useSelector returns. And this is already way more convenient than the Profiler pane, because if I know what's happening in my app, I can look at this data and realize, ooh, look, maybe this is useSelector number four, or maybe this is useSelector number seven. I can just look at this and go, okay, I know what's happening here. And I can do the same for the next useSelector change, which also returns different objects that are equal by value, but they're different objects this time. They have different shapes, right? So it's some other useSelector. And there's also a third useSelector that changes, which again returns what looks like the same object that got returned during the first render, so maybe it's the same useSelector. Maybe it's a different one, but it returns the same data. I don't know. But I can already see all the data that's returned, and it's already way more convenient, because I can match it to the data that I have in my head.
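The "different objects that are equal by value" situation he describes is easy to reproduce in plain JavaScript. A minimal sketch; the selector name is made up for illustration, and `deepEqual` here is a toy stand-in for something like lodash's `isEqual`:

```javascript
// A selector that builds a fresh object on every call — the classic
// source of "changed reference, same value" re-renders.
const selectEditorState = (state) => ({ widgets: state.widgets });

const state = { widgets: ["text", "button"] };
const a = selectEditorState(state);
const b = selectEditorState(state);

// Referential equality — what useSelector checks by default:
console.log(a === b); // false — a new object is created on each call

// Deep (by-value) equality — nothing actually changed:
const deepEqual = (x, y) => JSON.stringify(x) === JSON.stringify(y);
console.log(deepEqual(a, b)); // true
```

That `false`/`true` pair is exactly what why-did-you-render flags: the component re-renders even though the data is identical.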
But there's another trick I can do to say with 100% certainty, with 100% precision, which selector that actually is: I can use the debugger to set breakpoints and then figure out which selector actually changes. So, let's do that. To do that, I need to set a breakpoint at the line that logs this value. Here's how. I'm going to open the code, go to the source of the file that logs this line, which is somewhere in whyDidYouRender.js. I'm going to right-click this line and select "Add conditional breakpoint". And here I need to write a condition such that when it becomes true, the debugger stops.

Debugging Selector Changes

Short description:

To debug when a useSelector changes, add a conditional breakpoint that compares the logged hook name against "useSelector". When the condition is met, the debugger will pause execution at that line. By examining the call stack, you can trace the exact line and selector that changed. This method is especially useful for complex components with multiple selectors.

So, the condition that I want to be true: I want the debugger to stop when a useSelector changes. Right? And if I look at the console logs, I see that this line logs "hook", then my hook name, then the result. So I want to pause the debugger when why-did-you-render logs "hook useSelector result", because I only care about useSelectors; otherwise it could be useState, it could be anything else. So I'm going to add a conditional breakpoint that compares "useSelector" with, apparently, the variable hookName. So when hookName === "useSelector", Chrome will pause the execution at this line. And so, hold on, follow me: I select the text, and Chrome pauses the execution at this line. I can also check the console, and I see that why-did-you-render started logging the same thing it was logging before, right? It just said it re-rendered because of a hook change, but it still hasn't logged this line, because we stopped on it. But wait, where's the debugger? Yes, here it is. If I look at the call stack and start going up it, up, up, up, up, until I get out of why-did-you-render and into the actual code that belongs to my app, what I see is the exact line, the exact selector that changed, the exact selector that got logged into the console because it changed. And this is extremely convenient when you have a complex component with like 10 or 15 selectors. I wish there was a better way; it is a pretty cumbersome way, but it is extremely convenient for matching the exact change to the exact selector or other hook.
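The breakpoint condition is just a JavaScript expression that Chrome evaluates on every hit, pausing only when it is truthy. Treat the exact variable name `hookName` as an assumption about why-did-you-render's internals; you read it off the line you're breaking on. A small simulation of what the condition filters:

```javascript
// The expression typed into "Add conditional breakpoint" would be:
//   hookName === "useSelector"
// Chrome evaluates it each time the line runs and pauses only when true.

// Simulated stream of why-did-you-render log calls:
const logs = [
  { hookName: "useState" },
  { hookName: "useSelector" },
  { hookName: "useEffect" },
  { hookName: "useSelector" },
];

// Only these hits would pause the debugger:
const wouldPause = logs.filter((l) => l.hookName === "useSelector");
console.log(wouldPause.length); // 2
```

Any boolean expression works there, so you can narrow further, for example by also checking the logged value's shape.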

Debugging Complex Apps and Slow Typing

Short description:

We learned how to debug complex, real-world apps by recording with dev tools, identifying expensive components, and optimizing rendering. In the third part, we will discuss slow interactions with large nodes and how they affect typing performance.

So here's what we learned. We learned that the Widgets Editor renders because of hook number 5, which apparently is the widgets useSelector. Then there's still hook number 47. Or maybe it's not 47; it might be a wrong number, maybe React DevTools miscounted. But if I continue execution, the debugger stops again, and I keep going several levels up, up, and up again until I get to my code. And I get to a hook called useDynamicAppLayout, which is called from the Widgets Editor; it's called here. And then useDynamicAppLayout calls the canvas-widget useSelector. So that's another hook that changes: the canvas-widget useSelector inside useDynamicAppLayout. And then I continue. And again, I stop at the same breakpoint and go several layers higher until I get to the Widgets Editor, and I see my widgets useSelector again. And this matches what I saw. Just running a quick sanity check: does this match what I saw in the React Profiler? It seems to, because three hooks got changed, two of them were identical, the first hook that changed was the widgets useSelector, and the third hook that changed was also the widgets useSelector. So, so far so good. And... yep. So this is what I do to debug this interaction, to debug this concrete component. And once I figure out which hooks are actually changing, I go and try to find a way to optimize this useSelector. Our workshop does not cover that, but you could do some stuff: you could optimize the selector itself, you could memoize your selectors, you could do a deep equality comparison. You can do a lot of things to optimize this, but this is generally the process I follow with actual, complex, real-world apps.
I start recording with the DevTools, and I look at whether it's mostly React code or non-React code or maybe something else. And if it's mostly React code, I check whether it's React rendering, which in this case it was again. I try to figure out which components are the most expensive, and I just keep notes, like this. And then once I've found the most expensive components, I try to figure out why they rendered, using React DevTools, using why-did-you-render. And then I try to figure out how to make them render less often, or render fewer of them, et cetera. So it's basically the same process: the same process for simple apps, the same process for big apps. It's just that with big apps you need to repeat it over and over and over again until you've solved all the expensive interactions, all the expensive components.
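One of the fixes he alludes to, making a useSelector that returns a fresh object stop triggering re-renders, is passing an equality function as useSelector's second argument; react-redux ships a `shallowEqual` for exactly this. Below is a simplified re-implementation to show what such a comparator does, not the library's actual code:

```javascript
// Simplified shallow comparison, similar in spirit to react-redux's shallowEqual:
// two objects are "equal" if their top-level values are identical references.
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== "object" || a === null || typeof b !== "object" || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => Object.is(a[key], b[key]));
}

// In a component this would be used as (sketch):
//   const editorState = useSelector(selectEditorState, shallowEqual);

const widgets = ["text", "button"];
console.log(shallowEqual({ widgets }, { widgets })); // true — no re-render needed
console.log({ widgets } === { widgets });            // false — plain === would re-render
```

With the comparator, a selector that rebuilds the same wrapper object around unchanged values no longer causes a render.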

So this was the second part of the workshop. We looked at a real-world app. We profiled it with Chrome DevTools and the React Profiler. We looked at why-did-you-render, which shows us why exactly the components rendered. It's a very convenient tool; I like it a lot. It shows us what changes inside selectors, inside hooks. And we looked at how to match the hook changes inside the React Profiler with the actual code. You can do this using the React Components pane, which is precise in maybe 90% of cases. And you can do this using why-did-you-render, which is a bit more cumbersome but 100% precise. Sweet.

So in the third part of the workshop, we will talk about effects, which can also make your app slow, and about a few other things. So let's take a look at another slow interaction in our first app. One interaction we profiled and solved was slow typing into the nodes when you have a lot of nodes. But here's another edge case, another situation where this issue happens. So let's say we fixed this issue, deployed the code to production, and made the announcement that, look, typing is now fast. But some users still came to the comments and said, hey, it is still slow for me. And you went to investigate, and you found out that these users have a different setup. Specifically, instead of having a whole lot of regular nodes, they have several really, really, really large nodes. They can have as many nodes as they need, but if they have several huge nodes, what happens is typing is indeed slow. Despite the fix we made, when you have huge nodes in your app, typing is still slow. So let's try to reproduce this case. I'm going to create maybe 20 huge nodes, and I'm going to click Add a node. So now I've got like 20 nodes that are really, really big, right? But none of them is focused. Like, this node is huge, but it's not focused. I'm going to open a small node, and I'm going to try typing into it. And as I type into this small node, I again notice that typing is slow. So look, I'm typing, and the main thread gets completely red. I'm typing more, and it fills with red again.

Debugging Performance with Profiler

Short description:

To debug performance issues, start by recording the interaction in the Performance pane. Look for spikes of CPU activity that correspond to slow interactions. Zoom into the recording and analyze the react-dom development code. Avoid jumping to the Profiler right away, as it does not show the effects that run after the render. The React Profiler only shows the time spent re-rendering components, not the effects. This can lead to misunderstanding the actual performance issues.

So a lot of work, and yeah, it feels slow. So how do we debug this? How do we profile this? My first step, again, is always: I go to the Performance pane, let's switch this back to the light mode that we had through the whole workshop, I go to the Performance pane, and I click record. And I record the same interaction once, twice, thrice, and stop. And then I look through the recording. I see, again, three spikes of CPU activity that correspond to me typing into the editor. So this is one, this is two, and this is three. And I zoom into the first one and start looking through it. And again, I notice some react-dom development code, et cetera, et cetera. So again, a lot of react-dom code, right? So at this point, I could think: time to switch to the Profiler and try to record this with the React Profiler. And this is what I've been doing so far. Yeah, don't be mistaken, this is what I've been doing so far. I've been skipping one very important step. In fact, if you do this every time you have a performance issue, if you see some react-dom code and jump to the Profiler right away, you would make a mistake. Because if I do this now, if I switch to the Profiler and record this interaction with the React Profiler, I record it once and stop, what I would see is that I had one render, this render took 26 milliseconds, and inside this render only the App component rendered, and the App component took these 26 milliseconds. But what I might not notice is that while the render took 26 milliseconds, the effects that ran after the render took ten times more time. They took 300 milliseconds. And the React Profiler does not show the effects at all. It only shows the time that the components spent re-rendering. If a component has an expensive effect, the React Profiler would not show it in this view at all. And even worse.

Analyzing React Effects

Short description:

Folks, who of you still use React 16 and React 17? Could you send your current React production version into the chat? A few of you are on React 17, and with React 17 this is even trickier, because with React 18 you could at least notice that, hey, the passive effects ran and took some time. But if you use React 17, if you haven't upgraded to React 18 yet, what would happen is you would see the render duration, but not the effect duration at all. And this might be very confusing. As long as I see the React code still taking most of the time, and it is, 50 milliseconds plus 315 milliseconds, almost the full duration of this keypress event, I can just look at the event, look at the named functions, and figure out what exactly is happening in the React code. Here's a very nice cheat sheet I have for this: a React internals reference. This is true for React 16, React 17, React 18, whatever React version you use. So renderRootSync means React is busy rendering some components. We have commitLayoutEffects, which runs useLayoutEffect, componentDidMount, and componentDidUpdate. We also have a function called flushPassiveEffects, which, per the reference, is the function that runs useEffect. And so what this means is: I can't go to the React Profiler and figure out what's happening here. Instead, I need to keep going further down the flame chart until I leave the React code and stumble upon the actual effect that is getting executed. And it's quite easy to find, it has a different color. And if I find it, if I click it, I would see that here, here's this effect: useEffect, save nodes to local storage. All right, so we have found the culprit, we have found the expensive effect. How do we make it cheaper? So, at this point, there are two ways to analyze it.
The first thing I could try to figure out is why these effects rerun. Does the effect actually need to run, or is it just its dependency array that's changing unnecessarily? Maybe we're passing some function that changes every time.
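The cheat sheet he describes, condensed into a lookup table. The descriptions are paraphrased from the workshop; these function names appear in React's development builds and can shift between versions:

```javascript
// What the big React rectangles in a DevTools flame chart mean
// (React 16–18 development builds).
const reactInternalsCheatSheet = {
  renderRootSync: "render phase: React is calling your components",
  commitMutationEffects: "commit phase: React is applying DOM mutations",
  commitLayoutEffects: "useLayoutEffect, componentDidMount, componentDidUpdate",
  flushPassiveEffects: "useEffect callbacks and cleanups (run after the commit)",
};

for (const [fn, meaning] of Object.entries(reactInternalsCheatSheet)) {
  console.log(`${fn}: ${meaning}`);
}
```

The practical rule from the workshop: time under `renderRootSync` is debuggable with the React Profiler, while time under `flushPassiveEffects` is invisible to it, so for effects you have to keep reading the flame chart itself.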

Folks, who of you still use React 16 and React 17? Could you send your current React production version into the chat? I'm just wondering: who of you are on 18? Who still uses 16? Who still uses 17? Could you send it into the chat? 17, 17, 18, 18, 18, 18, 18, 17, 18. Oh yeah, nice. A lot of you are already on 18. I keep asking this during every workshop that I do, and I just see how more and more people upgrade to 18. This is fascinating. All right, but a few of you still... ooh, AngularJS 1.x. Okay, I'm sorry for that. My condolences. Anyway, a few of you are on React 17, and with React 17 this is even trickier, because with React 18 you could at least notice that, hey, the passive effects ran and took some time. But if you use React 17, or no, sorry, not "for some reason", if you use React 17, if you haven't upgraded to React 18 yet, what would happen is... okay, I'm going to switch to React 17: set useReact18 to false and comment out the part that does not exist in React 17. So if I switch to React 17, I get this warning that I custom-coded, and I re-record the same interaction using the React Profiler. Oh, and stop. Wait now. Oh, reload. Oop, and stop. Wait. Oh no, I did not switch. What did I do wrong? Oh yeah, right, I need to restart the Vite server because it still uses React 18. So if I switch completely to React 17 in this app, or if you are still using React 17, and I try to record the same interaction using the React Profiler, what I would see is the render duration, but I would not see the effect duration at all.
And this might be very confusing, because unless I go and directly match what I see in the Performance pane with the Profiler pane, and unless I notice that the Profiler pane tells me it's 25 seconds, sorry, milliseconds, while the Performance pane tells me the interaction actually took 350 milliseconds... unless I notice this, what could happen is I would look at this component and try optimizing the most expensive components like we were doing in the past two hours, and that would not give me any benefit, because this is only responsible for like 10% of the whole interaction cost. So this is why there's one more step in the flame chart that I've been skipping so far. Whenever I record some interaction using Chrome DevTools, I don't only need to check whether it's mostly React code or mostly something else. I also need to check whether it's React rendering or React effects. And I'm going to switch back to React 18, yarn add react@18, yarn start, and record this again with React 18. This is a wrap, F, and stop. And I have to look through my recording and figure out what exactly is happening here and where exactly React spends most of its time: in rendering or in effects. So, let's see what's happening. We get the keypress event; the keypress event triggers the text input event, input, et cetera. Then I get some calls from CodeMirror: anonymous, textarea input, poll, yada, yada, yada, all this code. I don't really care about these functions, they just run their child functions, right? They don't spend much time on their own; they're mostly just structures for other functions. So, I keep going down, and I see some anonymous from index.js. All right, this does look like my code.
We get some React, and we get some event handler. And then we call saveNodes, then we call saveNodes again, and then we finally get to React code. And so saveNodes splits into two rectangles; it calls two functions: one dispatchSetState, and another anonymous. If I keep going further, the second anonymous is some React Redux code, and then again I get to react-dom.development. And at this point, if it never got to React code, if it was mostly non-React code, then I'd need to figure out what's actually happening there, what the most expensive functions are. But as long as I see the React code still taking most of the time, and it is, 50 milliseconds plus 315 milliseconds, almost the full duration of this keypress event, I can just look at the event, look at the named functions, and figure out what exactly is happening in the React code. And here is a very nice cheat sheet that I have for this: a React internals reference. This is true for React 16, React 17, React 18, whatever React version you use. So renderRootSync means that React is busy rendering some components. If you scroll further down the flame chart, you can even see these components. You can see the nodes list component being rendered; that's just its function being called, right? You can see the node button being rendered. You can see some ReactMarkdown being rendered. Button group, buttons. So you can see all the components that are actually being rendered further down the flame chart. Then, commitLayoutEffects is the part that runs useLayoutEffect, componentDidMount, and componentDidUpdate. We don't really have it here, it seems... oh no, we do have it. Here's commitLayoutEffects. We also have commitMutationEffects, which I think just updates the DOM. I'm not fully sure.
I don't fully remember, but it's never a bottleneck. We have commitLayoutEffects; it runs some layout effects, and it's very cheap in this case. And then we have onCommitRoot, which is something else again; I don't care about it. Then we have the function that's called flushPassiveEffects, which, per the reference, is the function that runs useEffect. And when I see this, when I see that flushPassiveEffects is responsible for the biggest chunk of the recording, what this means is I can't go to the React Profiler and figure out what's happening here. The React Profiler would not show me this. Instead, I need to keep going further down the flame chart until I leave the React code and stumble upon the actual effect that is getting executed. And it's quite easy to find, it has a different color. And if I find it, if I click it, I see that here, here's this effect: useEffect, save nodes to local storage. All right, so we have found the culprit, we have found the expensive effect. How do we make it cheaper? So, at this point, there are two ways to analyze it. The first thing I could try to figure out is why these effects rerun. Does the effect actually need to run, or is it just its dependency array that's changing unnecessarily? Maybe we're passing some function that changes every time.
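Whether an effect reruns comes down to React comparing each dependency against its value from the previous render with Object.is. A minimal model of that check, a deliberately simplified version of what React does internally, not its actual code:

```javascript
// Simplified model of how React decides whether to rerun a useEffect:
// every entry in the deps array is compared with Object.is.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: the effect always runs
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

const nodesA = [{ id: 1, text: "hello" }];
const nodesB = [{ id: 1, text: "hello" }]; // same content, new reference

console.log(depsChanged([nodesA, "active-1"], [nodesA, "active-1"])); // false — skipped
console.log(depsChanged([nodesA, "active-1"], [nodesB, "active-1"])); // true — reruns
```

This is why the question "can we prevent the nodes object from changing?" matters: a deps array like `[nodes, activeNodeId]` retriggers the effect whenever `nodes` gets a new reference, even if its contents are identical.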

Analyzing useEffect Dependencies

Short description:

To track which values update in your components, you can use a hook called useWhyDidYouUpdate. It lets you see the exact dependencies of a useEffect that change. In this case, the nodes dependency changes every time the useEffect runs. However, it is not possible to prevent the nodes from changing in this scenario. The getNodes function is not the bottleneck, and using a lazy initializer function would not solve the issue. Any other ideas on how to optimize this?

Maybe we just need useMemo or useCallback or something, right? So let's try to figure that out. We have our useEffect that takes 300 milliseconds to run, and it reruns because its nodes dependency changed and its activeNodeId dependency changed. Now, in this case, all these values are defined right here, so it's very easy to see what they mean and what they do. But if this was a bigger app, if there were multiple levels of indirection, this would be harder to see. And so one tool that I really like to use to figure out which dependencies have changed is a hook called useWhyDidYouUpdate. If you Google for useWhyDidYouUpdate... I think previously the site had just a copy-pasteable example; now it's all dependencies. I don't like dependencies. It was very convenient before. Anyway, there are a bunch of hooks called useWhyDidYouUpdate, and you could install some library to get that hook, but until a few months ago there was a website that would give you just copy-pasteable code that you could put in your app and then use like this to track which values update in your components. And I really like this because you don't need to install anything; you just copy-paste. It's so convenient. I'm gonna regret it. Oh, no, it reloaded anyway. Well, whatever. Let's go to my app. I'm going to go to the app index file, and here's my useEffect. And I want to see which of these values actually change. So what I'm going to do is paste the hook that I just copied, or you could just install a library that has it. And I'm going to call this hook like this, with a string that identifies this log. So let's say it's "app useEffect". And I'm going to pass the values that I want to track. By default it's props, by default it's used to track which props change, but you can use it for anything.
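The core of a useWhyDidYouUpdate-style hook is a plain diff of the previous values against the current ones; the hook itself just stores the previous object in a useRef and runs the diff in a useEffect. Here is that diffing part extracted as a standalone function, a sketch rather than the exact code from any particular library:

```javascript
// Returns { key: { from, to } } for every value whose reference changed —
// the same shape of output a useWhyDidYouUpdate hook logs to the console.
function diffChangedValues(prev, next) {
  const changes = {};
  for (const key of new Set([...Object.keys(prev), ...Object.keys(next)])) {
    if (!Object.is(prev[key], next[key])) {
      changes[key] = { from: prev[key], to: next[key] };
    }
  }
  return changes;
}

const prev = { nodes: ["a"], activeNodeId: 1 };
const next = { nodes: ["a", "b"], activeNodeId: 1 };
console.log(diffChangedValues(prev, next)); // only "nodes" is listed as changed
```

In a component, the hook would be called like `useWhyDidYouUpdate("app useEffect", { nodes, activeNodeId })`, and on each render it would log exactly this kind of diff.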
In this case I care about the notes and activeNoteId values, which are the dependencies of this useEffect. Cool, so let's see what it does. I'm going to open a note and type into it once, twice, thrice. I see the logs from useWhyDidYouUpdate, which show me the exact dependency of this useEffect that changes. And I can see that it's the notes dependency, and it changes from this object to this object. I could even right-click the first value and store it as a global variable, then right-click the second one and store it as a global variable too, and then deep-compare them or do whatever I want with them. temp1, temp2 – they are not equal, right? The notes have changed. So the notes object changes, and that's why the useEffect runs. So my question to you: we've already seen this notes object change – can we optimize this? Can we do anything about it? Can we prevent the notes from changing? If yes, how, and if not, why? I'm typing into the notes editor, and then the useEffect runs. Someone suggests changing getNotes – how exactly should we change it? A participant unmutes and suggests: I think we need to pass it in as a function, right? I'm not sure if that's what's causing the issue, but yeah. That's actually a great point. Right now we're calling getNotes on every render, and passing the function itself would make it run only once when the component is created. But no, this is not the bottleneck. If this was the bottleneck – actually, there are two ways this could be a bottleneck.
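The core of a useWhyDidYouUpdate-style hook is just a shallow diff between the previous and current dependency values. Here's a minimal sketch of that diff logic on its own (in a real hook you would keep the previous values in a useRef and run the diff in a useEffect, which I'm omitting so the logic stands alone):

```javascript
// Shallow-diff two dependency objects and report which keys changed.
// This is the core of a useWhyDidYouUpdate-style hook; the real hook
// stores `prev` in a useRef and logs `changes` from inside a useEffect.
function diffDeps(prev, next) {
  const changes = {};
  for (const key of new Set([...Object.keys(prev), ...Object.keys(next)])) {
    if (!Object.is(prev[key], next[key])) {
      changes[key] = { from: prev[key], to: next[key] };
    }
  }
  return changes;
}

// Example: the notes object is recreated on every keystroke, so it fails
// the reference-equality check, while activeNoteId stays the same string.
const render1 = { notes: { a: 'text' }, activeNoteId: 'a' };
const render2 = { notes: { a: 'text!' }, activeNoteId: 'a' };
const changed = diffDeps(render1, render2);
console.log(Object.keys(changed)); // → ['notes']
```

This is exactly what the hook's console output shows: only `notes` changed between renders, so that dependency is what retriggers the effect.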
The first way: if the getNotes function itself was expensive, we would just see that in the trace. The other way this could be a bottleneck is if useState was calling getNotes and setting notes to its return value on every render – that's why you'd suggest passing it as a lazy initializer function. But that's not what's happening. useState only uses the value that getNotes returns during the first render; after that it just ignores it. So, yeah, this would not help – but great suggestion. Any other ideas, folks? We have a useEffect that runs on every key press because the notes object changes. Can we prevent it from changing? And if yes, how? And if we can't prevent it from changing, then why? It's alright if you guess wrong; it might be tricky. – Just a wild guess: in that useEffect block, you have two things there. Is it possible to separate them out? I don't know if that's what's causing the issue, because I don't know what that function does. – Oh yeah, that's a good idea, that's a good idea. It is actually possible – you're thinking in the very right direction. I don't think it would help here, though. Let's see what saveNotesToLocalStorage actually does.

Analyzing Node Saving and Optimizing useEffect

Short description:

We can debounce the note saving process: instead of saving notes to localStorage on every key press, we can save them when the browser is idle or the user has stopped typing. However, we cannot eliminate the useEffect hook or replace it with useMemo in this case. The notes object changes as we type into the notes editor, and we need to save the notes to local storage whenever they change. To optimize the effect, we can analyze the performance recording and identify the marked function as the most time-consuming part. Calling marked on each note is expensive because the notes are large and parsing them takes a while. We can consider alternatives such as removing marked, finding a cheaper library, or optimizing the conversion process. To measure the duration of each loop iteration, we can use the console.time function, which starts and stops a timer and logs the duration into the console. This allows us to identify the most expensive iterations and optimize accordingly.

So, saveNotesToLocalStorage. It takes the notes object and the activeNoteId, transforms the notes object – it serializes every note, converting it into a storable format – then stringifies everything and saves it into localStorage. So, theoretically, you could split the effect, because activeNoteId is independent from the notes; they are set independently. You can see there's this whole bunch of logic that saves the notes, and then just one line that saves the activeNoteId. So you could definitely separate them, and that might work in some cases. In this case, it still wouldn't help, because it's the notes object that changes every time. activeNoteId does not change, so it doesn't matter whether we separate them or not. Qwinkly suggests debouncing the save, and that is a great approach, yes. If we debounce saveNotesToLocalStorage, we don't need to save notes to localStorage on every key press, right? We can do it when the browser is idle, or when the user has stopped typing. That is an amazing solution, yes. Ikor asks: can we get rid of useEffect and use useMemo? I'm not exactly sure how, but if you have an idea – a code snippet that you think could work here – feel free to suggest it. I'm not really sure how to replace this with useMemo. But anyway, the core idea here is that the notes object changes because it stores all the notes, and it changes because I'm typing into the notes editor. So I'm typing here, the notes object changes, and that causes the useEffect to run. There's nothing really we can do about this. It has to change.
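The debouncing suggestion can be sketched with a plain trailing-edge debounce. This is a generic sketch, not the workshop's code; the schedule/cancel parameters default to setTimeout/clearTimeout and are injectable only so the behavior is easy to test:

```javascript
// Trailing-edge debounce: `fn` runs only after `wait` ms of silence.
// schedule/cancel default to real timers; they are parameters purely so
// this sketch can be exercised without waiting on the clock.
function debounce(fn, wait, schedule = setTimeout, cancel = clearTimeout) {
  let timer = null;
  return (...args) => {
    if (timer !== null) cancel(timer); // a new call resets the countdown
    timer = schedule(() => {
      timer = null;
      fn(...args);
    }, wait);
  };
}

// Usage sketch (hypothetical wiring): save at most once per 500 ms of
// idle typing instead of on every keystroke.
// const debouncedSave = debounce(saveNotesToLocalStorage, 500);
// useEffect(() => { debouncedSave(notes, activeNoteId); }, [notes, activeNoteId]);
```

With this in place the effect still runs on every keystroke, but the expensive save only happens once the user pauses.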
It stores the notes, and we have to save them to local storage when they change. So we have figured out why this effect runs, and we are not able to prevent it from running. The next step – the final step – is to go back to React Profiler and try to figure out how I can call this effect less often (which is what Qwinkly suggested: we could debounce it), or how I can make the effect cheaper. And here's one last trick that I like to use, the last trick I wanted to show you today. If I go to the Performance pane – let's see – here's our effect, useEffect. It takes 300 milliseconds, and all of that time is taken by the saveNotesToLocalStorage function. Now, what's happening inside saveNotesToLocalStorage is just a bunch of code running like this. We create the transformedNotes object. Then we walk over every note and create a transformed note that takes the original note, adds an html property (marked.parse), and converts the date into an ISO string. Then we stringify the object, and then we save it into local storage. And if you look into the performance recording – let me make it narrower – I would see that most of the time in here is taken by the marked function. I can also see some empty time where the browser maybe failed to capture what exactly is happening. This typically should not happen, and this is why it's sometimes useful to just re-record. But I see that most of the time here is spent running the marked code. So we have 20 huge notes, and what we do here is call marked.parse on each note.
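Reconstructed from the flame chart, the serialization step looks roughly like this. The names and shape are my approximation of the workshop code, with `parseMarkdown` injected as a parameter so the sketch doesn't depend on the actual marked library:

```javascript
// Approximate shape of the expensive part of saveNotesToLocalStorage:
// for every note, render its markdown to HTML and serialize its date,
// then stringify the whole thing. parseMarkdown stands in for marked.parse –
// this is where the ~300 ms goes, once per note, on every keystroke.
function serializeNotes(notes, activeNoteId, parseMarkdown) {
  const transformed = {};
  for (const [id, note] of Object.entries(notes)) {
    transformed[id] = {
      ...note,
      html: parseMarkdown(note.text), // expensive: full markdown parse
      date: note.date.toISOString(),  // Date → ISO string for storage
    };
  }
  return JSON.stringify({ notes: transformed, activeNoteId });
}

// In the app, this string would then go into localStorage, e.g.:
// localStorage.setItem('notes', serializeNotes(notes, activeNoteId, marked.parse));
```

Seeing it laid out like this makes the problem obvious: the markdown-to-HTML conversion of every note is redone from scratch each time any note changes.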
marked is a library that takes markdown and converts it into HTML. So calling marked on every one of these notes is apparently expensive because, well, the notes are huge and parsing them takes a while. One way to optimize this might be to get rid of marked – to get rid of the conversion entirely – or to find a cheaper library, or do something else. But before that: if you wanted to figure out which of these notes are the most expensive – where the browser spends the most time calling marked.parse, in which loop iteration – how would you do that? That's my question to you. How do you measure how long each iteration takes? Because the profiler does not show this. It only shows the functions that are called from saveNotesToLocalStorage; it does not show which loop iterations are cheap versus expensive. Maybe it's only the first iteration that's expensive – maybe marked just needs a bunch of time to initialize, and all the subsequent calls are cheap, right? So I can't just assume every iteration costs the same. Qwinkly suggests console.time, and this is the perfect function for this purpose. Whenever you need to go below the function level – to figure out how much time each part inside a single function takes – you would not be able to see this in the Performance pane by default, and this is where console.time comes in really handy. So you might have heard of console.time. It basically works like this: you call console.time and pass in whatever label you want – in this case, let's call it "loop iteration" plus the note ID – and that starts the timer for whatever code you want to measure. Then you call console.timeEnd with the same label, and that stops the timer.
And the moment you call console.timeEnd, the browser logs the measured duration into the console. You might have heard of this use case. Let's see how it works. I'm going to open the notes and try typing, and you see: loop iteration 11 milliseconds, 19, 12, 9, 10 milliseconds, and then 0.04. So a bunch of expensive iterations, and one very cheap one. Again, a bunch of expensive iterations, one very cheap iteration.
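The per-iteration timing from the demo looks roughly like this (`parseMarkdown` again stands in for marked.parse, and the note data is made up for illustration):

```javascript
// Time each loop iteration with a unique console.time label so expensive
// iterations stand out in the console – and, in Chrome, in the Timings
// track of a performance recording.
function transformNotes(notes, parseMarkdown) {
  const transformed = {};
  for (const [id, note] of Object.entries(notes)) {
    console.time(`loop iteration ${id}`);    // start the timer for this note
    transformed[id] = { ...note, html: parseMarkdown(note.text) };
    console.timeEnd(`loop iteration ${id}`); // logs e.g. "loop iteration a: 11ms"
  }
  return transformed;
}

const notes = { a: { text: '# a big note' }, b: { text: 'tiny' } };
const out = transformNotes(notes, (t) => `<p>${t}</p>`);
```

The label must be unique per iteration (here the note ID is baked in); if every iteration reused the same label, the timers would collide and the measurements would be meaningless.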

Using Console Time for Performance Debugging

Short description:

console.time in Chrome DevTools is not only logged to the console – it is also logged to the Performance pane. This allows you to track the duration of specific parts of your function and identify the most expensive sections. By annotating your code with console.time, you can easily see the duration of different parts of your function without having to match function calls with lines of code. This simplifies the debugging process. In this workshop, we covered various topics such as Chrome DevTools, React Profiler, why-did-you-render, matching hook numbers with code, debugging effects, and using console.time to debug effects. We also discussed the challenges of enforcing performance checks at the PR/CI level and recommended using Lighthouse for checking loading performance.

And this is nice. This is handy. I could do this with just console.log myself, right? But one cool thing about console.time is that it not only logs into the console – it also logs into the Performance pane. So if I now click record, press a letter, stop recording, and look: here's my anonymous useEffect, here's my saveNotesToLocalStorage function, which gets split into two functions because for some reason Chrome DevTools sometimes does that – it's still the same function, because in the code it's the same function. And I still can only see that the marked functions are getting called here. But if I scroll up and look at the Timings pane, which appears whenever I start adding any timings, I would see: loop duration for this note, loop duration for this note, loop duration for this note, et cetera – a loop duration for every note that I have annotated like this, including the very cheap last one. And this is super, extremely convenient for figuring out what exactly is happening inside your function and which parts of it are the most expensive. In this case, all loop iterations cost more or less the same; but if there was just one expensive iteration, you would be able to track it down like this and figure out what's actually happening inside it.
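If you want entries in the Timings track with full control over labels, the standard User Timing API does the same job as console.time. A quick sketch – this is the regular performance.mark/performance.measure API, available both in browsers and in Node, with a hypothetical `timedSection` helper name:

```javascript
// User Timing API: mark the start and end of a section, then measure.
// In Chrome DevTools these measures show up in the Timings track of a
// performance recording, much like console.time labels do.
function timedSection(label, fn) {
  performance.mark(`${label}:start`);
  const result = fn();
  performance.mark(`${label}:end`);
  performance.measure(label, `${label}:start`, `${label}:end`);
  return result;
}

const value = timedSection('parse-note', () => 'parsed');
const [measure] = performance.getEntriesByName('parse-note', 'measure');
```

A small advantage over console.time is that the measures stay queryable from code via `performance.getEntriesByName`, so you can also ship them to your analytics instead of only eyeballing them in DevTools.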

So Qwinkly asks: sorry, what's the value of this? We could already see how much time marked.parse was taking. Yeah – maybe this is not a great example; I would have to think of a more complicated one. What if we had some huge componentDidMount, or some useEffect in a charting library? I find it mostly useful in bigger, more complicated functions. In this case, you know that marked.parse is the function that does the parsing – and because I told you this, you know that its cost is going to be more or less proportional to the size of the note. But when you're dealing with some unknown code, especially if that code is large – it's definitely not the first thing I reach for when debugging, but where I find it useful is when I get a bunch of function calls in the trace, maybe 40 of them, all more or less cheap, and I don't really want to look at each of them and match them with the code: this function happens on this line, that function happens on that line. Sometimes what I would do instead is annotate some parts of the function with console.time, and I would write the annotations the way I understand the function, instead of having to go back and forth matching function calls with lines and trying to figure out what happens. Then I would be able to see: look, maybe this if branch takes five milliseconds and that if branch takes seven milliseconds. And I don't really need to look at which functions each branch calls if I have my console.time annotations. It just makes some parts of the debugging simpler.

So, it's not the right tool for every situation, but I tend to use it sometimes. All right, this was the last thing I wanted to show you during this workshop. We looked at multiple things today. We looked at Chrome DevTools and React Profiler, and how to profile apps with them. We looked at a real-world app. We learned about why-did-you-render. We learned how to match hook numbers with code. We learned how to debug effects, and why it's important to follow the flame chart and look at the render phase versus commitLayoutEffects and flushPassiveEffects – because if you just go straight into React Profiler, you might miss that the most expensive part of the render is actually the effects. And we learned how to use useWhyDidYouUpdate and console.time to debug why an effect updated and to log some parts of an expensive function or an expensive effect. So this is it for today. Feel free to save the link from the chat; it will stay public – let me drop it into the chat once again. And now it's time for Q&A. Do you folks have any questions about everything we went through today? Or about React performance in general? – I do have a question. – Yeah, go ahead. – So, the why-did-you-render library: how long has it been around? And do you see that project being continued? Because this is open source, I think. Do you see a trend of more and more people contributing to it, or maybe another library that does similar things? – Sorry, I missed the first part of the question. How long did I wait for this library? – When did it become released to the public, I guess. – Oh yeah. I don't really know, to be honest. I think it's been around for a few years. I don't think it's actively maintained nowadays.
So if anything better pops up, I would want to learn about it. You can see the last publish was in 2022. But it still works well. I have some wishes for it – there are some things I would love to see added. But as always happens, at some point folks lose interest in a project and it just stops being maintained. So far, though, it's still really good; it works well. Does this answer your question? – Yeah, it does. Thank you. – Right. Any other questions, folks? About the stuff we went through? About React performance in general? About web performance in general? Alright. Alexander asks: how would you recommend enforcing performance checks at the PR/CI level? Is there tooling to check performance before or after a change? Ooh, this is my favorite subject. So, there is no good way to do this, I think. I'm going to give the short answer, then the long answer. The short answer is: if you want to check loading performance, you use Lighthouse.
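For the loading-performance side, Lighthouse can be wired into CI with Lighthouse CI. Here's a minimal `.lighthouserc.json` sketch – the URL and score threshold are placeholders you'd adjust for your app:

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```

Running `lhci autorun` against this config collects several Lighthouse runs (multiple runs help average out noise) and fails the build if the performance score drops below the threshold.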

React Performance Monitoring and Optimization

Short description:

If you want to check for runtime performance, it can be harder to do. There are tools and articles available that provide detailed information on automated React performance testing and monitoring. However, it is important to note that preventing performance regressions is not solely a technical challenge. It also involves cultural aspects and the mindset of the team. Simply introducing a process or tooling for performance monitoring may not be effective if the team does not prioritize performance. It is crucial to foster a culture where performance is valued and to hire individuals who care about performance. When dealing with third-party libraries, it can be more challenging to optimize performance. In some cases, submitting a pull request to the library may be an option, but if the library is not actively maintained, patching the library directly in the codebase using a tool like patch-package can be a workaround. However, understanding the third-party code and making changes to it requires careful analysis. The React Forget compiler can help improve performance, but it will not solve all performance issues.

If you want to check runtime performance, that's harder, and the article I sent earlier goes over this in detail – automated React performance testing – so I'm not going to repeat myself here. There are also some tools that I have not used: there's a CI performance-testing companion I haven't tried myself; hopefully it's good. So this was the short answer: these are the tools, these are the solutions. The long answer is that this is tricky, for two reasons – a technical one and a cultural one. The technical one is that, from a technical perspective, you will get a lot of noise. I've tried this in the past; there's always 10–20% of noise, so unless the regression is really big, your monitoring will not catch it. Netflix has found some ways to reduce this noise. They do two things: anomaly detection, and – sorry, I don't remember the second one exactly, you'd better read it yourself – but they talk a lot about the changes they've made, and it's an amazing case study. The most naive approach would not work; please check the Netflix case study to see how they reduced the noise. The second part of the answer, the cultural one, is that very often when I work with teams, I see something happen around culture. This is maybe not something you can affect as an engineer, but if you plan to grow and become more senior, it's really useful to know.
So there's a difference between culture and process. Process is when you take some performance tooling, set up performance monitoring, and force people to make sure performance stays good whenever they introduce changes. You introduce a new process; if people follow the process, everything's good. Culture is when you hire people who actually care about performance. And there's also incentive – you could, say, require in a promotion case that people make something faster. These three approaches give different results, and working with my clients in different companies, I have found that the process approach generally does not work well unless people already care about performance. So, I don't know why you're asking about performance monitoring – maybe everyone on your team cares about performance, and then that's good. But very often I talk to engineers who care about performance while the rest of the team does not. And they ask: hey, how do I make the rest of the team care? Maybe I set up some performance monitoring, maybe I do something like that? And you cannot really do that as an engineer. It stems from the culture, it stems from hiring, and it stems from the leadership. You can talk to leadership and try to convince them. Does this answer the question? – Yes, awesome, great. – Qwinkly asks: in real-world apps, I often see the flame chart filled with third-party library code. Often it simply looks like it's doing a ton of little things; nothing really stands out. Yeah. So this is complicated – I don't have a good framework for this, a good solution for this.
The way I approach this: typically I look at the Performance pane, and I go over it trying to notice the spots where one big function splits into a bunch of smaller functions – basically where the tree branches out. Because when that happens, maybe we have a loop that calls the same function over and over in a row, or maybe the function does a lot of things and calls a bunch of other functions. Either way, that attracts my attention, because typically that's a good point to start optimizing from. And this works well with your own code. With third-party code – when you work with something like AG Grid – there's way less you can do. I still go into third parties and try to optimize them. The good-citizen action is to submit a pull request to the library to get it fixed, but in more than half of the cases the library is not maintained, or they take a year to merge pull requests. So if the issue lies in AG Grid or some other third-party library, what I've sometimes done is patch the library straight in node_modules. There's a package on npm called patch-package, and what it allows you to do is make changes to node_modules, save those changes into your code base, and re-apply them on every subsequent installation. Let me drop the link. And we've actually done this with a client called Causal – we were also dealing with AG Grid and had to patch some stuff inside it.
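The patch-package workflow, roughly: edit the offending file directly in node_modules, run `npx patch-package <package-name>` to capture your edit as a diff in a `patches/` folder, commit that folder, and have the patch re-applied on every install via a postinstall script. A sketch of the package.json wiring (the version number is illustrative):

```json
{
  "scripts": {
    "postinstall": "patch-package"
  },
  "devDependencies": {
    "patch-package": "^8.0.0"
  }
}
```

With this in place, every `npm install` re-applies your fix automatically, so the patched third-party code survives fresh checkouts and CI builds until the upstream library ships a proper fix.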
This section, for example, talks precisely about how we fixed AG Grid. So you do need to dig into the third-party code, which is annoying, but if you figure out what the third-party code does, then it's still very much fixable. Does that answer your question? – Yes, all right. – Emmat asks: will the React Forget compiler fix the performance issues related to React? So, it will fix some of them. It will not make your app magically faster.

React Performance Workshop Conclusion

Short description:

Fixing the boilerplate stuff is helpful, but there are other performance issues to address manually. Thank you for attending the workshop. You will receive a recording of the session. Follow me on Twitter for more performance tips. Hope to see you again!

I think it will fix the most annoying part of them, which is all the boilerplate memoization that you currently have to write by hand. But unfortunately, there's a lot more that can happen – we just saw one example: a useEffect that runs on every key press and calls a very expensive function. A compiler can't really do anything about that automatically; you need to actually look at it and figure out how to optimize it.

Thank you so much for coming to the workshop. You will get a recording – that's something to ask the GitNation folks about, but I think you will receive it in the portal. It was a huge pleasure to work with you.

A bit of self-promotion: I'm on Twitter – or X now. I post React performance trivia, so feel free to follow me for React performance tips and tricks. Thanks so much for coming, and I hope to see you some other time, maybe at a conference.

Watch more workshops on topic

React Summit 2023React Summit 2023
170 min
React Performance Debugging Masterclass
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Building instant-on web applications at scale have been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project, and how routing works with layouts. The demo application will fetch data and present it to the user in an editable form. And finally, how one can use authentication. All of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
Vue.js London 2023Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, know that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies of loading web fonts without causing big layout shifts.
Table of Contents:
What’s CLS and how it’s calculated?
How fonts can cause CLS?
Font loading strategies for minimizing CLS
Recap and conclusion
React Summit 2022React Summit 2022
50 min
High-performance Next.js
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performance and resource usage, how to scale it, and how to make the right decisions while writing the application architecture.
JSNation 2023JSNation 2023
44 min
Solve 100% Of Your Errors: How to Root Cause Issues Faster With Session Replay
You know that annoying bug? The one that doesn’t show up locally? And no matter how many times you try to recreate the environment you can’t reproduce it? You’ve gone through the breadcrumbs, read through the stack trace, and are now playing detective to piece together support tickets to make sure it’s real.
Join Sentry developer Ryan Albrecht in this talk to learn how developers can use Session Replay - a tool that provides video-like reproductions of user interactions - to identify, reproduce, and resolve errors and performance issues faster (without rolling your head on your keyboard).

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

React Advanced Conference 2022React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at: - What "rendering" actually is - How React queues renders and the standard rendering behavior - How keys and component types are used in rendering - Techniques for optimizing render performance - How context usage affects rendering behavior - How external libraries tie into React rendering
JSNation 2023
29 min
Modern Web Debugging
Few developers enjoy debugging, and debugging can be complex for modern web apps because of the multiple frameworks, languages, and libraries used. But, developer tools have come a long way in making the process easier. In this talk, Jecelyn will dig into the modern state of debugging, improvements in DevTools, and how you can use them to reliably debug your apps.
React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Too much JavaScript getting you down? New frameworks promising no JavaScript look interesting, but you have an existing React application to maintain. What if Qwik React is your answer for faster application startup and a better user experience? Qwik React allows you to easily turn your React application into a collection of islands, which can be server-side rendered with delayed hydration — and in some instances, hydration skipped altogether. And all of this in an incremental way, without a rewrite.
React Summit 2023
23 min
React Concurrency, Explained
React 18! Concurrent features! You might’ve already tried the new APIs like useTransition, or you might’ve just heard of them. But do you know how React 18 achieves the performance wins it brings? In this talk, let’s peek under the hood of React 18’s performance features:
- How React 18 lowers the time your page stays frozen (aka TBT)
- What exactly happens in the main thread when you run useTransition()
- What’s the catch with the improvements (there’s no free cake!), and why Vue.js and Preact flatly refused to ship anything similar
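The "time your page stays frozen" point can be pictured with a toy time-slicing loop: instead of doing all rendering work in one long synchronous block, work is split into small units with yield points between them, so the browser can handle input in the gaps. This sketch is purely illustrative (React's real scheduler lives in the `scheduler` package and is far more involved); the injected `now` clock is a hypothetical parameter added here to keep the example deterministic:

```javascript
// Toy model of time-sliced rendering: process work in small units and
// yield back to the event loop whenever the time budget is exhausted.
// Returns the slices of work separated by yield points.
function workLoop(units, budgetMs, now = Date.now) {
  const slices = [];
  let current = [];
  let sliceStart = now();
  for (const unit of units) {
    current.push(unit); // "render" one unit of work
    if (now() - sliceStart >= budgetMs) {
      slices.push(current); // yield point: browser could handle input here
      current = [];
      sliceStart = now();
    }
  }
  if (current.length) slices.push(current);
  return slices;
}

// Deterministic fake clock: each call advances time by 2 "ms".
let t = 0;
const fakeNow = () => (t += 2);

// With a 5 ms budget, four units of work get split into two slices.
console.log(workLoop(["a", "b", "c", "d"], 5, fakeNow)); // [['a','b','c'], ['d']]
```

A blocking renderer would be the degenerate case of one giant slice; the smaller the slices, the shorter the longest main-thread block — which is exactly what lowers TBT.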
JSNation 2022
21 min
The Future of Performance Tooling
Our understanding of performance and user experience has evolved heavily over the years. Web developer tooling needs to evolve similarly, to make sure it is user-centric, actionable, and contextual where modern experiences are concerned. In this talk, Addy will walk you through how Chrome and others have been thinking about this problem, and what updates they've been making to performance tools to lower the friction for building great experiences on the web.
React Summit 2023
24 min
Debugging JS
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience. The good news is you _can_ learn how to debug effectively, and there are several key techniques and tools you can use for debugging JS and React apps.