Deep Diving on Concurrent React


Writing fluid user interfaces becomes more and more challenging as the application complexity increases. In this talk, we’ll explore how proper scheduling improves your app’s experience by diving into some of the concurrent React features, understanding their rationales, and how they work under the hood.

29 min
21 Oct, 2022

AI Generated Video Summary

The Talk discussed Concurrent React and its impact on app performance, particularly in relation to long tasks on the main thread. It explored parallelism with workers and the challenges of WebAssembly for UI tasks. The concepts of concurrency, scheduling, and rendering were covered, along with techniques for optimizing performance and tackling wasted renders. The Talk also highlighted the benefits of hydration improvements and the new profiler in Concurrent React, and mentioned future enhancements such as React fetch and native scheduling primitives. The importance of understanding React internals and correlating performance metrics with business metrics was emphasized.

1. Introduction to Concurrent React

Short description:

Hello, React Advanced. We're here to talk about Concurrent React, diving deep into its internals. If you love Tejas's talks, you'll enjoy this one. I'm a front-end engineer at Medallia and also volunteer at TechLabs. Let's start by summarizing Concurrent React in one word or expression. Share your thoughts using the QR code provided.

Hello, React Advanced. It's great to be here, finally, at one of my favorite conferences, so thanks for having me. We're here to talk about Concurrent React. I guess this is going to be the second talk today discussing a little bit of the internals of React. If you loved Tejas's talk, you're probably going to like this one. Hopefully it's going to be just as fun.

This is me – I'm a front-end engineer at Medallia. I also volunteer at TechLabs, and you can find me everywhere as widecombinator. By the way, all of the links for this session, including the slides, are available through this QR code, in case you want to follow up. One quick heads-up: we're supposed to be diving deep here, so whenever you feel like some content needs more discussion or explanation, look for this emoji – it means we're going to discuss it further.

Cool. I'd like to start by asking you: if you had to summarize Concurrent React in one word or expression, what would you go with? For example, we watched Tejas's talk and saw that fibers are units of work, so if you had to do a similar exercise with Concurrent React, what would you say? For that, I'd really like your help – this is the QR code for you to input your opinions, and I have 30 seconds. So yeah, I'd love to hear what you think Concurrent React is all about, in one word or one expression. And it's funny, because 40 seconds sounds like a lot of time until you have to strike up a conversation in the meantime. So yeah, another 10 seconds to go.

2. Impact of Long Tasks on the Main Thread

Short description:

Let's talk about the main thread and the impact of long tasks on our apps. We often see these tasks blocking the main thread, causing unresponsiveness and user frustrations like rage clicking. Metrics and research show that slow First Input Delay can be seven times worse on mobile devices, and long tasks can delay TTI by up to twelve times on mobile. On old devices, half of the load time can be spent on long tasks, negatively impacting conversion rates. To avoid blocking the main thread, let's explore different task running strategies.

Let's take one step back and talk about the main thread, and about what's running on it. We've probably all seen this kind of thing before when profiling our apps: the long tasks, or what we see in DevTools with those red flags because they're taking too much time on the main thread. The effect of that on our apps is terrible. In this example, we have some input fields, checkboxes, links, and buttons, and when long tasks are running on our main thread, all of them are blocked, so our app becomes unresponsive. You might say this is an artificially created example, but it actually happens a lot in real apps. That's why, for example, we see things like rage clicking and other user behaviors reacting to it. And it doesn't just happen a lot out there – we even have metrics, like First Input Delay and others we've probably seen in Lighthouse or other tools, that help us spot when this happens. Beyond metrics, we have research around it. For example, a slow First Input Delay can be seven times worse on mobile devices. Long tasks also delay TTI and other metrics, and again, on mobile they can be up to twelve times longer than on desktops. Last but not least, on old mobile devices, half of the load time can be spent on long tasks. That's already bad when you say it like that, but when you see the business outcomes – on your conversion rate, for example – it's even worse. So we get to the point where we want to avoid blocking the main thread. How can we do that? To answer that, let's discuss some task running strategies.

3. Parallelism with Workers

Short description:

Let's explore parallelism in the browser using workers. Workers have some gotchas, such as limited access to variables and the DOM. We can use actors or shared memory as abstractions for workers. Actors fully own their data and communicate through messages. However, postMessage is a fire and forget mechanism with no built-in request and response tracking. Shared memory, like shared array buffer, allows for direct memory access, improving communication efficiency.

So let's say we have four tasks we want to run in the browser, A, B, C, and D. We could, for example, go with a parallel approach. So basically, we have multiple tasks running on multiple CPU cores at the same time. We could have concurrency. That is, we have one single thread, but we quickly switch between the tasks to give the idea of concurrency. And we could have scheduling. That is pretty much like concurrency, but we have an extra piece of software called a scheduler assigning different priorities to different tasks and organizing the whole thing.

So let's start with the first approach, parallelism. Parallelism in the browser, as you probably know, happens with workers. Workers have a few gotchas. Data exchange happens via message passing – that postMessage call you probably know. But the first gotcha of workers is that we don't have access to the variables or the code of the thread that created them. We also don't have access to the DOM, so making UI changes from a worker is really, really complicated and sometimes barely possible. But there are two abstractions we could reach for when thinking about workers: actors and shared memory.

The first of them is actors. You've probably heard about actors from other languages like Elixir – especially in the back end, they're really important. An actor is an abstraction where each actor fully owns the data it operates on, and actors only send and receive messages – that's pretty much it. In the browser, we can think of the main thread as the actor that owns the DOM and the UI. But the first gotcha is that postMessage is a fire-and-forget mechanism: it has no built-in understanding of requests and responses or any way of tracking them. The second thing is that, while we offload code from the main thread to make things faster, this communication happens via copying, so it has an overhead we have to balance. Also, the worker we're sending things to might be busy, which is another thing to take into consideration. On the other end, we have shared memory. In the browser, we have one type for that, called SharedArrayBuffer. And that's really great because, for example, if we send a SharedArrayBuffer via postMessage, on the other end you get a handle to the exact same memory chunk.
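Since the talk only alludes to it, here's a sketch of how a request/response layer can be built on top of fire-and-forget postMessage – the core idea behind libraries like Comlink. The in-memory `Port` pair below is a toy stand-in for a real Worker and is purely illustrative:

```typescript
// postMessage is fire-and-forget, so we tag each request with an id and
// resolve the matching promise when a reply with the same id comes back.
type Handler = (data: any) => void;

interface Port {
  handlers: Handler[];
  postMessage: (data: any) => void;
}

// Two linked in-memory ports: whatever one posts, the other's handlers receive.
function makePortPair(): [Port, Port] {
  const a: Port = { handlers: [], postMessage: (d) => b.handlers.forEach((h) => h(d)) };
  const b: Port = { handlers: [], postMessage: (d) => a.handlers.forEach((h) => h(d)) };
  return [a, b];
}

// Promise-based request/response on top of fire-and-forget messaging.
function makeClient(port: Port) {
  let nextId = 0;
  const pending = new Map<number, (value: any) => void>();
  port.handlers.push((msg) => {
    const resolve = pending.get(msg.id);
    if (resolve) {
      pending.delete(msg.id);
      resolve(msg.result);
    }
  });
  return (payload: number) =>
    new Promise<number>((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      port.postMessage({ id, payload });
    });
}

const [mainPort, workerPort] = makePortPair();
// "Worker" side: reply to each request with its payload doubled.
workerPort.handlers.push((msg) => workerPort.postMessage({ id: msg.id, result: msg.payload * 2 }));
const request = makeClient(mainPort);
```

Each request carries a unique id and the reply carries the same id back, so the right pending promise gets resolved even when several requests are in flight at once.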

4. Concurrency and Scheduling in Concurrent React

Short description:

Because of the way the web and browsers were built, there are no built-in APIs designed for concurrent access. Frontend engineers often find WebAssembly (WASM) challenging, and it gets slower when crossing over to UI-related tasks. Workers are great for data processing but difficult for UI. The second part focuses on concurrency and scheduling, exploring heuristics, priority levels, and render lanes. React follows a cooperative multitasking model with a single interruptible rendering thread, allowing interleaved rendering and background updates without blocking user input.

But the thing is, because of the way the web and browsers were built, there are no built-in APIs designed with concurrent access in mind. Because of that, we end up having to build our own concurrency primitives, like mutexes and such.

And not only that, but we're not doing arrays or objects or anything that we're familiar with in JavaScript. We're just handling a series of bytes. And someone could say, okay, what about WASM? WASM is great and I agree. And that's actually probably the best experience we can get for the shared memory model.

But, again, it doesn't offer the comfort of JavaScript – and by comfort, I mean familiarity. For a lot of frontend engineers jumping into WASM, there's a steep learning curve. And probably the most important thing: it's faster than JavaScript as long as you stay within WASM, but the more you have to cross over the line and do DOM manipulation or anything related to the UI, the slower it gets. Sometimes it gets to the point where you realize that some of the fastest low-level WASM implementations can end up slower than regular libraries like React.

Before we move forward, I have to say that there's a lot of interesting stuff happening with workers and WASM. We have the Atomics API, and we have open-source projects like WorkerDOM and Comlink – they're amazing, and if you're working with web workers, you should definitely check them out. But it turns out that workers are amazing for data processing, crunching numbers, and that kind of thing, while being difficult for UI-related work. Sometimes it's harder than simply adapting the work you have to do to a scheduler.

So that's when we get to the second part: concurrency and scheduling. Back to the question I asked before – fingers crossed, because this is live and hopefully it's going to work. Those are your opinions. OK, some emojis – of course, Concurrent React is about emojis. And someone thinks Concurrent React is confusing, see that? Problems, pointless renders, priorities – I like priorities. Thanks, by the way, for participating. My take on Concurrent React is scheduling, and since I only have 20 minutes, I've grouped a few concepts that I love about Concurrent React. We're going to quickly explore the heuristics, and we're going to talk about priority levels and render lanes.

OK, we talked about workers, and in React there are no workers or anything related to parallelism. So what do we have? We have a cooperative multitasking model with a single, interruptible rendering thread. Because it's interruptible, rendering can be interleaved with other work happening on the main thread, including other renders from React itself. And also because of that, an update can happen in the background without blocking the response to user input and that kind of thing.

5. Heuristics, Priority Levels, and Render Lanes

Short description:

React yields execution back to the main thread every 5 milliseconds, making rendering interruptible. Priority levels range from immediate to idle, determining when tasks should be done. Render lanes, built around bitmasks, allow for batched rendering and reduce overhead. These concepts can benefit front-end engineers in handling a lot of data.

And one of the most interesting things, I think, is the heuristics behind that. React yields execution back to the main thread every 5 milliseconds. I have to admit, the first time I saw that, it sounded a lot like one of those magic numbers we usually use in the front end, especially in CSS. But it turns out that it's smaller than a single frame, even when you're running on 120 FPS devices. That's what makes rendering, in practice, interruptible. And that's really amazing.
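As an illustration of that heuristic, here's a simplified, hypothetical work loop that processes units of work until a ~5 ms budget is spent and then yields. React's actual scheduler is far more involved and yields by scheduling a continuation via a MessageChannel; here we only count the slices:

```typescript
// Run units of work until the ~5 ms frame budget is spent, then yield so the
// main thread can handle input, and keep going in the next slice.
const FRAME_BUDGET_MS = 5;

function workLoop<T>(
  tasks: Array<() => T>,
  now: () => number = Date.now,
): { results: T[]; slices: number } {
  const results: T[] = [];
  let slices = 1;
  let sliceStart = now();
  for (const task of tasks) {
    results.push(task());
    if (now() - sliceStart >= FRAME_BUDGET_MS) {
      // This is the yield point: real code would return control to the
      // browser here and resume the remaining tasks in a later callback.
      slices++;
      sliceStart = now();
    }
  }
  return { results, slices };
}
```

An injectable clock makes the slicing observable: with a fake clock that advances 3 ms per reading, four tasks end up spread over three slices, with the full results still produced in order.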

Another thing is the priority levels. We see them in the source of the scheduler, but they're repeated throughout the whole framework: in the reconciler package, in renderers like react-dom, and even in the dev tools. They range from immediate to idle, and each of them has a different priority assigned that basically tells React when something should be done.

Last but not least, render lanes, which are also an amazing abstraction. If you watched Tejas's talk, you were probably wondering what that part of the code was all about. Render lanes are an abstraction built around bitmasks: one lane is one bit in a bitmask, and in React, each update is assigned to a lane. Because of that, updates in the same lane render in the same batch, and updates in different lanes go into different batches. The first good thing you get is that, because it's a 31-bit bitmask, you have 31 levels of granularity. The other amazing thing is that lanes basically allow React to choose whether to run multiple transitions in a single batch or in separate batches, and this reduces the overhead of having multiple layout passes, multiple style recalculations, and multiple paints in the browser. That was a lot, right?
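To make the bitmask idea concrete, here's an illustrative sketch of lanes as bits. The constant names and bit positions below are made up for the example – React's real ReactFiberLane module defines a full 31-bit layout – but the bit tricks are the same:

```typescript
// Illustrative lane constants: lower bits mean higher priority.
const SyncLane = 0b0000001;
const InputContinuousLane = 0b0000100;
const TransitionLane = 0b0010000;
const IdleLane = 0b1000000;

// Merging pending updates into one set of lanes is a single OR.
function mergeLanes(a: number, b: number): number {
  return a | b;
}

// The most urgent pending lane is the lowest set bit (the x & -x trick).
function getHighestPriorityLane(lanes: number): number {
  return lanes & -lanes;
}

// Checking whether a batch includes a given lane is a single AND.
function includesLane(lanes: number, lane: number): boolean {
  return (lanes & lane) !== 0;
}
```

This is why lanes are cheap: scheduling decisions over many pending updates reduce to a handful of integer operations.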

And I myself, when I went through some of these concepts, was really mind-blown – it's all amazing and really interesting. But at the same time I couldn't help remembering this talk by Kaiser, called But You're Not Facebook. We're not building schedulers; we're not doing that kind of stuff on a daily basis. So how can we, the other 99% of front-end engineers, benefit from this in everyday projects? This takes us to the next part: scheduling in React for the rest of us. Again, there are many scenarios where I think each of the concurrent features can be amazing, but I've grouped four of them here. The first one is when we have to handle a lot of data. I'll admit that out there we see a lot of examples that, at first, don't sound practical: how to find primes with random methods and optimize that with a transition, how to run complex algorithms for cracking passwords, or how to render huge scenes. Those examples are great for benchmarking and showing what you can do with Concurrent React, but at the same time, it's important to remember that we front-end engineers render a lot of data points, for example, on charts.

6. Rendering Large Amounts of Data

Short description:

Rendering large amounts of data, such as in a dashboard, can cause laggy animations. By using transitions, animations can be smooth and responsive, regardless of the data size.

Or sometimes we have to render things on a canvas and we don't have OffscreenCanvas available. Or sometimes we just have to process a lot of data. One example is a dashboard – we're usually building dashboards. In this one, I'm basically rendering the number of visitors per day on a website, and as you can see, the animation is a bit laggy because of the amount of data I'm updating. There's not a lot of magic in this component: I have a simple effect, some state, and an onChange handler. If I change it to use a transition, we can see that, first, the animation is always smooth no matter what, and also that no matter how much data I have, it stays responsive all the time.

7. Optimizing Performance with Transitions

Short description:

When I first saw transitions, I realized their potential for optimizing performance in various scenarios. For example, in an app with a large number of data points plotted on a map, we used workers and Redux Saga to handle searching, filtering, and data optimization. Similarly, in a game admin panel with thousands of players sending messages, we had to virtualize lists and use memoization extensively. However, transitions could have greatly improved the optimization process.

And I myself, when I first saw transitions, really wished I could go back in time. For example, I was working on an app about five or six years ago where we had about 100,000 data points plotted on a map. Not only that, we had to support searching and filtering. Back then, we used workers to process a lot of data, and we used Redux Saga and its utilities, like debouncing, to optimize things. But I could have optimized a lot of it just using transitions. Another example is an app I was building around three years ago: the admin panel for an online game, where you had thousands of players sending thousands of messages, and as an admin you were supposed to search and filter all of those messages. Again, we had to fall back to virtualizing a lot of lists, and we overdid memoization everywhere, with a lot of useMemos and useCallbacks. But I could have optimized some of that using transitions.

8. Tackling Wasted Renders with External Store Hooks

Short description:

To tackle wasted renders, we can build custom hooks on top of useSyncExternalStore, like a useHistorySelector hook. By creating selectors for specific properties, we can minimize unnecessary re-renders, resulting in improved performance for large-scale apps.

Another thing is tackling wasted renders. We usually think about useCallback and that kind of thing for tackling wasted renders, or, for example, changing the props we pass and using React.memo. But who here thinks of useSyncExternalStore? Wow, okay, cool. It's a hook that was first marketed as the part of Concurrent React meant for library maintainers, and we did see some state libraries adopting it: Redux itself started using it from v8, as did Valtio and many others. But this kind of raises the question: how could we use it ourselves? One thing we do use a lot is React Router, right? I guess most of us use React Router – yeah, quite a few hands. So we probably know useLocation, a hook from which we can get, for example, the pathname, the hash, and other information about our route. But useLocation is kind of an over-returning hook: it gives us a lot of information even when we need only some of it. And if we use it like that, the result is, as you can see here, that even though I'm just updating the hash, the pathname component re-renders, because it's watching that hook. How could we change that? We could go back to our original example and replace it with a new hook called useHistorySelector. I created useHistorySelector using useHistory plus useSyncExternalStore, and now I can create selectors for pathname and hash. The result is that as I click now, the hash component is the only one to re-render, so I saved some re-renders. This was a really simple example, but in a huge-scale app, that saves us a lot.
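The idea behind that hook can be shown without React: a minimal external store where subscribers attach through a selector and are only notified when their selected slice changes – exactly what spares the pathname component from re-rendering on hash changes. This is an illustrative sketch, not React Router's or React's actual implementation:

```typescript
// A made-up location shape for the example.
type RouteLocation = { pathname: string; hash: string };

// A minimal external store: the subscribe/getSnapshot pair is the contract
// useSyncExternalStore consumes.
function createStore(initial: RouteLocation) {
  let state = initial;
  const listeners = new Set<() => void>();
  return {
    getSnapshot: () => state,
    subscribe(listener: () => void) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
    setState(next: RouteLocation) {
      state = next;
      listeners.forEach((l) => l());
    },
  };
}

// Notify a subscriber only when its selected slice changes, mirroring what a
// selector-based hook does to skip wasted re-renders.
function subscribeWithSelector<T>(
  store: ReturnType<typeof createStore>,
  selector: (s: RouteLocation) => T,
  onChange: (value: T) => void,
) {
  let prev = selector(store.getSnapshot());
  return store.subscribe(() => {
    const next = selector(store.getSnapshot());
    if (!Object.is(prev, next)) {
      prev = next;
      onChange(next); // a component would re-render here
    }
  });
}
```

With this in place, updating only the hash notifies only the hash subscriber – which is the saved re-render from the demo.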

9. Hydration and Concurrent React

Short description:

Hydration improvements allow for selective hydration and prioritizing the parts of the page that users interact with. Concurrent React enables faster interactivity by allowing the browser to perform other tasks simultaneously. This results in lower First Input Delay and improved user experience. Companies like Vercel have already implemented these improvements in the Next.js website.

Another thing is the hydration improvements. If you've done SSR in React before, you know that hydration could only begin after the data for the whole page was fetched, and this also affected how users interact, because they could only start interacting after that. The other bad part was that components or parts of your app that loaded faster would have to wait for the slow ones. Now we have selective hydration, so React won't wait for one component to load before continuing to stream the rest of the HTML for the page. Not only that, but React is smart enough to prioritize hydrating the parts that users interact with first, which is amazing. And also, thanks to Concurrent React, components can become interactive faster, because the browser can now do other work at the same time. The final result for our users is, for example, a lower First Input Delay or a lower Interaction to Next Paint, which is another amazing metric. And we even have people out there doing this: Vercel, for example, revamped the Next.js website using this technique.

10. New Profiler and Future Enhancements

Short description:

Last but not least, the new profiler allows us to properly figure out what's happening with our apps. It provides a scheduling tab and new hints for optimizing transitions. Exciting things are coming down the line, such as IO libraries like react-fetch, a built-in cache for components, Suspense for CPU-bound trees, more hooks for library maintainers, offscreen components, server components, and native scheduling primitives in the browser called Prioritized Task Scheduling. This promise-based API, aligned with the work of the React core team and other teams, offers robust scheduling and interesting features like yielding execution to the event loop, scheduling tasks, and checking whether the browser is handling input. Companies like Airbnb and Facebook are already using it.

Last but not least, the new profiler. Because not only do we want to build apps, we also want to properly figure out what's happening in them. We have this scheduling tab we can use, for example, to properly see how our transitions are going. And one of my favorite parts is the new hints: for example, the profiler can spot a long task that could potentially be moved into a transition, which could be an interesting optimization. I'm really, really hyped about all these things.

But I have to say that I'm even more hyped about what's coming down the line. We're going to have IO libraries like react-fetch – it's been out there for a while now, and it's going to happen at some point, I guess. We're going to have a built-in cache for components, to integrate with Suspense. One of my favorite parts is Suspense for CPU-bound trees: if you profile your app and figure out that some part of your tree is going to take a lot of time, you can decide ahead of time to fall back without even trying to render. That's amazing. We're going to have more hooks for library maintainers, like useInsertionEffect. The Offscreen component is another one that I love – it's basically a way to assign an idle priority, like the ones we saw before, to a part of your tree. Server components, which could fill several other talks on their own. And something that's not a React thing, but that I'm really, really hyped about, is the native scheduling primitives in the browser.

So, who here has seen this API umbrella before? It's called Prioritized Task Scheduling, and if you haven't, I definitely recommend you check it out. It's basically a more robust way to do scheduling in the browser than, for example, requestIdleCallback and such. It's promise-based and integrated directly into the event loop. Not only that, it's aligned with the work of the React core team and other teams, like Polymer, the folks from Google Maps, and even the web standards community. This API is an umbrella for a lot of interesting stuff: APIs for yielding execution to the event loop, APIs for scheduling tasks, APIs for checking whether the browser is busy handling some kind of user input, and so on. And we even have people out there using it: Airbnb, for example, and Facebook, who was one of the major contributors to isInputPending. We even have libraries strongly inspired by the spec of that API, like main-thread-scheduling.
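To make the priority model concrete, here's a toy, synchronous queue in the spirit of the spec's scheduler.postTask. The three priority names match the spec ('user-blocking', 'user-visible', 'background'); everything else is simplified – the real API is asynchronous and interleaved with the event loop, so treat this as an illustration only:

```typescript
type Priority = 'user-blocking' | 'user-visible' | 'background';
const PRIORITY_ORDER: Priority[] = ['user-blocking', 'user-visible', 'background'];

class TinyScheduler {
  // One FIFO queue per priority level.
  private queues: Record<Priority, Array<() => void>> = {
    'user-blocking': [],
    'user-visible': [],
    background: [],
  };

  // Like scheduler.postTask, 'user-visible' is the default priority.
  postTask(task: () => void, opts: { priority: Priority } = { priority: 'user-visible' }) {
    this.queues[opts.priority].push(task);
  }

  // Drain everything, highest priority first. The real API would instead run
  // tasks asynchronously, yielding to the browser between them.
  flush() {
    for (const p of PRIORITY_ORDER) {
      const queue = this.queues[p];
      while (queue.length > 0) queue.shift()!();
    }
  }
}
```

Posting tasks out of order and then flushing shows the point of the API: execution order follows priority, not submission order.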

11. Closing Notes and Takeaways

Short description:

React is not reactive, but it is concurrent. React has pushed other frameworks, and the whole web, toward the future. First-class support for promises and understanding React internals help us create our own abstractions. Scheduling doesn't guarantee better performance. There is no silver bullet. Correlate performance metrics with business metrics. Slides available on the speaker's Speaker Deck profile.

So, a few closing notes. The first one: I'd like to start by linking this talk by Rich Harris. I love this talk – it's one of the most brilliant I've watched in the last couple of years. It's called Rethinking Reactivity, and in it he says that React is not reactive. This idea has been out there for a while now, and yes, he's right: it's not reactive, because of the whole nature of the virtual DOM and how it works. But it is concurrent. And the virtual DOM might be enough for your case – a lot of things have been optimized well enough for a lot of cases, and yours might be one of them.

Another thing is this quote by Guillermo Rauch from a couple of years ago, when he said that React was such an amazing idea that we would spend the rest of the decade exploring its implications and applications. I think that, for example, the fact that React is strongly related to this scheduling API – which is amazing – is one of those signs. React has pushed not only other frameworks toward the future, but the whole web, and I think that's amazing to see.

For the next conclusion: who has seen this recent RFC that's kind of the hot topic on Twitter? Everyone is talking about first-class support for promises and React's use hook, and people have a lot of different opinions about it. I have one really simple code example. I don't expect you to read everything, but basically what I'm doing here is creating a really, really simple cache – it's in TypeScript, and I'm throwing promises and so on. And I created a hook called usePromise in this code. We can use that code – I know the font size is not the best here – but basically I have this delay, and I'm using usePromise with a promise, and the result is that it suspends. When you see that, it might sound a lot like: okay, this is React's use. And yeah, this is, of course, a much, much simpler version of use. But the reason I'm showing it is that I really think understanding those internals and the rationales behind them helps us create our very own abstractions, and that's really amazing – first-class support for promises is one example. Another thing is that scheduling doesn't necessarily mean better performance. Just like reactivity or any other strategy – using a virtual DOM or not – it has its drawbacks. Because of that, it's always important to keep in mind the cliché that there is no silver bullet. And because there is no silver bullet, it's important that we identify our core metrics and what's really important for us. These days there's a lot of information out there, so you're going to see amazing people building amazing tools with a lot of amazing opinions, and I know it's really easy to feel lost among so many different things and thoughts that all look amazing.
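The throwing-promises pattern the slide's cache relies on can be sketched in a few lines. This is a simplified, illustrative resource – nowhere near what React's use hook actually does:

```typescript
// While the promise is pending, read() throws the pending promise itself – a
// Suspense boundary catches it and retries once it settles. After settling,
// read() returns the value or rethrows the error.
type Status = 'pending' | 'resolved' | 'rejected';

function createResource<T>(promise: PromiseLike<T>) {
  let status: Status = 'pending';
  let result: unknown;
  const suspender = promise.then(
    (value) => { status = 'resolved'; result = value; },
    (error) => { status = 'rejected'; result = error; },
  );
  return {
    read(): T {
      if (status === 'pending') throw suspender; // caught by <Suspense>
      if (status === 'rejected') throw result;   // caught by an error boundary
      return result as T;
    },
  };
}
```

A component would simply call `resource.read()` during render; React's Suspense machinery handles the thrown promise, which is why the component code can read async data as if it were synchronous.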
So it's important for us, building apps and sometimes even building our own libraries, to correlate those performance metrics with business metrics, like conversion rates, because in the end that's what matters most of the time for our users. The slides for this session are available on my Speaker Deck profile – the link was in the QR code. And probably the biggest takeaway here is that I have stickers.

12. Scheduling Concurrent React Questions

Short description:

So if you have opinions about scheduling or Concurrent React questions, or if you want to talk more about performance, I'll be around not only here but in the speaker booth and at the event. Thank you so much for having me, React Advanced. Thank you, thank you, thank you! I'm curious to know, did your interest in this come from stuff you encountered at work or is it something you've been interested in on the side? It was a personal experience when Suspense was released. I saw the benefit of stalking what was happening in the source code of React. And one final question, how would you recommend someone start with concurrent mode? Start by wrapping things with StrictMode and see how your app reacts, then go from there.

So if you have opinions about scheduling or Concurrent React, or if you want to talk more about performance, I'll be around – not only here, but in the speaker booth and around the event. That's all I had for now. Thank you so much for having me, React Advanced. Thank you, thank you, thank you! Please step into my office. We have time for maybe one or two questions, so please join us. I'm going to wait to see if any come up in the Slido. This is all very exciting – obviously, it's the future.

I'm curious to know – and I love that you showed us more practical examples for the rest of us – did your interest in this come from stuff you encountered at work that turned out to be helpful, or is it something you've been interested in on the side? Actually, it was a very, very personal experience a couple of years ago. When Suspense was released, I was watching the conference – we even had Jared Palmer and others presenting it all, and it was really interesting. When I saw it, it reminded me a lot of something a friend of mine had shown me a couple of months before, and I approached him after watching it, like: they just announced this – how were we already doing it a couple of months ago? And he basically said: oh, I saw that by reading the pull requests related to Suspense. That's when I started seeing the benefit of basically stalking what was happening in the source code of React and following it up. In the end, that helped me optimize some of the apps I was working on back then. So yeah, that was back in 2019.

Okay. And one final question, and then you'll be at the speaker's lounge: how would you recommend someone start with concurrent mode? Oh, that's a tricky one. I would say: go through all of the concurrent features – by now there are a lot of them – and try to spot in your app what would be a potential use case for one or another. Also see if your app is concurrent-mode compliant: start by wrapping things with StrictMode and see how your app reacts to that, and then go from there, one feature at a time. Transitions are maybe one of the most straightforward ones to start with. But yeah, try to see where they all fit – don't adopt them just because everyone on Twitter is talking about them. Thank you so much for your time. Mateus will be at the speaker Q&A next to reception for more questions. We have to move on because of time. But another big round of applause.

Check out more articles and videos

We constantly curate articles and videos that might spark people's interest, skill us up, or help build a stellar career

React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at:
- What "rendering" actually is
- How React queues renders and the standard rendering behavior
- How keys and component types are used in rendering
- Techniques for optimizing render performance
- How context usage affects rendering behavior
- How external libraries tie into React rendering
React Summit Remote Edition 2021
33 min
Building Better Websites with Remix
Remix is a new web framework from the creators of React Router that helps you build better, faster websites through a solid understanding of web fundamentals. Remix takes care of the heavy lifting like server rendering, code splitting, prefetching, and navigation and leaves you with the fun part: building something awesome!
React Advanced Conference 2022
30 min
Using useEffect Effectively
Can useEffect affect your codebase negatively? From fetching data to fighting with imperative APIs, side effects are one of the biggest sources of frustration in web app development. And let’s be honest, putting everything in useEffect hooks doesn’t help much. In this talk, we'll demystify the useEffect hook and get a better understanding of when (and when not) to use it, as well as discover how declarative effects can make effect management more maintainable in even the most complex React apps.
React Summit 2022
20 min
Routing in React 18 and Beyond
Concurrent React and Server Components are changing the way we think about routing, rendering, and fetching in web applications. Next.js recently shared part of its vision to help developers adopt these new React features and take advantage of the benefits they unlock. In this talk, we'll explore the past, present, and future of routing in front-end applications and discuss how new features in React and Next.js can help us architect more performant and feature-rich applications.
React Advanced Conference 2021
27 min
(Easier) Interactive Data Visualization in React
If you're building a dashboard, analytics platform, or any web app where you need to give your users insight into their data, you need beautiful, custom, interactive data visualizations in your React app. But building visualizations by hand with a low-level library like D3 can be a huge headache, involving lots of wheel-reinventing. In this talk, we'll see how data viz development can get so much easier thanks to tools like Plot, a high-level dataviz library for quick & easy charting, and Observable, a reactive dataviz prototyping environment, both from the creator of D3. Through live coding examples we'll explore how React refs let us delegate DOM manipulation for our data visualizations, and how Observable's embedding functionality lets us easily repurpose community-built visualizations for our own data & use cases. By the end of this talk we'll know how to get a beautiful, customized, interactive data visualization into our apps with a fraction of the time & effort!

Workshops on related topic

React Summit 2023
170 min
React Performance Debugging Masterclass
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Advanced Conference 2021
132 min
Concurrent Rendering Adventures in React 18
Featured WorkshopFree
With the release of React 18 we finally get the long awaited concurrent rendering. But how is that going to affect your application? What are the benefits of concurrent rendering in React? What do you need to do to switch to concurrent rendering when you upgrade to React 18? And what if you don’t want or can’t use concurrent rendering yet?

There are some behavior changes you need to be aware of! In this workshop we will cover all of those subjects and more.

Join me with your laptop in this interactive workshop. You will see how easy it is to switch to concurrent rendering in your React application. You will learn all about concurrent rendering, SuspenseList, the startTransition API and more.
React Summit Remote Edition 2021
177 min
React Hooks Tips Only the Pros Know
Featured Workshop
The addition of the hooks API to React was quite a major change. Before hooks, most components had to be class based. Now, with hooks, these are often much simpler functional components. Hooks can be really simple to use. Almost deceptively simple. Because there are still plenty of ways you can mess up with hooks. And it often turns out there are many ways you can improve your components with a better understanding of how each React hook can be used. You will learn all about the pros and cons of the various hooks. You will learn when to use useState() versus useReducer(). We will look at using useContext() efficiently. You will see when to use useLayoutEffect() and when useEffect() is better.
React Advanced Conference 2021
174 min
React, TypeScript, and TDD
Featured WorkshopFree
ReactJS is wildly popular and thus wildly supported. TypeScript is increasingly popular, and thus increasingly supported.

The two together? Not as much. Given that they both change quickly, it's hard to find accurate learning materials.

React+TypeScript, with JetBrains IDEs? That three-part combination is the topic of this series. We'll show a little about a lot. Meaning, the key steps to getting productive, in the IDE, for React projects using TypeScript. Along the way we'll show test-driven development and emphasize tips-and-tricks in the IDE.
React Advanced Conference 2021
145 min
Web3 Workshop - Building Your First Dapp
Featured WorkshopFree
In this workshop, you'll learn how to build your first full stack dapp on the Ethereum blockchain, reading and writing data to the network, and connecting a front end application to the contract you've deployed. By the end of the workshop, you'll understand how to set up a full stack development environment, run a local node, and interact with any smart contract using React, HardHat, and Ethers.js.
React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Featured Workshop
React Testing Library is a great framework for React component tests because it answers a lot of questions for you, so you don't need to worry about them. But that doesn't mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn't be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn