TypeScript Performance: Going Beyond the Surface


Do you ever find yourself pondering how to identify and address performance issues in TypeScript to maximize the effectiveness of your code? If so, join us for a talk on the performance of TypeScript and the techniques you can use to get the most out of your code. We'll delve into various ways to debug performance, explore how to leverage the power of the TypeScript compiler to detect potential performance issues and use the profiling tools available to track down the underlying bottlenecks.

34 min
21 Sep, 2023


AI Generated Video Summary

Today's Talk provides an overview of TypeScript performance and tools to address performance issues. It covers the compiler process, including the parser, binder, checker, and transformers steps. The Talk emphasizes the importance of keeping TypeScript up to date for better performance. It also discusses strategies for optimizing TypeScript compilation and debugging, analyzing build performance using trace files, and improving performance by simplifying types and avoiding overloading union types.

1. Introduction to TypeScript Performance

Short description:

Today we're going to talk about TypeScript performance. It will be an overview of the existing tools and how you can use them to help you deal with different performance issues in TypeScript. We will focus on the developer's experience and tooling. Slow compilation time and a lagging editor can be quite annoying and time-consuming. Keep your TypeScript up to date for better performance.

Hey, everyone. Thanks, everyone who's joining. Today we're going to talk about TypeScript performance. So a few words about me before we begin. I work on open source at The Guild, primarily in the GraphQL ecosystem, and I also organize a TypeScript meetup in Poland. You can find me on Twitter as AleksandraSays, or beerose on GitHub. And also feel free to check out my personal website, aleksandra.codes.

So this talk will have three main parts, an introduction to the topic of performance, ways to debug performance, and what to do to improve it. It will be like an overview of the existing tools and how you can use them to help you deal with different performance issues in TypeScript.

So I will start with the introduction. At this conference, we all probably know already what TypeScript is and why we are using it, so I will go straight to the point. So what do I mean by performance today? Usually, when we talk about performance in computing, we talk about the runtime performance, like how fast things are to our users. But today we're going to focus on the developer's experience and on our tooling. And I wanted to talk about it because whenever we are building a feature or fixing a bug in production, we would like to be like this Formula One driver. Our tooling should get us up to speed and it shouldn't slow us down because the better our process, the more value we can deliver to the end users.

And I think that this is an important topic because slow compilation time and a lagging editor can be quite annoying. And what's also important, it can be quite time consuming. So we'd like to avoid that. So, because the TypeScript team is doing a lot, like really, really a lot of performance improvements, before I even go further with my presentation, I want the first takeaway from this talk to be: keep your TypeScript up to date. I'm going to show you a quick example. So a few months ago, or maybe even half a year ago, I wanted to debug performance issues, or see if there are any, in the Hasura console. That was a familiar code base for me, because I used to work there. So I was like, okay, let's check it out. It's a fairly big application. And I ran the TypeScript compiler and it was almost 35 seconds. I think we can all agree that this is a lot. And then before doing any debugging and looking into the types, I upgraded TypeScript. Back then, the latest version was 4.9.5. And you can see that it's like three times faster. That's a really, really huge difference.

2. Understanding TypeScript Compiler and Performance

Short description:

And you can compare the check time, it went down from 31.5 seconds to less than nine. That was huge. So, firstly, thank you TypeScript team for all the performance improvements. And secondly, remember to keep your TypeScript up to date. We can think logically that if we have bigger code base, if we have more code, then TypeScript will work slower considering the amount of code we have. But that's not always the case. I want to show you one example. This is more or less what we are going to talk about today, those kinds of issues, and how to spot them and maybe what to do to avoid them. But first, before going further, I wanted to go over how a compiler works.

And you can compare the check time, it went down from 31.5 seconds to less than nine. That was huge. So, firstly, thank you TypeScript team for all the performance improvements. And secondly, remember to keep your TypeScript up to date. You can use tools like dependabot or others like renovate to make it even easier. But the thing is that it's not always enough.

We can think logically that if we have a bigger code base, if we have more code, then TypeScript will work slower considering the amount of code we have. But that's not always the case. And I want to show you one example. So what I have here is a simple page that renders... This is the whole code. It renders four buttons. I have a medium button, a small button, some big button that is being created from the... This is not the file I wanted to show you. Here's my big button. This is created using the styled function from styled-components. I also have a big button anchor, and I'm also using the styled function. This is the only difference, this is an anchor. So this code renders four buttons. You can see here that the compilation time was almost 14 seconds, and I think we can all agree that that's not ideal. We have only four buttons; imagine if we had eight or something. This is more or less what we are going to talk about today, those kinds of issues, and how to spot them and maybe what to do to avoid them.

But first, before going further, I wanted to go over how the compiler works. What are the steps in the TypeScript compiler, because I think that this knowledge will help us better understand how it affects the performance, and maybe also how it will allow us to pinpoint what step is responsible for our performance issues, and in the long term, it will make us better TypeScript programmers. So the first thing, the most important part, is the program. It's an object that has all the compilation context. The two things that are needed are obviously some TypeScript files and a tsconfig that describes how the compiler should behave. Then we have the scanner step, which scans the source code character by character and converts it into a list of tokens. If there's an invalid token, it will throw an error. So here's an example for this simple one line of code.
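
For illustration, here is a toy scanner, not the real compiler API (TypeScript's actual scanner is exposed as `ts.createScanner`); the token names loosely mirror TypeScript's `SyntaxKind` values:

```typescript
// Toy scanner: converts source text into a flat token list,
// loosely mimicking what the compiler's scanner step produces.
type Token = { kind: string; text: string };

function scan(source: string): Token[] {
  const tokens: Token[] = [];
  // Very small token grammar: keywords, identifiers, numbers, punctuation.
  const re = /\s+|[A-Za-z_$][\w$]*|\d+|[=:;]/g;
  for (const match of source.match(re) ?? []) {
    if (/^\s+$/.test(match)) {
      tokens.push({ kind: "WhitespaceTrivia", text: match });
    } else if (match === "const") {
      tokens.push({ kind: "ConstKeyword", text: match });
    } else if (match === "=") {
      tokens.push({ kind: "EqualsToken", text: match });
    } else if (match === ":") {
      tokens.push({ kind: "ColonToken", text: match });
    } else if (/^\d+$/.test(match)) {
      tokens.push({ kind: "NumericLiteral", text: match });
    } else {
      tokens.push({ kind: "Identifier", text: match });
    }
  }
  return tokens;
}

const kinds = scan("const answer = 42").map(t => t.kind);
// ConstKeyword, WhitespaceTrivia, Identifier, WhitespaceTrivia,
// EqualsToken, WhitespaceTrivia, NumericLiteral
```

At this stage the output is just a flat list; nothing yet knows that these tokens together form a variable declaration.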

3. Understanding TypeScript Parser and AST

Short description:

We have the const keyword, it also recognises whitespaces, we have identifiers, a colon token, and so on. The next step is the parser, which constructs an abstract syntax tree by analyzing the tokens. This tree brings context to the scanner's output and allows for the identification of variable declarations. Tools like TypeScript AST Viewer and AST Explorer can help visualize the code as an abstract syntax tree.

We have the const keyword, it also recognises whitespaces, we have identifiers, a colon token, and so on. Then the next step is the parser. It takes this whole list of tokens and constructs an abstract syntax tree. So you can also say that it brings context to the scanner's output. For example, you can see the AST here. So when it goes over the tokens and it sees that there's a const keyword and later on there's a colon token or an equals token, it will know that this will be a variable declaration, so it can construct this variable declaration node. You can use tools like TypeScript AST Viewer, or there's also AST Explorer, to visualise your code as an abstract syntax tree.
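
As a simplified sketch, the tree for `const answer = 42` has roughly this shape (hand-written here for illustration; the real nodes carry far more information than this):

```typescript
// A hand-written, simplified AST for `const answer = 42`,
// shaped roughly like what TypeScript AST Viewer displays.
interface AstNode {
  kind: string;
  children?: AstNode[];
  text?: string;
}

const sourceFile: AstNode = {
  kind: "SourceFile",
  children: [{
    kind: "VariableStatement",
    children: [{
      kind: "VariableDeclarationList", // carries the `const` flag
      children: [{
        kind: "VariableDeclaration",
        children: [
          { kind: "Identifier", text: "answer" },
          { kind: "NumericLiteral", text: "42" },
        ],
      }],
    }],
  }],
};

// Walking the tree is how the later compiler steps consume it.
function collectKinds(node: AstNode, out: string[] = []): string[] {
  out.push(node.kind);
  for (const child of node.children ?? []) collectKinds(child, out);
  return out;
}
```

Unlike the flat token list, this structure now says *what the tokens mean*: the identifier and the literal are children of one variable declaration.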

4. Understanding the Binder Step

Short description:

The next step is Binder, which gathers information about the context, including metadata for each node of the tree, scopes, and parent nodes. This information is used by the checker to determine the type of a node. It's an important and expensive step in the compilation process.

Ok so the next step is the binder, and this is a very important step. It gathers information about the whole context. So it's quite expensive; it's a single pass over the entire AST, and it picks up information that will be used in the later steps. So here's an example. One thing it does is store metadata for each node of the tree. It also keeps track of the scopes. So when we are inside a function: what variables can I use, what's the function scope. Another thing is that it sets up parent nodes on each node, so that later, when the checker has to see what's the type of a particular node, it can easily go up the tree. I will explain that in a second.
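
A rough, hypothetical sketch of those two pieces of bookkeeping, parent pointers and per-scope symbol tables, might look like this (the real binder tracks much more, such as flow nodes and symbol flags):

```typescript
// Toy binder: sets parent pointers and records which names each
// scope declares, so a later "checker" can walk up the tree.
interface TreeNode {
  kind: string;
  name?: string;
  parent?: TreeNode;
  children: TreeNode[];
  locals?: Set<string>;
}

function bind(node: TreeNode, parent?: TreeNode, scope?: TreeNode): void {
  node.parent = parent;
  // Functions and the source file open a new scope.
  if (node.kind === "SourceFile" || node.kind === "Function") {
    node.locals = new Set();
    scope = node;
  }
  if (node.kind === "VariableDeclaration" && node.name && scope?.locals) {
    scope.locals.add(node.name);
  }
  for (const child of node.children) bind(child, node, scope);
}

const file: TreeNode = {
  kind: "SourceFile",
  children: [
    { kind: "VariableDeclaration", name: "answer", children: [] },
    {
      kind: "Function",
      name: "main",
      children: [{ kind: "VariableDeclaration", name: "local", children: [] }],
    },
  ],
};
bind(file);
// `answer` lives in the file scope; `local` only in the function scope.
```

This is why the step is a single, fairly expensive pass: it has to visit every node once to leave this metadata behind.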

5. Understanding the Checker Step

Short description:

The checker step in the TypeScript compiler is responsible for checking if types are assignable to each other and performing type inference. It fills gaps in type information by traversing the abstract syntax tree and using parent information to determine the type of a node. This step is crucial for ensuring type safety in TypeScript code.

Okay, so now we have the checker step. And that includes most of the diagnostics. It has two main responsibilities. One is that it checks if types are assignable to each other. And the other one is that it does type inference. So if there are any gaps, where there's no explicit typing, and we have a node whose type we don't yet know, it's the checker's responsibility to figure it out and to fill those gaps. So this is why the information about parents is important. Let's say the checker is on one node, and there's no type information. So maybe it needs to go up the tree, because maybe somewhere higher there was an explicit type declaration, or maybe somewhere higher it already figured out the type, and this is why going up needs to be fast. And then once it figures out that, for example, there was an explicit type declaration for the function and we are somewhere inside of the function, it can pick up the type and then go back down the tree to fill the gap.
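
A tiny example of that gap-filling, with nothing here beyond ordinary inference:

```typescript
// `n * 2` has no return-type annotation, so the checker infers the
// return type (number) from the expression itself.
const double = (n: number) => n * 2;

// Contextual typing: `s` has no annotation of its own. The checker
// walks up to the variable's explicit function type and fills the
// parameter's type in from there.
const shout: (s: string) => string = s => s.toUpperCase();
```

Both directions are at work: inference flows up from expressions, and explicit annotations flow back down into unannotated positions.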

6. Understanding Transformers Step

Short description:

Transformers strip information about types to produce JavaScript code, and strip everything but the types to produce declaration files. TypeScript Playground offers scanner and AST plugins to visualize these steps.

Okay and then we have transformers. So to put it shortly, this step takes the AST, and if we want to have the JavaScript code, it strips all the information about the types; and if we want to have the declaration files, it strips all the things related to JavaScript and leaves only the types. And finally we are getting the files that we requested. That is, if we requested any, because you can also run tsc with the noEmit flag to only do the type checking bit. So this is more or less what happens; in reality it's not that straightforward. There can be a lot of back and forth between the steps. But I just wanted to give you an overview of the important bits. You can also try it out in TypeScript Playground. There's a scanner plugin, and there's also an AST plugin, so you can visualize some of those steps.

7. Optimizing TypeScript Compilation and Debugging

Short description:

With a larger codebase and more complex code, TypeScript compilation can be slow. To improve performance, it's important to debug and understand what needs improvement. Running diagnostics or extendedDiagnostics can provide basic information about the codebase. The diagnostics output includes parse time, bind time, check time, and emit time, which can help identify areas for optimization. Additionally, tools like TSC Diagnostics Action and the webtreemap CLI can automate and visualize the diagnostics process. Checking the number of files and lines in the output can help determine if the TypeScript configuration is correct. The showConfig flag can be used to view the final configuration. For a high check time, the generateTrace flag and a demo project can be helpful.

Okay so we saw what's going on and obviously with a larger codebase and more complex code that can be slow. All that like traversing through the AST and all the tokenizing on the code that can take a lot of time. And what's the easiest way to be faster? And you can think about this question in terms of not only TypeScript compiler but in general if you have a long to-do list and you want to be done for the day and enjoy your evening, what can you do to be done faster? Well, the answer could be do less stuff. And well the thing is with TypeScript compiler that we can't eliminate some of those steps but we can make it do less work and we're gonna see how later.

So the first part is debugging, because without knowing what we should improve, we can't really improve anything. So the question is: my build takes forever, my IDE is lagging, and now what can I do? So the first thing that I usually do is run diagnostics or extendedDiagnostics, and this will give us some basic information about what's going on in the code base. So this is an output from the diagnostics flag. You can see how it maps to different compiler steps: we have the parse time, bind time, check time, and emit time. And if you want something that makes it easier, something that automates it without you having to run it on your code base every now and then, I created this very, very minimalistic GitHub Action. It's called TSC Diagnostics Action. And basically, for every PR, it gives you this diagnostics comparison. You can configure it to have different outputs. But going back to the output of the diagnostics flag. So, the first question you can ask yourself is: do these numbers, the number of files, lines of TypeScript, JavaScript, JSON, and so on, roughly correspond to the number of files in your project? Because if not, then maybe TypeScript is picking up too many files, and maybe that means that your tsconfig is not correct. So, one thing that you can do is run tsc with the listFiles flag and see what exactly are all the files that TypeScript is picking up. While the output from this flag is not very user-friendly and not very easy to read, you can use the webtreemap CLI tool to visualize it. It will open an HTML page in your browser with everything that is important visualized, so you can see what files are significantly huge, and so on. It will help you spot that maybe you are compiling something that you don't really want to. Okay, another thing is a high program time or I/O write time. That also can indicate that the configuration is not correct.
And I wanted to show you another flag: showConfig. It's especially useful when you use extends in your config, and you extend one config after another. So sometimes it's quite difficult to know what's actually the final configuration. And the showConfig flag can help you with that, because it will print the actual configuration that TypeScript is using for your project. Okay, and now the most important part, and I think the most interesting part: a high check time. For that, I'm usually using the generateTrace flag, and I'm going to show you a quick demo. I have one project here. I just picked a random project from the GitHub Explore page, basically. I just wanted to play with different projects.

8. Analyzing TypeScript Build Performance

Short description:

I run the generate trace command using yarn tsc --generateTrace. The trace JSON file can be loaded in the browser, allowing you to analyze the performance. By examining the trace file, you can identify problematic files and pinpoint slow type checking. In this example, the trpc.ts file and a function inside it seem to be causing the slowdown. TypeScript suggests adding an explicit type annotation to resolve the issue. This is valuable when working with a large codebase or when trying to identify changes that may have caused a slowdown.

And I run the generate trace command. So basically, to do this, you run yarn tsc --generateTrace, and you also have to provide an output directory. I already did this before, so as not to spend time on that, I have it here. It generates a trace JSON file and a types JSON file.

Now what can I do with this trace JSON, because you can probably see that it's not very user-friendly to read. I'm loading it in the browser. So I have it here. If you're using Chrome, you can open about:tracing, or you can open the developer tools, go to the Performance tab, and load it there. There is a button, you click it, and then you pick the trace file that you just generated. And you're going to see something like this.

So now when I open this file, I can see that maybe there's something going wrong. So in this example, one thing I notice is that around here, there seems to be a lot going on. And if I click on this checkSourceFile, I can also learn which file is being problematic. And that already gives me some information. I know where to look for my issues. And then if I go down with this checkExpression, you can see that the metadata here is slightly different. I also have the path, but I have the position and end. This checkSourceFile only had the path, so basically, you can pinpoint where the type checking is slow. So I can go even lower. I see now that I'm in the trpc.ts file. Now I'm in underscore apt-get, and I guess this one is something I can look into. So if I open this file, I will copy it, go back to the project. So if I open this, I pasted the error in case it doesn't work during this presentation, but I can already see that TypeScript tells me that the inferred type of this node exceeds the maximum length the compiler will serialize. An explicit type annotation is needed. So, okay, I know where the problem is, what's causing my TypeScript build time to be slower. And I know that this is the place that I probably want to refactor. This is quite useful, because imagine you're working on a large code base, or maybe you went on vacation and other people were working on the code base. You come back and you notice that the build time is much longer. And how do you proceed? Where do you look? Do you go over all the PRs, all the commits that people added, to see what possibly changed that caused the issue? Well, that's one way.
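
If you prefer to skim the trace without the browser UI, the file is just an array of Chrome-trace events. A small script along these lines can surface the slowest source-file checks; the event shape here (`name`, `dur` in microseconds, `args.path`) is an assumption about the trace format, and the sample data is made up for illustration:

```typescript
// Minimal hotspot finder for a --generateTrace output.
interface TraceEvent {
  name: string;
  dur?: number; // duration in microseconds
  args?: { path?: string };
}

function slowestChecks(events: TraceEvent[], top = 3): TraceEvent[] {
  return events
    .filter(e => e.name === "checkSourceFile" && typeof e.dur === "number")
    .sort((a, b) => (b.dur ?? 0) - (a.dur ?? 0))
    .slice(0, top);
}

// In a real script you would JSON.parse the trace file from disk;
// here is a tiny inline sample instead.
const sample: TraceEvent[] = [
  { name: "checkSourceFile", dur: 120_000, args: { path: "src/ok.ts" } },
  { name: "checkSourceFile", dur: 4_500_000, args: { path: "src/trpc.ts" } },
  { name: "emit", dur: 90_000 },
];
const worst = slowestChecks(sample, 1)[0];
// worst.args.path points at the file worth investigating first
```

The analyze-trace tool mentioned later does this kind of aggregation for you, with much more detail.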

9. Analyzing Type Comparison with Generate Trace

Short description:

Using the generateTrace command allows for faster identification of problematic files and slow type checking. By copying the ID and navigating to the code base, you can locate the type being compared. The source and target types, along with additional information such as type arguments, can be found in the types JSON file.

But using this generateTrace will get you there much, much faster. And I also wanted to show you another thing. Here is a second place that looks like it takes quite a bit longer, this particular file. It's this handle children event types test TS file. So here we have other boxes. We have this checkDeferredNode with this kind of metadata. We have checkVariableDeclaration, checkExpression. We also have something like structuredTypeRelatedTo. And here we have sourceId and targetId. So those are the IDs of the types that are being compared.

And now what to do with that? Well, there's no type information in the trace itself. So you don't really know what the type is. But we can copy this ID and go to the code base. And if we go to this types JSON file that we didn't look at before. Let me make more space. There's something like Go to Line in VS Code. And you can go to this particular line. We have this ID. And now we know what is the type that is being compared. We know that the first declaration is in the jest-mock-extended package. And we also know what's the start, what's the end. And sometimes there's also additional information. So we know that this is our source that is being compared. And the target, I will go to line again, and the target is PrismaClient. We also know what the type arguments are. This is a generic type. We have this first declaration, it points to node_modules. And sometimes we also have display information. I'm not sure if I can show it. For example, here we have this display, it shows you exactly the code here in this space.

10. Working with Trace JSON and Types JSON

Short description:

And now if we go to this file that we are now debugging, we can see that there is in fact something like Prisma Mock. This is more or less how you work with the trace JSON and the types JSON file that was generated. You can also use a tool called analyze-trace to get information about hotspots in your code. We're also working on an extension called TS-PERP to make the experience easier.

And now if we go to this file that we are now debugging, we can see that there is in fact something like Prisma Mock. And this is where jest-mock-extended is being used. So this is more or less how you work with the trace JSON and the types JSON files that were generated.

Okay. Now let's go back to the slides. This one. Okay. You can also use a tool called analyze-trace: you provide it with the output directory where the trace JSON and types JSON were generated, and it will give you information about some hotspots in your code out of the box.

Now there's also something that Daniel, who is also speaking at this conference, and I are thinking about and slowly working on. We wanted to make this experience a bit easier. So we're thinking about something called, and this is a working name, TS-PERP, and this is going to be, for now, a VS Code extension. I made a recording in case something goes wrong, because this is a work in progress. So I'm going to show you: basically this extension generates the trace for you, and you run one command to display it. So it saves you some back and forth between the browser and your project, and we have also some other ideas, so stay tuned.

11. Improving TypeScript Performance

Short description:

There are a few ways to encounter problems with TypeScript performance. Firstly, if it doesn't work as intended, check your configuration settings. The TypeScript team has created a performance wiki with useful tips, such as naming complex types and extracting them to separate type aliases. Additionally, simplifying types can improve performance, although it may require code refactoring.

Okay, now the slides and the improving part. So there are a few ways in which you can have problems, basically. The first one is it doesn't work the way it's intended to. That probably means that you have to check your configuration; maybe your exclude, include, or files options are not configured correctly. You can reference the tsconfig reference page; you will learn there about all the configuration options and how to set them to do what you actually want. And now let's say we have that covered. It does work the way it's intended to, but it's still doing too much work. So now we have a few improvements that we can make. The TypeScript team created this performance wiki with a lot of very, very useful tips. I'm going to show you just a few with some examples.

So here I have two examples. I think I only have time to show you one. This is just a change from Zod, the runtime validation library. So basically what happened here, you can see that this Zod, I'll make it slightly bigger. Okay, you can see that ZodFormattedError was quite complex. There was a lot going on. We have conditional types, and we can guess that the checker probably has a lot of work to do there. So basically what the author of the PR did was to extract the complex part, the one with conditional types, to a separate type alias and use it for ZodFormattedError. And I think in this description, we can see what the improvement was. We can see that the check time went from 31 to 20 seconds. I would say that that's a lot. And why did that happen? It's because TypeScript can now cache this type alias, so it doesn't have to recalculate it every time ZodFormattedError is used. And that caused this improvement. Okay, so this is the first tip: name complex types, extract them to separate type aliases. Another thing is to make your types simpler. I showed you this demo with styled-components, right? So the reason why it's taking that much time is because I'm using a lot of complex higher-order functions, like memo, which is actually from React, and the styled helpers from styled-components. So TypeScript has to do a lot of inference to figure out the props. So in that case, there's no easy fix, no one-line fix for the typings. What I needed to do here was to actually refactor my code. And I have it here.
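
The alias-extraction tip can be sketched like this. The type names and shape here are hypothetical, loosely inspired by the ZodFormattedError change rather than copied from it:

```typescript
// Before: the conditional mapping is written inline, so the compiler
// re-derives it wherever FormattedErrorInline is instantiated.
type FormattedErrorInline<T> = {
  _errors: string[];
} & (T extends [unknown, ...unknown[]]
  ? { [K in keyof T]?: { _errors: string[] } }
  : T extends object
    ? { [K in keyof T]?: { _errors: string[] } }
    : {});

// After: the expensive conditional part gets its own named alias,
// which TypeScript can cache and reuse across instantiations.
type ErrorShape<T> = T extends [unknown, ...unknown[]]
  ? { [K in keyof T]?: { _errors: string[] } }
  : T extends object
    ? { [K in keyof T]?: { _errors: string[] } }
    : {};

type FormattedError<T> = { _errors: string[] } & ErrorShape<T>;

// Both spellings produce the same type; only the compiler's work differs.
const err: FormattedError<{ name: string }> = {
  _errors: [],
  name: { _errors: ["Required"] },
};
```

The refactor is behavior-neutral for users of the type, which is what makes it such a cheap win.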

12. Optimizing TypeScript with Explicit Typing

Short description:

I created new components without using this type, I provided the props, I typed them explicitly. Running extended diagnostics showed a significant improvement in performance. Simplicity can lead to better performance. In another example, I found that explicitly providing a generic parameter in a GraphQL code generator helped improve TypeScript's inference. Debugging a slow file revealed that inferring the first parameter caused the slowdown. Providing the generic parameter explicitly eliminated hotspots in the codebase.

I just basically created new components without using this type; I provided the props and typed them explicitly. So I have it here. I have those types, and here my basic button looks like this.

So now if I run the extended diagnostics to see the difference. I think it will be pnpm. Good, yes. You can see that this is much faster. It's only two seconds. Okay, so that was another example. And the takeaway from here is to not show off; sometimes simpler means better. And simpler can also mean more performant.

And another quite interesting example, something I found in one of the projects I was working on, GraphQL Code Generator. You can sometimes help TypeScript. But, and I added this, only if you really need to. You can rely on TypeScript inference most of the time. But I wanted to share this particular example because I found it interesting.

So, I started by showing you the trace. Yeah, here we go. So this is the generated trace from this project. You can see that this file, this babel.ts, is taking a lot of time. So I debugged it. It turned out to be around this declare function, and particularly about inferring this first parameter, the options here. And what I did, I provided this generic parameter explicitly. So I typed this client Babel preset options as what we actually use. And that helped quite a lot. I will show you the trace after the change, so you can see that there are no more hotspots in the code base. And that happened because, as you can now see, the second generic parameter has a default type; there's this Babel plugin object. But because the first one doesn't, TypeScript always needed to run the inference for it.
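
The pattern, an explicit type argument that spares the compiler a round of inference, can be sketched generically. Everything here is a hypothetical API, loosely modeled on a Babel-style declare helper, not the actual GraphQL Code Generator code:

```typescript
interface PresetOptions { loose?: boolean }

// Without an explicit type argument, TypeScript infers T from the
// callback's parameter on every call. Supplying T up front means
// there is nothing left to infer.
function definePreset<T = PresetOptions>(
  builder: (options: T) => string[],
): (options: T) => string[] {
  return builder;
}

// Inference path: T is derived from the annotated callback parameter.
const inferred = definePreset(
  (options: { plugins: string[] }) => options.plugins,
);

// Explicit path: T is pinned, so the default and the annotation both
// come for free and the checker skips the inference work.
const explicit = definePreset<PresetOptions>(
  o => (o.loose ? ["loose"] : []),
);
```

Both calls behave identically at runtime; the difference is purely in how much work the checker does at each call site.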

13. Enhancing TypeScript Performance: Practical Tips

Short description:

TypeScript performance can be improved by being reasonable and not overdoing it. Avoid declaring a huge number of elements in a union type, as it can significantly slow down compilation. If improvements and configuration fixes have been made but the performance is still slow, the incremental flag can be used to cache compilation information. When opening an issue, ensure you're using the latest TypeScript version and include the extendedDiagnostics output and the trace from generateTrace usage. Debugging performance issues with the generateTrace output can help identify areas for improvement. For more information, visit aleksandra.codes and check the TypeScript Congress entry.

So because I provided the first one, so I basically did something here, TypeScript is able to default to this one for the second parameter. So that was another example, and the difference was that the check time went from one and a half seconds to 0.88. And I think the last example I wanted to show you is to be reasonable. So TypeScript has a lot of really, really amazing features. One of them, I will slowly close this, is template literal types. So you can do a lot with them. But sometimes maybe you want to ask yourself how much you want to do, because this is an example from one of the issues reported in TypeScript. So basically we have this type, FullDateString, and the year, the month, and the day are all being declared with template literal types. And you can see, if I hover it, that there are more than 74 thousand elements in this union. And that's quite a lot. So even though I have really, really short code here, it's 30 lines of code, 29 if I count correctly, this takes almost two minutes to compile. That's a lot. And the check time is slightly less than the total time, so it's also almost two minutes. So the takeaway here is to be reasonable. Don't overdo it, and don't skateboard on a rake.
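
A scaled-down sketch of that date-string type shows why the union explodes. The member sets here are tiny on purpose; with full year, month, and day ranges the same construction multiplies out to the 74-thousand-member union from the issue:

```typescript
// Each template position multiplies the union size:
// 2 years x 3 months x 2 days = 12 members here. Full ranges
// (e.g. 100 years x 12 months x 31 days) explode combinatorially.
type Year = "2022" | "2023";
type Month = "01" | "02" | "03";
type Day = "01" | "02";

type FullDateString = `${Year}-${Month}-${Day}`;

const ok: FullDateString = "2023-02-01";
// const bad: FullDateString = "2023-13-01"; // would not compile

// A cheaper alternative: a branded string validated at runtime,
// which keeps the type a single member instead of a huge union.
type DateString = string & { __brand: "DateString" };

function toDateString(s: string): DateString {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(s)) throw new Error("bad date");
  return s as DateString;
}
```

The branded version trades compile-time exhaustiveness for a runtime check, which is often the reasonable middle ground the talk is arguing for.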

So now let's say that we did some improvements and we also fixed the config if there were any issues. So it works the way it's intended to. It's doing the minimum required work, but it's still slow. Now what you can do is use the incremental flag. You can set it in the compiler options. That will let TypeScript cache some information about the compilation, so whenever you recompile the project, TypeScript will calculate the least possible effort to do so. It won't recompile files that haven't changed, for example. And if it's still bad, you can open a new issue. So some tips to do so: firstly, make sure that you're using the latest TypeScript; maybe your issues are already solved. You also have to include the extendedDiagnostics output, the trace from the generateTrace usage, and a minimal reproduction or maybe a link to the repository, if that's possible. So to summarize, we saw some tools and we saw how to debug performance issues with the generateTrace output. I hope that it will be useful to you. And I really, really hope that you won't have to deal with any performance issues. But if you do, here you have the overview of the steps that you can take to improve the code base. Okay, so that's all from me. All of that you can find on my website if you go to aleksandra.codes and to my speaking page. And if you find the TypeScript Congress entry, you'll find the notes from this presentation.


Workshops on related topic

React Summit 2023React Summit 2023
170 min
React Performance Debugging Masterclass
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Building instant-on web applications at scale have been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup-up performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCitp project. How routing works with layout. The demo application will fetch data and present it to the user in an editable form. And finally, how one can use authentication. All of the basic parts for any large-scale applications.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
WorkshopFree
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
React Advanced Conference 2023React Advanced Conference 2023
148 min
React Performance Debugging
Workshop
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Vue.js London 2023Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
WorkshopFree
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, know that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies of loading web fonts without causing big layout shifts.
Table of Contents:
What’s CLS and how it’s calculated?
How fonts can cause CLS?
Font loading strategies for minimizing CLS
Recap and conclusion
React Summit 2022React Summit 2022
50 min
High-performance Next.js
Workshop
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performances, resources usage, how to scale it, and how to make the right decisions while writing the application architecture.