In this talk we will learn how to solve performance issues. We will learn how the JS engine works, see use cases from production, and come away with practical tips that can help you boost your app's performance by 90%!
Practical Web App Performance Problem Solving
AI Generated Video Summary
This talk discusses runtime performance in software development. It explains garbage collection, the event loop, and why optimizing functions for faster execution matters, and it shows how profiling in both the browser and Node.js can be used to analyze runtime behavior. Real-life examples demonstrate how profiling helped optimize slow functions and improve app performance. Overall, the talk emphasizes the importance of understanding and optimizing runtime performance.
1. Introduction to Runtime Performance
Hi, I'm Jonathan, a software architect at Vonage. Today we'll be talking about runtime performance. Garbage collection is the process in JavaScript where unnecessary objects are removed from memory. Let's compare two functions, buildArray and buildArray2, to see the difference in their runtime performance. Optimizing functions for faster execution is crucial.
Hi, I'm Jonathan, and I'm a software architect at Vonage. I'm also a runner. This is me winning a half marathon, and it's relevant because today we'll be talking about runtime performance, so here's proof that you can take my word for it.
What is runtime performance? Let's look at an example: garbage collection. Garbage collection is the process in which JavaScript takes objects that are no longer needed and removes them from memory. That's it in one sentence. What can be the problem with that? Let's see.
We have two functions here. One is buildArray, which creates an array, iterates n times, and pushes items into it. The other is buildArray2, which pre-allocates the array and then iterates n times, assigning the same items to the corresponding indices. Two functions doing similar things, but let's see whether they differ in anything. Here we can see the profiling of these two functions. buildArray took longer to run than buildArray2, and we can use the profile to see why. If we go deeper, we can see that in buildArray we had around 1,250 occurrences of minor garbage collection. If we look at buildArray2, it's around 200. That's a big difference, and this is, in essence, runtime performance: profiling and optimizing functions so they take less time to run. Why is it important?
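The two functions themselves aren't reproduced in the transcript, but they are roughly of this shape (a sketch; the names follow the talk, the loop bodies are assumptions):

```js
// buildArray: grows the array with push. As the array grows, its backing
// store is repeatedly re-allocated and copied, producing short-lived
// garbage, which in the talk's profile showed up as many minor GC runs.
function buildArray(n) {
  const arr = [];
  for (let i = 0; i < n; i++) {
    arr.push(i);
  }
  return arr;
}

// buildArray2: pre-allocates the array once and writes into existing
// slots, so there is far less re-allocation along the way. In the talk's
// profile this version triggered far fewer minor GC runs.
function buildArray2(n) {
  const arr = new Array(n);
  for (let i = 0; i < n; i++) {
    arr[i] = i;
  }
  return arr;
}
```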
2. Understanding the Event Loop and Profiling
The event loop is crucial for running the main thread smoothly in both the browser and Node.js. Profiling applications in the browser allows us to analyze runtime performance and optimize tasks. The same can be done in Node.js via Chrome's chrome://inspect page. Real-life examples demonstrate how profiling helped optimize functions and improve app performance. To summarize, profiling is essential for optimizing runtime performance, and there are plenty of resources available to learn more.
The event loop is what runs our main thread. This is where our application's code executes. If it is blocked, the other code that needs to run can't: on the server side, for instance, an API response is delayed; in the browser, the user can't click anything and animations get stuck.
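As an illustration (not code from the talk), this is the kind of synchronous work that blocks the event loop; the helper name is made up:

```js
// A long synchronous loop keeps the event loop busy, so timers, I/O
// callbacks and (in the browser) clicks and animations must wait.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // busy-wait: nothing else on the main thread can run
  }
}

setTimeout(() => console.log('I only run after the block ends'), 0);
blockFor(2000); // blocks the main thread for about 2 seconds
```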
So this is the event loop in the browser, and this is the event loop in Node.js. Again, the important thing to take away is that you want your tasks to be as optimized as possible. Let's see how we can inspect those tasks and how we can optimize them.
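The demo function described next isn't shown in the transcript; a minimal sketch of it might look like this (the name mirrors how the talk refers to it, everything else is assumed):

```js
// A deliberately heavy task: builds a one-million-element array per call.
function somethingQuiteNoticeable() {
  const arr = [];
  for (let i = 0; i < 1_000_000; i++) {
    arr.push(i);
  }
  return arr;
}

// setInterval is a timer, and timers add tasks to the event loop:
// every second this heavy function is queued and run as a task.
setInterval(somethingQuiteNoticeable, 1000);
```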
So this is our function, somethingQuiteNoticeable. It should look familiar: instead of n we have a million, so it creates an array of a million elements. But now we also have setInterval. setInterval is a timer, and a timer is one of the things that adds tasks to the event loop. So every second, somethingQuiteNoticeable will be added to the event loop and run as a task. Let's see it in a demo.

This is our function here, running in the browser. We go to the Performance tab and hit record. We record for around five seconds, so we should get around five repeats of this function, and we can see these bumps here. If we zoom in a bit, we can see them in the flame chart: the bumps come every second. This is our setInterval, and we can see that it adds a task each time, and the task is somethingQuiteNoticeable. So we can see everything that happens during the runtime and analyze it for optimization. The Summary tab shows us, for instance, how long the app was busy scripting versus idle over the whole recording. Or we can look at the Call Tree: we can pick one task and see what happened during it, or look at the whole recording and search across all the calls for somethingQuiteNoticeable, and here we can also see some minor GC. This is the gist of profiling applications in the browser.

Let's see how you can do this in Node.js. In Node.js you use Chrome's chrome://inspect page, and you have to start your application with the --inspect flag. With the app running, you open the dedicated DevTools for Node, go to the Profiler tab, and start profiling. Let's profile for around five seconds again, stop profiling, and we see our bumps here again. It's the same as in the browser: if you know how to optimize in the browser, you can do it in Node.js, and vice versa.

How can this help you in real life? Let's see a real-life example. In an app we built, we used Cesium, a 3D visualizer of the globe, and we had to put a lot of entities on this map. This caused the UI to get stuck, so we profiled and found that two functions took a long time to run every frame: the updates of the label and the billboard. We investigated these functions and found that if we add a dirty flag to the entities, set only when an entity is actually updated, entities that did not get an update are no longer processed by these functions (see the sketch below). The result: from 50 percent of the time spent scripting, we went down to 2 percent of the time scripting, the main thread was no longer blocked, and people could interact with the app again.

To summarize, we saw the event loop and how it manages our main thread, so we don't want to block it. I can't stress enough the importance of profiling while optimizing your runtime performance, and I'd really like you to try it, learn it, and enjoy it. There's a lot to read about it: on my blog, on the Google Web Dev blog, and in lots of other resources around the internet.
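The Cesium-specific code isn't shown in the talk, but the dirty-flag idea itself is generic. A minimal sketch of the pattern, with hypothetical names and no Cesium APIs, might look like this:

```js
// Generic dirty-flag pattern (hypothetical names, not Cesium's API):
// mark an entity only when it actually changes, and let the per-frame
// update pass skip everything that is still clean.
class Entity {
  constructor(label) {
    this.label = label;
    this.dirty = true; // needs an initial update
  }
  setLabel(label) {
    this.label = label;
    this.dirty = true; // mark for the next update pass
  }
}

function updateEntities(entities) {
  for (const entity of entities) {
    if (!entity.dirty) continue; // unchanged entities cost nothing per frame
    renderLabel(entity);         // expensive work happens only when needed
    entity.dirty = false;
  }
}

// Placeholder for the expensive per-entity work
// (the label/billboard updates in the talk's case).
function renderLabel(entity) {
  /* ... */
}
```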