Node.js runs big systems today. Sometimes you can improve user experience and save on cloud expenses by optimizing your Node.js scripts. In this talk I will share production-tested tips on how to improve Node.js runtime performance.
Node.js Runtime Performance Tips
AI Generated Video Summary
This Talk focuses on the importance of runtime optimization in software development. It discusses the impact of unoptimized functions and the role of garbage collection. The Talk also highlights the use of profiling tools to identify and improve performance issues. Additionally, it emphasizes the importance of memory profiling to prevent memory leaks and optimize application performance.
1. Introduction to Runtime Optimization
Many years ago, I encountered a critical issue in a big system where a stuttering microservice caused delays of 2-3 seconds. As a software architect at Vonage, I have dedicated years to optimizing runtime techniques. In this part, we will explore how to identify and improve the performance of unoptimized functions, using a simple example. We will also discuss the impact of garbage collection and the importance of runtime optimization.
Many years ago, I was working on a big system, a critical, life-and-death kind of system. The system was working well, until one day I got a call from a customer. "Yonatan," he says, "the system is not answering my calls." What he was experiencing was a stuttering microservice in the system: one unoptimized function in the pipeline was taking 2-3 seconds to finish instead of milliseconds. For my customer, those 2-3 seconds were critical, especially in a system at scale.
My name is Yonatan Kra, software architect at Vonage. I'm an egghead instructor, blogger, and a full-time geek. I also enjoy running. I have spent years optimizing my runtime techniques, which is important when you need to run away from bullies. Today I'm going to show you how to spot unoptimized functions, and how to improve runtime performance in your applications.
We'll begin by looking at a simple example. We have two functions that do the same thing: they both create an array and push elements into it. buildArray1 creates an empty array and dynamically pushes the indices into it; buildArray2 pre-allocates the array and sets the right value at the right index. The result is the same. Let's look at their profiles. buildArray1 took around 40 milliseconds to run, while buildArray2 took around 7 milliseconds. That's a huge difference for functions that do the same thing. If we look deeper, we can see that buildArray1 shows lots of gray bars, which are garbage collection instances, while buildArray2 does not. This is how we use profiling to compare the performance of different implementations of the same function, so we can check whether a change really improved our application, or search for the right solution.
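Reconstructed from the description above, the two implementations might look like this (the exact names and array size are illustrative, not taken from the talk's slides):

```javascript
// buildArray1: starts empty and grows dynamically. V8 may repeatedly
// re-allocate and copy the backing store as the array grows, which
// produces garbage and triggers the GC pauses seen in the profile.
function buildArray1(size) {
  const arr = [];
  for (let i = 0; i < size; i++) {
    arr.push(i);
  }
  return arr;
}

// buildArray2: pre-allocates the array once and writes each index
// directly, so no incremental re-allocation is needed.
function buildArray2(size) {
  const arr = new Array(size);
  for (let i = 0; i < size; i++) {
    arr[i] = i;
  }
  return arr;
}
```

Both return the same array; only the allocation pattern differs, which is exactly what the profiler's garbage-collection bars reveal.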
2. Optimizing Functions and Profiling Performance
When optimizing your functions, it's important to ensure smooth running and prevent delays that can affect API calls and promises. Profiling tools like Chrome inspect page and dedicated dev tools for node can help identify and improve performance. Memory profiling can also uncover memory leaks and provide solutions. By clearing referencing arrays and monitoring allocations, you can prevent memory leaks and optimize your application's performance.
So while this function is running, no API calls are being handled, no promises are being resolved, and your application is just stuck, with everything else waiting. This is a good reason to optimize your functions and make sure everything runs smoothly.
Let's see an example of that. For this, we'll go to the IDE. Look at this function; it should be familiar: it pushes a million elements into an array, and we've added a setInterval that makes sure it's called once a second.
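A minimal sketch of the setup just described; the function name and file name are assumptions for illustration:

```javascript
// A deliberately heavy synchronous function. While it runs, the event
// loop is blocked: no API calls are handled and no promises resolve.
function buildLargeArray() {
  const arr = [];
  for (let i = 0; i < 1_000_000; i++) {
    arr.push(i);
  }
  return arr;
}

// Call it once per second so it shows up as a repeating spike
// in the CPU profile.
const timer = setInterval(buildLargeArray, 1000);
timer.unref(); // demo only: let the process exit if nothing else is pending
```

Running this with `node --inspect app.js` (any file name works) starts the debugger so the dedicated DevTools for Node can attach.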
So if we run node with the --inspect flag, it starts Node in debug mode, and we can go to the Chrome inspect page (chrome://inspect), open the dedicated DevTools for Node, and go to the Profiler tab. This time, we'll record for two or three seconds, stop, and here we go: we have our intervals, one per second, and if we drill into one of them, we can see the call to our function and exactly how long it took to run. This way, we can profile everything in our Node application. For instance, you can start an API call using Postman and track everything that happens from the moment the call reaches the server until it returns. You can see how long every function took to run, and if you see a long-running function, you might want to optimize it.
Another issue in runtime performance is memory, and we'll soon see how to profile memory and maybe even fix memory leaks. This time, our function pushes into an array that is created outside it, so on every interval we just grow this array, and the objects we pushed before are never garbage collected. So, let's see how this looks when we profile memory.
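A sketch of the leaking version, under the same assumptions as before; the variable names and the batch size of 100 objects per tick are illustrative, chosen to match the roughly 100 allocations per second the profiler reports:

```javascript
// The array lives OUTSIDE the callback, so every object pushed into it
// stays reachable across intervals and is never garbage collected.
const retained = [];

function leakyWork() {
  for (let i = 0; i < 100; i++) {
    retained.push({ id: i, payload: new Array(1000).fill(i) });
  }
}

// Every second the array grows by another 100 objects that the GC
// cannot reclaim -- a textbook memory leak.
const timer = setInterval(leakyWork, 1000);
timer.unref(); // demo only: don't keep the process alive for the timer
```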
So, I start again in debug mode. This time I go to the Memory tab and make sure Allocation instrumentation on timeline is selected. I start recording and see blue bars appear. A blue bar means memory that was allocated and has not been garbage collected. If I focus on one of them, I actually see the 100 objects we allocated, as expected: every second, 100 objects were allocated and not cleared. If I focus on one of these objects, I can see its index in the array, the name of the referencing array, and even the line in the code that allocated it. From that I can easily tell whether I want this object to be garbage collected or not. If I do want it to be garbage collected, then I have a leak.
Let's fix this leak quite easily by clearing the referencing array on every interval. I'll stop the server, restart it, and record again. Now, the blue bars become gray. Gray means that an allocation happened here, but it was garbage collected. Again, you can call your API and check whether the blue bar turns gray after the API finishes running; if it doesn't, you might have a memory leak in your API handler, for instance.
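The fix described above, sketched against the same hypothetical leaky function: clearing the referencing array at the start of each tick makes the previous batch unreachable, so the GC can reclaim it and the profiler's blue bars turn gray.

```javascript
const batch = [];

function fixedWork() {
  // Drop the references to last tick's objects; they become
  // unreachable and eligible for garbage collection.
  batch.length = 0;
  for (let i = 0; i < 100; i++) {
    batch.push({ id: i, payload: new Array(1000).fill(i) });
  }
}
```

Setting `batch.length = 0` empties the array in place, which matters here because the same array object stays referenced by the outer scope; reassigning a fresh array would work too, as long as no other code holds on to the old one.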
Usually, when I speak to developers about performance or help them solve performance issues, I see a lot of confusion about how to handle performance problems. I hope this talk helped you understand the powerful profiling tools you already have, and how to make your functions run faster and prevent memory leaks in your applications. I truly hope I piqued your interest to learn more about this subject, which I'm very passionate about, and that you come to enjoy it as much as I do.