Demystifying Memory Leaks in JavaScript


Today it is possible to write complex applications with only a handful of developers in a short time frame, relying upon frameworks and tooling. But what happens when the application crashes with out-of-memory errors? How is it possible to guarantee fast response times? These problems are still considered difficult to solve.

Let's dive into the Node.js internals and learn about profilers and the garbage collector. Understanding how the system works in depth lets you write code that is safer, faster and less error-prone.

Let's make sure you always provide the best experience for everyone: yourself and your customers. Let us find that memory leak and fix it.

Learnings

  • Participants understand Node.js memory handling and its shortcomings.
  • Participants know when to profile their application to identify memory leaks and slow code.
  • Participants are able to find and address most memory leaks.

33 min
24 Jun, 2021

Video Summary and Transcription

The Talk discusses demystifying memory leaks in JavaScript, covering topics such as memory allocation, typical memory leaks and issues, handling file descriptors and event listeners, tools and techniques for identifying memory leaks, fixing memory leaks and restarting applications, and Ruben's personal experience with memory leaks.


1. Demystifying Memory Leaks in JavaScript

Short description:

Hello, everyone. My name is Ruben Bridgewater. I'm a Node.js TSC member. I work as a principal software architect at MyConvolt, and I'm happy to be here today at the Node conference. Today I'm going to talk about demystifying memory leaks. Memory leaks are often considered something difficult and hard to solve. But is that actually so? And how can we ease the process? A memory leak is when a computer program incorrectly manages random access memory in a way that memory which is no longer needed is not released. And the blue line is clearly a memory leak, because over time you allocate just more and more memory without actually freeing it again. And this is bad. So how do we handle memory in JavaScript in particular? Because we do not have to worry about it, right? This is all done transparently, and this is perfect. And memory is just freed. Why should there be a memory leak in the first place?

Hello, everyone. My name is Ruben Bridgewater. I'm a Node.js TSC member. I work as a principal software architect at MyConvolt, and I'm happy to be here today at the Node conference.

And today I'm going to talk about demystifying memory leaks. So memory leaks are often considered something difficult and hard to solve. But is that actually so? And how can we ease the process?

To get to that, I would first like to answer the question of what a memory leak actually is, and I'm consulting Wikipedia to answer that question. So a memory leak is when a computer program incorrectly manages random access memory in a way that memory which is no longer needed is not released. As such, we are going to stack up more and more memory over time, and the program, in the worst case, might just crash because there is no more memory to allocate. And that's really the worst-case scenario. Or maybe you're in a cloud environment and you're going to have to pay much more money, because you have auto-scaling active and more and more memory is allocated there.

So here we have a graph which clearly shows what a memory leak looks like in comparison to normal memory usage over the time the program runs. And the yellow line is a perfect program, pretty much. It starts up and you allocate some memory, and then there are some ups and downs, and this is perfect for each program, pretty much. Sometimes spikes might be higher or go a little bit down again, but on average it's a flat line. And the blue line is clearly a memory leak, because over time you allocate just more and more memory without actually freeing it again. And this is bad.

So how do we handle memory in JavaScript in particular? Because we do not have to worry about it, right? This is all done transparently, and this is perfect. And memory is just freed. Why should there be a memory leak in the first place? So memory is divided into a stack memory and a heap memory. The heap memory is dynamically allocated memory, and the stack is managed by the operating system in the typical case. Each thread has some stack memory. It uses a last-in-first-out (LIFO) scheme. It's very simple, it's super fast, and the memory that is allocated on the stack is automatically reclaimed as soon as the function exits. When we compare that with the heap memory, there is a lot more going on, because the stack memory normally only contains small values and pointers for the function that is currently running. And when those pointers are followed, we get to the heap, which is the dynamically allocated memory. And here we have a nice overview of the V8 heap.

2. Memory Allocation and Garbage Collection

Short description:

There are a lot of things going on. JSObjects are allocated, we have JavaScript source code, optimized code, regular expression code, strings, and much, much more. The heap memory, in V8 in particular, is divided into three areas: the young generation, the intermediate generation, and the old generation. The garbage collector is responsible for automatically freeing the memory we allocate, but it can sometimes go haywire.

There are a lot of things going on. JSObjects are allocated, we have JavaScript source code, optimized code, regular expression code, strings, and much, much more. So this is the main memory for our program in this case.

And the heap memory, in V8 in particular, is divided into three areas. First, we have the young generation. As soon as we allocate a new variable, let's say we write let pool = 'test', then you are going to allocate the string 'test' and it's going to be put into the young generation. This memory is relatively small. Mostly, JavaScript will have intermediate variables that you use to compute the next value immediately, and you do not need the variable as soon as you have computed your next value. This is all done synchronously. So we want to dispose of all that memory that we do not need as soon as possible. So the young generation will only survive one so-called scavenge run. This is the first run where the garbage collector is going to try to get rid of those no-longer-used variables. And if any variable survives that run, it is then pushed into the intermediate generation. And if it survives the second run, then it's pushed into the old generation. This is the bigger part of the application. It's normally used for long-lived variables, so things that you reuse throughout the application, which might hold pointers to a lot of things. And we use a different algorithm to free the memory in this case. The algorithm used there is called Mark-Sweep. We start off from our root object. The root object in a browser would be the window object, and in Node.js it's global, while in modern JavaScript it would be globalThis in both. And we use something similar to a recursive algorithm where we start off with the root object and we just connect each dot. We try to reach each node that is somehow connected to the root object. All other nodes, all other variables or allocations, are going to be freed. And this should be mostly ideal to free all the memory that is not used.
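
The Mark-Sweep idea described above can be sketched as a toy graph traversal. This is a conceptual model only, not how V8 is actually implemented:

```javascript
// Toy model of Mark-Sweep: mark everything reachable from the root,
// then sweep (collect) every node that was never marked.
// Conceptual sketch only; V8's real collector is far more involved.
function markSweep(root, allNodes) {
  const marked = new Set();
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (marked.has(node)) continue;
    marked.add(node); // "connect the dot"
    for (const ref of node.refs) stack.push(ref);
  }
  // Sweep phase: anything unmarked is unreachable and can be freed.
  return allNodes.filter((n) => !marked.has(n));
}
```

Note that cycles are handled naturally: each node is marked at most once, so reference cycles that would defeat naive reference counting are no problem for Mark-Sweep.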

So, I already spoke about the fact that we as developers do not have to worry about the memory that we allocate, because it's going to be automatically freed. What is done in the background is that a so-called garbage collector is run, and that garbage collector has a lot to do, and at some point it might actually even go haywire and not work as we anticipated it would. So we have to look into that a little bit closer.

3. Typical Memory Leaks and Issues

Short description:

A wrong data type and running too many things in parallel can cause memory issues. Event listeners and closures can also contribute to memory leaks. File descriptors, when not closed properly, can cause problems as well. Understanding JavaScript implementation details is crucial.

What are typical memory leaks or issues? These are in no specific order, and not all are actual memory leaks, for example the first two. So, a wrong data type is still something that we have to look into, because this is a very typical thing I frequently encountered. Often programs use data types that are not efficient for the task that you want to solve. And then, when you handle big data like this, you're going to allocate a huge amount of memory, even though you would not need that if you just used a different data type.
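
As one illustration of what the right data type can buy you (a sketch, not an example from the talk's slides): for large numeric datasets, a typed array stores the raw numbers in one flat buffer, whereas a regular array of boxed values carries far more per-element overhead.

```javascript
// Sketch: storing one million floating-point numbers.
const n = 1_000_000;

// A regular array holds tagged values; non-integer numbers end up as
// boxed heap objects, which costs extra memory and extra GC work.
const regular = new Array(n).fill(0.5);

// A Float64Array is a single contiguous buffer: exactly 8 bytes per
// element, with nothing for the garbage collector to trace inside it.
const compact = new Float64Array(n);
compact.fill(0.5);
```

The same reasoning applies to other choices, for example a Set instead of an array for repeated membership checks.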

So using the correct data type can also help you to prevent your program from crashing in case you encounter bigger data one day. And the second one is that we also try to run a lot of things in parallel. All of us probably love promises and async/await, and we use Promise.all to improve the performance of the application, because normally, with Node, things all run on a single thread and we still want to do multiple things at a time, for example multiple remote procedure calls. But when you just use too many of those at the same time, we also have to allocate memory for all of these calls. And when they are not yet done, they might also just cause an exception in your program because there's no more memory left. So this is also important to look into.
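
A common remedy is to cap how many of those calls are in flight at once instead of firing them all with a single Promise.all. The helper below is a hypothetical sketch of the idea (libraries such as p-limit do the same thing):

```javascript
// Sketch: process `items` with at most `limit` promises in flight.
// `mapWithConcurrency` is an illustrative helper, not a built-in API.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    // Each worker pulls the next unclaimed index when it finishes one.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i], i);
    }
  }
  // Start at most `limit` workers; memory for pending calls stays bounded.
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Because JavaScript is single-threaded, the `next++` bookkeeping needs no locking; the await points are the only places control can switch between workers.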

Now let's go to the first actual memory leak: event listeners. So, something a lot of people know: in browsers, a while ago, it was typical to just attach event listeners immediately to each specific DOM element in a list that you would care about an event to trigger from. And then we sometimes had to just add lots and lots of event listeners to all of these elements. Instead, you want to have a single event listener at a top node that would then listen to all the events coming from all the nodes below. And then you don't have to attach so many listeners, because event listeners also add up in memory. And sometimes we just add a new event listener for each incoming event. This would be a programming error, but it's frequently occurring.

And even more difficult are closures, because sometimes closures prevent the garbage collector from knowing when a variable is not used anymore and when we are able to dispose of that memory. So this is a tricky one. Then file descriptors can also cause issues, because when you open up a file, there's a limited number of files you can open at the same time. And when we just run our program, it normally works, even though we do not close the file after opening it; that can just cause issues as well. And so we always have to make sure to close it on both success and non-success. So you should always have something like a finally block, so that no matter whether working with that file succeeds or not, you still close it. Because sometimes there might be an error somewhere else in between while the file is still open, and you still have to close it. The most tricky part is that in JavaScript, when we look at Chrome or any Chromium browser, and Node.js, which also runs on V8, we have to know some implementation details.

4. Reading Data in Chunks

Short description:

Reading too much data in a single chunk can be inefficient and lead to memory allocation problems. Instead, streaming data in small chunks is faster and eliminates the need for excessive memory allocation.

I'm going to go into some of those in a bit. Here's an example of just how bad it can be if we read too much data in a single chunk. So we have an HTTP server and we read a big file that the user requested. This is going to take a while, it's not going to be efficient, and it's going to allocate a lot of memory, because we first have to read all that data into the application, and only as soon as it's fully loaded do we start sending that data out to the user, not before. And when there is not a single user but a lot of users requesting the same route at the same time, we might also have a memory problem, because then the total available memory is just not going to be sufficient anymore and the application is going to crash. Instead, we should just care about streaming the data in small chunks. It's going to be much faster, and we don't have to worry about the memory allocation anymore. We just get the first part of the file that we want to read, and no matter if it's multiple gigabytes in size, we are still able to send it out immediately in, let's say, a few kilobytes. And as soon as it's done, it's going to be done. The memory line is going to be very flat. We don't have to allocate more than the small chunks that we want to read.

5. Handling File Descriptors and Event Listeners

Short description:

And with the file descriptors, we also cared about handling the error. We open up the file, we want to write something into it, but we never close that file again. The event listeners could look something like that. Let's imagine you have a database that is event based and you want to write something into it. Strings are, like I mentioned, we have this very long string, a million characters long, and we just want to care about the very first 20 characters. It's still difficult to prevent, but we should care about it. Otherwise the application might just crash and this is the worst situation. And if it happens, well, we have to deal with that situation and dealing with those memory leaks can be very troublesome. We have to know how to identify the spots that cause that memory leak. And this is the most important part for me. So this is where we are going to focus upon now, how to detect and fix those memory leaks.

And with the file descriptors, we also cared about handling the error. We open up the file, we want to write something into it, but we never close that file again. And if we do that too many times, the application would also have a problem. So we should always make sure to close that file descriptor, and that's done in the finally block, as I spoke about before.

The event listeners could look something like that. Let's imagine you have a database that is event-based and you want to write something into it. And we have an event listener on 'data'. So there is an HTTP server, and it wants to connect to a database and write something into it. So we write into it, and then there is going to be a response at the 'data' event. But in this case, we made a mistake, because on each user request we add a new event listener to that database instead of just a single one upfront. And then we have a lot of event listeners in the end that are all very tiny, but it's going to end up as the blue line that you saw earlier. And this is the worst one, pretty much. It's something very frequently reported, both on the Node.js issue tracker and on the V8 one, because this is a V8-specific part that you just have to know about.

Strings: like I mentioned, we have this very long string, a million characters long, and we just care about the very first 20 characters. So we slice out the 20, and normally you would imagine the rest is just freed, because we don't use it anymore. Instead, we just keep a reference, because internally in V8 there's a lot of optimization going on, and it tries to be very fast and memory-efficient at the same time. There is a heuristic being used that might not always work as anticipated. Let's imagine we have the string 'test' and the string 'abc' and you want to concatenate those. Instead of allocating a new memory chunk that would consist of 'testabc', it would just allocate a new, very tiny thing that points to the 'test' string and to the 'abc' string, and it would say: I'm the combination of both of these. It's like a tree structure, you could imagine. When we want to slice something out of it again, it's also just going to point to the original string and say: hey, I have this starting point and this end point of that string. But it's not going to free the rest of the string, even if it's not used anymore. And that could end up in a bad situation where we litter our program over and over, and as we all know, we should not litter our environment. It's still difficult to prevent, but we should care about it. Otherwise the application might just crash, and this is the worst situation. And if it happens, well, we have to deal with that situation, and dealing with those memory leaks can be very troublesome. We might even get some anger issues, and you spend a lot of time looking into something without really knowing what to look for. So we have to know how to identify the spots that cause that memory leak. And this is the most important part for me. So this is where we are going to focus now: how to detect and fix those memory leaks.
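
A hedged sketch of the sliced-string behaviour. This relies on V8 implementation details, and the copy trick at the end is a heuristic workaround, not a guaranteed API:

```javascript
// Sketch: V8 may represent slice() results as a "sliced string" that
// keeps the whole parent string alive. This is an implementation
// detail of the engine, not something the language spec guarantees.
const big = 'x'.repeat(1_000_000);
const first20 = big.slice(0, 20); // may still retain `big` internally

// Workaround: rebuild the string so the engine allocates a fresh,
// independent copy that no longer references the million-char parent.
const detached = first20.split('').join('');
```

Whether the copy is actually needed depends on the engine and version, so the safest habit is simply not to hold long-lived slices of huge strings in the first place.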

6. Tools and Techniques for Identifying Memory Leaks

Short description:

There are tools and flags to identify memory leaks in Node.js, such as --inspect and --trace-gc. The --inspect flag allows you to connect to the Chrome inspector or other tools. --trace-gc shows what the garbage collector does at different times. The --abort-on-uncaught-exception flag causes a heap dump when the application crashes. Tools like llnode can also be used. To learn more, visit the nodejs.org guide 'Debugging - Getting Started'. I'll now show you some coding examples, including a program that confuses the garbage collector. We can use the --inspect flag to see the remote target and explore heap snapshots.

There are a lot of tools and also flags to identify memory leaks in Node.js. The first one is --inspect, which you can just pass to the Node.js runtime. Then you're able to connect, for example, to the Chrome inspector, but also to other tools. And we also have a flag called --trace-gc. With --trace-gc it is possible to see what the garbage collector does at different times. It's also interesting to know about.

Sometimes it is very difficult to really identify the bug. And then you might want to use --abort-on-uncaught-exception, because this is automatically going to produce a so-called heap dump. A heap dump is the state of the memory at that current point in time. So when the application crashes due to that exception, we can then look at what the application's memory looked like in that state. And there are more tools, like llnode, to look into those. If you care about more tooling, just check out the 'Debugging - Getting Started' guide on nodejs.org, and you will find a lot of information there.

And I want to show you some actual code. For example, for the string one. In this case, I have a small program and I'm using the v8 module from Node.js. We have a variable count, which we just set to zero, and we have a variable pool. We have the function run, and this is going to do some things that we are going to look into in a second. We have this interval: it runs every millisecond. The run function is triggered every millisecond, and then we create heap snapshots every two seconds, so we know what the state of the application is at each point in time. And here we have this inner function, and it is going to reference the original one, while original is again going to reference pool, and pool is updated on each run. So here we just confuse the garbage collector due to the way these variables are all connected with each other. And let's see what would happen in this case. This would be this one. So I want to check that program out. So now I just open chrome://inspect, and due to the --inspect flag, I'm able to see the remote target. So I'm going to inspect this. And here you see this overview: select profiling type, heap snapshot, allocation sampling, etc. And I want to look into some of these heap snapshots.

7. Analyzing Memory Allocation

Short description:

So I'm going to load one. The program creates new ones over time. We can compare one of these heap snapshots with another and see where all the memory went. For example, there's a big difference in the size delta of 2.4 megabytes. We see there are a lot of strings. We now know that it has something to do with func. We definitely know we allocated way too many of these strings, because you can see them all. They're very big and not freed. You have to look into that. This is also possible in a browser. You can start it.

So I'm going to load one. And as you can see, the program creates new ones over time. So let's open up the first one, and let's open up the last one. And I'm also going to stop the program. Now we have these two. Here you can look into one, and there is a lot of data going on, and it's not really clear what to look for. But we are able to compare them by just clicking on comparison. And now we can compare one of these heap snapshots with the other one, and then it's way easier to see where all your memory went. For example, here we can see there's a big difference in the size delta of 2.4 megabytes. So this is clearly a point that you want to look into. And when we look into it, we see there are a lot of strings. These strings, again, when we look at them: first there's this variable, this string. And the string is in a specific object at that memory point. And this is in original, and in a system context. Okay, the context is in func. Ah, this is something we know about in our program. So we know func was here. This is our func. And then you can just identify the actual cause. We now know that it has something to do with func. Even though you don't see it immediately, we definitely know we allocated way too many of these strings, because you can see them all in here. And they're very big, and they are not freed. So you do know you have to look into that.

My time is running short, so I have to hurry up a little bit. But I want to say this is also possible to use in a browser. Because this tooling is just available there as well. And for example, you can start it.

8. Fixing Memory Leaks and Tools

Short description:

Click on Inspect, go to memory, and view the overview. Another program, StringCrash, exemplifies the problem. Steps to fix the memory leak: monitor memory, profile with tools like Inspect, and take snapshots. Be cautious as snapshots are expensive. Compare the snapshots to identify the leak and fix it. Numerous tools are available to help, but remember that memory is expensive and can slow down your system.

Click on it. And by clicking on Inspect, and then going to Memory, here is the overview of that as well. I want to show you a little bit more of another program real quick. For example, this is StringCrash. So here I have that problem that I had spoken about with the V8 strings. We can look into the code. This is just doing exactly what I've spoken about earlier. And the program crashes very fast because it cannot allocate more memory.

All right. So, steps to fix a memory leak. First of all, please monitor your memory. Use, for example, an APM or something like that, and then you know what your memory profile looks like. Profile it with tools like the inspector, and take heap snapshots. But be aware, they are very expensive. You do not want to do that in production in most cases. Always try to do it in a controlled environment, for example while testing. If you have to do it in a production setting, then you might do it with a very strict setting, in a timeout, only in a specific container or something like that, to only do it very infrequently. And then you can compare such heap snapshots. That's one of the most efficient ways to identify the leak, from my perspective. And then you can just identify it and fix the problem as soon as you know what it is all about.

There are lots of tools that may help you. You can check them out online on the website that I have spoken about. Memory is expensive, not only cost-wise, when you have to pay dollars or whatever currency for it, but also performance-wise, because when you allocate more and more memory, your system is going to slow down. So you really want to prevent that. Heap snapshots are expensive for a similar reason: the application is going to be frozen at that point in time. It can't do anything else while gathering up all the memory of the application.

9. Fixing Memory Leaks and Restarting Applications

Short description:

Carefully choose your data type and your data structure. Try to fix the actual issue instead. One of my most productive days was throwing away 1,000 lines of code. The 71% poll result reflects how commonly people encounter memory leaks. Restarting the application is not a durable solution.

Carefully choose your data type and your data structure, and only restart your application at the last resort. Try to fix the actual issue instead.

So thank you very much for being here. I want to end with one of my favorite quotes from Ken Thompson. One of my most productive days was throwing away 1,000 lines of code, and I can very well relate to that. Have a great day.

Hello. Hey, Ruben. Good to see you here on stage with me. Honored to have you, of course. So what do you think? 71%, is this the percentage that you were expecting? Pretty much. It's really a common thing that people encounter. And it's something that people often just try to work around instead of really digging into the problem to fix it. Like, I've seen applications being restarted once a day, at least, because they had a memory leak that they were not able to find, or they tried to just reduce memory usage in general, or increase the memory they were able to use, and things like that, before they had to restart the application. That's something, obviously, that's not nice. You have to pay a lot of money for it. So fixing those is really important. Yeah, it really is. And yeah, well, rebooting is not really a durable solution, of course.

10. Handling String Slice Example

Short description:

To avoid keeping the original string in memory when using the string slice example, you need to understand the internals of V8. Certain operations, like flattening strings, can allocate a new memory chunk and free the original memory. The flatstr library on npm can help with this by internally turning the string into a different, flat representation. It's important to be cautious when allocating strings to prevent memory issues.

I want to go to the first question from one of our audience members. It's a question by I Am. The question is: how would you best handle the string slice example to avoid the original string being kept in memory? So in this case, you have to know about the internals of V8, pretty much, and understand how these things work. And there are some operations that flatten the strings. They would indeed allocate a completely new memory chunk for the part that is required for your use case. And then there is no hard reference anymore to the original memory that you allocated, and it can be freed. There is a library out there that is actually doing that as a very simple operation. Originally, it was just converting the string to a number in between; then it would be internally cast, pretty much, and V8 creates a new string internally. It's just using a trick to make that string internally a different data type. And it's called flatstr, written as F-L-A-T-S-T-R. And you can have a look at it on npm. Yes, exactly. You can have a look at it there and just check out some benchmarks. But mostly, you should just be careful about how you allocate strings to prevent it in the first place. Try to avoid it, of course, instead of patching it later on.

QnA

Identifying Memory Leaks and Tools

Short description:

There are specific tools for identifying memory leaks, which can be found on the Node.js website. When identifying a memory leak, it is important to monitor your resources and use tools like an APM to detect leaks. To inspect and identify memory leaks in AWS Lambda or Google Firebase functions, you need to run the code in an environment where you have full control and can create heap dumps. It is recommended to do this in a staging environment. Automated testing for memory leaks is not common, but monitoring can help identify when memory usage exceeds expected limits. Git bisect can be used to track changes that may have caused memory leaks.

Next question is from CreoZot. Are there any specific tools for identifying memory leaks you can advise? So I did point out the web page that you might go to, and there are a lot of different tools that you can use. It's on the Node.js website, and I can highly recommend you have a look at that.

All right. The next question is from Alexey. How to inspect and identify memory leaks in AWS Lambda or Google Firebase functions? It should be similar, anyway. Well, first of all, you have to identify that you have a leak. That's normally done with monitoring. You normally monitor your resources; that's the first part. You use, for example, an APM for that. And as soon as you identify that there is indeed a leak, you want to run the code in an environment that you have full control over and that you are able to introspect, and in which you are able to, for example, create heap dumps. So that would be one way to do that. I personally prefer heap dumps, and you should obviously try to not do it in production. Always try to do it in a staging environment, if possible.

Next question is from André Calazans. Have you seen or done automated testing for memory leaks? And do you know if there's any way to differentiate live leaks from real memory leaks? So I have not seen actual testing for memory leaks, but if you monitor it, then you should normally get a notification as soon as memory anywhere in your application is getting out of hand, because you would normally set a limit for the memory that you expect to use. And then, as soon as it reaches that point, you should have a closer look, and then maybe go back to your Git history and see what changed in there. It's a manual process from there on, but at least you have a timestamp: it went wrong after that deployment. So in this case, you could easily use git bisect. And git bisect is a very powerful feature. So write a test that would trigger the memory leak.

Identifying and Fixing Memory Leaks

Short description:

Write a test to trigger the memory leak and identify the code causing it. Use the bisect method to find the commit that introduced the leak.

So write a test that would trigger the memory leak. Then you at least know that something causes it, even if you did not yet identify it. And then you want to identify the code. So you know, okay, when I run this code, then at some point, for example, the application is going to crash after a couple of seconds or so. And then you know the leak is there. Then you bisect it, and that's a logarithmic search where you just step through the history from both sides. And if it does not crash, then you know, okay, this code does not leak. And so you can bisect it until you find the very commit that introduced the leak. Smart.

Large JSON Objects and Memory Limits

Short description:

Can large JSON objects kept in memory cause memory leaks? There is a hard limit for each data type. For strings, it's roughly 2 to the power of 28 characters. Exceeding this limit will result in an error. However, I'm not aware of any limit for JSON objects. Luckily, I don't typically reach those limits.

Next question is from Alexei again. Can large JSON objects be kept in memory? Sorry. Can large JSON objects kept in memory cause memory leaks? A JSON structure is not particularly more prone to causing memory leaks than any other data structure. So yeah. But I think Alexei means: is it a problem if a JSON object becomes too big? In general, there is a hard limit for each data type. Very much so. For strings, for example, if I remember correctly, it's roughly 2 to the power of 28 characters that a string may hold. As soon as you exceed that limit, there will be an error. You would probably hit that case if you keep the JSON as a string. If you have it as an object, I don't know of any limit. There might be one, and there is definitely one for some data types. But luckily, I do not normally reach those limits.
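The string limit is easy to observe. The exact maximum length is an engine implementation detail (on the order of 2^28 characters on older or 32-bit V8 builds, larger on current 64-bit builds), but exceeding it throws a `RangeError` rather than silently consuming memory, so a huge serialized JSON string fails loudly:

```javascript
// Exceeding the engine's maximum string length throws a RangeError.
// 2**31 characters is beyond the limit on any current V8 build.
let failed = false;
let message = '';
try {
  'a'.repeat(2 ** 31);
} catch (err) {
  failed = err instanceof RangeError;
  message = err.message; // typically along the lines of "Invalid string length"
}
console.log('exceeded the engine limit:', failed, message);
```

In practice this means `JSON.stringify` on a sufficiently large object can itself throw, since the resulting string would pass the limit.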

Ruben's Experience with Memory Leaks

Short description:

Ruben shares his experience with memory leaks, recalling a time when he had to restart an application once a day due to a memory leak. He mentions that he was unable to debug the application at the time. Ruben also mentions that he has been fortunate to not encounter any personal memory leaks in recent years.

Lucky you. Last question that we have time for, and it's a question from George Turley. Since we're talking about memory leaks, what was the worst memory leak you encountered, and if you were able to find it, how did you fix it? So the worst one I personally encountered is already a while back. It's probably the one I spoke about earlier, where we restarted the application once a day. I was using the application, I was developing it, and I did not have the possibility to debug it. But it was definitely the one that bugged me most. I'm not certain anymore about a personal one that I fixed. I was lucky in recent years and did not personally encounter any of those. Let's keep it like that, then, Ruben.

