Node.js startup snapshots

V8 provides the ability to capture a snapshot out of an initialized heap and rehydrate a heap from the snapshot instead of initializing it from scratch. One of the most important use cases of this feature is to improve the startup performance of an application built on top of V8. In this talk we are going to take a look at the integration of the V8 startup snapshots in Node.js, how the snapshots have been used to speed up the startup of Node.js core, and how user-land startup snapshots can be used to speed up the startup of user applications.

28 min
14 Apr, 2023

Video Summary and Transcription

The Talk discusses the Startup Snapshot initiative in Node, which aims to improve startup performance by adding new features and optimizing initialization costs. Startup snapshots, serialized binary blobs, are used to speed up startup and can be generated for both the core and user applications. Custom snapshots allow deserializing a heap from a specified snapshot, skipping parsing and compilation. The Talk also addresses misconceptions and limitations of startup snapshots, and highlights the different use cases for heap snapshots and startup snapshots.

1. Introduction to Startup Snapshot in Node

Short description:

I'm Joy, working on the startup performance strategic initiative in Node. The initiative has been renamed to Startup Snapshot. Node has been adding new features, requiring additional setup during startup. From LTS 18 to the upcoming 20, support for fetch, Web Crypto, the File API, Blob, Web Streams, and APIs under util has been added. Node core is half in JavaScript and half in C++.

As mentioned, I'm Joy. I work at Igalia, and I work on Node and V8. I've been working on the startup performance strategic initiative in Node for a while. The initiative has recently been renamed to Startup Snapshot, as we have done the integration within Node core and we are enabling this feature for userland applications, which is what I'm going to talk about today.

So let's get started with a bit of history. The Startup Snapshot integration started as Node gradually dropped the old small-core philosophy and added a lot more built-in features. This includes new globals, in particular new web APIs, new built-in modules, and new APIs in existing modules. These new features either require additional setup during the startup of Node core or require additional internal modules to be loaded during startup.

So to give you an overview, from the last LTS version 18 to the upcoming 20, we've added support for fetch, Web Crypto, the File API, Blob, a bunch of Web Streams, and a bunch of new APIs under util, such as the argument parser and the MIME type parser. The list is longer than that, but you get the idea: Node is growing a lot. Another part of this challenge is that Node core is written about half in JavaScript and half in C++. So a lot of those internals are actually implemented in JavaScript.

2. Startup Performance and Initialization

Short description:

Startup performance is harder to maintain because the JavaScript code needs to be parsed and compiled before execution. To mitigate potential prototype pollution, JavaScript built-ins are copied for internal use. Node core uses multiple strategies to control startup initialization costs, including lazy loading, precompiling internal modules, and using V8 startup snapshots. The snapshots are serialized binary blobs capturing the V8 heap and execution contexts. They are used for isolates and contexts in Node.

The upside of this is that it lowers the contribution barrier, and in some cases it reduces the C++ to JavaScript callback costs. But at the same time, this makes it harder to keep the startup performant. For one, the JavaScript code needs to be parsed and compiled before it can be executed, and that takes time. Also, most of the JavaScript code for initialization only gets run once during startup, because it's just initialization, so it doesn't get optimized by the JavaScript engine.

When implementing a library in JavaScript, we have to take potential prototype pollution into account. You don't want the user to blow up the runtime just because they delete something from a built-in prototype, like String.prototype.startsWith. So to mitigate this, Node needs to create copies of these JavaScript built-ins at startup for the internals to use, so the internals don't actually use the prototype methods that we expose to users. All of this can slow down the startup as Node grows.
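Here is a minimal sketch of that hazard and of the copying idea, roughly in the spirit of Node's internal primordials; it is illustrative only and not Node's actual internal code:

```js
// Capture a reference to the built-in before any user code runs.
const StringPrototypeStartsWith = Function.prototype.call.bind(
  String.prototype.startsWith
);

// User code can mutate the built-in prototype at any time...
delete String.prototype.startsWith;

// ...so calling the method directly would now throw:
// 'foo'.startsWith('f'); // TypeError: "foo".startsWith is not a function

// ...but the copy captured above keeps working:
console.log(StringPrototypeStartsWith('foo', 'f')); // true
```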

So to keep the cost of the startup initialization under control, Node core uses multiple strategies. First, we do not initialize all the globals and built-ins at startup. For features that are still experimental, too new to be used widely, or that only serve a specific type of application, we only install accessors that load them lazily when the user accesses them for the first time. Second, when building releases, we precompile all the internal modules to generate the code cache, which contains bytecode and metadata, and we embed it into the executable, so that when we do have to load additional modules at the user's request, we pass the code cache to V8; V8 can then skip the parsing and compilation and just use the serialized code once it validates that cache. And finally, for essential features that we almost always have to load, for example the URL API and the fs module, which are also used by other internals, or widely used features like timers, we capture them in a V8 startup snapshot, which lets us simply skip the execution of the initialization code and save time during startup.
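As a rough illustration of the lazy-loading strategy, and not Node's actual internal implementation, a lazily installed global could look something like this (the property name and module path are hypothetical):

```js
// Install an accessor so the feature is only loaded on first access.
function defineLazyGlobal(name, loadFeature) {
  let cached;
  Object.defineProperty(globalThis, name, {
    configurable: true,
    enumerable: false,
    get() {
      if (cached === undefined) {
        cached = loadFeature(); // pay the initialization cost only once, on demand
      }
      return cached;
    },
  });
}

// Hypothetical usage: nothing is loaded at startup.
defineLazyGlobal('myExperimentalAPI', () => require('./heavy-experimental-api'));
```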

So this is kind of how the Node executable used to be built and run. Initially, we were just embedding the JavaScript code into the executable at build time. At run time, we need to parse it, compile it, and execute it to get Node core initialized before we can run user code and process system states to initialize the user app. Then we introduced the embedded code cache. At build time, we precompile all the internal JavaScript code to generate the compiled code cache, and we embed that into the executable. At run time, we ask V8 to use the code cache and skip the parsing and compilation. We still keep the internal JavaScript code as the source of truth, in case the code cache doesn't validate in the current execution environment, but most of the time the code cache is used and we just skip the compilation process. And now, with the startup snapshot integration, we just run the internal JavaScript code at build time to initialize a Node heap, then capture a snapshot and embed that into the executable. The other two are still kept as fallbacks, but at runtime we simply deserialize the snapshot to get the initialized heap. There is no need to even parse, compile, or execute the internal code; you just deserialize the result. So what exactly are these V8 startup snapshots? They're basically the V8 heap serialized into a binary blob. There are two layers of snapshots: one that captures all the primitives and the native bindings, and one that captures the execution contexts, like the objects and functions. Currently, Node uses the isolate snapshot for all the isolates that you can create from userland, including the main isolate and the worker isolates. We also have built-in context snapshots for the main context, the VM context, and the worker context, although the worker context snapshot currently only contains very minimal stuff.

3. Startup Snapshot and Userland Snapshot Generation

Short description:

With the default snapshot, the startup is generally twice as fast compared to launching without a snapshot. This gives us more sustainability as we grow Node core while keeping the startup under control. Users can now create snapshots of their own applications, which is useful for applications where startup performance matters. The general workflow for building a snapshot is similar to building the core snapshot. Currently, the userland snapshot only takes one file as input. There are two ways to generate the userland snapshot: building Node from source with the --node-snapshot-main configure option, or using the --build-snapshot runtime option of the official node executable.

And we're still working on including more stuff there. So with the default snapshot, the startup is generally about twice as fast compared to launching without a snapshot. For example, on this MacBook, it goes from about 40 milliseconds to 20 milliseconds to start up Node core itself. On the left is Node core starting up without the snapshot, and on the right is Node core starting up with the snapshot. You can see, even just on the flame graph, there's less to be done; it's obviously much simpler, and it runs faster. This also gives us more sustainability as we grow Node core while keeping the startup under control. We are still tweaking the internal snapshot to make sure that the built-in one contains just the right amount of essential features. But at the same time, the feature is now also available to users, so people can create snapshots of their own applications. This can be useful for applications where startup performance matters, for example command-line tools. In particular, if the application needs to run a lot of code during startup, or needs to load a lot of system-independent data, these operations can be done when building the snapshot instead of at runtime. The general workflow is similar to the workflow for building the core snapshot. Node can take a user-provided script that does some essential initialization for the user application and run that script to completion. After all the asynchronous operations are finished, for example all the promises are resolved, Node can take a snapshot of the heap and write it somewhere, either into one binary along with the Node executable itself or as a separate blob on disk. When starting up again, Node can get the pre-built snapshot and deserialize a user heap from it to skip the setup. Currently, the userland snapshot only takes one file as input, so you'll have to bundle the setup code into one file, but we are also looking into module support in the snapshot-building script. So yeah, that's also coming. Currently, there are two ways to generate the userland snapshot. The first is the tougher one: building Node from source with the --node-snapshot-main configure option, which tells the toolchain to generate a snapshot using the provided user script and replace the default snapshot with the custom snapshot, so the final node executable contains the user snapshot. For example, we have a file here that contains something like globalThis.data = with some string assigned to it. You could put many other, more complicated things there, but this is just an example. Then you go to the Node source directory and build it with that configure option, and the final executable produced by the compilation process already has a snapshot containing this thing that you put on globalThis. The other option, which does not require building Node from source, is using the --build-snapshot runtime option of the official node executable. That might come in handy if you just don't want to build Node from source, which can take a lot of time.
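For concreteness, here is a minimal sketch of what such a single-file setup script and the from-source build could look like; the file name is illustrative, and the flag spelling follows the talk, so double-check it against your Node version's documentation:

```js
// snapshot-entry.js — everything this script leaves on the heap
// ends up in the snapshot.
globalThis.data = 'precomputed while building the snapshot';

// Option 1 (build Node from source with a custom embedded snapshot):
//   ./configure --node-snapshot-main=snapshot-entry.js && make -j4
// The resulting node binary already contains globalThis.data.
```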

4. Custom Snapshots and Runtime Synchronization

Short description:

By default, a snapshot.blob is generated in the current working directory. You can specify the input/output path using the --snapshot-blob option. Launching Node with a custom snapshot allows you to deserialize a heap from a specified snapshot, skipping parsing, compilation, and execution. A work in progress is the new single executable application feature, which allows generating and adding a snapshot to a single executable without compiling Node from source. Node offers JavaScript APIs to synchronize runtime states in the snapshot script, refreshing states like process.env and process.argv. Users can use snapshot synchronization APIs to reset and synchronize states during serialization.

So by default, this generates a blob called snapshot.blob in the current working directory using the given script, but you can also specify the input/output path with the --snapshot-blob option. And when you launch Node using a custom snapshot, you can again use that --snapshot-blob option to tell Node to deserialize a heap from the specified custom snapshot instead of setting up a default Node core heap. That will help you skip the parsing, compilation, and execution of your own code and help you run faster.
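Put together, the runtime workflow described above looks roughly like this; the file names are illustrative, and the flags are as described in the talk:

```js
// Step 1 — build a snapshot blob from a single setup script:
//   node --snapshot-blob snapshot.blob --build-snapshot snapshot-entry.js
//
// Step 2 — launch Node from the custom snapshot instead of the default heap:
//   node --snapshot-blob snapshot.blob app.js

// app.js can then rely on whatever snapshot-entry.js left on the heap,
// without parsing, compiling, or executing that setup code again:
console.log(globalThis.data); // 'precomputed while building the snapshot'
```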

And now there is another option that's a work in progress, which is the new single executable application feature, and the snapshot can be layered on top of that. This means it will be possible to generate a snapshot and add it into a single executable with Node itself, without having to compile Node from source. A quick preview of the current design: the user creates a JSON configuration, where you specify the main script for the snapshot and the path where you want the output to be written. Then you use the official node executable to take this JSON configuration and generate the blob. Then you copy the executable to your destination path, because you're going to inject that blob into it, and you use, for example, the postject command-line tool, which is maintained officially by Node, to inject the blob into the binary; that is the single executable application. After you inject the blob into that binary, it contains a snapshot, and Node will just know: I have a snapshot embedded into this binary; when I am launched, I will just initialize from that. With this, you don't have to compile Node from source to use an embedded snapshot. That's still a work in progress, but it's coming; it will probably land in 20. We're also thinking about, instead of doing all this, providing some kind of single-line utility that just takes a JSON configuration and generates a single executable that you can run, without doing all these steps.
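Since the speaker notes this design was still in flux, the following is only a rough sketch of that flow; the configuration field names and commands are assumptions based on the description above and may differ from what eventually shipped:

```js
// sea-config.json (hypothetical field names):
//   {
//     "main": "snapshot-entry.js",
//     "output": "sea-prep.blob",
//     "useSnapshot": true
//   }
//
// 1. Ask the official node executable to generate the blob from the config:
//      node --experimental-sea-config sea-config.json
// 2. Copy the node executable to the destination path:
//      cp "$(command -v node)" myapp
// 3. Inject the blob into the copied binary with the Node-maintained
//    postject tool (additional postject arguments may be required):
//      npx postject myapp NODE_SEA_BLOB sea-prep.blob
//
// When myapp is launched, Node detects the embedded snapshot blob and
// deserializes it instead of running the setup code again.
```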

And to help users create custom snapshots, Node also offers several JavaScript APIs to synchronize the runtime states in the snapshot script. By default, after deserializing a snapshot blob, Node will refresh runtime states like process.env and process.argv. If the user code precomputes something from these states, or caches these states before the snapshot gets serialized, then those should be refreshed during deserialization. For example, say the user code computes a debug level from a debug-level environment variable in the snapshot script, which they might already be doing when building the snapshot. Then at runtime they can synchronize this with the snapshot synchronization APIs that we provide through the v8.startupSnapshot namespace. In the startup snapshot script, you can add a few callbacks that will be called: first, a serialize callback that is called during the serialization process to reset that debug level, and then a deserialize callback that recomputes the debug level from the environment variable configured at runtime. That will help you synchronize these states back. Or you can just defer the computation until deserialization if you are building a snapshot.
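A minimal sketch of that pattern, using the v8.startupSnapshot namespace (an experimental API; the DEBUG_LEVEL variable name here is just an illustration):

```js
const { startupSnapshot } = require('node:v8');

// State derived from the environment, possibly cached at snapshot-build time.
let debugLevel = Number(process.env.DEBUG_LEVEL ?? 0);

if (startupSnapshot.isBuildingSnapshot()) {
  // Reset the cached state while the snapshot is being serialized...
  startupSnapshot.addSerializeCallback(() => {
    debugLevel = 0;
  });
  // ...and recompute it from the actual environment after deserialization.
  startupSnapshot.addDeserializeCallback(() => {
    debugLevel = Number(process.env.DEBUG_LEVEL ?? 0);
  });
}
```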

5. Design and Misconceptions of Startup Snapshots

Short description:

There is a getter called isBuildingSnapshot that you can use from the API to determine whether the code is being run to build a snapshot. Another useful API is setDeserializeMainFunction, which allows us to specify a main function in the snapshot without passing another main script. By including a main function in the snapshot, we no longer need additional input and it's faster. We have integrated startup snapshots into Node core to speed up the startup of the core. There is experimental support for userland snapshots with JavaScript APIs in the v8.startupSnapshot namespace. We are also working on support for single executable applications and more features in the snapshot. Thank you to all the contributors and supporting organizations.

There is also a getter called isBuildingSnapshot that you can use from the API to determine whether the code is actually being run to build a snapshot. And another useful API is setDeserializeMainFunction, which allows us to specify a main function in the snapshot without having to pass another main script.

So for example, say that in the snapshot we have a database of greetings in different languages. One way to log the greeting according to some environment variable at runtime is to pass a separate main script that does the logging. But that means we need an additional input, which also has to be parsed and compiled at runtime. So instead, you can use the JavaScript API, which can include a main function in the snapshot. The code of this main function will also be compiled and serialized into the snapshot, so at runtime Node can just deserialize this main function and run it. Then we no longer need additional input, and it's also faster, because there is no need to compile more code.
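A small sketch of that greetings example; the data, variable names, and environment variable are illustrative:

```js
const { startupSnapshot } = require('node:v8');

// This data is initialized while building the snapshot.
const greetings = { en: 'Hello', es: 'Hola', fr: 'Bonjour' };

// The main function is compiled and serialized into the snapshot too, so at
// runtime Node only deserializes it and runs it; no extra script is needed.
startupSnapshot.setDeserializeMainFunction(() => {
  const lang = process.env.GREETING_LANG ?? 'en';
  console.log(greetings[lang] ?? greetings.en);
});
```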

So, a summary: we have been integrating startup snapshots into Node core to speed up the startup of the core. Now there is experimental support for userland snapshots, with some JavaScript APIs in the v8.startupSnapshot namespace to help build them. We are also working on support for single executable applications as well as more features in the snapshot. Finally, I'd like to thank all the people who have been contributing to this feature, including Anna, Colin, James, Chengzhong, Darshan, and many others that I'm forgetting in the slides, sorry. Also, personally, I'd like to thank Bloomberg and Igalia for supporting my part of the work on this. And that's it, thank you. Thank you, okay, you know the deal. Yeah, let's come have a chat. Hello, thank you so much for that talk. Audience, both online and in the room, please do feel free to ask questions: Slido.com, 1404, or the QR code, which is the same thing. Let me pull up the questions. I'm just going to open, while I'm faffing around getting questions and waiting for them: so far, what have you found are the common misconceptions people have with the design of Startup Snapshots? I think one common misconception would be that the feature itself is somehow related; people mix up Heap Snapshots with Startup Snapshots a lot.

QnA

Different Tools for Different Means and Q&A

Short description:

They use the same underlying infrastructure in V8 to serialize a heap, but they are different tools for different purposes. Heap snapshots are designed for doing diagnostics on the heap, while startup snapshots are meant to be rehydrated, which heap snapshots are not. We've got loads of questions. First question with the most upvotes: can snapshots also be used to start up AWS Lambdas faster? Next question: is there any chance of accidentally leaking sensitive environment variables into the snapshot at build time? How much of a startup boost has been measured for userland startup snapshots?

They use the same underlying infrastructure in V8 to serialize a heap, but they are kind of different tools for different purposes. Heap Snapshots are designed for doing diagnostics on the heap, whereas Startup Snapshots are meant to be rehydrated, which Heap Snapshots are not. There are so many snapshots in V8, so people can get a bit confused. Yeah, that's something that's commonly seen. Which also makes sense; they have similar names, especially for people who are just starting to learn.

We've got loads of questions. We went on my phone from having no questions to having absolutely loads of questions, so thank you to everyone in the room online who's been submitting them, wow.

All right, first question with the most upvotes: can snapshots also be used to start up AWS Lambdas faster? So in AWS, this is not something that you, as a user, can control, like how Node gets started. The responsibility of using this kind of falls on whoever runs Node, and in this case it kind of falls on Amazon. So as a user of the platform rather than an embedder of Node, that's out of your control, but if you are someone who can run Node yourself, or if you can pass additional flags to Node, then yes, you can do that. Awesome, thank you.

Next question is: is there any chance of accidentally leaking sensitive environment variables into the snapshot during build time? Which one? Oh, sorry, we can go through them together. It's the top one that is highlighted there. Is there any chance of accidentally leaking sensitive environment variables into the snapshot during build time? So in Node core, we have some internal assertions to make sure you don't leak them, and we are also considering exposing these to userland. But if you're deserializing a startup snapshot, usually you don't get to see them; it's very unlikely that you see them, because Node will refresh all of them. If you intentionally put something in there, that's possible, but you can also refresh it later. All right. Yeah, although I imagine if people were doing it, it probably wouldn't be an intentional act. Definitely something I've been guilty of. How much of a startup boost has been tested to be achieved for userland startup snapshots? So there is literally a TypeScript compiler in Node core as a test fixture, to test that we can snapshot the TypeScript compiler. With that you still get roughly the range I mentioned: about two times faster. Before, it's about 200 milliseconds, for example, on my test machine.

Snapshot Performance and Limitations

Short description:

And then when you turn the snapshot on, it's like 100 milliseconds. If you do a lot of initialization in your application, you can get like two times faster. The limitations of snapshots include async operations that need to be finished before taking a snapshot. Putting all the code in a snapshot to optimize the startup of the whole app is doable, but may not provide enough compilation cache.

And then when you turn the snapshot on, it's about 100 milliseconds. It also depends on how much initialization you do in your application. If you do a lot, then you're going to save more, because you're not even going to run code to initialize your application; you're just going to deserialize the result. So yeah, in general, I would say you can get about two times faster.

Cool, thank you. We have time for one more, and we have a bunch that have two ticks. Let's actually go with the one that's right here at the top. What are the limitations of snapshots? For example, a TCP connection to a database won't be serialized, or perhaps there are other things to consider as well.

Yeah, that's what I mentioned earlier. For example, a TCP connection would be an async operation that needs to be finished before you take a snapshot. It might be possible to somehow deserialize an in-flight request, but that's not currently a goal of this feature. So you have to make sure there are no pending async requests before you take the snapshot, and you need to resolve the promises. Those are the current limitations of the startup snapshots. Yeah, and hopefully that starts to build people's mental models around when it is appropriate to do this and when it may not make sense for a project.

Let's do just one more here. How good of an idea is it to put all the code in a snapshot to optimize the startup of the whole app? So one thing you can do is wrap everything in a function that you don't actually invoke when you build the snapshot, and then you set that function as the deserialize main function in the snapshot, so when you deserialize, it runs the whole function. That's doable, but you probably don't get enough compilation cache that way, because if the code is not at the top level, V8 will selectively compile some of it but not all of it, based on some heuristics. So it's something you can do, but if you want optimal results, you can try to put more of the code at the top level. Yeah, makes sense, awesome.
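As a rough sketch of the approach discussed here (illustrative only), wrapping the application in a function and registering it as the deserialize main function might look like this:

```js
const { startupSnapshot } = require('node:v8');

function main() {
  // The whole application lives in here. It is serialized into the snapshot,
  // but since it is not run at the top level of the snapshot script, V8 may
  // compile less of it ahead of time than top-level code.
  const config = { greeting: 'hi from the snapshot' }; // illustrative work
  console.log(config.greeting);
}

// Not invoked while building the snapshot; it runs on deserialization instead.
startupSnapshot.setDeserializeMainFunction(main);
```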

Look, there are so many more questions; they fall down to the bottom of the page, but we are out of time. So I will remind everyone, both in the room and online, that the speaker Q&A room is where to go to continue asking questions about this topic. The physical space is out by reception, to the left of the door as you're facing the exit, and for those online, you can use the Q&A room in the spatial chat. But please join me in a massive round of applause for Joy. Thank you so much, what a fantastic talk. Thank you.

Check out more articles and videos

We constantly collect articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Top Content
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.
Node Congress 2022
34 min
Out of the Box Node.js Diagnostics
In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved considerably in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.
You can check the slides for Colin's talk here. 
JSNation 2023
22 min
ESM Loaders: Enhancing Module Loading in Node.js
Native ESM support for Node.js was a chance for the Node.js project to release official support for enhancing the module loading experience, to enable use cases such as on the fly transpilation, module stubbing, support for loading modules from HTTP, and monitoring.
While CommonJS has support for all this, it was never officially supported and was done by hacking into the Node.js runtime code. ESM has fixed all this. We will look at the architecture of ESM loading in Node.js, and discuss the loader API that supports enhancing it. We will also look into advanced features such as loader chaining and off thread execution.
JSNation Live 2021
19 min
Multithreaded Logging with Pino
Top Content
Almost every developer thinks that adding one more log line would not decrease the performance of their server... until logging becomes the biggest bottleneck for their systems! We created one of the fastest JSON loggers for Node.js: pino. One of our key decisions was to remove all "transport" to another process (or infrastructure): it reduced both CPU and memory consumption, removing any bottleneck from logging. However, this created friction and lowered the developer experience of using Pino, and in-process transports are the most-requested feature from our users. In the upcoming version 7, we will solve this problem and increase throughput at the same time: we are introducing pino.transport() to start a worker thread that you can use to transfer your logs safely to other destinations, without sacrificing either performance or the developer experience.

Workshops on related topic

Node Congress 2023
109 min
Node.js Masterclass
Top Content
Workshop
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.

Level: intermediate
Node Congress 2023
63 min
0 to Auth in an Hour Using NodeJS SDK
Workshop Free
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - Managing user interactions, returning session / refresh JWTs
- Session management and validation - Storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters
Prerequisites
- IDE of your choice
- Node 18 or higher
JSNation 2023
104 min
Build and Deploy a Backend With Fastify & Platformatic
Workshop Free
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we’ll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — Tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — Our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/).
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.
JSNation Live 2021
156 min
Building a Hyper Fast Web Server with Deno
Workshop Free
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.
React Summit 2022
164 min
GraphQL - From Zero to Hero in 3 hours
Workshop
How to build a fullstack GraphQL application (Postgres + NestJs + React) in the shortest time possible.
All beginnings are hard. Even harder than choosing the technology is often developing a suitable architecture. Especially when it comes to GraphQL.
In this workshop, you will get a variety of best practices that you would normally have to work through over a number of projects - all in just three hours.
If you've always wanted to participate in a hackathon to get something up and running in the shortest amount of time - then take an active part in this workshop, and participate in the thought processes of the trainer.
TestJS Summit 2023
78 min
Mastering Node.js Test Runner
Workshop
Node.js test runner is modern, fast, and doesn't require additional libraries, but understanding and using it well can be tricky. You will learn how to use Node.js test runner to its full potential. We'll show you how it compares to other tools, how to set it up, and how to run your tests effectively. During the workshop, we'll do exercises to help you get comfortable with filtering, using native assertions, running tests in parallel, using CLI, and more. We'll also talk about working with TypeScript, making custom reports, and code coverage.