Node.js is great - easy to develop, performant, easy to scale. But there are tasks that are less suited for it - heavy computations or data processing. Join me and learn how you can incorporate Rust as well as WebAssembly into Node and JavaScript and take your performance to the next level!
🚀 Supercharge your NodeJS with Rust

AI Generated Video Summary
In this Talk, Dmitry Kudravtsev discusses how to supercharge JavaScript and Node.js using Rust. He introduces NEON, an open-source library for integrating Rust and JavaScript, and explains how to use it to export Rust functions to JavaScript. Dmitry also explores the performance benefits of using native modules written in Rust and WebAssembly. He compares the two approaches and highlights the faster performance of Rust native modules. He concludes by recommending WebAssembly for its ergonomics and portability, while suggesting native modules for extending Node.js with performance code.
1. Introduction to Rust and Node.js
Hi, my name is Dmitry Kudravtsev, I'm a senior software engineer passionate about JavaScript and Rust. Today, I want to talk about supercharging your JavaScript and Node.js experience using Rust. Node.js has a great ecosystem, but it can be slow for CPU tasks. Writing native modules in Rust is a solution. Rust is a modern language with better tooling and memory safety compared to C and C++.
Hi, my name is Dmitry Kudravtsev, I'm a senior software engineer and I'm very passionate about two things, JavaScript and Rust. And so today I want to talk with you, how you can supercharge your JavaScript and Node.js experience using the Rust programming language. So let's dive in.
We all know that Node.js is great. It has a great ecosystem of packages. I think it's the biggest ecosystem of packages among all programming languages. It has a very nice development experience. You can pretty much write a REST server in a few lines of code. TypeScript makes the experience even greater: JavaScript is not a statically typed language, but you still get type checking, which is nice.
But Node.js is also slow sometimes, especially if you are doing CPU-bound tasks, say, data generation, maybe image processing. I've seen people come up with very creative solutions to these problems, like outsourcing a CPU-bound task through a message queue to another process, which can work on it. And some people use Lambda functions that you can call on demand if you have some heavy computation. But there is also another solution, and that is writing native modules in either C, C++, or Rust. Now you probably ask: why Rust, or what is Rust, why not just use C or C++? So let's try to answer this question first.
As you probably know, C and C++ are pretty old; they have already shown their age. They're still in development, I think the latest standard is C++20 now, I'm not quite sure, I don't follow it closely, but nevertheless, they're old and a bit messy. They lack modern tooling: there is no decent dependency manager, and they have a relatively poor standard library. So most of the time, when you do need heavy containers or iterators, you have to pull in a library called Boost. Many things from Boost make it into the C++ specification, but it still lacks a lot. The biggest downside, in my opinion, is that they are not memory-safe. You have probably seen the message that we all hate, segmentation fault, core dumped, because somewhere you forgot to stop your for loop and iterated past the end of your array, or you accessed memory that is no longer owned by the application. This makes development in C and C++ very hard. Rust, on the other hand, is a strongly typed and compiled language like C and C++.
2. Integrating Rust and JavaScript with NEON
Rust has a rich standard library and powerful tooling, including Cargo. It guarantees memory safety and checks for errors at compile time. To integrate Rust and JavaScript, we can use NEON, an open-source library for embedding Rust in Node.js. NEON allows us to write glue code to convert JavaScript types into Rust types. We can export Rust functions back to JavaScript using NEON. To build NEON projects, we use the cargo-cp-artifact tool. By requiring the native library in Node.js, we can access the exported functions.
It has a rich standard library, so you get smart pointers, iterators, everything built into the language itself. You don't need any third-party packages to add support for such things. It has modern tooling, so you have Cargo, which is an npm equivalent: you can run tasks with it, you can install packages with Cargo.
The biggest pro, in my opinion, is that Rust is memory safe. The way it achieves memory safety is very interesting; I'm not going to dive deep into it, but you can read about it if you're interested. The notion with Rust is that if it compiles, it will run, so there will be no memory errors. Memory is actually checked at compile time, which is a big pro in my opinion compared to C or C++. You can still write unsafe Rust, it is possible, but by default all the Rust you write is safe Rust, checked at compile time.
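As a tiny illustration of those guarantees (my own sketch, not from the talk): out-of-bounds access never silently reads bad memory, and aliasing mistakes are rejected before the program runs.

```rust
// Safe Rust: no segfaults, no core dumps. Bounds are checked and
// conflicting borrows are rejected at compile time.
fn safe_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied() // returns None instead of reading past the end
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(safe_get(&v, 0), Some(1));
    assert_eq!(safe_get(&v, 10), None); // no memory corruption possible

    let first = &v[0]; // immutable borrow of `v`
    // v.push(4); // rejected at compile time: cannot mutate `v`
    //            // while `first` still borrows it
    println!("first = {first}");
}
```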
Well, great to know, but you and I, we all write JavaScript, so how can we integrate between the two? Enter NEON. NEON is a library and toolchain for embedding Rust in Node.js. It's an open-source project, and a very cool one, I suggest you check it out. Let's take a look at a Fibonacci function that we write in Rust and export into the JavaScript world. Below is the code. Don't worry, I'll have links to my GitHub later in the presentation, so you can find executable examples. But for now, let's focus on this example and break it into a few buckets so it will be easier to analyze.
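The slide code itself doesn't survive in this transcript, so here is a minimal sketch of what such a NEON module can look like, assuming the current neon crate API; the exported name `fibonacciRs` is my guess at the talk's "fibonacci-rs":

```rust
use neon::prelude::*;

// The actual Fibonacci function: plain recursive Rust.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

// The glue layer: converts the incoming JavaScript number into a Rust
// integer, calls the Rust function, and wraps the result back into a
// JavaScript number.
fn fibonacci_api(mut cx: FunctionContext) -> JsResult<JsNumber> {
    let n = cx.argument::<JsNumber>(0)?.value(&mut cx) as u64;
    Ok(cx.number(fibonacci(n) as f64))
}

// NEON's "main": the module entry point that exports Rust functions
// to the JavaScript world under the names given here.
#[neon::main]
fn main(mut cx: ModuleContext) -> NeonResult<()> {
    cx.export_function("fibonacciRs", fibonacci_api)?;
    Ok(())
}
```

After building with cargo-cp-artifact, the JavaScript side requires the resulting `index.node` like any other module and calls `fibonacciRs` as a regular function.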
On lines 1-4, we have the import and require statement equivalents from Node.js, so we bring some stuff in from the NEON library. Lines 6-12 are the actual Fibonacci function. Nothing fancy here, it's a recursive function that goes and computes the required Fibonacci number. Now, lines 14-18 are what I call the glue layer between the JavaScript world and the Rust world. Since the two languages are different and architected in different ways, we need a way to convert JavaScript types into Rust types. So you always have to write some glue layer in NEON that will convert your JavaScript calls into Rust calls, and this is what we are doing on those five lines: we convert the JavaScript call into a Rust call, actually call the Fibonacci function, and return the result back to JavaScript. And as with every executable, we need to have a main function. In the case of NEON, the main function serves as the export point, so you can export functions from Rust back to the JavaScript world. In this case, we export the Fibonacci API function as fibonacci_rs, so in JavaScript we can access it as fibonacci_rs. In order to build this, there is another tool that the NEON team maintains, called cargo-cp-artifact. It copies the artifact that the Cargo build produces. Behind the scenes, it actually generates a dynamic library, a DLL or .so equivalent if you're on Windows or Unix, but with all the wrappings of the NEON and Node.js APIs. JavaScript itself does not support a foreign function interface, but Node.js does, and this is why we can write native libraries for Node.js. In order to call the native library, we require it like a regular Node.js module; we can see on line 1 the index.node file that we generated previously.
3. Rust, NEON, WebAssembly, and Performance
We require it like a regular module, and then we call it like a regular function. This will generate TypeScript definitions for you, so you can write native Rust code and have the TypeScript definitions in your JavaScript code. The integration process is pretty easy. I suggest you check out NEON, it's a great library. It allows us to embed Rust in JavaScript code. WebAssembly is a portable binary format that can be a compilation target for other languages, including Rust. You can compile Rust code to WebAssembly or use AssemblyScript, a limited version of TypeScript. Here's an example of writing the Fibonacci function in WebAssembly. wasm-bindgen converts Rust code to WebAssembly, but you need custom mappings for complex structures. For simple functions, the conversion is straightforward. I did performance testing on Fibonacci numbers, and the results show improved performance compared to JavaScript.
We require it like a regular module, and then we call it like a regular function. The cool thing is that this will also generate TypeScript definitions for you, so you can write native Rust code and have the TypeScript definitions in your JavaScript code.
The integration process is pretty easy, in my opinion. And it's cool. I suggest you check NEON, it's a great library. And it allows us to embed Rust in JavaScript code.
Now, if you're familiar with WebAssembly, then you probably ask, okay, but why not WebAssembly? Before I answer this question, let's understand what WebAssembly is. WebAssembly is a portable binary format, with a corresponding text format. You can think of it as assembly for the web rather than for the x86 architecture. It's a bit different: it's actually executed by a virtual machine, so the compiled binary is not native to your machine, it's native to the VM, and it's important to remember this. It's supported in all major browsers, so across the stack you can use it in Firefox, Edge, Safari.
The cool thing about WebAssembly is that it can actually be a compilation target for other languages, and Rust is among them. So you can take code written in Rust and compile it to WebAssembly, which is pretty cool. But if you don't want to do that, there is also a special language called AssemblyScript, which is a limited version of TypeScript, I'd say, that you can write and it will compile to WebAssembly.
So let's take a look at an example of how we can write the same Fibonacci function, but this time it will compile to WebAssembly. It's a lot shorter. Aside from the function itself, we need only two things: the use statement and the macro that we apply on line 3. And since WebAssembly is typed, you have types, and then wasm-bindgen, which is the package that converts the Rust code to WebAssembly, will do all the magic that needs to be done to convert this code into WebAssembly. Keep in mind that if you have complex structures, like a custom struct with custom functionality, you need to have a custom mapping for it; wasm-bindgen doesn't know how to convert non-primitive types on its own. But if you have simple functions like this Fibonacci, the conversion is straightforward, because WebAssembly supports integers and floats out of the box, and so wasm-bindgen is able to convert this code to WebAssembly directly.
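Again the slide isn't reproduced here; a sketch of what the code likely looks like, assuming the wasm-bindgen crate:

```rust
use wasm_bindgen::prelude::*;

// The #[wasm_bindgen] macro (the "macro on line 3" from the talk)
// generates all the JavaScript<->Wasm binding code. Primitive types
// like u32 and f64 map to JavaScript numbers out of the box.
#[wasm_bindgen]
pub fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}
```

Compiled with a tool like wasm-pack, this produces a `.wasm` binary plus a small JavaScript wrapper you can import from Node.js or the browser.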
Now, I showed you two ways to write native modules for JavaScript, and you're probably wondering which one is faster. So hey, benchmarks. I did some performance testing on four different Fibonacci numbers. What you're seeing here is the mean running time that I got from Hyperfine, which is a benchmarking tool. What you see in green is the performance improvement compared to JavaScript.
4. Performance Improvement with Native Modules
You can see an anomaly in the shortest Fibonacci number. Performance improvement is negligible for low Fibonacci numbers. But as the Fibonacci numbers increase, there is a tremendous boost in performance, around 60% for Rust and 45% for WebAssembly. Rust is 45% faster than WebAssembly. Always run your own benchmarks to ensure the performance gains are worth it.
You can see an anomaly here in the shortest Fibonacci number: the performance improvement is pretty much negligible, and this is a good reminder to everyone who is doing performance work and wants to improve something. Always, always have performance tests, because if you only need to calculate low Fibonacci numbers, for example, the performance improvement you get from native modules is not worth the hassle. But if you go up in the Fibonacci numbers, you can see that we gain a tremendous boost in performance, which is around 60% if you go with Rust and around 45% if you go with WebAssembly, which is a lot. It's twice as fast compared to JavaScript. We can also draw a second conclusion: Rust is around 45% faster than Wasm. And there is also a note that you always need to run your own benchmarks, to make sure the performance you're gaining from what you are benchmarking is worth the hassle. Because, as I said, at the lowest Fibonacci number the performance gains are not that spectacular compared to the 44th or 45th Fibonacci numbers.
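To get a feel for why the gap only shows up at higher numbers, you can time the pure Rust side yourself (my own sketch; the talk's numbers came from Hyperfine runs against the complete Node.js programs):

```rust
use std::time::Instant;

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    // Naive recursion grows roughly 11x in work for every +5 on n,
    // which is why low inputs show negligible improvement while high
    // inputs make the native-vs-JavaScript difference dramatic.
    for n in [20, 30, 35] {
        let start = Instant::now();
        let result = fibonacci(n);
        println!("fib({n}) = {result} took {:?}", start.elapsed());
    }
}
```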
5. Comparison of Native Modules and WebAssembly
We've seen two ways to write native modules: directly, or via Rust that compiles to WebAssembly. Rust native modules are faster because they bypass the virtual machine, but WebAssembly is also fast and supported in major browsers. Reusability is more complicated: native libraries can be reused in other languages through a foreign function interface, while WebAssembly is only portable across different WebAssembly VMs.
So we've seen two ways to write native modules. We can either write native modules directly, or we can write Rust that compiles to WebAssembly, and you probably have a question: which one should you use, and when? So let's look at a few categories that I have outlined to help you make a better decision on when to use native modules and when to use WebAssembly.
If we are talking about performance, I think it's pretty clear that Rust native modules will be faster; native will always be faster than anything executed in a VM, because we don't have a conversion layer, which is the VM itself. Take a look at C or C++ compared to Java. Obviously, you can write non-performant code in any programming language, but assuming you know the language and how to write performant code, you will always achieve more performance if you bypass the virtual machine. It's also worth noting that WebAssembly is very fast; I was surprised by how fast it is, even though it's VM-executed. It's pretty cool to know that WebAssembly is at our disposal, supported in all major browsers as well as Node.js.
Let's talk about reusability. When I talk about reusability, I mean taking the same code and running it on different platforms. Here it's a bit complicated, because native libraries can be reused in other languages through something called a foreign function interface, or FFI. So, for example, you can have your business logic written in Rust and use it in Node.js, in Java, and in your iOS application written in Swift. WebAssembly, on the other hand, is only portable across different WebAssembly VMs. So your WebAssembly code will run in Edge, in Firefox, in Safari, in Node.js, but you won't be able to execute it on Android, for example, unless you have a virtual machine that can execute WebAssembly.
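The FFI route mentioned here can be sketched as follows: the same Rust logic is exported with a C ABI, and any FFI-capable runtime (Node.js, the JVM, Swift) can then load the compiled shared library. The function name is illustrative, not from the talk:

```rust
// Built in a crate with `crate-type = ["cdylib"]`, this produces a
// shared library (.so / .dylib / .dll) whose symbol `fibonacci_ffi`
// is callable from any language with FFI support.
#[no_mangle]
pub extern "C" fn fibonacci_ffi(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci_ffi(n - 1) + fibonacci_ffi(n - 2),
    }
}

fn main() {
    // The same function is, of course, callable from Rust itself.
    println!("fibonacci_ffi(10) = {}", fibonacci_ffi(10));
}
```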
6. Comparison of WebAssembly and Native Modules
If we're talking about ergonomics, I'm pretty sure it's clear that WebAssembly has way better ergonomics. The code is way shorter, and the binding example I showed you requires pretty much nothing to convert the native Rust function into WebAssembly. If we talk about the stdlib, which is everything, the system, networking, if you want to access the operating system, then your only way to do this is using Rust native modules. Portability: it works on my machine, as every developer ever says, and you always need to remember that native code is machine dependent. And if we talk about the entry barrier for new developers, I think that WebAssembly is easier. If you want to introduce performance code into your Node.js ecosystem, I suggest you go with WebAssembly, because native modules can be written only in C, C++, or Rust. If we're thinking about Node.js or the browser, even though we're more focused on Node.js here, JavaScript is still king in the browser. Native modules can be used only in Node.js, not in the browser. WebAssembly, on the other hand, can be used in every place where you have a WebAssembly virtual machine available. If you have code that you share between the backend and the frontend, pretty much your only solution is to go with WebAssembly. If you want to squeeze performance only out of your backend code, you can look into Rust native modules. In my opinion, native modules are meant to extend Node.js with performance code, and WebAssembly is a way to replace non-performant pieces of JavaScript code. Thank you very much.
If we're talking about ergonomics, I'm pretty sure it's clear that WebAssembly has way better ergonomics. The code is way shorter, and the binding example I showed you requires pretty much nothing to convert the native Rust function into WebAssembly; it is able to convert basic types like integers and floats on its own. NEON, on the other hand, needs a glue layer to convert your JavaScript calls into Rust. So the ergonomics suffer a bit in the native example compared to the WebAssembly one.
If we talk about the stdlib, which is everything, the system, networking, if you want to access the operating system, then your only way to do this is using Rust native modules. WebAssembly cannot access the standard library, so you can't access the file system, you can't access networking, unless you're willing to wait for the WebAssembly System Interface (WASI), which is still in development, but they are working on making this accessible to WebAssembly as well.
Portability: it works on my machine, as every developer ever says. You always need to remember that native code is machine dependent. If you compile it on macOS, you will not be able to run it on Windows. So if you're developing on macOS and your production environment is bare Linux containers, you need to compile it for those Linux containers, and you can use Docker for that. WebAssembly, on the other hand: if I compile my code to WebAssembly and give you the executable, you will be able to run it in any browser or VM on your machine that supports WebAssembly. That is not true for a native module compiled on my machine; there is a good chance you won't be able to run it if your machine is different. For example, if you have an Intel-based Mac and I have an ARM-based Mac, it needs to be recompiled, and certainly if you have a Windows or Linux machine, it needs to be recompiled as well.
And if we talk about the entry barrier for new developers, I think that WebAssembly is easier. If you want to introduce performance code into your Node.js ecosystem, I suggest you go with WebAssembly, because native modules can be written only in C, C++, or Rust. Those are pretty complex languages for beginners, for junior people, to grasp. I'm not saying it's impossible, but the investment will be pretty big if you just want to rewrite one or two functions for native performance. Wasm, while it can be compiled from C, C++, or Rust, also has partial support for Python, Ruby, and Go. Moreover, you have the specialized language called AssemblyScript that is dedicated to Wasm, so you can always start with AssemblyScript to get your WebAssembly version, and if you need to squeeze every possible inch of performance, you can then introduce Rust. If we're thinking about Node.js versus the browser, even though we're more focused on Node.js here, JavaScript is still king in the browser, and sometimes we need to run things there too. You also need to remember that native modules can't be used in browsers, because JavaScript does not have support for a foreign function interface. So native modules can be used only in Node.js, not in the browser. WebAssembly, on the other hand, can be used in every place where you have a WebAssembly virtual machine available: all the Node.js versions, plus all the browsers except Internet Explorer. I'm sorry if you need to work with Internet Explorer, but other than that, WebAssembly support spans the entire stack, including mobile browsers. So you see, if you have code that you share between the backend and the frontend, pretty much your only solution is to go with WebAssembly. If you want to squeeze performance only out of your backend code, you can look into Rust native modules.
Here we have the summary. I will leave it to you to read, and I just want to outline two main points. In my opinion, native modules are meant to extend Node.js with performance code. So if you want a performance improvement in your Node.js ecosystem, native modules are the way to do that. And WebAssembly, in my opinion, is a way to replace non-performant pieces of JavaScript code. So if you have a non-performant function, I suggest you try to rewrite it in WebAssembly, whether through Rust that is compiled to WebAssembly or by using AssemblyScript. And if you want to add performance code to your Node.js, to extend your Node.js ecosystem, then you can look at native modules with Rust, for example. Thank you very much. You can find me on Twitter, GitHub, and LinkedIn, and you can scan the QR code to go to my blog, where you can find links to the two articles this talk is based on. On my GitHub, you can find links to the repositories for both the Rust and WebAssembly versions, which you can clone, run, play with, and experiment on. Thank you for listening. I hope you learned something new: Node.js can be performant, not by itself, but with the help of Rust. So I hope you will find use for your new knowledge.