Making My Node.js API Super Fast


Node.js servers need to process a large number of requests concurrently as they scale. A Node.js microservice that receives an API request needs to perform multiple actions, like parsing JWTs, using caching, working with databases, and more. In this talk, Tamar will show strategies to improve your REST API performance: newer Node.js frameworks that run faster, better parsing of request parts, efficient work with caching and databases, better parallelism, and more. The talk includes demos, benchmarks, and profiling of code to show the improvements. By the end of this talk, developers will have practical knowledge of how to improve API performance on various Node.js platforms.

Tamar Twena-Stern
34 min
04 Apr, 2024


Video Summary and Transcription

This talk focuses on improving performance in Node.js API development. It covers areas such as database work, connection pools, JSON parsing, logging, and web framework selection. Key highlights include using the native MongoDB driver for better database performance, optimizing the connection pool for improved throughput, replacing the Express serializer for faster serialization and deserialization, and choosing Fastify as an efficient web framework. The impact of caching and authentication on performance is discussed, along with recommendations for caching types. The talk also covers environmental factors and the human impact on performance.


1. Introduction to Performance Optimization

Short description:

Hi everyone, welcome to my session about improving performance in your Node.js API. I'm passionate about JavaScript and Node.js. Today we'll discuss improving performance in your API layer, covering areas such as database, serialization, deserialization, logging, web frameworks, and caching. Let's start with database work. Optimize DB queries, use efficient indexes, denormalize data, and consider solutions like read-write replicas and sharding.

Hi everyone, happy to be here, and welcome to my session about how to improve performance in your Node.js API and make it really fast. Before we get to the technical stuff, let me introduce myself. My name is Tamar; you can follow and contact me through those links. I'm really passionate about JavaScript and Node.js. My interest in these technologies started when I founded a startup of my own and wrote the entire backend in Node.js, and from there the rest is history.

And enough about myself; let's talk about performance, one of my favorite topics. Today we are going to look at how to improve performance in your API layer, and we are going to cover these areas: the database, serialization and deserialization of your JSONs, logging (it may look trivial, but it can impact your performance), the web framework that you choose, and caching.

So let's start with database work. Before we get into JavaScript-specific things, there are some points to keep in mind for every technology and every programming language you work with. First of all, your DB queries have to be optimized. All of the indexes have to be efficient and enable really fast fetching. Next, denormalize your data and fetch only what you need: if you have a table of 50 columns and you only need three, don't fetch all 50. And store the data in a form that is efficient to read. Then you need to think about solutions in the DB layer that can improve your performance: read replicas, working with a replica set on databases that support it, and sharding.
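The "fetch only what you need" tip can be sketched with a projection. This is a hypothetical helper (the collection and field names are invented for the example), but `projection` is the real option the native MongoDB driver accepts on `find`:

```javascript
// Sketch: ask the database for three fields instead of all fifty.
// The 'persons' collection and the field names are illustrative;
// `projection` is the real find() option in the native MongoDB driver.
async function findPersonSummaries(db) {
  return db
    .collection('persons')
    .find({}, { projection: { name: 1, email: 1, age: 1, _id: 0 } })
    .toArray();
}
```

Because the helper only receives a `db` handle, the same sketch works against any driver-compatible object.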

2. Improving Node.js Application Database Work

Short description:

Let's discuss improving Node.js application-related database work by comparing Mongoose and Native Mongo Driver. Key measurements for performance benchmarking include latency (looking at the 99th percentile), and throughput (average requests per second and total requests). In the benchmark, both Mongoose and Native Mongo Driver were used to post a person object to the database. The results showed that Native Mongo Driver performed significantly better than Mongoose, with an increase in average requests per second and a much smaller 99th percentile.

But we are at a Node.js conference, so after a few generic tips, let's get into Node.js specifics. How can I improve my Node.js application's database work? When we start with Node.js, we start with the first tutorial we all do: Node.js, Express, and Mongoose. Let's begin there, run a benchmark, and see how it performs. When running a benchmark, keep the following in mind. First, the most important measurement is latency, and you need to look at the 99th percentile, not the average. When you have a contract with a third party and you tell them that all of your requests are faster than some bound, you need to give them the 99th percentile latency, not the median, not the average. That is really important. Second, measuring throughput. For throughput there are two numbers to look at: the average requests per second, and the total requests served. Together they give you a good idea of how your throughput is changing.

Now we are going to compare Mongoose and the native Mongo driver. I wrote a server that works with Mongoose and exposes one POST request that posts a person object, something really simple. The code is not very interesting: we build a server with Express and Mongoose, define a Mongoose schema with several parameters, and then measure the POST handler, which creates a new person object and saves it, as you can see here. The native Mongo driver code does exactly the same: a POST request that also posts a person object, because we want to compare apples to apples. When doing a performance benchmark, we want to compare code that does the same thing. Here is the code that runs with the native Mongo driver; as you can see, it also does an insertOne into the database.
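A minimal sketch of the native-driver version of that POST handler. The route shape and field names here are my own, not the exact code from the talk, but `insertOne` is the real driver call:

```javascript
// Express-style handler that writes a person with the native driver's
// insertOne. The person fields and the route below are illustrative.
function makePostPerson(persons) {
  return async (req, res) => {
    const { name, email, age } = req.body;
    const result = await persons.insertOne({ name, email, age });
    res.json({ insertedId: result.insertedId });
  };
}

// Wiring, assuming an Express app and an open MongoClient:
// app.post('/person', makePostPerson(client.db('test').collection('persons')));
```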

Cool. Those are the results we get with Mongoose. Looking at average requests per second, we are at around 5,700, the total requests are around 64,000, and the 99th percentile is 4 milliseconds. With the native Mongo driver, things improve hugely. The average requests per second goes up to 9,500, which is a lot, more than 30%. The 99th percentile is smaller by 50%, and over the course of the benchmark we were able to serve around 30,000 more requests. As the graph shows, the native Mongo driver performs much better than Mongoose.

3. Optimizing Connection Pool

Short description:

We compare total requests and average requests per second to see significant performance improvements. ORMs like Mongoose provide an abstraction layer above the database but can harm performance due to overhead and inefficient queries. When scaling up, it's best to work directly with the database. Connection pools are a group of available database connections that can execute queries and improve throughput. By optimizing the connection pool, we achieved a 15 to 20 percent performance improvement in average requests and total requests, along with shorter latency.

We compare total requests and average requests per second, and you can see that it is much better. The question is, why is it performing so much better? To make a long story short, Mongoose is a tool of the type called ORMs. These tools give you an abstraction layer above your database that helps you, but it adds an overhead of serialization and deserialization, and sometimes inefficient queries, so most ORMs hurt your performance. They are fine for low-scale and medium-scale applications, but at high scale you usually start working natively with your database, whether it is Mongo, SQL, or anything else. Now let's talk a little about connection pools. What are connection pools?

When we work with a database, we want a pool of connections to it that are available; each one can execute a query, and when it finishes, it goes back to the pool to wait for the next client. Let's compare results with and without a connection pool. Again, apples to apples: the same server, doing the POST request we spoke about. With no connection pool, looking at throughput, the average requests per second is around 8,100 and the total requests are about 81,000 (the total is roughly ten times the per-second figure because of the benchmark duration). When we optimize our connection pool, making it bigger and better matched to our machine and environment, I got around a 15 to 20 percent improvement: the average went up to 9,500 and the total is close to 100,000 requests. In the graphs you can see that optimizing the connection pool raised performance by about 20 percent, looking at total requests and average requests per second, and the latency also became shorter. So another important thing to keep in mind is to work with a connection pool and make sure you optimize it.
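On the Node MongoDB driver, the pool size is a client option. A sketch of the tuning knob (`maxPoolSize` and `minPoolSize` are real MongoClient options; the values below are only examples to tune against your own environment):

```javascript
// The import is done lazily inside the function so the sketch stays
// self-contained; the pool-size values are illustrative, not a recommendation.
async function connectWithPool(url) {
  const { MongoClient } = await import('mongodb');
  const client = new MongoClient(url, {
    maxPoolSize: 100, // upper bound on concurrent connections in the pool
    minPoolSize: 10,  // keep some connections warm between bursts
  });
  await client.connect();
  return client;
}
```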

4. Efficient JSON Parsing and Stringify

Short description:

When working with HTTP requests, serialization and deserialization are synchronous operations that block the event loop. By replacing the default Express serializer with fast-json-stringify, we achieved a 15% improvement in the Express server's performance. The average rose to above 10,000 requests per second, and the total requests improved by around 50%. fast-json-stringify provided a performance advantage here, but there are other serialization libraries worth considering as well.

All right, now let's talk about efficient JSON parsing and stringifying. When we implement a POST request over HTTP, we deserialize the body of the request, and when we implement a GET request, we serialize the response body and send it back to the client. But serialization and deserialization are synchronous operations that block your event loop. I really hope you have the Node.js architecture in mind: the event loop is processing work, and a long synchronous operation blocks it. Serialization and deserialization are exactly that.

So now I am going to take the same server and replace the response serialization with a library called fast-json-stringify, and let's see what we get. I implement a GET API with Express using the default serializer, as you can see; that is the function I am testing, and it does a findOne and returns the response, with the default serializer running when we call res.json. Then I created another server. I used fast-json-stringify; that library needs to receive a schema, so I gave it one, as you can see here, and I wrote a custom middleware of my own that replaces the default Express serializer and uses fast-json-stringify. Comparing the results, this also gave me around a 15% improvement on my Express server. The average went above 10,000 requests per second, and the total requests went up by around 50%, so I was able to serve 50% more requests: the throughput went up, and the latency went down. fast-json-stringify gave me the performance advantage here. But it is not the only library around. There are others, and I really recommend that you look at different serializers, test which is more efficient for you, and consider making this layer of serializing and deserializing your request and response content more efficient.
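To make the idea concrete, here is a toy version of the technique fast-json-stringify relies on. This is not the library's API, just an illustration: when the response shape is known up front, a serializer specialized to that schema can skip the generic traversal that `JSON.stringify` performs on every call:

```javascript
// Toy schema-compiled serializer. The schema format is simplified;
// the real fast-json-stringify takes JSON Schema.
function compileSerializer(schema) {
  // Resolve the per-field serializers once, up front.
  const parts = Object.entries(schema.properties).map(([key, def]) => {
    if (def.type === 'string') {
      return obj => `${JSON.stringify(key)}:${JSON.stringify(String(obj[key]))}`;
    }
    return obj => `${JSON.stringify(key)}:${Number(obj[key])}`;
  });
  return obj => `{${parts.map(p => p(obj)).join(',')}}`;
}

const serializePerson = compileSerializer({
  properties: { name: { type: 'string' }, age: { type: 'number' } },
});

serializePerson({ name: 'Ada', age: 36 }); // '{"name":"Ada","age":36}'
```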

5. Logging and Performance

Short description:

Logging is essential for debugging and troubleshooting in applications. However, logging comes at a cost: comparing servers with and without logs, we observed a roughly 15% drop in throughput. Pino is currently the most efficient of the log libraries measured, adding the least overhead. When logging, let the log library handle string building, so that messages below the active level are never formatted; in large, complex applications these costs add up. Consider using Pino over other log libraries for better performance. Additionally, it is worth exploring alternatives to Express, which is an older framework (version 4 was released in 2014).

Alright, let's talk a little bit about logging. In a perfect world, our system is working perfectly, but actually usually in reality, there are a lot of problems, applications crash, there are random bugs, data is missing, so logging always helps us to debug, and logging is something that we all need in our applications. But logging can cause performance degradation.

Correction: it will always cause some performance degradation. I compared one server with a POST request and no logs against another server that has just one log line, and that's it. Look what we have here. With no log, the average is around 9,600 requests per second; with only one log line, throughput is down by around 15%, to about 8,400 requests per second. Where before we served around 100,000 total requests in the benchmark, now we serve about 93,000, roughly a 10% degradation in totals.

So when you think about logging, you also need to choose a library that is performance-efficient. Popular log libraries right now include Winston and Pino, and the others I took for my benchmark comparisons are Bunyan and log4js. Looking at the overall results, Pino is the library that currently adds the least overhead to your code: for my comparisons it is the most performance-efficient log library, and it will do the least harm to your application's performance. We can see that in the average requests per second, and also in the total requests: the server with Pino served an amount of requests closest to a server with zero logs. So to conclude, if you care about performance, prefer Pino over the other libraries. That can change over time, but for now it is the most efficient log library out there. Another important tip: when we log, we have a lot of string concatenation, but the log library should manage that string building. Don't do the string concatenation in your application yourself, as in the example below. It seems minor, but here is what happens behind the scenes: if my log level is currently error, the log library will skip that string building for lower-level messages; if I do it myself, it always happens. In large, complex applications these things add up. So always prefer that the string building happen through the log library's parameterized interface. Okay, so far I have done tons of examples with Express.
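That string-building tip can be sketched with a minimal hand-rolled logger. This is an illustration of the mechanism, not Pino's implementation: when the call passes the template and parameters separately, a message below the active level costs nothing to format:

```javascript
// Minimal level-aware logger: formatting only happens when the message
// will actually be emitted. A string you concatenate yourself is always
// built before the call, even if the logger then discards it.
const LEVELS = { debug: 10, info: 20, error: 50 };

function makeLogger(level) {
  let formats = 0; // counts how many messages were actually built
  const log = (msgLevel, template, ...args) => {
    if (LEVELS[msgLevel] < LEVELS[level]) return; // skipped: zero formatting cost
    formats += 1;
    return template.replace(/%s/g, () => String(args.shift()));
  };
  return { log, getFormats: () => formats };
}

const logger = makeLogger('error');
logger.log('debug', 'user %s logged in', 'ada');  // below level: nothing built
logger.log('error', 'failed for user %s', 'ada'); // built and emitted
logger.getFormats(); // 1
```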
So the question is whether we can use something other than Express. Express is, to be honest, kind of old. I don't know if you knew, but Express's current major release is version 4, which came out in 2014. That was quite a long time ago.

6. Comparing Web Frameworks

Short description:

When benchmarking web frameworks like Express, NestJS, and Fastify, Fastify outperformed the others, with almost twice the requests of Express and lower latency. Choosing a more efficient web framework can provide a significant performance boost and edge, especially when starting a new microservice. Among all the frameworks analyzed, Fastify currently performs the best.

I know that the first tutorial is Express, but we have a lot of other interesting players in the market. We have Nest, we have Fastify, we have Koa. Maybe picking something else can give us the advantage we need.

So I benchmarked Express, Fastify, and NestJS, and of course when benchmarking it is very important, as we said, to compare apples to apples. If I am benchmarking a request that saves a person object with three parameters, I have to do the same thing with Fastify and with Nest; the code has to be the same, and if we do an insertOne on the database in one framework, we do an insertOne in the others. Okay, let's see what we get. With Express, we served around 9,400 to 9,500 requests per second, about 100,000 total requests, with a latency of two milliseconds. Nest was slower: 5,400 average requests per second, and around 60,000 total requests. But Fastify gave us the best results: close to 200,000 total requests, as you can see, and the latency went down to one millisecond. So Fastify did give us the performance edge, and here is the comparison. If we look at it visually, Nest performed about 50% slower than Express, while Fastify is currently around twice as fast as Express, at least in my benchmarks, and we can see that across multiple measurements. Now, in most companies it is, let's be honest, a little hard to refactor your microservice code. Usually you inherit code that you need to work on, and in that case there is still plenty you can do: make your DB work more efficient, use caching (we will talk about caching right away), improve serialization. But if you have the possibility to choose a more efficient web framework, or to start a new microservice with a different web framework in your organization, you should really consider it, because it can give you a real boost and edge. And as we said, Fastify is currently the best-performing framework among all the frameworks I looked at.
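For comparison, a minimal Fastify version of the benchmark's POST route. The route and field names here are mine, but route-level `schema` is the real Fastify feature behind its fast serialization; the import is lazy so the sketch stays self-contained:

```javascript
// Fastify app with one POST route. Declaring a schema on the route lets
// Fastify use its compiled schema-based serialization.
async function buildApp(persons) {
  const { default: fastify } = await import('fastify');
  const app = fastify();
  app.post('/person', {
    schema: {
      body: {
        type: 'object',
        properties: { name: { type: 'string' }, age: { type: 'number' } },
      },
    },
    handler: async (req, reply) => {
      const result = await persons.insertOne(req.body);
      reply.send({ insertedId: result.insertedId });
    },
  });
  return app;
}
```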

7. Caching and Authentication Performance

Short description:

Adding JWT authentication to the server caused a significant decrease in performance. By implementing caching, we were able to recover performance close to the initial state without authentication.

Now, I hope your mind is not blown yet, because we are going into the last thing we will talk about: caching. For this, I implemented a server that is authenticated with JWT, where token validation is done on both sides: the client has to validate the token, and the server has to validate it as well. But I'll give you a spoiler: adding JWT authentication to the server caused performance to decrease really badly. Let's see. This is the middleware I added; for every request, it does a JWT verify on the token. The code is not really complicated.

So that is the middleware in use. And look what happened: from a situation where we served around 100,000 total requests, we went down to around 15,000. That is about an 80% degradation, which is huge. The latency also went up, to around 10 milliseconds. It was massive. To solve this problem, the best tool is caching.

These are the results with no authentication: around 9,500 to 9,600 average requests per second. These are the results after we added the authentication: the average dropped to 2,500 requests per second. And when we use caching, we get really close to where we started: without authentication we had around 9,500 requests per second, and here we have 9,100, which is quite close. The latency is also back to 2 milliseconds at the 99th percentile, which is what we look at all the time. And you can see it visually: with caching added, the results are close to what we get with no authentication at all. So adding caching to the authentication layer solved the problem.
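The caching fix can be sketched independently of any token library. `verifyToken` below is a stand-in for the expensive `jwt.verify` call, and a production cache would also need to evict entries when tokens expire:

```javascript
// Memoize token verification: a token that verified once will verify the
// same way until it expires, so the decoded payload can be cached.
function makeCachedVerifier(verifyToken) {
  const cache = new Map(); // token -> decoded payload
  let misses = 0;
  const verify = token => {
    if (cache.has(token)) return cache.get(token);
    misses += 1;
    const payload = verifyToken(token); // the expensive CPU-bound call
    cache.set(token, payload);
    return payload;
  };
  return { verify, getMisses: () => misses };
}

// Usage with a stand-in verifier:
const { verify, getMisses } = makeCachedVerifier(t => ({ sub: t.length }));
verify('abc'); verify('abc'); verify('abc');
getMisses(); // 1 — only the first call paid for verification
```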

8. Caching Types and Recommendations

Short description:

Caching is an effective way to solve performance bottlenecks. In-memory caching is not recommended when a service runs in multiple replicas; it is better to use technologies like Redis or etcd for distributed caching. Good candidates for caching include DB queries, crypto work, and HTTP requests to third parties. In summary: optimize DB queries, use an efficient DB architecture, work with native DB drivers, optimize your connection pool, and cache when necessary.

We can see it with different measurements. And we can see that the latency is back to what it was. That was the bottleneck and the performance problem was solved.

There are different types of caching we can do. We can cache in memory, but that is not really recommended, because most of the time, if you do in-memory caching, you cannot run your service in multiple replicas; you cannot create multiple instances of your service. It is better to take technologies like Redis, or its competitors such as etcd. In that case the cache is distributed, all your service instances access the same cache, and it is much better.
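A cache-aside sketch of that setup: check the distributed cache before the DB query. `get` and `set` with `EX` are real node-redis (v4) calls, while the key scheme, TTL, and `fetchFromDb` are placeholders for your own query:

```javascript
// Cache-aside: return the cached value on a hit, otherwise run the real
// query, store the result with a TTL, and return it.
function makeCachedFetch(redisClient, fetchFromDb, ttlSeconds = 60) {
  return async key => {
    const hit = await redisClient.get(key);
    if (hit !== null) return JSON.parse(hit);
    const value = await fetchFromDb(key); // the expensive DB query
    await redisClient.set(key, JSON.stringify(value), { EX: ttlSeconds });
    return value;
  };
}
```

Because the Redis client is injected, every replica of the service shares the same cache, which is exactly the property the in-memory approach lacks.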

What should you cache, or what is recommended to cache? DB queries, crypto work (for example the JWT validation we saw), and HTTP requests to third parties. Those kinds of things will improve your performance a lot. Okay, we are at the end, which is good, so let's do a quick summary of everything we have seen: optimize your DB queries, make sure you have an efficient DB architecture as we discussed, prefer native DB drivers at large scale, optimize your connection pool, and cache where you need to.

QnA

Performance Optimization and Audience Questions

Short description:

Choose an efficient web framework, log libraries, and serializer. Look into areas such as latency, request time, throughput, and query optimization to improve service performance. Environmental factors can also impact performance. Optimize database queries, connection pool, and logging level. Consider other factors before optimizing JavaScript code. Let's move on to the audience questions.

And also make sure you choose a web framework that is efficient for performance, efficient log libraries, and an efficient serializer. That was me. I really hope you enjoyed it, you can follow me on those links, and thank you very much.

Which areas should we improve performance in for our services? A lot of folks have answered, and people have said things like latency, request time, throughput, query optimization. So what do you think? Do these results surprise you? What would you answer to such a question? What would I answer? Okay, first of all, the problem may not be in your application at all; it may be environmental. If the machine is running in the cloud, there may be a problem with another container that is eating up the machine's resources, so you have to make sure you are not in that situation. By the way, I think optimizing your code itself is something you do less often. First of all, look at your database and see whether you need to optimize your queries, check whether your connection pool is optimized, and check your logging level. By the way, never run production applications at debug logging level; always run them at error or warning, because logging can affect your application a lot. So before you start optimizing your JavaScript code, there is a lot else you can do.

Fantastic. That's a great answer. But I want to also move on to the questions from the people who tuned in and have questions for you. So let's head over to those questions. And I actually think that's a good segue into what the first person asked.

Performance Bottlenecks and Tips for Node.js

Short description:

The bigger performance bottleneck: technology or humans? Node.js performs well thanks to its non-blocking I/O and event loop model, but humans also impact performance through software architecture issues. To isolate performance bottlenecks in Node.js, consider newer frameworks with less overhead, work with asynchronous operations, and run independent tasks in parallel.

Which is: what is the bigger performance bottleneck, the technology, or the humans in the process who are implementing the technology? What do you think is the bigger issue for performance? Well, technology does have an impact. For example, I think Node.js performs much better than other languages such as Java because of its unique non-blocking I/O and event loop model. But as humans, we also have to make sure we divide our software into a good architecture, dividing our software and microservices in a way that can scale. So in my opinion, humans are the bigger problem. That's it, it comes down to layer seven: database work that is not correct, logging that is not correct, a microservices architecture that does not scale, caching that is not used. That's why we need to welcome our machine overlords. Now is the time.

Awesome, that's a great answer. So what do you recommend to isolate performance bottlenecks in Node.js applications? What's your best tip? My best tip? By the way, if you start a new project, I do recommend looking at new frameworks; don't just go with Express because everybody does. Try new frameworks that can give you that performance edge. And work with asynchronous operations; for Node.js that is crucial. Everywhere you have an asynchronous API, use it, and use Promise.all wherever you can. Always try to run things in parallel, because when we work with async/await, we like to run things one by one, but if you can, run them in parallel to use the machine's full capacity.
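The parallelism tip in code: three independent async calls run back-to-back under sequential `await`, but concurrently under `Promise.all`:

```javascript
// delay() stands in for any independent async call (DB query, HTTP request).
const delay = (ms, value) => new Promise(res => setTimeout(() => res(value), ms));

async function sequential() {
  const a = await delay(50, 1); // each await waits for the previous one
  const b = await delay(50, 2);
  const c = await delay(50, 3);
  return [a, b, c]; // total is roughly 150ms
}

async function parallel() {
  // all three timers start immediately; total is roughly 50ms
  return Promise.all([delay(50, 1), delay(50, 2), delay(50, 3)]);
}
```

This only helps when the operations are independent; if one call needs the result of another, sequential `await` is the correct shape.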

Optimizing Performance in Node.js with Fastify

Short description:

Run independent asynchronous operations in parallel, and use worker threads for CPU-intensive synchronous work; that is the Node.js-specific performance optimization. Fastify is recommended for its efficient serialization, logging, and database work.

So do that: run things in parallel. And if you need to do CPU-intensive synchronous operations, use worker threads to run them in a thread pool. Those are the Node.js-specific performance optimizations that are important.

So that, okay. So you mentioned that Express is the default. And do you have, that's also the question here, NestJS, Express. Like what about Fastify, other tools, what would you recommend? What's another non-typical?

I think currently Fastify gives you the whole package. It runs much faster than the others, and it uses a much more efficient serializer. In my talk, the reason I showed the serializer separately is for people who work with Express and cannot change technology; that can be a quick win for them. But Fastify gives you the efficient serializer and the efficient logger out of the box, the whole package, and it works efficiently with the database. So I think it is the one to recommend now.

Amazing. Thank you so much for that awesome Q&A and your excellent talk and for joining our community. It was wonderful having you. Thanks so much, Tamar. Thank you very much for having me.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Node Congress 2022Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Top Content
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.
Towards a Standard Library for JavaScript Runtimes
Node Congress 2022Node Congress 2022
34 min
Towards a Standard Library for JavaScript Runtimes
Top Content
You can check the slides for James' talk here.
Get rid of your API schemas with tRPC
React Day Berlin 2022React Day Berlin 2022
29 min
Get rid of your API schemas with tRPC
Do you know we can replace API schemas with a lightweight and type-safe library? With tRPC you can easily replace GraphQL or REST with inferred shapes without schemas or code generation. In this talk we will understand the benefit of tRPC and how apply it in a NextJs application. If you want reduce your project complexity you can't miss this talk.
Out of the Box Node.js Diagnostics
Node Congress 2022Node Congress 2022
34 min
Out of the Box Node.js Diagnostics
In the early years of Node.js, diagnostics and debugging were considerable pain points. Modern versions of Node have improved considerably in these areas. Features like async stack traces, heap snapshots, and CPU profiling no longer require third party modules or modifications to application source code. This talk explores the various diagnostic features that have recently been built into Node.
You can check the slides for Colin's talk here. 
ESM Loaders: Enhancing Module Loading in Node.js
JSNation 2023JSNation 2023
22 min
ESM Loaders: Enhancing Module Loading in Node.js
Native ESM support for Node.js was a chance for the Node.js project to release official support for enhancing the module loading experience, enabling use cases such as on-the-fly transpilation, module stubbing, support for loading modules from HTTP, and monitoring.
While CommonJS has support for all of this, it was never official and was done by hacking into the Node.js runtime code. ESM has fixed all this. We will look at the architecture of ESM loading in Node.js and discuss the loader API that supports enhancing it. We will also look into advanced features such as loader chaining and off-thread execution.
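As a flavour of that loader API, here is a minimal sketch of a `load` hook, written as a plain function so its transform logic can be exercised directly. The `.shout.mjs` extension and the synthesized source are invented for illustration; in a real loader the hook would be exported from a loader module and registered through the loader mechanism for your Node version (e.g. a `--experimental-loader` flag or `module.register()`):

```javascript
// Minimal sketch of a Node.js ESM "load" hook.
// Invented example: files ending in ".shout.mjs" get their source
// synthesized on the fly instead of being read from disk.
async function load(url, context, nextLoad) {
  if (url.endsWith('.shout.mjs')) {
    // On-the-fly "transpilation": return module source directly and
    // short-circuit so later loaders in the chain are skipped.
    const source = 'export const msg = "hello".toUpperCase();';
    return { format: 'module', source, shortCircuit: true };
  }
  // Anything else falls through to the next loader in the chain.
  return nextLoad(url, context);
}
```

The `nextLoad` delegate is what makes loader chaining work: each loader either handles a URL itself or passes it along.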
Step aside resolvers: a new approach to GraphQL execution
GraphQL Galaxy 2022
16 min
Though GraphQL is declarative, resolvers operate field-by-field, layer-by-layer, often resulting in unnecessary work for your business logic even when using techniques such as DataLoader. In this talk, Benjie will introduce his vision for a new general-purpose GraphQL execution strategy whose holistic approach could lead to significant efficiency and scalability gains for all GraphQL APIs.

Workshops on related topics

Node.js Masterclass
Node Congress 2023
109 min
Top Content
Workshop
Matteo Collina
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.

Level: intermediate
Building GraphQL APIs on top of Ethereum with The Graph
GraphQL Galaxy 2021
48 min
Workshop (Free)
Nader Dabit
The Graph is an indexing protocol for querying networks like Ethereum, IPFS, and other blockchains. Anyone can build and publish open APIs, called subgraphs, making data easily accessible.

In this workshop you’ll learn how to build a subgraph that indexes NFT blockchain data from the Foundation smart contract. We’ll deploy the API, and learn how to perform queries to retrieve data using various types of data access patterns, implementing filters and sorting.

By the end of the workshop, you should understand how to build and deploy performant APIs to The Graph to index data from any smart contract deployed to Ethereum.
Build and Deploy a Backend With Fastify & Platformatic
JSNation 2023
104 min
Workshop (Free)
Matteo Collina
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we'll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/).
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.
Hands-on with AG Grid's React Data Grid
React Summit 2022
147 min
Workshop (Free)
Sean Landsman
Get started with AG Grid React Data Grid with a hands-on tutorial from the core team that will take you through the steps of creating your first grid, including how to configure the grid with simple properties and custom components. AG Grid community edition is completely free to use in commercial applications, so you'll learn a powerful tool that you can immediately add to your projects. You'll also discover how to load data into the grid and different ways to add custom rendering to the grid. By the end of the workshop, you will have created an AG Grid React Data Grid and customized it with functional React components.
- Getting started and installing AG Grid
- Configuring sorting, filtering, pagination
- Loading data into the grid
- The grid API
- Using hooks and functional components with AG Grid
- Capabilities of the free community edition of AG Grid
- Customizing the grid with React components
0 to Auth in an Hour Using NodeJS SDK
Node Congress 2023
63 min
Workshop (Free)
Asaf Shen
Passwordless authentication may seem complex, but it is simple to add to any app using the right tool.
We will enhance a full-stack JS application (Node.JS backend + React frontend) to authenticate users with OAuth (social login) and One Time Passwords (email), including:
- User authentication - managing user interactions, returning session / refresh JWTs
- Session management and validation - storing the session for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.
Table of contents:
- A quick intro to core authentication concepts
- Coding
- Why passwordless matters

Prerequisites:
- IDE of your choice
- Node 18 or higher
Building a Hyper Fast Web Server with Deno
JSNation Live 2021
156 min
Workshop (Free)
Matt Landers
Will Johnston
Deno 1.9 introduced a new web server API that takes advantage of Hyper, a fast and correct HTTP implementation for Rust. Using this API instead of the std/http implementation increases performance and provides support for HTTP2. In this workshop, learn how to create a web server utilizing Hyper under the hood and boost the performance for your web apps.