The Need for Speed: How AWS's New JS Runtime Is Redefining Serverless Latency


In today’s world of modern applications, swift responsiveness is essential. Users expect seamless interactions where every action triggers an immediate response.

Serverless services such as AWS Lambda allow developers to build modern applications without managing traditional servers or infrastructure. However, serverless services can introduce additional latency when new execution environments are provisioned, and because they (by design) have fewer resources than traditional servers or containerized environments.

To mitigate this problem, AWS has developed an experimental JavaScript runtime called LLRT, built from the ground up for serverless environments. LLRT (Low Latency Runtime) is a lightweight JavaScript runtime designed to address the growing demand for fast and efficient serverless applications. LLRT offers more than 10x faster startup and up to 2x lower overall cost compared to other JavaScript runtimes running on AWS Lambda.

In this session you will discover how it's different from what's already out there, see its performance in action and learn how to apply it to your Serverless functions.

25 min
04 Apr, 2024


Video Summary and Transcription

Serverless services like AWS Lambda allow developers to build modern applications without provisioning servers or additional infrastructure. LLRT is a low latency runtime designed specifically for serverless environments and JavaScript applications. LLRT uses a lightweight JavaScript engine called QuickJS, achieving fast execution and performance with minimal memory consumption. LLRT is ideal for latency-critical applications, high-volume functions, and integration with AWS services. It significantly improves performance, reducing cold starts and providing consistent warm start times. Users are encouraged to test LLRT and contribute to its development.


1. Introduction to LLRT

Short description:

Serverless services like AWS Lambda allow developers to build modern applications without provisioning servers or additional infrastructure. However, cold starts can introduce latency. LLRT is a low latency runtime designed specifically for serverless environments and JavaScript applications. LLRT does not incorporate a just-in-time compiler, conserving CPU and memory resources and reducing application startup times. It offers virtually negligible cold starts and uses ECMAScript 2020 with many Node.js APIs.

Hello, everyone. In today's world of modern applications, swift responsiveness is essential. Users expect an excellent experience where every action triggers an immediate response. Serverless services such as AWS Lambda allow developers to build modern applications without the need to provision any servers or additional infrastructure.

However, these services sometimes add a bit of latency when provisioning a new execution environment to run the customer code. This is referred to as a cold start. And even though production metrics show that cold starts typically occur for less than 1% of all invocations, and sometimes even less, they can still be disruptive to the seamless user experience that we're targeting.

What if I told you that there is a solution to cold starts? What if I told you that you can run JavaScript applications on AWS Lambda with virtually negligible cold starts?

My name is Richard Davison. I work as a partner solution architect, helping partners to modernize their applications on AWS using serverless and container technologies. And I am here to talk about the project that I've been building for some time called LLRT and how it redefines serverless latency.

So LLRT is short for low latency runtime. And it's a new JavaScript runtime built from the ground up to address the growing demand for fast and efficient serverless applications. Why should we build a new JavaScript runtime? So JavaScript is one of the most popular ways of building and running serverless applications. It also often offers full stack consistency, meaning that your application back end and front end can share a unified language, which is an added benefit. JavaScript also offers a rich package ecosystem and a large community that can help accelerate the development of your applications. Furthermore, JavaScript is recognized as being rather user-friendly in nature, making it easy to learn, easy to read and easy to write. It is also an open standard known as ECMAScript, which has been implemented by different engines, which is something that we will discuss later in this presentation.

So how is LLRT different from Node, Bun and Deno? What justifies the introduction of another JavaScript runtime in light of these existing alternatives? Node, Bun and Deno represent highly proficient JavaScript runtimes. They are extremely capable and they are very performant. However, they're designed with general-purpose applications in mind, and these runtimes were not specifically tailored for the demands of serverless environments, often characterized by short-lived runtime instances with limited resources. They also each depend on a just-in-time compiler, a very sophisticated technological component that allows the JavaScript code to be dynamically compiled and optimized during execution. While a just-in-time compiler offers substantial long-term performance advantages, it carries computational and memory overhead, especially on limited resources.

So in contrast, LLRT distinguishes itself by not incorporating a just-in-time compiler, which is a strategic decision that yields two significant advantages. The first one is that, again, a just-in-time compiler is a notably sophisticated technological component introducing increased system complexity and contributing substantially to the runtime's overall size. And without that JIT overhead, LLRT conserves both CPU and memory resources that can be more effectively allocated towards executing the code that you run inside of your Lambda function, and thereby reducing application startup times. So again, a just-in-time compiler would offer a long-term substantial performance increase, whereas a lack of a just-in-time compiler can offer startup benefits.

LLRT is built from the ground up with one primary focus: performance on AWS Lambda. It comes with virtually negligible cold starts; cold start duration is less than 100 milliseconds for a lot of use cases and tasks, even when making AWS SDK v3 calls. It uses a rather recent ECMAScript standard, ECMAScript 2020, with many Node.js APIs. The goal of this is to make migration from Node.js as simple as possible.

2. LLRT Performance and Demo

Short description:

LLRT has embedded AWS SDK v3 clients, leading to performance benefits and cost savings. It uses a lightweight JavaScript engine called QuickJS, which is less than one megabyte in size compared to over 50 megabytes for engines like V8 and JavaScriptCore. LLRT is built in Rust, adhering to Node.js specifications, and has a total executable size of less than three megabytes. A demo in the AWS Lambda console shows a cold start duration of over 1.2 seconds with the regular Node.js 20 runtime, consuming almost 88 megabytes of memory.

It comes with what we call batteries included. The LLRT binary itself has some AWS SDK v3 clients already embedded, so you don't need to ship and provide those, which also has performance benefits. And speaking of performance benefits, there is also a cost benefit: more stable performance, mainly due to the lack of a just-in-time compiler, can lead to up to a 2x performance improvement versus other JavaScript runtimes, and a 2x cost saving, even for warm starts.

So what makes this so fast? What is under the hood? It uses a different JavaScript engine compared to Deno or Bun. Deno and Bun use engines called V8 and JavaScriptCore. V8 comes from the Chrome browser, where the Chrome team created a JavaScript engine for its browser called V8, whereas Bun uses an engine called JavaScriptCore that comes from Safari. QuickJS, on the other hand, is a very lightweight engine. It's very capable, but it's also very lightweight. The engine itself, when compiled, is less than one megabyte. If you compare this with both JavaScriptCore and V8, they're over 50 megabytes inside of Node and Bun. LLRT is also built in Rust, using the Tokio asynchronous runtime. Many of the APIs implemented inside the runtime adhere to the Node.js specification and are implemented in Rust. The whole executable itself is less than three megabytes, and that is including the AWS SDK.

I think it's time to take a look at a quick demo to see how it performs in action. Here I am inside the AWS Lambda console. In this example, I have imported the DynamoDB client and the DynamoDB document client to take an event that comes into AWS Lambda and put it on DynamoDB. I also add a randomized ID and stringify the event, and I simply return a status code of 200 and OK. Let's now first execute this using the regular Node.js 20 runtime. This time we see a cold start. So let's go to the test tab here and hit the test button. Now it has been executed. And if we examine the execution logs, we can see that Node.js executed with a duration of 988 milliseconds and an init duration of 366 milliseconds. So in total, this is a little over 1.3 seconds. And we consumed almost 88 megabytes of memory while doing so. What I'm going to do now is go back to the code, scroll down to runtime settings, click on edit, change to the Amazon Linux 2023 OS-only runtime, and save it.
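For reference, a minimal sketch of the handler described in the demo looks roughly like this. The table name ("Events") and the injected client are assumptions for illustration; the real demo uses the DynamoDB document client from the AWS SDK v3 that LLRT embeds (`DynamoDBDocument.from(new DynamoDBClient({}))`). Injecting the client keeps the sketch runnable without AWS credentials.

```javascript
// Hypothetical handler factory mirroring the demo: store the incoming
// event in DynamoDB with a randomized ID, then return 200 OK.
const makeHandler = (docClient, tableName) => async (event) => {
  const item = {
    id: Math.random().toString(36).slice(2), // randomized ID, as in the demo
    event: JSON.stringify(event),            // stringified incoming event
  };
  await docClient.put({ TableName: tableName, Item: item });
  return { statusCode: 200, body: "OK" };
};
```

In the real function, the exported handler would be `makeHandler` applied to a document client built from the embedded SDK.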

3. LLRT Execution and Performance

Short description:

LLRT executes code almost instantly with a duration of 69 milliseconds, consuming only 20 megabytes of memory. Warm starts are also fast, up to two times faster than the Node.js equivalent. No code changes are required, only a change in runtime settings.

And now let's execute it with LLRT. As you can see, this was almost instant. Examining the execution logs, we can see that we now have a duration of 29 milliseconds and an init duration of 38. That gives us a total duration of 69 milliseconds. So 69 milliseconds versus 1,300 or slightly above for Node.js. While doing so, we only consumed about 20 megabytes of memory.

And notice that if I run the code again, for warm starts, it's also very fast. We have 45 milliseconds here, then 16, 13, 14, 9, etc. So there's also no sacrifice in warm performance. In fact, it can be up to two times faster than the Node.js equivalent, mainly due to the lack of a just-in-time compiler and a simpler, less complex engine. Also notice that I didn't change a single line of code. What I simply did was change the runtime settings. I prepared this demo by putting the LLRT bootstrap binary here: I simply downloaded LLRT, renamed the binary to bootstrap, and put it together with my sample code.

4. LLRT Use Cases and Performance

Short description:

LLRT is ideal for latency-critical applications, high-volume functions, data transformation, and integration with AWS services. It can also run server-side rendered React applications and handle applications with a lot of glue code. However, LLRT is not suitable for simulations, large data transfers, or handling thousands of iterations. LLRT achieves its speed by eliminating the just-in-time compiler, optimizing the AWS SDK for Lambda, and writing code in Rust. The small JavaScript layer on top of Rust provides a lightweight runtime environment.

Okay, let's get back to the presentation. So what are good use cases for LLRT? Good use cases can be latency-critical applications, high-volume functions, data transformation, and integration with different AWS services. Server-side rendered React applications can even be executed with LLRT, as well as applications consisting of a lot of glue code. What I mean by this is applications that integrate with third-party sources or other AWS services: the glue between one service and the other.

Where LLRT is not a good fit is when you're doing simulations, handling hundreds or thousands of iterations in loops, doing some sort of Monte Carlo simulation, or transferring large objects or large sets of data in tens or even hundreds of megabytes. This is where the just-in-time compiler really shines, a feature that is not available in LLRT. But what is best right now is to measure and see, and I'm pretty confident that a lot of your use cases would benefit from running LLRT.

And again, how can it be so fast? It has no JIT, and the AWS SDK is optimized specifically for Lambda. This means that we have removed some of the complexity involved in the AWS SDK, such as cached object creation. We convert the SDK to QuickJS bytecode, and we leverage some other techniques that optimize for cold starts on Lambda. For instance, we do as much work as possible during initialization, because the Lambda runtimes get a CPU boost while being initialized. We also write most of our code in Rust. In fact, we have a policy that says as much as possible should be written in Rust: the more code we can move from JavaScript to Rust, the greater the performance benefit. In contrast, almost all of Node.js's APIs are written in JavaScript, and they heavily depend on the just-in-time compiler of the V8 engine to achieve great performance. Since we lack this capability and write most of the code in Rust, we get performance benefits while still keeping the size down, and we get an instant performance benefit without having to rely on the JIT profiler to optimize the code of longer-running tasks. Basically, everything that you're using in LLRT is written in Rust: the console, the timers, crypto, hashing, all of that is written in Rust. There's just a small JavaScript layer on top of that, and of course your code will be running in JavaScript as well. It's also, again, very lightweight, only a few megabytes, and we try to keep it as lightweight as possible, minimizing dependencies but also minimizing complexity.
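The "do as much work as possible during initialization" point applies to your own functions too: Lambda boosts CPU while the execution environment initializes, so one-time setup belongs at module scope rather than inside the handler. A minimal sketch of that pattern, where the JSON parsing is a hypothetical stand-in for real setup work (parsing config, creating SDK clients):

```javascript
// Runs once per execution environment, during the boosted init phase.
const config = JSON.parse('{"table":"Events","retries":3}');

// Runs on every invocation; keep this path as thin as possible.
async function handler(event) {
  return { statusCode: 200, body: `table=${config.table}` };
}
```

This is a general Lambda optimization, not something specific to LLRT, but LLRT leans on the same mechanism internally.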

5. LLRT Compatibility and Performance

Short description:

LLRT has some trade-offs and is not fully compatible with every Node.js API. However, it is constantly being developed and is available as a beta version. To use LLRT, download the latest release from the GitHub page, add the bootstrap executable with your code, and select custom runtime on Amazon Linux 3 in Lambda. LLRT runs on ARM or x86-64 instances, with ARM offering cost savings and slightly better performance. In terms of performance, LLRT starts almost six times faster than Node.js, showcasing its lightweight nature. Additionally, benchmark data shows significant benefits in cold start and warm start times compared to Node.js.

So what's the catch? This is a very high-level compatibility matrix, and you can see there's an exclamation mark here and a few check marks. Obviously, there have to be some trade-offs in order to achieve this level of performance. One of the trade-offs is that not every Node.js API is supported, and those that are supported are not always fully supported. Even though there's a check mark here, it doesn't mean that, for instance, the full FS module or FS promises module is supported; it's partially supported. But we're constantly building this runtime, and it's available as a beta today that you can check out. I will have links to it later in this presentation.

And how to use it? Like you saw in the demo, I just downloaded the latest release from the GitHub page, which is github.com/awslabs/llrt. I add the bootstrap executable together with my code. You can also use a layer, if that's your thing, or package it as a container image. I then select the custom runtime on Amazon Linux 2023 inside Lambda as my runtime choice. LLRT runs on either ARM or x86-64 instances. There's a slight benefit to using ARM because you get a cost saving and slightly better performance, so this is something that I recommend.

Now, let's take a look at some benchmark data. As we saw in the demo, we did a very quick sample where both the cold start and warm start benefits were significant versus Node.js. This slide showcases some startup benefits when running on my local machine. As you can see here, highlighted by the arrow, LLRT starts almost six times faster than Node.js. This is a pretty unexciting demo where we basically just do a print, but it showcases the lightness of the engine, which doesn't have to load a lot of resources in order to start. So it can be even faster than Deno and Bun. But bear in mind that a lot of this speed comes from simplicity. It's very easy to introduce a new runtime with a limited API and say it's faster, but this is one of the trade-offs, right? We make it very lightweight, hence it's also naturally faster.

Let's now take a quick look at some performance numbers when running LLRT for a longer period of time. This is again doing a DynamoDB PUT, the same sample code that we saw in the demo, but now running for 32,000 invocations on ARM64 with 128 megabytes of memory. Notice that the P99 latency, meaning that 99% of all invocations are below this number, is 34 milliseconds for warm starts and 84 milliseconds for cold starts. In comparison, the fastest possible warm start is only 5.29 milliseconds and the fastest possible cold start is 48.85 milliseconds. If you compare this with Node.js 18, we can see a P99 latency of 164 milliseconds for warm starts and 1,306 milliseconds for cold starts.

6. LLRT Performance Improvements

Short description:

LLRT significantly improves performance with 23 times faster cold starts and 15 times faster worst case performance. It also reduces the number of cold starts compared to Node.js, with only 109 cold starts versus 554. LLRT provides more consistent warm start times, with a duration range of 29 milliseconds for P99. In terms of latency and cost, LLRT offers a 3.7 times time-saving and a 2.9 times cost-saving compared to Node.js over 32,000 invocations.

For the fastest times with Node.js, we have 5.85 milliseconds for warm starts and 1,141 milliseconds for cold starts. This means there is a 23x performance improvement for this exact demo for cold starts, best case versus best case, and a 15x performance improvement for the worst case. Also notice the number of cold starts here. In Lambda, even though cold starts may not be super critical for your application, if we can keep them shorter, they are also less likely to occur: every time Lambda has to process two concurrent events and has not done so before, meaning there are no ready instances, it has to spin up a new one, introducing an additional cold start. In my example, LLRT introduced only 109 cold starts versus 554 for Node.js. Again, because the cold starts are much shorter, they are also less likely to occur in the first place. Also notice the duration span of the warm starts: Node.js ranges from the fastest invocation all the way to 158 milliseconds for the slowest, versus only 29 milliseconds at P99 for LLRT. Again, this is due to the lack of a just-in-time compiler, making execution much more consistent. If we look at the latency and cost breakdown, we have a billed duration of 22 minutes and 19 seconds for Node.js versus only 7 minutes and 48 seconds for LLRT, which translates to a cost saving of 2.9 times and a time saving of 3.7 times. The reason these two differ is how they're charged: provided runtimes are charged a bit differently in Lambda than custom runtimes, but we still have a cost saving of 2.9x for this particular example over 32,000 invocations.
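To show where the "23x" and "15x" figures above come from, here is the arithmetic on the benchmark numbers quoted in this section (all values in milliseconds, taken from the talk's slides):

```javascript
// Cold start figures quoted above: best observed and P99, per runtime.
const llrt = { coldBest: 48.85, coldP99: 84 };
const node = { coldBest: 1141, coldP99: 1306 };

// Best case vs best case, and (P99) worst case vs worst case.
const bestCaseSpeedup = node.coldBest / llrt.coldBest; // ≈ 23.4x
const p99Speedup = node.coldP99 / llrt.coldP99;        // ≈ 15.5x
```

So "23x" compares the fastest cold starts of each runtime, while "15x" compares the P99 cold starts.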

QnA

Conclusion and Q&A

Short description:

I highly encourage you to test LLRT. Follow the QR link to try it out. LLRT is suitable for simpler serverless functions, integrating with downstream services or data sources. It may not be ideal for heavy JavaScript applications with extensive iterations. Making Node.js faster is complex, and many are working on it. Node's ecosystem and developers' familiarity with JavaScript make it a priority for optimization.

And that's it for me. I highly encourage you to test LLRT. So you can follow the QR link here and please test it out. It's still a very experimental runtime. So don't run it in production just yet, but we're building more capabilities every day. And we hope that you provide feedback. And again, I'm very, very thankful that you took the time to listen to me today. And I hope you enjoy the rest of Node Congress. Thank you.

And first of all, let's take a look at the poll question that Richard also provided here. What would you like to see in the next evolution of JavaScript engines and runtimes? And Richard, looking at this, improved support for new language features and enhanced performance optimization being sort of the top two answers here. Are you surprised by that? Actually not. I mean, I think given the innovation that has happened last couple of years, even in the Node ecosystem, I'm not surprised that people want to see more capabilities, but also enhanced performance. So yeah, it's kind of aligned with what you see coming up now with all the innovation and all the engines and all the frameworks and everything that's happening right now. Yeah, super interesting. Well, thank you so much, everyone, for participating. We're now going to jump to your questions, and we'll have Richard actually go through them. Don't worry, Richard, I'll read them out for you.

Why should you use LLRT, and when should you use it? And also, when should you not? What would be the case against it? And an interesting question: why don't you focus on making Node.js faster? Yeah, thanks, it's an excellent question. So when should you use it? I think a good place to use it is when you have simpler serverless functions: functions that integrate with downstream services or downstream data sources, or do simpler data transformation. Not those super heavy, JavaScript-intense applications where you do hundreds of thousands or millions of iterations, because then performance will likely be better with a runtime like Node, Deno or Bun. But for that glue code, I think LLRT is an excellent fit. And why don't I try to make Node faster instead? I think that's also a good question, and it's very complicated to make Node faster. If it was straightforward, or if I could do it, I would try. I think a lot of people are working on improving the performance of Node, whereas traditionally that has not been the main focus of the project. And maybe also just really briefly, if you could answer this: what's your answer if somebody says, well, Node is slow, they might as well spin up a Lambda function with Go, for example, write in a compiled language. Why focus on Node.js and making that runtime faster? I think that the Node ecosystem is so huge. There's already a lot of software and a lot of developers that know JavaScript and are very productive with that language.

WinterCG Compliance and User Contribution

Short description:

WinterCG compliance is the target for production use. Users are encouraged to try it out and provide bug reports. Users can contribute by checking the issues on the repository and helping with WinterCG compliance.

So it's not as simple as switching to Rust or Go or another compiled language. They want to stick with JavaScript, and we want to make it a good world for those developers. WinterCG compliance is the target for production use. We encourage users to try it out and provide bug reports on the open source repository. Users can contribute by checking the issues on the repository and helping with WinterCG compliance.


Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

React Advanced Conference 2022React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
Top Content
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at: - What "rendering" actually is - How React queues renders and the standard rendering behavior - How keys and component types are used in rendering - Techniques for optimizing render performance - How context usage affects rendering behavior| - How external libraries tie into React rendering
React Summit 2023React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Top Content
Too much JavaScript is getting you down? New frameworks promising no JavaScript look interesting, but you have an existing React application to maintain. What if Qwik React is your answer for faster applications startup and better user experience? Qwik React allows you to easily turn your React application into a collection of islands, which can be SSRed and delayed hydrated, and in some instances, hydration skipped altogether. And all of this in an incremental way without a rewrite.
React Summit 2023React Summit 2023
23 min
React Concurrency, Explained
Top Content
React 18! Concurrent features! You might’ve already tried the new APIs like useTransition, or you might’ve just heard of them. But do you know how React 18 achieves the performance wins it brings with itself? In this talk, let’s peek under the hood of React 18’s performance features: - How React 18 lowers the time your page stays frozen (aka TBT) - What exactly happens in the main thread when you run useTransition() - What’s the catch with the improvements (there’s no free cake!), and why Vue.js and Preact straight refused to ship anything similar
JSNation 2022JSNation 2022
21 min
The Future of Performance Tooling
Top Content
Our understanding of performance & user-experience has heavily evolved over the years. Web Developer Tooling needs to similarly evolve to make sure it is user-centric, actionable and contextual where modern experiences are concerned. In this talk, Addy will walk you through Chrome and others have been thinking about this problem and what updates they've been making to performance tools to lower the friction for building great experiences on the web.
Node Congress 2022Node Congress 2022
26 min
It's a Jungle Out There: What's Really Going on Inside Your Node_Modules Folder
Top Content
Do you know what’s really going on in your node_modules folder? Software supply chain attacks have exploded over the past 12 months and they’re only accelerating in 2022 and beyond. We’ll dive into examples of recent supply chain attacks and what concrete steps you can take to protect your team from this emerging threat.
You can check the slides for Feross' talk here.

Workshops on related topic

React Summit 2023React Summit 2023
170 min
React Performance Debugging Masterclass
Top Content
Featured WorkshopFree
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project and how routing works with layouts. The demo application will fetch data and present it to the user in an editable form. And finally, how one can use authentication. All of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
WorkshopFree
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
Node Congress 2023
109 min
Node.js Masterclass
Top Content
Workshop
Have you ever struggled with designing and structuring your Node.js applications? Building applications that are well organised, testable and extendable is not always easy. It can often turn out to be a lot more complicated than you expect it to be. In this live event Matteo will show you how he builds Node.js applications from scratch. You’ll learn how he approaches application design, and the philosophies that he applies to create modular, maintainable and effective applications.

Level: intermediate
React Advanced Conference 2023
148 min
React Performance Debugging
Workshop
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023
104 min
Build and Deploy a Backend With Fastify & Platformatic
WorkshopFree
Platformatic allows you to rapidly develop GraphQL and REST APIs with minimal effort. The best part is that it also allows you to unleash the full potential of Node.js and Fastify whenever you need to. You can fully customise a Platformatic application by writing your own additional features and plugins. In the workshop, we’ll cover both our Open Source modules and our Cloud offering:
- Platformatic OSS (open-source software) — Tools and libraries for rapidly building robust applications with Node.js (https://oss.platformatic.dev/).
- Platformatic Cloud (currently in beta) — Our hosting platform that includes features such as preview apps, built-in metrics and integration with your Git flow (https://platformatic.dev/).
In this workshop you'll learn how to develop APIs with Fastify and deploy them to the Platformatic Cloud.