Making JavaScript on WebAssembly Fast


JavaScript in the browser runs many times faster than it did two decades ago. And that happened because the browser vendors spent that time working on intensive performance optimizations in their JavaScript engines.

Because of this optimization work, JavaScript is now running in many places besides the browser. But there are still some environments where the JS engines can’t apply those optimizations in the right way to make things fast.

We’re working to solve this, beginning a whole new wave of JavaScript optimization work. We’re improving JavaScript performance for entirely different environments, where different rules apply. And this is possible because of WebAssembly. In this talk, I'll explain how this all works and what's coming next.

29 min
10 Jun, 2021

Video Summary and Transcription

WebAssembly enables optimizing JavaScript performance for different environments by deploying the JavaScript engine as a portable WebAssembly module. By making JavaScript on WebAssembly fast, instances can be created for each request, reducing latency and security risks. Initialization and runtime phases can be improved with tools like Wizer and snapshotting, resulting in faster startup times. Optimizing JavaScript performance in WebAssembly can be achieved through techniques like ahead-of-time compilation and inline caching. WebAssembly usage is growing outside the web, offering benefits like isolation and portability. Build sizes and snapshotting in WebAssembly depend on the application, and more information can be found on the Mozilla Hacks website and the Bytecode Alliance site.


1. Introduction to WebAssembly

Short description:

Hi, I'm Lin Clark, and I make code cartoons. Today, I want to explain what it is about WebAssembly that enables optimizing JavaScript performance for different environments. Let's start by understanding how we're running JavaScript inside a WebAssembly engine.

Hi, I'm Lin Clark, and I make code cartoons. I also work at Fastly, which is doing a ton of cool things with WebAssembly to make better edge compute possible. And I'm a co-founder of the Bytecode Alliance. We're working on tools for a WebAssembly ecosystem that extends beyond the browser. And it's one of those tools that I wanted to talk to you about today.

JavaScript was first created to run in the browser, so that people could add a little bit of interactivity to their web pages. No one would have guessed that 20 years later, people would be using JavaScript to build all sorts of big, complex applications to run in a browser. What made this possible is that JavaScript in the browser runs a lot faster than it did two decades ago. And that happened because the browser vendors spent that time working on some pretty intensive performance optimizations. Now, this started with the introduction of just-in-time compilers around 2008. And the browsers have built on top of that, continuing these optimization efforts. Now we're starting work on optimizing JavaScript performance for an entirely different set of environments where different rules apply. And this is possible because of WebAssembly. So, today I want to explain what it is about WebAssembly that enables this. But first, I want to give you a heads up. This talk is structured a bit differently than speaking experts would tell me I should structure it. I'm going to start by telling you how we're making this work at all. And once you've heard that, you might not be on board. You might think that this is a pretty ridiculous idea. So, that's why I'm going to explain why. I'm going to explain why you would actually want to do this. And then once you're bought in, and I know you'll be bought in, then I'm going to come back and explain exactly how it is that we're making this fast.

So, let's get started with how we're running JavaScript inside a WebAssembly engine. Whenever you're running JavaScript, the JS code needs to be executed as machine code in one way or another. Now, this is done by the JS engine using a variety of different techniques, from interpreters to JIT compilers. And I explained this in more detail in my first set of articles about WebAssembly back in 2017. So, if you want to understand more about how this works, you can go back and read those articles. Running this JavaScript code is really quite easy in environments like the Web, where you know you're going to have a JavaScript engine available. But what if your target platform doesn't have a JavaScript engine? Then you need to deploy your JavaScript engine with your code.

2. Running JavaScript in WebAssembly

Short description:

So, that's what we need to do to bring JavaScript to different environments. We deploy the JavaScript engine as a WebAssembly module, making it portable across different machine architectures and operating systems. The JavaScript environment is bundled into the module, and once deployed, you can run JavaScript code. However, running JavaScript inside WebAssembly is slow because it can only use the interpreter, not the JIT. But what if we could make it run fast? This approach would be useful in platforms where a JIT is not allowed due to security concerns, like iOS devices or smart TVs. It would also help with startup times in serverless functions, reducing latency.

So, that's what we need to do to bring JavaScript to these different environments. So, how do we do this? Well, we deploy the JavaScript engine as a WebAssembly module, and that makes it portable across a bunch of different machine architectures. With WASI, we can make it portable across a bunch of different operating systems as well. This means that the whole JavaScript environment is bundled up into the WebAssembly module. And once you deploy it, all you need to do is feed in the JavaScript code, and that JavaScript engine will run that code.

Now, instead of working directly on the machine's memory like it would for a browser, the JavaScript engine puts everything from the bytecode to the garbage-collected objects that the bytecode works on into the WebAssembly module's linear memory. For our JS engine, we went with SpiderMonkey, the JS engine that Firefox uses. It's one of the industrial-strength JavaScript virtual machines, because it's been battle-tested in the browser. And this kind of battle testing and investment in security is really important when you're running untrusted code or running code that processes untrusted input. SpiderMonkey also uses a technique called precise stack scanning, which is important for some of the optimizations that I'll be describing a bit later in the talk.

So far, there's nothing revolutionary about the approach that I've described. People have already been running JavaScript inside of WebAssembly like this for a number of years. The problem is that it's slow. WebAssembly doesn't allow you to dynamically generate new machine code and run it from within pure WebAssembly code. So, this means that you can't use the JIT. You can only use the interpreter. Now, given this constraint, you might be asking why we'd even try. Since JITs are how the browsers made JS code run fast, and since you can't JIT compile inside of a WebAssembly module, this just doesn't seem to make sense. But what if, even given these constraints, we could actually make this JavaScript run fast? Let's look at a couple of use cases where a fast version of this approach could be really useful. There are some places where you can't use a just-in-time compiler due to security concerns. So, for example, iOS devices or some smart TVs and gaming consoles. On these platforms, you have to use an interpreter. But the kinds of applications that you run on these platforms are long-running and they require lots of code, and those are exactly the kinds of conditions where historically you wouldn't want to use an interpreter, because of how much it slows down your execution. If we can make our approach fast, then these developers could use JavaScript on JITless platforms without taking a massive performance hit. Now, there are other places where using a JIT isn't a problem, but where startup times are prohibitive. So, an example of this is in serverless functions, and this plays into that cold start latency problem that you might have heard people talking about. Even if you're using the most pared-down JavaScript environment, which is an isolate that just starts up a bare JavaScript engine, you're looking at about five milliseconds of startup latency. Now, there are some ways to hide this startup latency for an incoming request.
But it's getting harder to hide as connection times are being optimized in the network layer with proposals such as QUIC. And it's harder to hide when you're chaining serverless functions together.

3. JavaScript Engine Initialization and Runtime

Short description:

Platforms that use techniques to hide latency often reuse instances between requests, which can lead to security issues. Developers often don't follow best practices and stuff multiple functions into one serverless deployment, resulting in a larger blast radius. By making JavaScript on WASM fast, we can provide a new instance for each request, eliminating state between requests and reducing the blast radius. To achieve this, we need to understand the two parts of the JavaScript engine: initialization and runtime. Initialization involves setting up resources and reading through the source code, while engine initialization starts up the JS engine and adds built-in functions to the environment.

But more than this, platforms that use these kinds of techniques to hide latency also often reuse instances between requests. And in some cases, this means that global state can be observed between different requests, which can be a security issue. And because of this cold start problem, developers also often don't follow best practices.

They stuff a lot of functions into one serverless deployment. So, this results in another security issue, which is a larger blast radius. If one part of the serverless deployment is exploited, the attacker has access to everything in that deployment. But if we can get JavaScript startup times low enough in these contexts, then we wouldn't need to hide startup times with any tricks. We could just start up an instance in microseconds.

With this, we can provide a new instance for each request, which means that there's no state lying around between requests. And because the instances are so lightweight, developers could feel free to break up their code into fine-grained pieces. And this would bring their blast radius down to a minimum for any single piece of code. So, for these use cases, there's a big benefit to making JavaScript on WASM fast. But how can we do that?
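The state-leaking problem that per-request instances solve can be sketched in a few lines of plain JavaScript. This is an illustrative toy, not any platform's actual runtime; all names in it are invented:

```javascript
// Sketch: why reusing one instance between requests leaks state.
// `requestCount` lives at module scope, so when the same instance
// serves many requests, it survives from one request to the next.
let requestCount = 0;

function handleRequest(user) {
  requestCount += 1; // mutated module-level state, observable across requests
  return `hello ${user}, you are visitor #${requestCount}`;
}

// Two requests from *different* users hit the same reused instance...
const first = handleRequest("alice");
const second = handleRequest("bob");
// ...and the second response reveals state left behind by the first.
// With a fresh instance per request, both users would be visitor #1.
```

If each request got its own microsecond-startup instance, `requestCount` would always start at zero, which is exactly the isolation property described above.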

In order to answer that question, we need to understand where the JavaScript engine spends its time. We can break down the work that a JavaScript engine has to do into two different parts. Initialization and runtime. I think of the JS engine as a contractor. This contractor is retained to complete a job, and that job is running the JavaScript code and getting to a final result.

Before this contractor can actually start running the project, though, it needs to do a little bit of preliminary work. This initialization phase includes everything that only needs to happen once at the very start of the project. So, one part of this is application initialization. For any project, the contractor needs to take a look at the work that the client wants it to do and then set up the resources that it needs in order to complete that job. So, for example, the contractor reads through the project briefing and other supporting documents and turns them into something that it can work with.

So, this might be something like setting up the project management system with all of the documents stored and organized, and breaking things into tasks that go into the task management system. In the case of the JS engine, this looks more like reading through the top level of the source code and parsing functions into bytecode, or allocating memory for the variables that are declared and setting values where they're already defined. So, that's application initialization. But in some cases, there's also engine initialization. And you see this in contexts like serverless. The JS engine itself needs to be started up in the first place. And built-in functions need to be added to the environment. I think of this like setting up the office itself.
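As a rough illustration, here is the kind of top-level work that falls into the application initialization phase. This is a made-up, minimal renderer, not the markdown application from the talk; everything at module scope runs once, before any function is called:

```javascript
// Application initialization: this Map is allocated and its values are
// filled in at startup, before the first call ever happens.
const headingRules = new Map([
  ["# ", "h1"],
  ["## ", "h2"],
]);

// Runtime: by the time this runs, all of the init work above is done.
function renderHeading(line) {
  for (const [prefix, tag] of headingRules) {
    if (line.startsWith(prefix)) {
      const text = line.slice(prefix.length);
      return `<${tag}>${text}</${tag}>`;
    }
  }
  return `<p>${line}</p>`;
}

const out = renderHeading("## Snapshots");
```

In a real application, the top-level phase also includes parsing every module and allocating far more state, which is why it grows with application size.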

4. Improving Initialization with Snapshotting

Short description:

Doing things like assembling the Ikea chairs and tables and everything else in the environment before starting the work can take considerable time, making cold start an issue for serverless use cases. The speed of running the code, known as throughput, is influenced by language features, code predictability, data structures used, and the code's runtime. To make the work in the initialization and runtime phases faster, we use a tool called Wizer. With Wizer, we achieved a six times faster startup time in a small markdown application, with 80% of the time spent on engine initialization. Snapshotting, a technique where the JavaScript code is run to the end of the initialization phase and the resulting byte code is stored in the linear memory, enables fast startup by attaching the memory as a data section to a Wasm module.

Doing things like assembling the Ikea chairs and tables and everything else in the environment before starting the work. Now, this can take considerable time. And that's part of what can make the cold start such an issue for serverless use cases.

Once the initialization phase is done, the JS engine can start its work. This work of running the code. And the speed of this part of the work is called throughput. And this throughput is affected by lots of different variables. So, for example, which language features are being used. Whether the code behaves predictably from the JS engine's point of view. What sorts of data structures are used and whether or not the code runs long enough to benefit from the JS engine's optimizing compiler.

So, these are the two phases where the JS engine spends its time. Initialization and runtime. Now, how can we make the work in these two phases go faster? Let's start with initialization. Can we make that fast? And spoiler alert, yes, we can. We used a tool called Wizer for this. And I'll explain how that works in a minute. But first I want to show you some of the results that we saw. We tested with a small markdown application. And using Wizer, we were able to make startup time six times faster. If we look in more depth at this case, about 80% of this was spent on engine initialization. And the remaining 20% was spent on application initialization. And part of that is because this markdown renderer is a very small and simple application. As apps get larger and more complex, application initialization time just takes longer. So we would see even larger comparative speedups for real world applications. Now, we get this fast startup using a technique called snapshotting. Before the code is deployed, as part of the build step, we run the JavaScript code using the JavaScript engine to the end of the initialization phase. And at this point, the JS engine has parsed all of the JS and turned it into bytecode, which the JS engine stores in the linear memory. And the engine also does a lot of memory allocation and initialization in this phase. Because this linear memory is so self-contained, once all of the values have been filled in, we can just take that memory and attach it as a data section to a Wasm module. When the JS engine module is instantiated, it has access to all of the data in the data section.
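The build-time/run-time split that snapshotting exploits can be simulated in plain JavaScript. This sketch captures application state as JSON, whereas the real tooling captures the engine's linear memory; all names here are invented for illustration:

```javascript
// --- "build step": run initialization once, ahead of time ---
function initialize() {
  // Stand-in for expensive parsing and memory allocation work.
  const table = {};
  for (let i = 0; i < 1000; i++) table[`key${i}`] = i * i;
  return table;
}
// The serialized result plays the role of the Wasm data section.
const snapshot = JSON.stringify(initialize());

// --- instantiation: each new instance starts from the snapshot ---
function instantiate() {
  // No initialization work at startup: just rehydrate the captured state.
  return JSON.parse(snapshot);
}

const instance = instantiate();
const answer = instance["key12"]; // pre-computed at build time
```

The key point the simulation shares with the real thing: `initialize()` runs once at build time, so every instance skips straight to the runtime phase.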

5. Memory Management and Module Separation

Short description:

Whenever the engine needs a bit of memory, it can copy the necessary section into its own linear memory. This eliminates the need for setup when the engine starts up. The data section can be shipped separately, allowing the JS engine module to be reused across different applications. The application-specific module contains the JavaScript bytecode and the JS engine state, making it easy to move and deploy.

Whenever the engine needs a bit of that memory, it can copy the section, or rather the memory page, that it needs into its own linear memory. With this, the JS engine doesn't have to do any setup when it starts up. All of this is pre-initialized, ready and waiting for it to start its work.

Currently, we attach the data section to the same module as the JS engine, but in the future, once WebAssembly module linking is in place, we'll be able to ship the data section as a separate module. This provides a really clean separation and allows the JS engine module to be reused across a bunch of different JS applications. The JS engine module only contains the code for the engine. That means that once it's compiled, that code can be effectively cached and reused between lots of different instances.

Now, on the other hand, the application-specific module contains no WebAssembly code. It only contains the linear memory, which in turn contains the JavaScript bytecode, along with all the rest of the JS engine state that was initialized. This makes it really easy to move this memory around and send it wherever it needs to go. It's kind of like the JS engine contractor doesn't need to set up its own office at all. It just gets this travel case shipped to it, and that travel case has the whole office, with everything in it all set up and ready to go, for the JS engine to just get to work. And the coolest thing about this is that it doesn't rely on anything that's JS dependent. It's just using an existing property of WebAssembly itself. So, you could use the same technique with runtimes for other languages, like Python or Ruby or Lua, too.

6. Optimizing JavaScript Performance

Short description:

So, with this approach, we can achieve superfast startup time. For short running JavaScript, the throughput is similar to the browser. However, for longer running JavaScript, the JIT starts to make a noticeable difference. While JIT compilation is not possible in a pure WebAssembly module, we can apply similar thinking to ahead-of-time compilation. One optimizing technique is inline caching, which stores translations of frequently interpreted code for reuse. These translations, called stubs, are based on the types used. By parameterizing the IC stubs, we can create a single stub that loads values from memory and covers common patterns in JavaScript code. With just a few kilobytes of IC stubs, we can cover the majority of JS code, such as 95% of the JavaScript in Google's Octane benchmark.

So, with this approach, we can get to this superfast startup time. But what about throughput? Well, for some use cases, the throughput is actually not too bad. If you have a very short-running piece of JavaScript, it wouldn't go through the JIT anyway. It would stay in the interpreter the whole time. So, in that case, the throughput would be about the same as in the browser. And it will have finished before a traditional JavaScript engine would even have finished initialization, in cases where engine initialization is needed.

But for longer-running JavaScript, it doesn't take all that long before the JIT starts kicking in. And once this happens, the throughput difference does become pretty obvious. Now, as I said before, it's not possible to JIT compile code within a pure WebAssembly module at the moment. But it turns out that we can apply some of the same thinking that comes with just-in-time compilation to an ahead-of-time compilation model. So, one optimizing technique that JITs use is inline caching, which I also explained in my first series about WebAssembly. When the same bit of code gets interpreted over and over again, the engine decides to store its translation for that bit of code to reuse next time. And the stored translation is called a stub.

Now, these stubs are chained together into a linked list, and they're based on what types are used for that particular invocation. The next time that the code is run, the engine will check through this list to see whether or not it has a translation that is available for those types, and, if so, it will just reuse the stub. Because IC stubs are commonly used in JITs, people think of them as being very dynamic and specific to each program, but it turns out that they can be applied in an AoT context, too. Even before we see the JavaScript code, we already know a lot of the IC stubs that we're going to need to generate. And that's because there are some patterns in JavaScript that just get used a whole lot. A good example of this is accessing properties on objects. This happens a lot in JavaScript code. And it can be sped up by using an IC stub. For objects that have a certain shape or hidden class, that is, where the properties are laid out in the same order, when you get a particular property from those objects, that property will always be at the same offset.

Traditionally, this kind of IC stub in the JIT would hard-code two values: the pointer to the shape and the offset of the property. That requires information that we don't have ahead of time. But what we can do is parameterize the IC stub, so we can treat the shape and the property offset as variables that get passed into the stub. This way, we can create a single stub that loads values from memory, and then use that same stub code everywhere. We can just bake all of the stubs for these common patterns into the AoT-compiled module, regardless of what the JavaScript is actually doing. And we discovered that with just a couple of kilobytes of IC stubs, we can cover the vast majority of all JS code. For example, with two kilobytes of IC stubs, we can cover 95% of the JavaScript in Google's Octane benchmark.
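Here's a toy model of a parameterized IC stub in plain JavaScript. Real engine stubs are machine code operating on hidden classes in linear memory; this sketch only mimics the idea of passing the shape and the offset in as arguments, and every name in it is invented:

```javascript
// Two objects with the same "shape": properties laid out in the same order.
const pointShape = { x: 0, y: 1 }; // property name -> slot offset
const a = { shape: pointShape, slots: [3, 4] };
const b = { shape: pointShape, slots: [10, 20] };

// The parameterized stub: instead of hard-coding the shape pointer and
// offset (as a JIT stub would), both come in as arguments, so one stub
// covers every same-shaped object and every property.
function getPropStub(obj, expectedShape, offset) {
  if (obj.shape !== expectedShape) return null; // shape guard failed: fall back
  return obj.slots[offset];
}

const ax = getPropStub(a, pointShape, pointShape.x); // loads a.x
const by = getPropStub(b, pointShape, pointShape.y); // same stub, loads b.y
```

Because the stub no longer bakes in program-specific values, it can be generated ahead of time, before the JavaScript it will serve has ever been seen.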

7. Optimization Efforts and Conclusion

Short description:

From preliminary tests, we can see that the optimization efforts are showing promising results. We still have a lot of work to do to find the clever shortcuts that can be used in this context. If you're excited about this and want to contribute or try to make this work for another language, like Python or Ruby or Lua, we'd be happy to hear from you. Thank you to the organizers for inviting me to speak here today and thank you all for listening.

And from preliminary tests, that percentage seems to hold up for general web browsing as well. Now, this is just one example of a potential optimization that we can make. Right now, we're in the same kind of position that the browser JS engines were in back in the early days, when they were first experimenting with just-in-time compilers. We still have a lot of work to do to find the clever shortcuts that we can use in this context. But we're excited to be starting that work and excited for the changes to come.

If you're excited like we are about this and want to contribute to the optimization efforts, or if you want to try to make this work for another language, like Python or Ruby or Lua, we'd be happy to hear from you. You can find us on the messaging platform, Zulip, and feel free to post there if you want to ask for more info. You can also find links to the projects that I mentioned in my recently published blog post on the Bytecode Alliance blog. I want to say thank you to the organizers for inviting me to speak here today and thank you all for listening.

QnA

WebAssembly Usage and Q&A

Short description:

With 55% saying "no, but I want to" and 40% saying "no", it's not surprising that many people are using WebAssembly without realizing it. It's still early in terms of tooling for the ecosystem, but as things progress, more people will start targeting WebAssembly. While it's not widely used in the Netherlands, companies may adopt it in the future. Now, let's move on to the Q&A.

Hi there. Good to see you.

Good to see you too. So, with 55% we have "no, but I want to", and 40% said "no". Does this surprise you? Is this what you were expecting? No, it doesn't surprise me. It's still pretty early in terms of the tooling for the ecosystem. There are a lot of people that are using WebAssembly as users without realizing it. So, of course, if you're using Facebook, you're using WebAssembly when you upload a picture. So, there are lots of folks that are actually on the user side using it under the hood and not really realizing it. There are lots of people also that are using it because it's embedded in modules that they're using. And so, that all makes sense. A lot of people don't realize that they're using WebAssembly. I think that as things progress, people are going to realize more and more when they are using WebAssembly, and they'll start targeting WebAssembly themselves in certain cases, like with the JavaScript-to-WebAssembly work.

Yeah again, here you're talking more about that people are using it as a consumer but not as a developer.

Yeah. Yeah, it doesn't surprise me at all. I mean, even though it's been around for a long time, I actually hop around from client to client every year, and I never hear anywhere where it's used yet here in the Netherlands. So, of course, there will be companies, but not in my experience. So, it doesn't surprise me.

So, let's jump to the Q&A. We have some questions from our audience, and if you still have any questions for Lin then you can jump to the Community Track Q&A channel on Discord. I want to make one little note, of course, you make your cartoons, and well, I think everyone here must have fallen in love with your slides, and you will see a big spike in your traffic on the CodeCartoons website. Really great styling of your slides. So, I wanted to give some compliments on that. Thank you. And the CodeCartoons website has not been updated in a while. I need to actually take care of that. So, a great place to find them is the Bytecode Alliance blog or Mozilla's Hacks website. And on your Twitter, I think.

WebAssembly's Role in Web Development and Beyond

Short description:

On the web, keeping work in JavaScript makes sense, except for computationally heavy pieces. Outside the web, WebAssembly offers benefits like isolation, small footprint, and portability. We'll see interesting work happening outside the web, and as people want to use those applications on websites, they'll bring it back to the web. We may see WebAssembly implementations of highly used JavaScript modules. To incorporate WebAssembly in an existing application, consider porting computationally heavy parts to WebAssembly. Outside the web, platforms like Fastly allow easy deployment of WebAssembly modules. In a microservices architecture, one service can be implemented in WebAssembly.

Yes, and on my Twitter. For the latest and greatest from Lin, check her Twitter.

Let's see, the first question is from Alexius. Do you think that WebAssembly will become a major part of web development, bringing other languages to the web, or will it stay in the area of computational heavy applications? So, I think that this wasn't the plan, but I think that we're actually going to see more of the interesting development happening outside of the web and then coming back to the web through WebAssembly because I think on the web, for most things, keeping your work in JavaScript does make sense. Keeping it in JavaScript that's running natively in the browser does make sense, except for those computationally heavy pieces. But when you're talking about outside the web, you get so many benefits from WebAssembly. You get the isolation, you get the small footprint, you get the portability that you don't have with a lot of other technologies. I think we're going to see a lot of uptake and interesting work happening outside of the web. As people want to start using those applications in their own websites, they'll bring it back into the web and start plugging it in there. That's my prediction.

Next question is from Happy to Collaborate. I think this is someone that wants to help out. Do you believe that many standard, highly used modules that currently exist in JavaScript will be mimicked, duplicated, or replaced by equivalent Wasm-based modules? This would maybe offer better security to the consumer? I'd be interested. You said managed modules? Well, highly used modules, standard modules. So I'm thinking it means something like underscore. Okay, so underscore is like an NPM module. It could be talking about built-in modules, which are the standardized modules in JavaScript. And those are browser built-ins. And then there's also the NPM ecosystem that has a lot of commonly used modules. And yes, I do think that we're going to see implementations of that same kind of functionality in WebAssembly, sometimes more performant. Yeah, that would be awesome.

Next question is from our attendee Keith. What would be a good way to incorporate WebAssembly in an existing application to minimize risk but start to get comfortable using it and taking advantage of it? How to use WebAssembly in an existing project? So it depends on whether or not you're talking about the web or outside of the web. If you're talking about on the web, and you have something that's computationally heavy, then it makes sense to take that little part that is computationally heavy and port that little bit to WebAssembly. If you're working outside of the web, there are platforms like the one that we have at Fastly where you can easily put up a WebAssembly module as the artifact that you run on an edge compute platform. So in that case, you're just going to start up a service using WebAssembly. And if you have a microservices architecture, then you can do one of your services in WebAssembly. Awesome.
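For the on-the-web path, the lowest-risk starting point is calling a single Wasm function from existing JavaScript through the standard WebAssembly API (available in browsers and Node). The module below is a tiny hand-encoded binary exporting one `add` function, used here purely as a stand-in for a ported hot spot:

```javascript
// A minimal WebAssembly module, written out byte by byte:
// it exports a single function add(a, b) -> a + b on i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Compile and instantiate synchronously (fine for tiny modules;
// use WebAssembly.instantiate / instantiateStreaming for real ones).
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

// Existing JS code calls the Wasm export like any other function.
const sum = instance.exports.add(2, 40); // 42
```

In practice you'd compile the hot computational kernel from Rust or C to Wasm rather than hand-writing bytes, but the calling side of the integration looks exactly like this.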

Next question is from Warlock.

Build Sizes and Snapshotting in WebAssembly

Short description:

Build sizes and bandwidth costs of using WebAssembly depend on the application. WebAssembly was designed to be compact, but the size can vary depending on the computation. Snapshotting is possible in V8, but engine snapshotting is only available with the WebAssembly runtime. To learn more about WebAssembly, check out my blog posts on the Mozilla Hacks website or the Bytecode Alliance site.

Next question is from Warlock. What are build sizes in general, and the bandwidth costs of using Wasm? So it really depends on what kind of application you're doing. The standard was designed to be very compact. I actually wrote about this in my first series on WebAssembly, which you can find on the Mozilla Hacks blog. So they're as compact as possible, basically. But it, of course, depends on exactly what kind of computation you're doing. Sometimes the equivalent JavaScript would actually still be smaller, depending on what you're doing.

Last question we have time for. And before ... I have to sneeze. Sorry. Okay. I got it. Next question is from Bartos. Is snapshotting possible with V8 too, or has it only been explored in the context of Wasm? So I think one thing that people sometimes don't realize is that the snapshotting you have in V8 is application snapshotting. That part is not new. Application snapshotting is not new. Snapshotting the engine is something that you wouldn't have without the WebAssembly runtime to do that snapshotting. Because if you're opening up a V8 isolate, what you're starting up is the isolate itself. So that engine initialization is also part of this. All right. Awesome. So that was the last question. But I have one question that I feel is too important to skip. And it's from Richard S. And he says, where can I learn more about WebAssembly? So I've done a number of blog posts about WebAssembly. You can find those on the Mozilla Hacks website or on the Bytecode Alliance site.

Resources and Q&A

Short description:

You can find more information on the Mozilla Hacks website or the Bytecode Alliance site. If you want to get involved in WebAssembly, there's a GitHub repo where all the standardization happens. Join Lin on Spatial Chat for further discussions.

You can find those on the Mozilla Hacks website or on the Bytecode Alliance site. If you want to get involved in WebAssembly, there's a GitHub repo where all the standardization happens. That's if you really want to get low level, deep into the details.

Nice. All right. Thanks, Lin. There are some more questions in the Discord channel that we don't have time for, but I'll invite everyone who still has questions for Lin to join her in a room on Spatial Chat, where she'll be heading now. So Lin, thanks a lot for joining me here, and enjoy your Spatial Chat speaker room. Thank you. See y'all.
