Turbopack. Why? How? When? and the Vision...

32 min
02 Dec, 2022

Video Summary and Transcription

The Talk discusses TurboPack, a successor to Webpack, aiming to create a framework-independent, flexible, and extensible tool for the open-source community. It addresses performance challenges by integrating SWC into Next.js. The challenges with Next.js and Webpack include orchestration issues, backward compatibility constraints, and cache invalidation problems. TurboEngine and TurboPack provide constant performance in incremental builds, leveraging Rust's predictable performance and parallelism. The Talk also covers topics like dependency tracking, task graphs, cache invalidation, lazy asset graphs, and the integration of TurboPack with Next.js. The future plans involve an interactive development and production experience, a standalone TurboEngine, moving computations to the cloud, providing insights into builds, and facilitating migration and integration with JavaScript projects.


1. Introduction to TurboPack

Short description:

I started with web development 10 years ago, founded Webpack, and maintained it for 10 years. Now I'm working on TurboPack, the successor to Webpack, with the mission to create a tool that aligns with Webpack's goals. We aim to build something framework-independent, for the open-source community, and as flexible and extensible as Webpack. Our goal is to create a building block for the next ten years of web development.

Thanks for having me. My name is Tobias Koppers, I'm from Germany, and I'm going to tell you something about TurboPack. I started with web development 10 years ago, when I founded Webpack, and I maintained it for 10 years, so it's pretty old now.

And nearly 2 years ago I joined Vercel and worked with the Next.js team on Next.js, on its integration with Webpack, on performance, and things like that. And for the last 10 months I've been working on TurboPack, and I'm going to tell you something about that.

First of all, what's our mission with TurboPack? Our mission is to create the successor of Webpack. We want to align with the goals of Webpack, and we want to make a tool that is really similar to Webpack and fulfills at least Webpack's goals. I know that's a really ambitious mission and it will take years to get there, but at least that's the direction we are trying to head in. And this basically motivates our project goals.

We don't want to build something that's only for Next.js. We want to build something that's framework independent. We do not want to build something that's only for Vercel. We want to make something that's for the open source community, which is a community effort, and we want to align with the goals and the motivation behind Webpack. We also want to make sure that we are building something that's as flexible and as extensible as Webpack. So we want to follow in Webpack's footsteps in that way. We actually want to create a building block for the next ten years of web development. Ambitious goals. Yeah.

2. Creation of TurboPack and Performance Challenges

Short description:

We wanted to solve some developer experience challenges, one of which was performance. Next.js, being mostly built on Javascript-based tooling, faced challenges in leveraging the full power of the computer. To address this, we integrated SWC into Next.js, which resulted in improved performance. However, there were trade-offs made for performance, such as assuming that module requests and Node modules don't change. The Next.js team also faced implementation challenges.

Okay. So let's look into what led to the creation of TurboPack, into the past, and also how it works and what exactly our vision with TurboPack is. So it started when I joined Vercel and worked with the Next.js team, and basically we wanted to solve some developer experience challenges, and one of these challenges was performance. It's working reasonably well, but there are some challenges with performance, especially as Next.js is mostly built on top of JavaScript-based tooling, and JavaScript-based tooling has a really hard time leveraging all the power of your computer for compute-heavy work, so leveraging multiple CPUs. JavaScript might just not be the best language for compute-heavy work, or for build tooling. The Next.js team and I started to work on porting some parts of Next.js, or of the compiler infrastructure of web development, into the Rust world, so SWC was integrated into Next.js, and it really has a lot of benefits performance-wise. But there are also some challenges integration-wise. There's always a boundary between the JavaScript world and the Rust world, and you have serialization problems. So there are still challenges while working on that. There were also some trade-offs we had to make in Next.js for performance. One example is that we are resolving module requests in Webpack, and we had to be really optimistic about this to make it performant. Once we successfully resolved something, we just assumed that it doesn't change. We also assumed that Node modules usually don't change, and this works well for 99 per cent of cases, but it's a trade-off, and we don't want to be forced to make it. And there are also some implementation challenges in the Next.js team.
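To make that resolution trade-off concrete, here is a minimal TypeScript sketch of an optimistic resolver cache along those lines. It is purely illustrative (the function name and the resolution rules are simplified stand-ins, not the actual Next.js or Webpack code): once a request has resolved, the cached answer is reused forever, so changes on disk are simply assumed not to happen.

```ts
// Minimal sketch of an optimistic resolver cache (illustrative only).
import * as path from "path";

const resolveCache = new Map<string, string>();

// Resolve a request relative to a directory, caching the result forever.
// Trade-off: once cached, later changes on disk (e.g. inside node_modules)
// are never picked up.
function resolveOptimistic(context: string, request: string): string {
  const key = `${context}|${request}`;
  const hit = resolveCache.get(key);
  if (hit !== undefined) return hit; // assume the earlier answer is still valid

  // Extremely simplified resolution: relative requests resolve against the
  // context directory, bare requests against node_modules. Real resolvers also
  // handle extensions, package.json "exports", symlinks, and much more.
  const resolved = request.startsWith(".")
    ? path.resolve(context, request)
    : path.resolve(context, "node_modules", request);

  resolveCache.set(key, resolved);
  return resolved;
}

// The second identical call is answered from the cache without touching disk.
console.log(resolveOptimistic("/app/src", "./components/Button"));
console.log(resolveOptimistic("/app/src", "react"));
```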

3. Challenges with Next.js and Webpack

Short description:

Currently, Next.js is using multiple Webpack compilers for different purposes, but the orchestration between them is challenging and prone to errors. The architecture of Webpack, which is over a decade old, is not designed for large-scale incremental build performance. Fixing this is difficult due to the backward compatibility constraints and the reliance on plugins. Cache invalidation and lookup costs are also issues, especially when dealing with a large number of modules. Despite these challenges, there are opportunities to improve the build process by leveraging tools like TurboRepo and learning from their features such as granular caching and remote caching. Additionally, developers are seeking more insights into the build process, including the size of components and dependencies, and the impact of pull requests on application performance.

Currently, Next.js is using four to five Webpack compilers to make all this work: one for the client, one for server rendering, one for edge rendering, one for server components, and a fallback compiler for error messages. All the orchestration work between these compilers is not that easy. It's kind of working, but it's a source of errors, where you can make mistakes or forget something, or the communication doesn't work correctly, and that's not something we want to be forced into.

There are also some challenges on the Webpack side. Webpack was built ten years ago, and it still has the architecture from ten years ago, when applications were like hundreds of modules, and web development scaled a lot in the last decade. So Webpack's architecture was not really built for this kind of large-scale incremental build performance. That is a problem, and we can't easily fix it, because there are a lot of plugins that depend on the architecture of Webpack, and it's really hard to make backwards-compatible changes to the architecture; we can't really change the architecture while staying backwards-compatible. We don't want to break everyone's use cases by changing the architecture of Webpack, so that's not really a good opportunity to fix it.

On the other hand, we fixed a lot of things in Webpack while I was working on Next.js to make it more performant, but we hit a limit on the kind of optimisation we can do without rewriting everything. There are also some other challenges, like cache invalidation. In many cases it's too sensitive: you change something and a larger part of the application has to be rebuilt, even though it's probably not affected. For example, if you change a comment, why do we have to rebuild the module? The caching could be more granular. One problem of the architecture is the cache lookup cost. When doing an incremental build, we basically start similar to a full build, and for every piece of work we want to do we first check if it's already in the cache, and then we can skip doing that work. That sounds great, but at scale, if you look up, like, 10,000 modules in the cache, then you have a serious cost just for looking things up in the cache, and that's kind of a problem.
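To illustrate why that lookup cost scales with the application rather than with the change, here is a small TypeScript sketch of a pull-based incremental build. Everything in it (the hashing, the compile step, the module names) is a made-up stand-in, not Webpack internals; the point is only that every module pays a cache lookup on every rebuild, even when a single file changed.

```ts
// Sketch of a pull-based incremental build (illustrative, not Webpack code).
type BuildResult = { code: string };

// Made-up stand-ins for real hashing and compilation.
const sources = new Map<string, string>();
const contentHash = (id: string) => String((sources.get(id) ?? "").length);
const compile = (id: string): BuildResult => ({ code: `/* compiled ${id} */` });

const cache = new Map<string, { hash: string; result: BuildResult }>();

// Even if only one file changed, every module still costs a cache lookup plus
// a freshness check on every rebuild, so the cost grows with total app size.
function incrementalBuild(allModules: string[]): BuildResult[] {
  return allModules.map((id) => {
    const hash = contentHash(id);
    const entry = cache.get(id); // one lookup per module, per rebuild
    if (entry && entry.hash === hash) return entry.result;
    const result = compile(id);
    cache.set(id, { hash, result });
    return result;
  });
}

// With 10,000 modules, a one-line change still triggers 10,000 lookups.
for (let i = 0; i < 10_000; i++) sources.set(`module-${i}.ts`, "export {}");
incrementalBuild([...sources.keys()]); // full build
sources.set("module-42.ts", "export const x = 1");
incrementalBuild([...sources.keys()]); // incremental, but still O(10,000) lookups
```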

But, yes, we are talking about lookup problems when rebuilds take a few seconds. If you work in the native world, you might know that they have incremental build times of minutes or something like that. So we have a really luxurious problem in the web development world, I guess! But there were also some opportunities that we could address while working on a new bundler. At Vercel, we have this tool called TurboRepo, and when combining it with Next.js, the two can learn a lot from each other. Next.js has this cool granular caching: if you change one module, you only rebuild one module and some boilerplate stuff. But in TurboRepo, you always have a caching granularity of whole commands. So TurboRepo can learn from Next.js by having more granular caching, and also by having more of a development-focused watch mode, where you get this kind of incremental watch experience by default. On the other side, Next.js can learn from TurboRepo. TurboRepo has this cool feature of remote caching, so it can actually share your cache with your team; it's uploaded to the cloud, or there is a self-hosted solution. Then you can share that with your team, and you don't have to rebuild what your colleagues have already built. That's cool. We wanted to have that in Next.js, like sharing the cache with your team and sharing the cache with deployments, so you don't have to do the whole work twice.

But there are also some new opportunities. I've seen that many developers actually want to have more insight into the build process: what's the size of a page or of components, what's the size of dependencies, and how does a pull request change my application, or the performance of my application. These kinds of insights. And we want to offer more of them. Also, related to build performance, statistics like why is my build slow, or how does this affect my build time.

4. Building a Core Engine and Bundler

Short description:

We had the idea to build a core engine to solve common challenges like caching, invalidation, watch mode, and incremental builds. We then built a Bundler on top of this engine to avoid solving these problems repeatedly. The goal was to use this Bundler in Next.js and other frameworks to benefit from this new approach.

So we made this plan where we had this idea about building something new. The idea was that we have these common challenges around caching and invalidation and watch mode and incremental builds, and we wanted to abstract that away from the bundler. So the plan was to build some kind of core engine which solves these common problems, and then build a bundler on top of this core engine, so we don't have to solve these problems all over again and take care of cache invalidation in every piece of code we write. And then, after writing this bundler, we just want to use it in Next.js or in other frameworks, to get the benefits out of this new idea. And that's basically what we did.

5. Building TurboEngine and TurboPack

Short description:

We aim for constant performance in incremental builds, regardless of app size. We built a layering system using Rust as the base language, leveraging its predictable performance, parallelism, and memory safety. We provide both JavaScript and Rust as plugin interfaces, allowing developers to choose the most suitable option. TurboEngine, built on top of Rust, solves common problems like caching and invalidation. TurboPack, a bundler, can be used by Next.js and other frameworks. TurboEngine's magic lies in TurboFunctions, which enable function memoization for caching and cache invalidation.

And we always had this goal in mind, that incremental builds should be independent of app size, which means my application can grow as large as it wants, but my incremental builds should still be constant in performance, so it should always be roughly the same time spent on incremental builds. Basically, incremental builds should only depend on the size of the affected modules or the affected change in my application, not on the total size. Which is not the case for Webpack, for example.

So, on top of this idea we built a kind of layering system. We wanted to use Rust as the base layer, as the language. The first reason was that we wanted to use SWC as the parser, because we didn't want to rewrite a new parser and code generator and all this stuff. So we wanted to use that, which is based on Rust, so it was an obvious choice to use Rust. But Rust also fits well with our challenges. It has predictable performance, it has parallelism that is easy to use, with shared memory and so on. It's also a safe language, with memory safety, which is a good point for remote stuff and for security reasons. But there are also trade-offs in using Rust. Rust is usually much harder to write compared to JavaScript, and this could be a developer experience problem when we want to let developers write plugins.

So we made the decision that we always want to provide both JavaScript and Rust as plugin interfaces. The plan is to always allow the developer to write plugins either in JavaScript or in Rust. JavaScript might be a little bit slower, but in many cases that might not be relevant. And in the end, you can still start by writing your plugin in JavaScript and port it to Rust afterwards if you figure out that it's a performance problem. But in most cases, it will probably not be a performance problem.

And on top of Rust, we basically built TurboEngine, which is this common core engine that solves these common problems, caching and invalidation, and also abstracts the common sources of external access, like the file system, caching, networking, stuff like that. And on top of TurboEngine, we built TurboPack, which is just a bundler. It's basically doing all the stuff that a bundler does: CSS, static assets, ECMAScript, TypeScript, images, fonts; there are a lot of things, actually. And then TurboPack can be used by Next.js as a bundler, as a replacement for Webpack. But it can also be used by other frameworks, whatever. So let's zoom into how TurboEngine works. All the magic about TurboEngine is the ability to write TurboFunctions, which are functions you mark with some kind of annotation to make them TurboFunctions. TurboFunctions mean that you opt in to pure function memoization for caching. So it will automatically memoize, or cache, your functions. If you call it twice, it will not compute it twice. But we also do something for cache invalidation.
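As a rough mental model, here is what that memoization behaviour looks like in a few lines of TypeScript. This is only a conceptual sketch: the real TurboFunctions are Rust functions marked with an annotation, and the cache keys and invalidation are far more sophisticated than the JSON-based key used here.

```ts
// Conceptual sketch of opt-in function memoization (not the TurboEngine API).
function turboFunction<A extends unknown[], R>(
  fn: (...args: A) => R
): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args); // heavily simplified cache key
    if (cache.has(key)) return cache.get(key)!;
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

// Calling it twice with the same input computes only once.
const parse = turboFunction((source: string) => {
  console.log("parsing");
  return { statements: source.split(";").filter(Boolean) };
});
parse("a();b()"); // logs "parsing"
parse("a();b()"); // served from the cache, no log
```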

6. Dependency Tracking and Task Graph

Short description:

We automatically track dependencies of TurboFunctions and build a task graph. This graph enables cool graph analytics and automatic scheduling of work in parallel. By tracking dependencies, we can optimize scheduling and utilize TurboEngine effectively.

So we automatically track all dependencies of the function. If you write this kind of TurboFunction and then you access some data, we automatically track that and build a large graph, which we call the task graph, out of all executions of TurboFunctions. So we have this large graph of tasks and the dependencies between them, and we can do all kinds of cool graph work, like graph analytics, on top of this compute graph or task graph. And by tracking dependencies, we can also automatically schedule your work in the background or in a thread pool, to get parallelism automatically. Tracking dependencies means that once you access data from different tasks or different TurboFunctions, we can await it at that point, because we track it, and scheduling happens automatically and transparently for the user of TurboEngine.
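The sketch below extends the memoization idea with dependency tracking, again in illustrative TypeScript rather than the actual Rust implementation: while a task runs, every other task it reads from is recorded as an edge, which is what makes the task graph (and later invalidation) possible.

```ts
// Simplified sketch of automatic dependency tracking (not TurboEngine itself).
type Task = { name: string; run: () => unknown; dependents: Set<Task> };

let currentTask: Task | null = null; // the task currently executing
const results = new Map<Task, unknown>();

function task(name: string, body: () => unknown): Task {
  return { name, run: body, dependents: new Set() };
}

// Reading another task's output registers a reverse edge, so we know later
// which tasks to re-run when this one is invalidated.
function read(t: Task): unknown {
  if (currentTask) t.dependents.add(currentTask);
  if (!results.has(t)) {
    const previous = currentTask;
    currentTask = t;
    results.set(t, t.run());
    currentTask = previous;
  }
  return results.get(t);
}

// Example: two tasks that both read the same "readFile" task.
const readFile = task("readFile", () => "export const x = 1");
const parseTs = task("parse", () => ({ ast: read(readFile) }));
const lintTs = task("lint", () => ({ ok: String(read(readFile)).length < 100 }));
read(parseTs);
read(lintTs);
console.log([...readFile.dependents].map((t) => t.name)); // ["parse", "lint"]
```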

7. Task Graph and Incremental Builds

Short description:

A task graph consists of multiple tasks per module in a web application, allowing for granular tracking of dependencies and incremental builds. Changes can be propagated through the graph by following the dependencies backwards, updating only the affected tasks. This approach offers benefits such as optimizing performance by running tasks in parallel and avoiding the limitations of single-threaded JavaScript by leveraging Rust's automatic scheduling and parallelization.

Yeah. We have an example of what a task graph looks like. It's a really simplified version. In reality, a task graph has about 200 tasks per module of a web application, so it's really granular. But it gives you a rough idea of what to expect.

We have this graph where all these function invocations, these tasks, are connected by their dependencies. And what we can do with this kind of graph is make incremental builds super cool. For the initial build, we have to execute all the functions, obviously, but only once. Once you make an incremental build, you have some source of change, for example a file watcher, which invalidates one of the tasks or functions. In this example, it invalidates the read task, and we basically start invalidating and recomputing the graph from that external source. You can bubble the change through the graph by following the edges backwards, and we compute only the tasks that are needed to incorporate this change, to update the graph to the new state. And this has a lot of benefits. We only have to touch tasks that are really affected by the change. And we can also stop bubbling the change, as shown in this example, where a change might not affect the output of a task. For example, we change some code that doesn't affect the imports. In that case, it will not follow up after getting the module references; we'll see that the references are the same, and it will stop bubbling the change through the graph. We can stop at any point. But you also see that we can automatically run code in parallel. Both of these tasks depend on parsing TypeScript, and we can just run them in parallel automatically. And you don't have to think about these kinds of common problems when writing a bundler on top of TurboEngine. This solves a lot of problems. We don't have this problem of single-threaded JavaScript, because we are writing Rust and we have automatic scheduling and parallelization.
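Here is a compact TypeScript sketch of that bubbling behaviour. It is an illustration of the idea, not TurboEngine code: an invalidated task is recomputed, and the change only propagates to its dependents if its output actually differs, so a comment-only edit stops early.

```ts
// Illustrative sketch of invalidation bubbling through a task graph.
type Task = {
  name: string;
  compute: () => string;
  output?: string;
  dependents: Task[];
};

function invalidate(task: Task) {
  const fresh = task.compute();
  if (fresh === task.output) {
    console.log(`${task.name}: output unchanged, stop bubbling`);
    return; // e.g. a comment-only edit: downstream tasks are never touched
  }
  task.output = fresh;
  console.log(`${task.name}: recomputed`);
  // In TurboEngine this fan-out could run in parallel; here it is sequential.
  for (const dependent of task.dependents) invalidate(dependent);
}

// Tiny graph: read -> parse -> { references, codegen }.
let fileContent = "import './a'; // v1";
const references: Task = {
  name: "references",
  compute: () => (parse.output ?? "").split(";")[0],
  dependents: [],
};
const codegen: Task = {
  name: "codegen",
  compute: () => `/* generated from: ${parse.output} */`,
  dependents: [],
};
const parse: Task = {
  name: "parse",
  // Strips comments, so comment-only edits produce the same output.
  compute: () => (readTask.output ?? "").replace(/\/\/.*$/, "").trim(),
  dependents: [references, codegen],
};
const readTask: Task = { name: "read", compute: () => fileContent, dependents: [parse] };

invalidate(readTask); // initial build: every task computes once
fileContent = "import './a'; // v2, comment changed";
invalidate(readTask); // read and parse re-run, then bubbling stops at parse
```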

8. Solving Cache Invalidation and Lazy Asset Graphs

Short description:

TurboPack is a bundler that solves the problems of cache invalidation, too-sensitive cache invalidation, and many cache lookups. It allows for mixing environments in the asset graph, enabling cross-environment optimization and more opportunities for code optimization. TurboEngine takes care of incremental builds and introduces the concept of lazy asset graphs, where graphs are built on demand. The complete graph is lazily built, reducing the need to build all tasks at start-up.

It also solves the problem of cache invalidation. You can't miss invalidating the cache, because it's automatic; you can't break it, or at least it's really hard to break. And it also solves the problem of too-sensitive cache invalidation, because we have this really granular function graph where we can follow changes, and dependencies are tracked in a really granular way, so cache invalidation becomes really granular and correct.

It also solves the problem of many cache lookups, because for all the tasks that stay gray in the graph, we don't have to do anything. We don't have to look them up in the cache. They are just sitting around, not being touched by the change at all. So there is no cost at all for an inactive task. Which means you can have an application as large as you want, and your change performance is only affected by the tasks you have to recompute, which is really minimal. And that basically gives us our goal: incremental builds should be independent of application size.

And on top of this TurboEngine system we built TurboPack, which is basically a bundler, but with two major differences compared to Webpack. First, it has a system of mixed environments. In a TurboPack asset graph you can mix environments: a server component importing a client component, or importing an edge function from a server component, whatever. You can mix environments in the graph, and it's just one compiler taking care of the whole graph. And this gives a lot of benefits, like cross-environment optimization, tree shaking between environments and that kind of stuff. A lot more opportunities to optimize your code.
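As a rough illustration of what one graph across environments buys you, here is a tiny TypeScript sketch. The asset shape and the environment names are made up for the example, not TurboPack's actual data model; the point is that a single traversal can see edges that cross the server/client boundary.

```ts
// Illustrative sketch: assets from different environments in a single graph.
type Environment = "server" | "client" | "edge";

interface Asset {
  path: string;
  environment: Environment;
  references: Asset[];
}

const button: Asset = { path: "components/Button.tsx", environment: "client", references: [] };
const page: Asset = { path: "app/page.tsx", environment: "server", references: [button] };

// Because everything lives in one graph, one traversal can find (and optimize
// across) edges that cross an environment boundary.
function crossEnvironmentEdges(root: Asset): Array<[Asset, Asset]> {
  const edges: Array<[Asset, Asset]> = [];
  const seen = new Set<Asset>();
  const visit = (asset: Asset) => {
    if (seen.has(asset)) return;
    seen.add(asset);
    for (const ref of asset.references) {
      if (ref.environment !== asset.environment) edges.push([asset, ref]);
      visit(ref);
    }
  };
  visit(root);
  return edges;
}

console.log(crossEnvironmentEdges(page).map(([a, b]) => `${a.path} -> ${b.path}`));
// ["app/page.tsx -> components/Button.tsx"]
```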

We also have this concept of lazy asset graphs. TurboEngine takes care of incremental builds, but what about the initial build? Do we want to build all the tasks at start-up? Probably not. So we want a lazy system where we only build the graph on demand. In a bundler we build multiple graphs, like a module graph and a chunk graph, and we build them in a way that each one is derived from the previous graph. So the module graph is derived from the source code, and the output graph is derived from the module graph. This way, we don't have a function that eagerly converts one graph into another graph. We build a derived graph, and it uses functions to get references, which are turbo functions. That way, you don't have to build the whole graph up front; you just build it on demand when you access it. So basically, everything is lazy by default. The complete graph is lazily built. And this means only if you do an HTTP request to the dev server will it compute and read your files and build the module graph.
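The laziness itself is easy to picture with a small sketch. The following TypeScript is illustrative only (the file names, the import regex, and the `lazy` helper are invented for the example): a derived module graph whose entries are computed the first time something, like an incoming dev-server request, asks for them.

```ts
// Illustrative sketch of a lazily derived module graph (not TurboPack code).
const sources = new Map<string, string>([
  ["pages/index.ts", "import './util'; render()"],
  ["pages/about.ts", "render()"],
  ["util.ts", "export const x = 1"],
]);

// A tiny lazy cell: the computation runs on first access, then is cached.
function lazy<T>(compute: () => T): () => T {
  let value: T | undefined;
  let done = false;
  return () => {
    if (!done) {
      value = compute();
      done = true;
    }
    return value as T;
  };
}

// The module graph is *derived* from the sources; each entry is only parsed
// when something actually reads it.
const moduleGraph = new Map(
  [...sources.keys()].map((path) => [
    path,
    lazy(() => {
      console.log(`parsing ${path}`);
      const match = /import '([^']+)'/.exec(sources.get(path)!);
      return { path, references: match ? [match[1].replace("./", "") + ".ts"] : [] };
    }),
  ] as const)
);

// Serving a request touches only the modules that page needs;
// pages/about.ts is never parsed.
function serve(entry: string) {
  const mod = moduleGraph.get(entry)!();
  for (const ref of mod.references) moduleGraph.get(ref)?.();
}
serve("pages/index.ts"); // logs: parsing pages/index.ts, parsing util.ts
```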

9. Integration of TurboPack and Next.js

Short description:

We want to use TurboPack in Next.js and other frameworks, making it the default option. Our next steps include reaching feature parity with Next.js, starting dogfooding, and planning a beta release next year. Our vision is to make TurboPack the default for everyone, allowing for more advanced building blocks and innovative ideas. The next steps for TurboPack involve creating a plugin system, adding support for other frameworks, and providing more control at runtime.

Only the part of the graph that is needed for serving this HTTP request is built. Which makes it really lazy, and it should be a streaming experience when you open a page on your dev server.

And with this new bundler, we basically want to use it in Next.js. For doing that, we try to move all the cool stuff from Next.js into TurboPack, so we have it in the core, in the bundler, and also available for other frameworks. Next.js, the build system, is then only left with a few conventions and a bit of Next.js-specific runtime code. And that's basically Next.js on top of TurboPack.

And what are the next steps? For Next.js, we did an alpha release at Next.js Conf, it's open source, and we're basically seeking feedback. It's obviously not production ready, because it's missing a lot of features. We want to reach feature parity with Next.js; that's our next step. We also want to start dogfooding it for our own systems, which gives us a lot of testing and a direct connection with people testing it. But you can also test it; that would be really helpful. Of course, there are a lot of bugs to fix, like edge cases we didn't consider yet and that kind of stuff. And next year we want to do a beta release, make it feature complete with Next.js, put it in public hands, and let people test it. But our vision is larger. We don't want Turbo to be just an opt-in; we want to make it the default. It will probably take years to do that, but the vision is to make it Turbo for everyone. And we also have a lot of ideas. When Turbo is the default, we can get rid of the complexity in the old system, which really limits the innovation we can do. So we want to have a more advanced building block for making more innovative new ideas.

And the next steps for TurboPack are basically to make a plugin system, move the Next.js support into a plugin, and then add more plugins for other frameworks. We don't want to be Next.js specific. We really want to add plugin support and add more frameworks, and this should be something usable by everyone, by every framework; it's not Next.js specific. But the vision is larger. Currently a bundler doesn't give you that much control at runtime, so it's really hard to test a production build.

10. Reconfiguring Webpack and Turbo Engine

Short description:

You have to reconfigure Webpack to make a production build, or you have to do a full build and test it, just to see if it's working or if you have production-only bugs. We want to make it more interactive. We want to give you control. There are also more optimization opportunities. Currently, modules are the smallest unit of optimization. We can split up your modules into statements and declarations, making optimization more granular. For TurboEngine, the next steps involve making it consumable without TurboPack, stabilizing the API, and making it usable in other scenarios. The vision is to have a shared task graph for teams to share modules and computational tasks.

You have to reconfigure Webpack to make a production build, or you have to do a full build and test it, just to see if it's working or if you have production-only bugs. We want to make it more interactive. We want to give you control. In my vision, there is a slider in the dev server UI where you can slide between development and production and test the production version of your page right in the dev server, and this kind of experience is something we want to give you.

And there are also more optimization opportunities. Currently, the optimization ability in Webpack is really limited, because most of the more advanced optimizations have a really large performance cost. Currently, modules are the smallest unit of optimization. What we can do instead is split up your modules into statements and declarations, and make those the smallest unit of optimization. This makes tree shaking much more granular. We can split modules, we can split pre-bundled libraries back into their building blocks, we can split modules across different chunks, and do more optimization for the user.
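To make statements as the smallest unit concrete, here is a small TypeScript sketch of statement-level tree shaking. The module contents and the `shake` helper are invented for illustration and are nothing like the real implementation; they only show the idea of keeping just the declarations reachable from the exports that are actually used.

```ts
// Illustrative sketch of statement-level tree shaking.
interface Declaration {
  name: string;
  code: string;
  uses: string[]; // other top-level declarations this one references
}

// A module already split into its top-level declarations (in a real bundler
// this information comes from the parser).
const moduleDeclarations: Declaration[] = [
  { name: "formatDate", code: "function formatDate(d) { /* ... */ }", uses: ["pad"] },
  { name: "pad", code: "function pad(n) { /* ... */ }", uses: [] },
  { name: "parseDate", code: "function parseDate(s) { /* ... */ }", uses: [] },
];

// Keep only the declarations transitively reachable from the used exports.
function shake(decls: Declaration[], usedExports: string[]): string {
  const byName = new Map(decls.map((d) => [d.name, d] as const));
  const kept = new Set<string>();
  const visit = (name: string) => {
    if (kept.has(name)) return;
    kept.add(name);
    for (const dep of byName.get(name)?.uses ?? []) visit(dep);
  };
  for (const name of usedExports) visit(name);
  return decls.filter((d) => kept.has(d.name)).map((d) => d.code).join("\n");
}

// Only formatDate is imported elsewhere, so parseDate is dropped entirely,
// even though it lives in the same module; pad is kept because formatDate uses it.
console.log(shake(moduleDeclarations, ["formatDate"]));
```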

For TurboEngine, it's already open source, but it's not really consumable standalone yet. We really want to make TurboEngine something that is also consumable without TurboPack, something you can build on top of if you want to build something cool and incrementally optimized. There are some steps missing: we don't have a logo for it, and we want to stabilize the API. We are still evolving it, and we can evolve it by working on TurboPack, which is really great. And there's no documentation yet. But the plan is to make an alpha release and make it usable standalone, and then it can be used in scenarios other than bundling, maybe. But the vision is even larger. Currently the task graph is limited to one user, to one process, and that's not really what we want to do in the future. In the future it should be shared with your team, like I mentioned before. It should be one graph for your team, and they can share modules, share tasks and computational work.

11. Moving Computations to the Cloud

Short description:

Computations can be moved to the cloud, allowing for faster application building. Public caching in the cloud can be leveraged to speed up the computation of common node modules.

And that basically works because you can trust your team, so you can trust that the computations your colleagues have done also work for your case. And we can even go further. Computations still happen on local machines, but we can move them, or at least some of them, to the cloud and make some kind of edge compute cloud, which computes part of your application if you don't want to wait so long for your own computer to build it. This is cool because you can usually trust the cloud. And if you trust the cloud, you can get something like public caching, where, when you ask the cloud to compute some node module, there's a good chance that somebody else in the world has already computed this node module before. So we can use this public caching ability to make it even faster.

12. Insights into Builds and Granular Caching

Short description:

We want to give you more insights into builds and statistics on performance. TurboRepo aims to provide granular caching for everyone, extending it to other frameworks and common file operations. Thank you.

But we also want to give you more visibility. So we want to have more insights into builds, we want to have statistics on how your build is performing, something that tells you what's really affecting the current build time, and also some kind of linting or hinting system for performance, where you can say: this should not take longer than this amount of time, or whatever. And there is the bundle analyzer for Webpack, but there could also be a build performance analyzer, which gives you the same kind of insights a bundle analyzer gives you, but for the build process: how long are modules taking, what is affecting your performance, this giant, huge node module is affecting all the performance, whatever. And the mission for TurboRepo is to get granular caching for everyone. All the granular caching we have with TurboPack and Next.js, we can also make available for more operations, for other frameworks, or for common file operations, these kinds of things.

Thank you. That was all I have to say. If you want to find me afterwards, I'm either at the Vercel booth, in the Q&A room, in the performance discussion room, or at the afterparty. Busy day out there. Thank you.

Please step into my office. Let's have a quick chat with Tobias about this stuff. We had a lot of questions come in. We're going to get to a couple, and for the rest, just as a reminder, you can go see him in the speaker room out by the reception. So, first question. I think you touched on this a little bit at the beginning, but why not release the same thing as Webpack 6? Why a new product?

We thought about that. But the problem is that it is still a large breaking change, so it wouldn't really work as Webpack 6, because you'd basically get a Webpack 6 release which is like Angular 2, which breaks everything. And that's not what the user is expecting. I think a new name makes sense for a completely new architecture. If it were Webpack 6, it would be incompatible with everything that came before; all the plugins would not work. That's not what we want to do. A new name makes sense for that. And it's also cooler. So it's mostly a different thing. It's a different thing, but with the same motivation.

13. Migration, Integration, and Future Plans

Short description:

We're working on a Webpack-style plugin to facilitate migration from Webpack to Turbo. Although the Webpack config is complex, we plan to address this issue and make it easier. TurboPack can be integrated into JavaScript projects, and we will offer JavaScript-based plugins for easier integration. Rust is not required for plugin development. We expect TurboPack with Next.js to be in beta by early next year, and it should be suitable for basic Next.js usage.

But we're also working on a good migration story from Webpack. I guess it will be some kind of Webpack-style plugin, which gives you the Webpack kind of interface, with much of the configuration and the more advanced Webpack features, inside the tool, to make migration easier.

It's a perfect lead-in to our next question, which is: will there be migration paths from Webpack to Turbo? So, currently you can't really migrate yet, because we don't support most of the things, but once we have it in a state where you can easily migrate, we will have an advanced migration guide with all these kinds of things. We really want to get Webpack people onto TurboPack, and that's why we want to offer a good migration guide.

There's a bit of a spicy Webpack question, which is: the Webpack config was rather complicated, according to this asker, so is this problem also tackled somehow? So currently we don't have any config, so... but the plan is... Easy! Yeah, we probably will have some config, but we know that there's a problem with the Webpack config, and we can't easily fix it in Webpack because of all the breaking changes and stuff like that. But we know about the problem and we want to make it easier, and that is definitely on our mind.

Fair enough. There are a couple of questions here that sort of tie together, which is a bit of a nervous issue, since it's based on Rust. Will it be just as simple to integrate into a JavaScript project, or will the plugins be any different? What language are they going to be in? So currently it's already integrated into the JavaScript ecosystem, because Next.js is using it. Next.js is technically JavaScript: at startup it's JavaScript, and then it calls out to TurboPack. So it can be integrated; it integrates like a native Node.js module, but there will probably also be some kind of standalone binary executable, and we'll make it possible to integrate that too. And with our plan to offer JavaScript-based plugins, it will be able to run JavaScript code and integrate with that. So we don't all have to learn Rust really fast.

Yeah, I don't want to expect developers to write Rust. Turbo-learn, if you will. Yes, that's what I... Rust is a good choice for performance, but it's really hard to learn and hard to write. So I don't feel comfortable forcing developers to write plugins in Rust. They won't do that, and nobody wants to learn Rust if you're just working on web development. So we want to offer JavaScript plugins and give you all of that. You will not have to learn Rust when you want to use TurboPack.

Sounds good to me, because I don't know Rust! This is a fun one. Is it too optimistic to expect TurboPack with Next.js to come out of beta before summer 2023? I think that's not too optimistic. Ah, there you go! At least a beta version. It will not be super stable and production-ready, probably. We plan a beta version early next year that should have most of the features. So it depends on how brave you're feeling, I guess. Basically, if you have custom Webpack plugins in Next.js, we probably won't have that until summer, or you'll have to rewrite them in a different plugin system. But at least for basic Next.js usage, where you don't have any advanced configuration, we will have that. I'm pretty sure we'll have this early next year; by summer it should be fine.

Perfect. We are out of time for on-stage questions, but you can find Tobias out in the speaker Q&A room after this. A round of applause. Please, thank you for joining us.
