Vite: Rethinking Frontend Tooling

Vite is a new build tool that aims to provide a leaner, faster, and more frictionless workflow for building modern web apps. This talk dives into the project's background, rationale, technical details, and design decisions: what problem it solves, what makes it fast, and how it fits into the JS tooling landscape.


Transcript


Intro


Hello everyone. My name is Evan You, and I'm the author of Vue.js and Vite. Today, I'm going to be talking about Vite, a next-generation build tool that I've been working on since earlier this year. Specifically, I want to discuss some of the design trade-offs involved in creating a new build tool: why we want to do that, and what is involved.


What is Vite and why would you want to use it?


[00:39] Okay. So, what is Vite? Vite mainly consists of two parts: a no-bundle dev server that serves your source files over native ES modules, and a production bundler, which is Rollup-based, pre-configured, and produces highly optimized production builds.

Why would you want to use Vite over an existing tool? First of all, it's lean and fast, like really fast. We'll see an example soon. Second, it's opinionated, with very sensible defaults. It's based on the experience I've gained working on Vue CLI, a very widely used build tool for Vue-specific projects, over the years. It's similar to a pre-configured Webpack and webpack-dev-server setup, and also quite similar in terms of scope to Parcel. Many common features like TypeScript, JSX, and PostCSS work just out of the box. Despite being opinionated, it's still very extensible. It has a Rollup-compatible plugin interface that makes it very easy to build on top of.


How lean?


[01:46] So, just to get an idea of how lean it is, here is a tweet of someone running Create React App and Vite side by side on Replit, a service that runs your project in a remote container.

The Vite project is able to get up and running before the CRA project has even finished installing. One of the reasons Vite is so lean is that it cares about its own dependency count and payload size. We actually pre-bundle a lot of Vite's own dependencies into Vite itself, so that it installs faster. Its node_modules disk size is often a fraction of that of Webpack-based projects.

[02:28] And here is another example of a user migrating a production Rollup app over to Vite. As you can see, the original start time was over two minutes, and a single reload could take up to 23 seconds. After migrating to Vite, both numbers are down to almost negligible values. Granted, Rollup doesn't have hot module replacement, but I bet Webpack users are not unfamiliar with hot updates taking a few seconds in large projects. With Vite, HMR is guaranteed to be always instant.


What contributes to Vite's performance?


[03:02] Now, let's dig into what contributes to Vite's performance. Vite builds upon two interesting trends we've been seeing in the past year. The first is that modern JavaScript is now widely supported. Native ES modules now have over 92% global support, and we're only going to see this number increase as legacy browsers like IE11 go out of the market at an accelerated pace.

The second important thing is that there are new JS compilers being written in compile-to-native languages. Two of the most prominent examples are esbuild, which is written in Go, and SWC, which is written in Rust. Both of these tools are dramatically faster than tools written in JavaScript, sometimes up to a hundred times faster, depending on the workload and the CPU cores utilized.


Benefits of Native-ESM dev server


[03:56] So, Vite leverages trend one, native ESM, by building the dev server around native ES modules. There are many benefits to a native-ESM dev server. First of all, there's no need to do any bundling; that's a big chunk of work that is simply not needed anymore. Second, native ESM is on-demand by nature, which means that if a module is not needed for the current screen you're working on, it doesn't even need to be processed. Third, since modules are fetched over HTTP requests, we can leverage HTTP headers and let the browser do the caching. And finally, most importantly, hot module replacement can be implemented over native ESM in a very simple yet efficient manner. As we'll see, this gives us big performance wins.

But I want to be honest about the technical trade-offs here. Native ESM is not without downsides. The main problem we have faced is HTTP request overhead. For example, a dependency like lodash-es may contain over 700 internal modules. And because native ESM is eager, when you import even just a single method from lodash-es, the browser will try to request every module imported in its entry point. Each module request translates to an HTTP request, and the overhead just adds up. It creates network congestion, and even on local machines with little to zero latency, this overhead is still noticeable.
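To make that fan-out concrete, here is a toy model of the eager behavior. The module graph below is made up for illustration (it is not lodash-es's real file layout); the point is that one bare import triggers a fetch for every module reachable from the package entry:

```javascript
// Hypothetical module graph: importing one named export from a package
// entry still makes the browser fetch every module the entry re-exports.
const graph = {
  '/node_modules/lodash-es/lodash.js': ['./debounce.js', './throttle.js', './map.js'],
  './debounce.js': ['./isObject.js'],
  './throttle.js': ['./debounce.js'],
  './map.js': [],
  './isObject.js': [],
};

// Count every HTTP request the browser would issue, starting from the entry.
function countRequests(graph, entry) {
  const seen = new Set();
  const queue = [entry];
  while (queue.length > 0) {
    const mod = queue.pop();
    if (seen.has(mod)) continue;
    seen.add(mod);                     // one fetch per unique module
    queue.push(...(graph[mod] ?? []));
  }
  return seen.size;
}

console.log(countRequests(graph, '/node_modules/lodash-es/lodash.js')); // 5
```

With a real package the entry can reach hundreds of modules, and each one costs a round trip.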


Addressing HTTP overhead


1. Pre-bundle deps with esbuild


[05:33] So, Vite does implement a few tricks to work around this problem. The first is that we pre-bundle dependencies with esbuild. There are many big dependencies, like Material UI, that contain hundreds or even thousands of internal modules, but they don't change often, and they mostly contain standard JavaScript that doesn't need any special processing, unlike your source files.

So, this makes them the perfect candidates to be pre-bundled with esbuild. By pre-bundling them, we can ensure that each dependency translates to at most one HTTP request. And since esbuild is so fast, the startup boost is often dramatic, especially when you're using large dependencies. In addition, since dependencies don't change often, we can easily cache the pre-bundled output on disk and skip the pre-bundling phase on subsequent server starts. This process also converts CommonJS dependencies into ESM, so that they are consumable natively by the browser.
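Pre-bundling runs automatically, but it can also be tuned from the config file. The snippet below is a hedged sketch of a `vite.config.js` using the real `optimizeDeps` options; the package names are placeholders for whatever your project actually depends on:

```javascript
// vite.config.js (sketch; package names are illustrative)
import { defineConfig } from 'vite';

export default defineConfig({
  optimizeDeps: {
    // Force these deps through the esbuild pre-bundling step,
    // even if Vite's scan doesn't discover them statically.
    include: ['lodash-es', '@material-ui/core'],
    // Leave this one out of pre-bundling, e.g. because it is
    // already shipped as a single ESM file.
    exclude: ['some-small-esm-dep'],
  },
});
```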


2. Cache deps with HTTP headers


[06:39] On top of that, pre-bundled dependencies can be strongly cached with HTTP headers. Vite rewrites the imports to dependencies, to append a fingerprint query that is linked to the project's lockfile. So, on page reloads, with a strong cache header, the browser won't even make a new request unless your lockfile has changed.
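As a rough sketch of that rewriting step (simplified pseudo-logic, not Vite's actual source), imagine hashing the lockfile and appending the result as a version query to the rewritten dependency URL. The hash function here is a toy stand-in for the real content hash:

```javascript
// Toy stand-in for a real content hash (Vite uses a proper hash internally).
function fingerprint(lockfileContent) {
  let h = 5381;
  for (const ch of lockfileContent) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return h.toString(16);
}

// Rewrite a bare import like `from 'vue'` to a pre-bundled URL carrying a
// lockfile-derived version query, so the browser can cache it strongly
// until the lockfile changes.
function rewriteImport(source, dep, lockfileContent) {
  const v = fingerprint(lockfileContent);
  return source.replace(
    new RegExp(`from (['"])${dep}\\1`),
    `from '/node_modules/.vite/${dep}.js?v=${v}'`
  );
}
```

The same lockfile always yields the same URL (a cache hit), while any lockfile change yields a new URL, which busts the cache exactly when it should.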


3. Make source file requests conditional


[07:03] So, how about source files? Since source files may be edited at any time, we have to revalidate with the server on every request, but we can still leverage ETags and the If-Modified-Since header to make the requests conditional: as long as the file itself did not change, no body needs to be sent. This ensures performant full page reloads once a module has been processed once.
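A minimal sketch of that conditional-response logic (simplified; a real server would also honor If-Modified-Since and weak ETags, which are omitted here, and the hash is a toy stand-in):

```javascript
// Toy stand-in for a real ETag hash of the file contents.
function etagFor(content) {
  let h = 5381;
  for (const ch of content) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return '"' + h.toString(16) + '"';
}

// Every request revalidates with the server, but an unchanged file
// costs only a 304 response with no body.
function respond(requestHeaders, currentContent) {
  const etag = etagFor(currentContent);
  if (requestHeaders['if-none-match'] === etag) {
    return { status: 304, body: null, headers: { etag } };
  }
  return { status: 200, body: currentContent, headers: { etag } };
}
```

The first request pays for the body; every reload after that is a cheap 304 until the file is edited again.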


Performance trade-offs


[07:30] So, here are some important performance trade-offs in Vite's native-ESM dev server strategy.

First of all, the server start is dramatically faster, because no bundling needs to be done before the server starts. There is some bundling done for dependency files, but it's done with esbuild, which is typically so much faster than JavaScript-based bundlers that you still see a huge boost. The very first page load will in fact be slower, because your source modules are only processed on demand, when they are requested over HTTP. The request overhead also affects the first page load the most. But overall, if you combine the server start time and the first page load time, we're still seeing very significant gains in almost all cases.

Once the initial load is done, full page reloads are largely unaffected compared to a bundled setup. Finally, another really big gain is that hot module replacement in this model is dramatically faster, especially in large projects, because in the native-ESM model, HMR performance is decoupled from the size of your project. So, even as your project grows bigger over time, HMR performance will remain the same.
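A toy model of why that cost stays constant (this is a simplified sketch of HMR boundary propagation, not Vite's actual source): on a change, the server only walks up the importer chain until it hits modules that accept the update, so the work is bounded by the depth of that chain, not by the total module count.

```javascript
// importers: module -> modules that import it; accepts: modules that
// declared an HMR accept handler for their own updates.
function affectedModules(importers, accepts, changed) {
  const invalidated = new Set();
  const stack = [changed];
  while (stack.length > 0) {
    const mod = stack.pop();
    if (invalidated.has(mod)) continue;
    invalidated.add(mod);
    if (accepts.has(mod)) continue;        // an accepting module stops propagation
    stack.push(...(importers[mod] ?? [])); // otherwise bubble up to importers
  }
  return invalidated;
}

// './Button.vue' accepts its own updates, so editing it invalidates only
// itself, no matter how many other modules the project contains.
const importers = { './Button.vue': ['./App.vue'], './App.vue': ['./main.js'] };
console.log(affectedModules(importers, new Set(['./Button.vue']), './Button.vue').size); // 1
```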


Future explorations


[08:51] We do have some ideas on how to make the first page load fast as well, for example, introducing a persistent on-disk cache for source files. Another idea that might be worth exploring is replacing part of the internal module processing pipeline with compile-to-native modules. As an example, Parcel 2 recently ported its JS processing pipeline to Rust and has seen a big performance boost, so there is potential for us to do the same. Both of these ideas are still in the exploration phase, but we are sure there are a lot of areas we can still improve in this model.


Production Build


[09:37] Okay. So, now let's talk about the production build. Some may be asking: why even bundle for production if we are using native ES modules? Why not just ship these modules unbundled to production?

Well, I don't have time to dig too much into the details here, but the short answer is that the performance is not good enough by default, and caching, which is the supposed advantage of unbundled deployment, is very hard to get right. Bundling still seems to be the better trade-off that covers more cases.

[10:13] Another common question is: if esbuild is so fast, why is Vite still using Rollup? Why don't we just bundle with esbuild? Well, the main reason is that bundling user-facing applications has a really different set of challenges compared to bundling a library, which in most cases is just a single file. Code splitting is very important in delivering performant user-facing applications.

And as of now, in this regard, Rollup is still more mature and also gives you more flexibility and more control over the chunking process when it comes to code splitting. This is important both for us, to give you automatic optimizations, and for users, to be able to manually control the final code splitting behavior.

[11:04] So yes, Rollup is slower than esbuild. But in practice, we found that the actual bundling is only part of the total cost in real-world projects. Bundling itself, the act of concatenating modules together, is not the whole thing. For example, many projects rely on JavaScript-based transforms of individual modules: Vue or Svelte single-file components, Babel-based custom plugins, CSS-in-JS. A lot of meta-frameworks need to perform static analysis on source files. All of these tasks are typically still done with JavaScript-based tools. So, even when using esbuild for bundling, you still have to jump back to JavaScript to perform these tasks, and esbuild won't really help much in these areas.

Another major build-time sink is minification. Vite in fact supports using esbuild for minification, so you can get slightly faster builds at the cost of slightly bigger bundles. But we still default to Terser because Terser, while slower, still provides better compression in most cases.

[12:18] Now, what we get in return for using Rollup in Vite is that we get to do a few things that are currently still hard to do with esbuild. For example, automatic CSS code splitting: the CSS imported in your async chunks is automatically built into separate files and loaded in parallel with the async chunks when they are requested. We also do automatic async chunk loading optimization: when you have waterfalls in your async chunks, we will automatically flatten them for you. You can also do manual chunk split control using Rollup options. Most of this is not really possible at this moment with esbuild. That said, Vite is still open to switching its production bundler to esbuild, or something faster, in the future if these things become possible, which would make the trade-off worthwhile. But currently, we are essentially opting for a slower production build in exchange for better performance for end users, which I believe is the right thing to do.
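For reference, the manual chunk control mentioned above goes through standard Rollup output options, which Vite passes through in its config. This is a hedged sketch; the vendor-chunk strategy shown is just one common choice, not the only one:

```javascript
// vite.config.js (sketch)
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Put all node_modules code into a single shared "vendor" chunk.
        manualChunks(id) {
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});
```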


Opinionated Defaults


[13:21] Okay. Now, I want to talk a bit about another aspect of Vite's design: the fact that it provides opinionated, sensible defaults that get you pretty far out of the box. If you think about it, Vite essentially functions as, and is comparable to, a pre-configured Webpack setup with webpack-dev-server, its CLI, css-loader, style-loader, postcss-loader... you get where this is going. This is not saying Webpack is bad. Webpack started as a JS-focused bundler, and it had to have bare-bones defaults and be extremely flexible, because back then we had no conventions on how these common tasks should be tackled by a build tool when building a user-facing application. We had to invent these things over time. It is remarkable that Webpack is so configurable that we're now able to actually do all of this with it.

But over time, we've also come to realize that 90% of the higher-level tools built on top of Webpack contain a large amount of configuration, and we're solving the exact same problems over and over again: the way we import CSS, the way we expect TypeScript and JSX to just work, the fact that almost everyone has to use PostCSS. These have become conventions shared across tools and frameworks across the ecosystem. Vite acknowledges this and tries to absorb the complexity of getting these conventions working out of the box, so the end user can focus on the things that actually matter.

[14:56] We do also realize that conventions don't always fit every single use case. So, Vite's philosophy here is that we optimize for the 90% happy path, while advanced use cases are still made possible via plugins; they should not be a requirement for the majority of use cases. It's also okay that Vite doesn't cover everything. It's not Vite's goal to completely replace Webpack; there are cases where Webpack is still the right choice, for example, if you need module federation. It makes sense for different tools to coexist and play different roles in the ecosystem.

Now that said, Vite still gives power users a lot via its powerful plugin system. Vite's plugin system, what we call a universal plugin system, is shared between the dev server and the production build. It extends Rollup's very straightforward plugin interface: it's a superset of Rollup's plugin API, with additional capabilities to control the dev server behavior. You can add middlewares, you can add custom routes, and you can tap into the HMR pipeline to modify how HMR works when a file changes. So, this gives you very fine-grained control over the whole dev experience.
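As an illustration, here is a minimal plugin sketch. The hook names are real (`transform` is shared with Rollup; `configureServer` is a Vite extension), but the plugin itself, its name, the `/__ping` route, and the `__BUILD_TIME__` token it replaces are all made up for this example:

```javascript
function timestampPlugin() {
  return {
    name: 'timestamp-plugin',
    // Rollup-compatible hook: runs in both the dev server and the
    // production build. Replaces a placeholder token in JS modules.
    transform(code, id) {
      if (id.endsWith('.js')) {
        return code.replace('__BUILD_TIME__', JSON.stringify('2021-01-01'));
      }
    },
    // Vite-only hook: add a custom dev-server middleware/route.
    configureServer(server) {
      server.middlewares.use('/__ping', (req, res) => res.end('pong'));
    },
  };
}
```

Because `transform` follows Rollup's plugin contract, the same plugin object can be dropped into both the dev and build pipelines unchanged.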

[16:17] In addition, Vite also provides an API to load ESM source files and instantiate them in Node.js, with HMR-like precise invalidation. Typically, when we do server-side rendering with a bundler, we are essentially running two bundlers side by side, one for the client and one for the server. So, when you edit a file, we actually rerun both bundles. But in Vite, on the client side we do hot module replacement over native ES modules, and on the Node.js side we keep the instantiated copies of these modules in memory and only invalidate the ones that are affected by your code changes.
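A simplified model of that server-side bookkeeping (an illustrative sketch, not Vite's actual SSR module loader; the importer map and module ids are made up): instantiated modules live in an in-memory cache, and an edit evicts only the edited module and its importer chain, leaving the rest of the graph warm.

```javascript
const moduleCache = new Map();   // id -> instantiated module
const importerMap = { './render.js': ['./entry-server.js'] };

// Instantiate on first load, reuse the cached instance afterwards.
function loadModule(id, instantiate) {
  if (!moduleCache.has(id)) moduleCache.set(id, instantiate(id));
  return moduleCache.get(id);
}

// On file change: evict only the edited module and whatever imports it.
function invalidateModule(id) {
  moduleCache.delete(id);
  for (const importer of importerMap[id] ?? []) invalidateModule(importer);
}
```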

So, this is almost like server-side hot module replacement, which is very efficient, and it also makes it very easy to create a performant server-side rendering dev setup with Vite. It can be completely decoupled in production, so you don't have to use Vite in production for server-side rendering. As a result, we are seeing a plethora of SSR and meta-frameworks built on top of Vite. There's SvelteKit; there's Ream, which is a server-side rendering framework for Vue 3; there are vite-ssr and vite-plugin-ssr, both framework-agnostic SSR extensions on top of Vite. And we're seeing frameworks like Marko essentially using Vite, able to encapsulate meta-framework-like functionality inside a single plugin. So, this speaks to how powerful the plugin system is.

[18:03] Okay, so Vite is growing really fast and we're proud of what we've achieved, but I also really want to give a shout-out to the great projects Vite is built on top of, mainly Rollup and esbuild; both are great gifts to the community. And also a shout-out to the projects that have inspired features in Vite: Snowpack, WMR, @web/dev-server, and Parcel. There are a lot of interesting ideas in each of these projects. Especially with Snowpack, WMR and @web/dev-server, we are exploring the same space and we share a lot of these ideas. So, shout-out to them for inspiring some of the features in Vite.

So, I'm very excited to see this new wave of dev tools, and I hope we can move the web ecosystem forward and improve the dev experience together. Thank you. That's all. Bye.


Questions:


[19:01] Mettin Parzinski: Evan has been asking us: do you currently use Webpack or any Webpack-based tool, like Next or Nuxt? And well, 52% of you, so that's just over half, have been saying Webpack-based tools, like Next or Nuxt. Evan, what do you think about this? Does this surprise you?

[19:18] Evan You: Yeah, I guess the poll is a bit confusing, because Webpack-based plus Webpack adds up to over a hundred percent. I don't know. I guess this just really shows how prevalent Webpack is in the ecosystem. Obviously, it's a great project, but one of the reasons I personally now use Vite for almost everything is just because I really miss that really snappy development experience. When I first started doing web projects, you would just write JavaScript, load it in the browser, and just refresh; everything was kind of fast. I really miss that.

[20:03] Mettin Parzinski: Yeah. So, you would actually have to go back to that 10% of nothing.

[20:07] Evan You: Not nothing, right? We would kind of want to have a cake and eat it too, so.

[20:13] Mettin Parzinski: Yeah. But it should feel like that nothing. Yeah, that would be great. So, we're going to go to our audience questions now. So, if you have any questions you can still do so in the community track Q and A channel. First question, is there any data already out there for Vite's performance in a large real world application in production?

[20:38] Evan You: Yeah, so actually, let me... I just saw this article on dev.to a while ago. Someone migrated a project with 150,000 lines of code, so that's a pretty big code base, which I think was previously on Webpack. So, they moved to Vite, and looking at the numbers: originally, a cold start using Webpack was 150 seconds and the app loads in six seconds. After migrating to Vite, a cold start is six seconds and the app loads in 10 seconds. And hot reload went from 13 seconds to one second.

[21:31] Mettin Parzinski: Decent.

[21:33] Evan You: And if you put Vite in build mode, the build watch mode, it still went from 17 seconds with Webpack to 13 seconds with Vite, and that is actually running a full build on every save. So yeah, I would say you get probably 10 times better performance during dev in this very specific case. Obviously, it will vary, and I think this has a lot to do with how their old Webpack configuration was probably using full Babel or TypeScript. They also compared switching from babel-loader to esbuild-loader for transpilation, which did increase the performance quite a bit: that shaves their build from 185 down to 56, which is decent, but still not as fast as Vite, I would say. So, that's just some anecdotal data, but I think it's a good reference. I guess the takeaway is that Vite does work in large projects and is indeed able to handle that kind of scale.

[22:54] Mettin Parzinski: So, if people want to read up on these numbers, can you share it as an article in the Discord channel?

[23:01] Evan You: Yeah, definitely.

[23:03] Mettin Parzinski: Yeah. Great. Next question is from Jean Karte, and he's a bit confused. He says, I don't understand. So, we can use Vite for building React Applications?

[23:14] Evan You: Yes, absolutely. Vite is completely framework-agnostic. I originally created it because I just wanted a faster dev server for myself to use with Vue. But as we got closer to feature completeness, I realized most of these features are framework-agnostic; they would apply equally well to a React app or other apps. So, basically, I did a refactor in Vite 2, and the biggest thing was that we extracted all the Vue-specific logic out. That was also a good process for us to think about the plugin API, because if the plugin API allows us to cleanly extract all of Vue's specific logic into a plugin, then it should work equally well for any other framework, because Vue, in terms of its compilation setup, is actually pretty demanding. Once we did that, it proved to be quite successful. We're now seeing Svelte users, and a lot of people switching from Create React App to Vite as well. So, yeah, I'm pretty happy about that, because I feel like we put in all the work building a fast dev server, and it makes sense for it to be able to help not just Vue users.

Mettin Parzinski: Yeah.

Evan You: Right.

[24:43] Mettin Parzinski: Okay. And a follow-up from me then: do you think that, because your name is so attached to Vue, that might work against the adoption of Vite? Do people think this is a Vue thing because Evan built it?

[25:01] Evan You: I don't know. I guess, some people are kind of tribal, but I just think if you're a web developer or an engineer, your end goal is to use the tools to build great products for your end users. So, why does it matter who created the tool you're using, as long as it helps you to be more productive.

[25:26] Mettin Parzinski: Yeah. That's a great view. And I hope people will do that: look at the best tools and just use the best tools. Next question is from Buru. Is Vite going to have some more integrated build processes out of the box, like pre-rendering?

[25:45] Evan You: If you look at our repo, we have a server-side rendering example, and that example actually already includes a pre-render script as well. It's pretty simple. We're probably not going to do that out of the box; it's kind of a scope thing. You can easily do that yourself using Vite, but it kind of ties into what framework you're using and how you want to actually deploy it. Vite isn't as opinionated as that; we essentially give you the APIs for you to do it the way you want. I consider that to be the job of a meta-framework built on top of Vite. I guess SvelteKit kind of does that; Ream also does that. VitePress is a static site generator that we built on top of Vite, so if you're building a doc site, you can use VitePress, which basically works out of the box.

[26:46] Mettin Parzinski: Nice. Next question is from Jack Burke: when is Vite being adopted by Next.js and Create React App? And make it happen yesterday, please. Thanks.

[27:03] Evan You: I guess, if you're using Create React App, you can just switch to Vite directly. There are some parts of the Create React App stack that are still kind of missing, like Jest support out of the box. Jest integration has been a somewhat interesting challenge, because Jest still does not support async transforms and async module resolvers. These are kind of blockers that, I think, the Jest team is working on. If they are solved, then it becomes pretty straightforward to integrate. But at the same time, we have other solutions like Cypress, which can run your components directly in the browser. Other than that, I would say Vite is a pretty competent CRA replacement; Replit, for example, just switched over from CRA to Vite for all their React repls. As for Next.js, that's kind of a different story, because Next is pretty deeply integrated with Webpack; they do pretty advanced pre-compiling. So, I would imagine it will be difficult for them to actually consider a switch. They also just hired Tobias, the author of Webpack, so I think they're pretty committed to that path, but I know they are doing a lot of interesting work to make Next fast as well. I mean, Webpack isn't necessarily a dead end. I think a lot of it has to do with the historical burden where people just over-configure it. But if you, say, use esbuild for transpilation and have very good first-party optimizations out of the box, like Next does, you can still get pretty decent performance. So, I'm interested to see how this unfolds. In Vue land, Nuxt 3 is able to be bundler-agnostic, so they have a mode that can actually run on Vite. I think Nuxt 2 also offers a way to run on Vite already. I've heard from people who switched their current Nuxt project over to Vite, and it just got 10 times faster in development. So, I think it kind of depends on how these higher-level frameworks evolve.

[29:23] Mettin Parzinski: Yeah. All right. The last question we have time for today in the live Q&A is from Yuchna: if you go with Vite instead of React, is that going to help us improve our resume?

[29:37] Evan You: I don't know. Vite is just a tool that helps you build stuff. I think it's more important to actually show what you've built instead of showing what tools you know how to use, right?

Mettin Parzinski: Yeah. I hope so too, that employers look at that. All right.

Evan You: Yes.

[29:57] Mettin Parzinski: That's the end of our Q&A, but if you have more questions for Evan, Evan is going to be in his speaker room. So, there you can continue the conversation about anything you want to talk about with Evan. Oh, you don't have a speaker room. Okay. Sorry, yeah, that was my mistake. But Evan is available online; you can find him everywhere. So Evan, thanks a lot for joining us.

Evan You: Thank you.

[30:27] Mettin Parzinski: It's been an honor having this Q and A session and announcing you. Hope to see you again soon.

Evan You: Thank you.

Mettin Parzinski: Bye.



Transcription


Hello everyone, my name is Evan Yeo and I'm the author of vue.js and vite. Today in this talk I'm going to be talking about vite, a next generation build tool that I've been working on earlier this year. Specifically, I want to discuss some of the design trade-offs involved in creating a new build tool, why we want to do that, and what is involved. Okay, so what is vite? vite mainly consists of two parts, a no-bundle dev server that serves your source files over native ES modules, and a production bundler which is rollup-based, it's pre-configured and produces highly optimized production builds. Why would you want to use vite over an existing tool? First of all, it's lean and fast, like really fast. We'll see an example soon. Second, it's opinionated with very sensible defaults. It's based on the experience that I've been working on vue.CI which is a very widely used build tool for vue-specific projects over the years. And it's similar to a pre-configured webpack and webpack dev server setup. It's also quite similar in terms of scope to parcel. Many common features like typescript, JSX, and PostCSS work just out of the box. Despite being opinionated, it's still very extensible. It has a rollup-compatible plugin interface that makes it very easy to build on top of. So just to get an idea of how lean it is, here is a tweet of someone running create-react-app-and-vite-side-by-side on Repl.it, which is a service that runs your project on a remote container. The vite project is able to get up and running before the CRA project has even finished installing. One of the reasons that vite is so lean is because it cares about its own dependency count and payload size. We actually pre-bundle a lot of the unnecessary dependencies into vite itself so that it installs faster. Its node modules disk size is often a fraction of that of webpack-based projects. And here is another example of a user migrating a production rollup app over to vite. 
As you can see, the original start time was over two minutes long, and a single reload can take up to 23 seconds. And now both numbers, after migrating to vite, are down to almost negligible numbers. Granted, rollup doesn't have hot module replacement, but I bet webpack users are not unfamiliar with hot updates taking a few seconds in large projects. With vite, the HMR is guaranteed to be always instant. Now let's dig into what contributes to vite's performance. vite builds upon two interesting trends that we're seeing in the past year. The first is that modern javascript is now widely supported. Native ES modules now have over 92% global support, and we're only going to see this number increase as legacy browsers like IE11 going out of market at an accelerated pace. The second important thing is there are new JS compilers being written in compiled to native languages. Two of the most prominent examples are ESBuild, which is written in Go, and SWC, which is written in rust. Both of these tools are dramatically faster than tools that are written in javascript, sometimes up to 100%... sometimes up to 100 times faster, depending on the work type and CPU cores utilized. So vite leverages churning one native ESM by building the dev server around native ES modules. There are many benefits of a native ESM dev server. First of all, there's no need to do any bundling. That's a big chunk of work that is simply not needed anymore. Second, native ESM is on-demand by nature, which means it's not a big deal to do any bundling. Third, since modules are fetched over HTTP requests, we can leverage HTTP headers and let the browser do the caching. And finally, most importantly, hot module replacement can be implemented over native ESM in a very simple yet efficient manner. So we'll see this gives us big performance wins soon. But I want to be honest with the technical trade-offs here. Native ESM is not perfect without any downside. 
The main problem that we have faced is the HTTP request overhead. For example, a dependency like lodash-es may contain over 700 internal modules. And because native ESM is eager, when you import even just a single method from lodash-es, the browser will try to request all of these modules that's imported in its entry point. Now, every module request translates to an HTTP request, and the overhead just adds up. It creates a network congestion. And even on local machines with little to zero latency, this overhead is still noticeable. So vite does implement a few tricks to work around this problem. The first is we pre-bundle dependencies with ES-built. There are many big dependencies like material UI that contains hundreds and even thousands of internal modules. But they don't change often. And they mostly contain standard javascript that doesn't need any special processing, unlike your source files. So this makes them the perfect candidate to be pre-bundled with ES-built. By pre-bundling them, we can ensure that each dependency translates to only one HTTP request at most. And since ES-built is so fast, the starter boost is often dramatic, especially when you're using large dependencies. In addition, dependencies don't change often. So we can easily cache the pre-bundled output on disk and skip the pre-bundling phase on subsequent server starts. This process also converts common JS dependencies into ESM so that they are consumable natively by the browser. On top of that, pre-bundled dependencies can be strongly cached with HTTP headers. vite rewrites the imports to dependencies to append a fingerprint query that is linked to the project's lock file. So on page reloads, with a strong cache header, the browser won't even make a new request unless your lock file has changed. So how about source files? Since source files may be edited at any time, we have to revalidate with the server on every request. 
But we can still leverage ETags and the If-Modified-Since header to make the request conditional, as long as the file itself did not change. This keeps full page reloads performant once a module has been processed.

So here are the important performance trade-offs in Vite's native ESM dev server strategy. First of all, server start is dramatically faster, because there's no bundling to be done before the server starts. While some bundling is still done for dependencies, it's done with esbuild, which is typically so much faster than JavaScript-based bundlers that you still see a huge boost. The very first page load will, in fact, be slower, because your source modules are only processed on demand when they are requested over HTTP, and the request overhead affects the first page load the most. But overall, if you combine server start time plus first page load time, we're still seeing very significant gains in almost all cases. Once the initial load is done, full page reloads are largely unaffected compared to a bundled setup. And finally, a really big gain is that hot module replacement in this model is dramatically faster, especially in large projects, because in the native ESM model HMR performance is decoupled from the size of your project. So even as your project grows over time, HMR performance will remain the same.

We do have some ideas on how to make the first page load fast as well, for example by introducing a persistent on-disk cache for source files. Another idea worth exploring is replacing part of the internal module processing pipeline with compile-to-native modules. As an example, Parcel 2 recently ported part of its processing pipeline to Rust and has seen a big performance boost, so there is potential for us to do the same. But both of these ideas are still in the exploration phase.
We are sure there are a lot of areas we can still improve in this model.

Okay, so now let's talk about the production build. Some may be asking: why even bundle for production if we are using native ES modules? Why not just ship these modules unbundled to production? Well, I don't have time to dig too much into the details here, but the short answer is that the performance is not good enough by default, and caching, which is the supposed advantage of unbundled deployment, is very hard to get right. Bundling still seems to be the better trade-off, covering more cases.

Another common question is: if esbuild is so fast, why is Vite still using Rollup? Why don't we just bundle with esbuild? The main reason is that bundling user-facing applications has a really different set of challenges compared to bundling a library, which in most cases is just a single file. Code splitting is very important in delivering performant user-facing applications, and as of now, in this regard, Rollup is still more mature and also gives you more flexibility and control over the chunking process. This is important both for us, to give you automatic optimizations, and for users, to be able to manually control the final code-splitting behavior. So yes, Rollup is slower than esbuild, but in practice we found that the actual bundling is only part of the total cost in real-world projects. Bundling itself, the act of concatenating the modules together, is actually not the whole thing. For example, many projects rely on JavaScript-based transforms of individual modules: Vue or Svelte single-file components, Babel-based custom plugins for CSS and JS; and a lot of meta-frameworks need to perform static analysis on source files. All of these tasks are typically still done with JavaScript-based tools.
So even when using esbuild for bundling, you still have to jump back to JavaScript to perform these tasks, and esbuild won't really help much in these areas. Another major build-time sink is minification. Vite, in fact, supports using esbuild for minification, so you can get slightly faster builds at the cost of slightly bigger bundles, but we still default to Terser because Terser, while being slower, still provides better compression in most cases.

Now, what we get in return by using Rollup in Vite is that we can do a few things that are currently still hard to do with esbuild. For example, automatic CSS code splitting: the CSS imported in your async chunks is automatically built into a separate file and loaded in parallel with the async chunk when it's requested. We also do automatic async chunk loading optimization: when you have waterfalls in your async chunks, we will automatically flatten them for you. You can also do manual chunk-split control via Rollup options. Most of this is not really possible at this moment with esbuild. That said, Vite is still open to switching its production bundler to esbuild, or something faster, in the future if these things become possible and make the trade-off worthwhile. Currently, we are essentially opting for a slower production build but better performance for end users, which I believe is the right thing to do.

Okay. Now I want to talk a bit about another aspect of Vite's design: the fact that it provides opinionated, sensible defaults that get you pretty far out of the box. If you think about it, Vite is essentially functionally comparable to a pre-configured setup with webpack, webpack-dev-server, its CLI, css-loader, style-loader, postcss-loader... you get where this is going. This is not saying webpack is bad.
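The production options just described map to real knobs in Vite's build config; a hedged sketch, with the chunk grouping purely illustrative:

```javascript
// vite.config.js (config fragment) — production build knobs.
export default {
  build: {
    // 'terser' (the default described here) compresses better; switching
    // to 'esbuild' trades slightly bigger bundles for a faster build.
    minify: 'terser',
    rollupOptions: {
      output: {
        // Manual chunk-split control via Rollup options: keep big,
        // rarely-changing dependencies in their own long-cached chunk.
        manualChunks: {
          vendor: ['vue'],
        },
      },
    },
  },
}
```

The CSS code splitting for async chunks mentioned above needs no configuration at all; it happens automatically.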
webpack started as a JS-focused bundler, and it had to be bare-bones by default and extremely flexible, because back then we had no conventions for how these common tasks should be tackled by a build tool when building a user-facing application. We had to invent these things over time. It is remarkable that webpack is so configurable that we're able to do all of this with it. But over time, we've also come to realize that 90% of the higher-level tools built on top of webpack contain a large amount of configuration addressing the exact same problems over and over again. The way we import CSS, the way we expect TypeScript and JSX to just work, the fact that almost everyone has to use PostCSS: these have become conventions shared across tools and frameworks, across ecosystems. Vite acknowledges this. It tries to absorb the complexity of getting these conventions working out of the box so the end user can focus on the things that actually matter.

We do also realize that conventions don't always fit every single use case. So Vite's philosophy here is that we optimize for the 90% happy path, while advanced use cases are still made possible via plugins; they should not be a requirement for the majority of use cases. It's also okay that Vite doesn't cover everything. It's not Vite's goal to completely replace webpack; there are cases where webpack is still the right choice, for example if you need module federation. It makes sense for different tools to coexist and fit different roles in the ecosystem.

Now, that said, Vite still gives power users a lot via its powerful plugin system. Vite's plugin system, what we call its universal plugin system, is shared between the dev server and the production build. It extends upon Rollup's very straightforward plugin interface: it's a superset of Rollup's plugin API, with additional capabilities to control the dev server behavior. You can add middlewares, you can add custom routes, and you can tap into the HMR pipeline.
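Since a Vite plugin is a plain object extending Rollup's hook interface, these dev-server extensions can be sketched without any imports. The hook names `configureServer` and `handleHotUpdate` are Vite's; the behavior wired into them here is illustrative:

```javascript
// A minimal Vite-style plugin sketch: a factory returning a hooks object.
function devToolsPlugin() {
  return {
    name: 'dev-tools-sketch',
    // Dev-server-only hook: register a custom route via Connect-style middleware.
    configureServer(server) {
      server.middlewares.use('/__ping', (req, res) => {
        res.end('pong')
      })
    },
    // Tap into the HMR pipeline: for some file types, force a full reload
    // instead of the default hot update.
    handleHotUpdate(ctx) {
      if (ctx.file.endsWith('.config.json')) {
        ctx.server.ws.send({ type: 'full-reload' })
        return [] // returning an empty module list skips the default HMR
      }
    },
  }
}
```

Because the same object also supports Rollup's standard hooks (`resolveId`, `load`, `transform`, and so on), one plugin can serve both the dev server and the production build.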
When a file changes, you can modify how HMR works, which gives you very fine-grained control over the whole dev experience. In addition, Vite also provides an API to load ESM source files and instantiate them in Node.js, with HMR-like, fine-grained invalidation. Typically, when we do server-side rendering with a bundler, we are essentially running two bundles side by side, one for the client and one for the server, so when you edit a file, we actually rerun both bundles. But in Vite, on the client side we do hot module replacement over native ES modules, and on the Node.js side we keep the instantiated copies of these modules in memory and only invalidate the ones affected by your code changes. This is almost like server-side hot module replacement, which is very efficient, and it makes it very easy to create a performant server-side rendering dev setup with Vite. It can also be completely decoupled from production, so you don't have to use Vite in production for server-side rendering.

The result is that we are seeing a plethora of SSR meta-frameworks built on top of Vite. There's SvelteKit; Ream, which is a server-side rendering framework for Vue 3; and vite-ssr and vite-plugin-ssr, both framework-agnostic SSR extensions on top of Vite. We're also seeing frameworks like Marko using Vite, which is able to encapsulate meta-framework-like functionality inside a single plugin. So this kind of speaks to how powerful the plugin system is.

Okay, so Vite is growing really fast and we're really proud of what we've achieved, but I also really want to give a shout-out to the great projects Vite is built on top of, mainly Rollup and esbuild, both great gifts to the community. And also a shout-out to projects that have inspired features in Vite: Snowpack, WMR, @web/dev-server, and Parcel.
There are a lot of interesting ideas in each of these projects. Especially with Snowpack, WMR, and @web/dev-server, we are exploring the same space and share a lot of these ideas, so shout-out to them for inspiring some of the features in Vite. I'm very excited to see this new wave of dev tools. We're all trying to make the web ecosystem move forward and improve the dev experience together. Thank you. That's all. Bye.

Evan has been asking you: do you currently use webpack or any webpack-based tool like Next or Nuxt? And well, 52% of you, so that's just over half, have been saying a webpack-based tool, like Next or Nuxt. Evan, what do you think about this? Does this surprise you?

Yeah, I guess the poll is a bit confusing, because webpack-based plus webpack adds up to over 100%. I don't know. I guess this just really shows how prevalent webpack is in the ecosystem. Obviously it's a great project, but one of the reasons I personally now use Vite for almost everything is just that I really miss the really snappy development experience from when I first started doing web projects. You just write JavaScript, load it in the browser, and just refresh; everything is fast. I really miss that.

So you would actually like to go back to that 10% of "nothing"? Not nothing, right? We kind of want to have our cake and eat it too. Yeah, but it should feel like that "nothing". Yeah, that would be great.

So we're going to go to our audience questions now. If you have any questions, you can still ask them in the community track Q&A channel. First question: is there any data out there for Vite's performance in a large real-world application in production?

Yeah, actually, I just saw this article on dev.to a while ago. Someone migrated a project with 150,000 lines of code, so that's a pretty big code base, from, I think, webpack. They moved to Vite and looked at the numbers.
So originally, a cold start using webpack was 150 seconds, and the app loaded in six seconds. After migrating to Vite, a cold start is six seconds, and the app loads in 10 seconds. Hot reload went from 13 seconds to one second. And if you put Vite in build watch mode, it still goes from 17 seconds with webpack down to 13 seconds with Vite, and that is actually running a full build on every save. So I would say you get probably 10 times better performance during dev in this very specific case. Obviously it will vary, and I think a lot of it has to do with how their old webpack configuration was set up, probably using full Babel or TypeScript. They also did a comparison switching from babel-loader to esbuild-loader for transpilation, which did increase performance by quite a bit; that shaved their cold start from 185 down to 56 seconds, which is decent, but still not as fast as Vite, I would say. So that's just some anecdotal data, but I think it's a good reference. I guess the takeaway is that Vite does work in large projects and is indeed able to handle that kind of scale.

So if people want to read up on these numbers, can you share this article in the Discord channel? Definitely. Great.

Next question is from Jean Cartier, and he's a bit confused. He says: I don't understand, so we can use Vite for building React applications?

Yes, absolutely. Vite is completely framework-agnostic. I originally created it because I just wanted a faster dev server for myself to use with Vue. But as we got closer to feature completeness, I realized most of these features are framework-agnostic; they would apply equally well to a React app or other apps. So I did a refactor: in Vite 2, the biggest thing was that we extracted all the Vue-specific logic out, and that was also a good process for us to think about the plugin API.
Because if the plugin API can allow us to cleanly extract all the Vue-specific logic into a plugin, then it should work equally well for any other framework, because Vue, in terms of its compilation setup, is actually pretty demanding. Once we did that, it proved to be quite successful. We're now seeing Svelte users, and a lot of people switching from Create React App to Vite as well. I'm pretty happy about that, because I feel like we put in all the work building a fast dev server, and it makes sense for it to help not just Vue users.

A follow-up from me then: do you think that, because your name is so attached to Vue, that might work against the adoption of Vite? Because people think this is a Vue thing, since Evan built it.

I don't know. I guess some people are kind of tribal, but I just think if you're a web developer or an engineer, your end goal is to use tools to build great products for your end users. So why does it matter who created the tool you're using, as long as it helps you be more productive?

Yeah, that's a great view. I hope people will do that: look at the best tools, and just use the best tools. Next question is from Puru: is Vite going to have a more integrated production build process out of the box, like pre-rendering?

If you look at our repo, we have a server-side rendering example, and that example actually already includes a pre-render script as well. It's pretty simple. We're probably not going to do that out of the box; it's kind of a scope thing. You can easily do it yourself using Vite, but it ties into what framework you're using and how you want to deploy, so Vite isn't as opinionated as that. We essentially give you the APIs for you to do it the way you want. I consider that to be a job for the meta-frameworks built on top of Vite. I guess SvelteKit kind of does that. Ream also does that.
VitePress is a static site generator that we built on top of Vite, so if you're building a doc site, you can use VitePress, which basically works out of the box. Nice.

Next question is from Jack Bjerk: when is Vite being adopted by Next.js and Create React App going to happen? Make it happen yesterday, please. Thanks.

I guess if you're using Create React App, you can just switch to Vite directly. There are some parts of the Create React App stack that are still kind of missing, like Jest support out of the box. Jest integration has been somewhat of an interesting challenge, because Jest still does not support async transforms and async module resolvers. These are blockers that I think the Jest team is working on; once those are solved, it becomes pretty straightforward to integrate. But at the same time, we have other solutions, like Cypress, which can run your components directly in the browser. Other than that, I would say Vite is a pretty competent CRA replacement. Replit actually just switched over from CRA to Vite for all their React repls.

As for Next, that's kind of a different story, because Next is pretty deeply integrated with webpack. They do pretty advanced pre-compiling, so I would imagine it would be difficult for them to consider a switch. They also just hired Tobias, the author of webpack, so I think they're pretty committed to that path. But I know they're doing a lot of interesting work to make Next fast as well. I mean, webpack isn't necessarily a dead end. I think a lot of it has to do with the historical burden of people over-configuring it. But if you can, say, use esbuild for transpilation and have very good first-party optimizations out of the box like Next does, you can still get pretty decent performance. I'm interested to see how this unfolds. I mean, in Vue land, Nuxt 3 is able to be bundler-agnostic, so they have a mode that can actually run on Vite.
I think Nuxt 2 also offers a way to run on Vite already. I've heard from people who switched their current Nuxt project over to Vite, and it just got 10 times faster in development. So I think it kind of depends on how these higher-level frameworks work. All right, the last question we have time for today in the live Q&A is from Jachna: can beginners go to Vite instead of React? Is that going to help us improve our resume?

I don't know. Vite is just a tool that helps you build stuff. I think it's more important to actually show what you've built than to show what tools you know how to use.

Yeah, I hope employers look at that too. All right, that's the end of our Q&A. If you have more questions for Evan, he was going to be in his speaker room so you could continue the conversation there... oh, you don't have a speaker room. Okay, sorry, that was my mistake. But Evan is available online; you can find him everywhere. So Evan, thanks a lot for joining us. Thank you. It's been an honor having this Q&A session and introducing you. Hope to see you again soon. Thank you. Bye-bye.
31 min
09 Jun, 2021
