We'll build a Nuxt project together from scratch using Nitro, the new Nuxt rendering engine, and Nuxt Bridge. We'll explore some of the ways that you can use and deploy Nitro, whilst building an application together with some of the real-world constraints you'd face when deploying an app for your enterprise. Along the way, fire your questions at me and I'll do my best to answer them.
Using Nitro – Building an App with the Latest Nuxt Rendering Engine
AI-Generated Video Summary
We will create a Nuxt app using the nuxi CLI. Nuxt 3 has a new project structure and setup, with the app.vue file as the entry point. Nitro handles API routes and provides runtime config. Nitro has a storage abstraction layer and supports different drivers, like Redis. The Nitro server is built in a two-step process and optimized for serverless and edge rendering.
1. Introduction to Nuxt App Creation
We are going to create a new Nuxt app from scratch using the new nuxi CLI. Let's go into another folder and create a new app called nitro-workshop. Can we adjust the resolution of our own Zoom screens? If there's anything I can do to make things better, just say!
So... great. I'll just dive straight in then. Let's see — I'll spotlight myself so you can see my screen. We are going to create a new Nuxt app from scratch. I seem to have done that by accident in the background, so let's go into another folder and create a new app, which we'll call nitro-workshop. This is using the new nuxi CLI, which is the Nuxt CLI. "Is it unsharp for everyone?" There you go. "Yep — the resolution of your video screen is a little bit..." Yeah, a little bit better. There you go. Can we adjust the resolution of our own Zoom screens somewhere? Because I didn't find it immediately. "It's better, but it's still interpolated. Usually, if you change your screen resolution, it solves the problem." Let's see — I'll switch to HD on Zoom. Has that made any difference for anyone else? "Not really." I'll see if I can actually change my screen resolution; maybe that might make a difference for you. Feeling any better for people? "It's perfectly legible; I think you should just crack on, because it's probably down to people's local network connections and Zoom." Okay, super. If there's anything I can do to make things better, just say!
2. Nuxt Project Structure and Setup
A brand new Nuxt project has very little in place: an app.vue file, a Nuxt config, and a TS config. The app.vue is the entry point to the app. I'll create a Git repository called Nitro Workshop. We'll check the structure of the project and run the dev server at localhost:3000. The welcome component doesn't work in production, so replace it if deploying a test app. Other generated artifacts, like the TS config we extend, live in the .nuxt directory. Most things from the Nuxt 2 build directory don't exist in Nuxt 3; they're in memory for speed and can be updated differently. The .nuxt directory contains type declarations, a document template, and a Nitro server.
A brand new Nuxt project is going to have very little in place. It's just going to have an app.vue file and a Nuxt config, and there'll be a TS config that will do some magic for us as well. We probably don't need to make any changes to the TS config here.
The app.vue is new — it doesn't exist in Nuxt 2. And I'm aware that there are some people here who haven't used Nuxt 2, so I'll just explain what it does. This is the entry point to your app. If you delete it, that's fine: there's a basic one that Nuxt will use for you.
I'll just create a little repository. "It's still unreadable for many people. It got even worse — that's very, very blurry. Maybe if you deactivated your camera as well? Maybe the upstream doesn't really handle it. Daniel, I'll tweet you a screenshot so you can see." You could make sure your BitTorrent downloads are paused. Is anyone seeing this properly, or is absolutely everyone finding this blurry? "Very, very blurry." Okay, let's jump into a quick troubleshooting thing. I don't know if disabling all our cameras can help — but that would only be true if the problem is actually the... how about this: let's ignore my camera, and let's just literally share my screen and see if that makes a difference. "Might help a bit." Okay, you'll now see... "Ooh, now we're talking. Now it's very sharp." Super. Okay, that'll work for us. "You can zoom out a bit if you want."
Okay, there we go — age-old problem. So we'll just — no OBS, okay. So here we are. "First, Daniel — sorry to interrupt you — can I ask if we are supposed to follow along with the code?" You don't have to follow along with the code at all. What we're going to do is: I'm just about to create a Git repository, and I'll push it up to GitHub so you can actually see what we do as we do it, if that's useful for you. "Yes, that would be great." Exactly. Okay, so we're going to call this Nitro Workshop. "Sorry, I have a small suggestion — can you first walk through the structure of this project?" Yeah, great. Okay, so you should be able to see that — you've got a little repo here. The structure is really very simple. This is what you would get if you were to run nuxi init and point it at a directory. So you have your app.vue, which is the entry point to your Nuxt app, and we'll just fire up the dev server — that's going to be on localhost:3000, and we will go there. You'll see there's this welcome component. By the way, if you're ever playing around with Nuxt, you'll find that this doesn't work in production — deliberately. We don't want it to accidentally end up in your production bundle. So if you want to deploy a little app and test it out, please replace the Nuxt welcome component with something else, or you will think things aren't working. So if we do that and say "this is my app", you'll see that we are immediately updated. That's what we are working with today. There are some other magic files that exist, like our TS config. I love type support, so let me tell you how this works: we extend a generated TS config that exists within this .nuxt directory. Most things, by the way, that you would have seen in the Nuxt 2 build directory just don't exist in Nuxt 3. They're all in memory, which means they can be super fast.
And it means they can be updated in different ways as well. So the only things we've got in here are some type declarations, a document template — which really is not going to change; that's just the basic structure of your HTML — and a Nitro server. This is going to get recompiled whenever we change anything on the server side.
3. Nuxt Project Structure and Nitro Magic
The TS config in Nuxt 3 contains resolved paths for all Nuxt aliases and automatically updates for modules with their own aliases. Types for all modules used in the app are pulled in, including auto imports. The server folder in the project's structure is where most of the Nitro magic happens. A subfolder called api can be created within it to add endpoints. Nuxt 3 focuses on the front end, but a lot of the differences come from the back end: the Nitro server. API routes are automatically handled by a package called H3, which parses and stringifies responses. The $fetch function provided by Nuxt 3 automatically parses JSON responses.
This is going to get recompiled whenever we change anything on the server side. So that will be written there and then loaded from the file system. And then there's a dist folder here, which has a server — this is a Vite server. It gets pulled in by the Nitro server there. So not a lot going on there.
But what we do have is this TS config, which contains, for example, resolved paths for all Nuxt aliases. If you were using Nuxt aliases before, you would have had to add these to your TS config yourself, whereas this is now all done for you. And if you're using modules that add their own aliases, this will get automatically updated too. We also pull in types for all the modules that you use in your app, and we generate types for things like the auto imports that we provide. These are helper functions, and we're telling TypeScript that they are globally available. So when I'm typing in my app, I can just use them — they're there and available to me, and this is actually true; I don't have to import them. So if I were, for example, to want to see what my runtime config is, I can just display it like this, and you'll see that's my runtime config for my app. I can explain a little more about what runtime config is, if that would be helpful. But right now I just want to show you that you can access these auto imports anywhere throughout your app.
In terms of other magic folders: we can create a pages folder, we can create a plugins folder, and we can create a server folder — there are others as well that we could dive into. I'm going to create a server folder, because that's probably where most of our Nitro magic is going to happen today. And I've created a subfolder within it called api. We'll just create a little health-check endpoint for the server that we're going to build.
Now, most of what you hear about Nuxt 3, and most of the focus, is of course on the front end — that's the Vue part of it, after all. But a lot of what is different about Nuxt 3 comes from the back end: Nitro, the server bit that we're talking about today. Hence, we'll spend some time on that. So we have here just a little endpoint that will return status OK. And if I open another terminal and request it, the file is automatically turned into /api/health, and we get a response — so quite a useful little endpoint. What's going on here? Normally, if you were writing this in a Node API, you would have request and response objects, and you would do something like const result = { status: 'OK' }, then res.end(JSON.stringify(result)), and you would probably also set a header: content-type: application/json. That will actually also work for us — if we make it a bit different, it's going to work fine too. That's your normal Express/Node middleware-type handler. But basically, it's a lot simpler than that: serialization happens automatically for any of our server entry points — our API routes — and that's all powered by a package called H3, which handles automatically parsing and stringifying responses from these endpoints. We have something similar at the other end as well, when you get a response. If you were fetching normally, you would fetch from localhost:3000, and you would probably need to do something like res.json(), and then you'd have a result, which we might display in our app. But that's going to look something like... ah, it can't read it, for whatever reason. That's fine, because this is an internal endpoint, and this is what I'd expect to see with an actual external fetch. Let me just show you what happens.
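To make the contrast concrete, here's a minimal sketch of the idea — not the real H3 internals; `toJsonResponse` is a made-up helper — showing how "just return an object" replaces hand-rolled `JSON.stringify` and header-setting:

```typescript
type Handler = (req: { url: string }) => unknown

// Wrap a handler so it can simply return a value; the wrapper takes care
// of JSON.stringify and the content-type header, roughly as H3 does.
function toJsonResponse(handler: Handler, req: { url: string }) {
  const result = handler(req)
  return {
    headers: { 'content-type': 'application/json' },
    body: typeof result === 'string' ? result : JSON.stringify(result),
  }
}

// The whole health endpoint becomes a one-liner:
const healthHandler: Handler = () => ({ status: 'ok' })

console.log(toJsonResponse(healthHandler, { url: '/api/health' }).body)
// {"status":"ok"}
```

The handler never touches the response object; the framework layer owns serialization, which is why the Nitro version of the endpoint is so short.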
If we do this, we get a placeholder value there. Now, when we use a function that we provide with Nuxt 3, which is $fetch — by the way, this is something you can use in a non-Nuxt project as well; you can simply import it from a package called ohmyfetch, which works the same way in any app — this will actually automatically parse JSON for us. So we can just swap that out, and in this case, we should find exactly the same result. We could actually just say it's a fully valid JSON object. "And just for your own comfort, you can probably afford to zoom out a bit more, because the screen is super clear." Amazing.
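As a sketch of what that auto-parsing amounts to (illustrative only — `FakeResponse` and `parseBody` are invented names, and the real ohmyfetch does much more):

```typescript
// The core idea behind $fetch/ohmyfetch: look at the response's
// content-type and parse JSON automatically, so the caller never
// writes res.json() by hand.
interface FakeResponse {
  headers: Record<string, string>
  text: string
}

function parseBody(res: FakeResponse): unknown {
  const type = res.headers['content-type'] ?? ''
  return type.includes('application/json') ? JSON.parse(res.text) : res.text
}

const json = parseBody({
  headers: { 'content-type': 'application/json' },
  text: '{"status":"ok"}',
}) as { status: string }

console.log(json.status) // ok
```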
4. Nitro Fetch and Runtime Config
With $fetch, we can pass a direct path to our API endpoint. On the server side, it calls the endpoint directly without making a network request. Magic type hinting is provided by a generated nitro.ts file, allowing typed API endpoints. API routes match anything under their path and can be handled at the match level. Nitro also provides access to runtime config, allowing for multiple environment configurations through environment variables.
Well, you probably don't want to see that full level — I would zoom out too, if I were completely able to do so. But hey — is this good for everyone? "Yeah, that's fine." "Sorry — I think it was better for me before. Now it's getting a bit on the smaller side, and there's lots of real estate which we don't need to see just now." Okay, so here we go. So what am I saying? With this $fetch, we can do something more than we would with just a normal fetch. We can — and you may have come across this before — pass a direct path to our API endpoint. If it's running on the server side, so in my initial render, it's going to call that API endpoint directly: it's just going to call it like a function; it's not going to create a request and hit the network layer. And if it's happening on the client side, it's going to make a request as you would normally expect. We also, by the way, benefit from some magic type hinting, which is all done by that TS config I mentioned earlier. We have something called nitro.ts, and it produces something incredibly simple: it declares that this API route is going to return, well, whatever it returns. So as long as we're returning a JSON object, we're going to have magic typing. And we can do things dynamically as well. One thing about how API endpoints work, to start with, is that they match anything. So /api/notes is going to match /api/notes2, and it's going to match /api/notes/add, or whatever else you like — it's all going to be handled by the same entry point. We're likely to make this a little more intuitive as well, and do some automatic parameter parsing and things like that. But for the moment, we are able to handle everything at this match level.
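The catch-all matching can be sketched like this (the handler table and `resolve` are illustrative, not Nitro's actual router):

```typescript
// A handler registered at /api/notes also receives /api/notes2 and
// /api/notes/add, because matching is by prefix.
const handlers: Record<string, (url: string) => string> = {
  '/api/notes': url => `notes handler saw ${url}`,
  '/api/health': () => 'ok',
}

function resolve(url: string): string | undefined {
  // longest registered prefix that the URL starts with wins
  const prefix = Object.keys(handlers)
    .filter(p => url.startsWith(p))
    .sort((a, b) => b.length - a.length)[0]
  return prefix ? handlers[prefix](url) : undefined
}

console.log(resolve('/api/notes/add')) // notes handler saw /api/notes/add
console.log(resolve('/api/notes2'))    // notes handler saw /api/notes2
```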
So if I were to take this handler, stick it in, say, a health directory and call it health/update, we should now have a couple of different routes here. I wonder if I've somehow... oh, I typed this — this is me. For some reason, I pasted in my... don't save. Let's just open this again. I was very surprised at how the server had managed to create a typo. So it basically adds this as another route. And so, in fact, if we now update our app.vue — what are we going to return? We'll call it updates — it's now going to return both possible options for us, because it's going to match both of those routes. And as we grow Nitro, if we have more clever functionality, like magic route parameters that you enable with a colon or anything like that, you can basically pipe that all in, so you're able to type-check it in your runtime code. But enough about my project. There are some other cool things you can do at the Nitro level too. So what else can we do? We've got access to runtime config — I mentioned that before; let me tell you a bit about it. In Nuxt, we have this thing called public runtime config, and a similar thing called private runtime config. This is basically because, most of the time, we want to build an app or a server that can run in multiple environments. So you would normally have these as environment variables — something like, maybe, a base URL, which you're going to set to your production API URL in production, and maybe there's a staging URL you're going to use somewhere else. And you're going to configure this in a .env file or pass it on the command line. By the way, if I'm saying things that you completely know already, feel free to pipe up and say, "stop teaching your grandmother to suck eggs". So environment variables are something you probably want. So let's try — maybe not a base URL, but a magic token, which is going to be "abracadabra".
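As a sketch of how that looks in config — note these key names come from early Nuxt 3 builds; later releases merged them into a single `runtimeConfig` object with a `public` sub-key, so check your version's docs:

```typescript
// nuxt.config.ts — illustrative only; key names varied across early Nuxt 3
export default defineNuxtConfig({
  privateRuntimeConfig: {
    // server-only: read from the environment so each deploy can differ
    magicToken: process.env.MAGIC_TOKEN,
  },
  publicRuntimeConfig: {
    // serialized into the page payload and available on the client
    publicInfo: 'hey there, reading my source',
  },
})
```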
And just... oh, the magic autocomplete. "What's the stuff we see when you're typing something like this?" That's GitHub Copilot. It's a cool machine-learning thing that predicts what I'm likely to want to type. If you've not come across it, you can sort of tell it what you want. So, something like a function we might want to write — say, parseUrl. Okay, so we're going to take in a URL. How are we going to parse it? This is an interesting approach — it's going to create an element. I wonder if there are some other options. No, it's just doing that.
5. Interpolation and Config
We try to get Copilot to interpolate strings and match patterns with environment variables. We encounter a bug when rendering the token on the client side; to resolve it, we only expose the necessary public info. We can still use the token on the server side for fetch requests. The Nitro server allows us to import config directly and have full access to it, and #config provides interpolation for environment variables.
That doesn't seem very good. Let's try interpolateString. That doesn't do anything useful for us either. Okay — that's actually a pretty useful function. It's basically matching any pattern like that, and it's noticed that I have a base URL up there, so it's going to replace matches with an environment variable. So if you have a string that contains something like BASE_URL, it's going to replace it with an environment variable.
Anyway, that's Copilot — it's pretty cool, and it's good at figuring out what you're doing. So we are probably going to have this magic token. It's probably a private thing — something that should only be visible to us on our server. We don't want this to get to the client by any means. So we're going to say the token is going to be process.env.MAGIC_TOKEN — I think that's what I called it. And we're going to maybe have some public runtime config as well: publicInfo is "hey there, reading my source".
And what you'll find is that if, in our app.vue, we pull in that runtime config again and we load it on the server side, we're actually going to have a bug. I'll tell you why. You'll find, if I pull up my dev tools, there's an issue here — I'll explain in a moment. But first I'm just going to turn my JavaScript off, so you can see what the rendered HTML from the server is. We have our secret token rendered out, and we also have all the other runtime config, which Nuxt inserts. Now, that token isn't available on the client side. So although it's in the HTML — because I've made the silly mistake of rendering the token into the HTML — it's not going to match up on the client side, because the client doesn't have any access to that token.
If we look at the HTML, you can see that we set this window.__NUXT__ variable to a number of things, including, as you can see, the config. This is how we pass it from the server to the client side — and it doesn't have the token. So when I turn JavaScript back on, what happens is that the HTML comes from the server, doesn't match what's rendered on the client side, and so we have to throw it away and start again. This is one of the biggest challenges people hit when they move from building client-side-only apps in Vue to building something isomorphic — rendered on both the server and the client. You have to make sure the state that renders a component is exactly the same on both sides, or you get issues like this.
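A toy illustration of the mismatch (not Vue itself — `render` just stands in for a component's template):

```typescript
// The same "component" renders with the token in scope on the server and
// without it on the client, so the two HTML strings differ and hydration
// has to throw the server markup away.
function render(config: { token?: string; publicInfo: string }): string {
  return `<div>${config.token ?? ''}${config.publicInfo}</div>`
}

const serverHtml = render({ token: 'abracadabra', publicInfo: 'hi' })
const clientHtml = render({ publicInfo: 'hi' }) // token never reaches the client

console.log(serverHtml === clientHtml) // false → mismatch, client re-renders
```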
In this case, we really, really don't want to pass the token to the client side. So instead, what you probably want to do is render a particular thing from within the config rather than dumping the whole thing — we want publicInfo, probably. So we can say config.publicInfo, and hey presto, everything works fine now. But you might want to use that token on the server side, and you can do that — you could use it to make some kind of fetch request, for example. Specifically, today we're going to stick it in some API middleware. We're going to create a resource — we'll call it notes — and just list them all in an index file. So we are going to return a list of notes: id one, title "note one". This is where GitHub Copilot is perfect. So we have some notes now, and we're probably going to want to iterate over them.
What are we going to do with the config? On the server side — the Nitro side — we actually have a slightly different import. Instead of useRuntimeConfig, we can import config directly from something called #config. (Which I need to type — that will be typed in a PR in the next couple of days.) But what we have here is a typed... yeah, you can see it console-logging out — that's the config we have. And that means that in your Nitro app you can have full access to that config, rather than trying to access it via process.env. And you might say: why, Daniel, do we do this? Why do you have to import it rather than just accessing the environment variable from the process as you normally would? The answer is that the Nitro server isn't necessarily running in a Node environment. It might be, but it might also be running in a browser, or in a Cloudflare Worker, or on Deno, or something else. And we have to have a way of controlling that. So instead of using Node functions or utilities that you might normally use, where you access process directly, it's preferred to use an abstraction. We do insert polyfills for things like process, in case you're using a dependency that relies on it, or you have some other reason to use Node objects yourself. But it's preferable if you can import config like this.
"Hey Daniel — sorry. How does this #config differ from a normal .env?" Very good question. There are some differences — we actually do some interpolation, for example. You'll find that if I have a magic token here, and then I define something like a signed URL as my-api.com?token= followed by a reference to the magic token, then what I should find is that we now have an interpolated string. So the signed URL is... we've actually referenced one environment variable inside another environment variable.
6. Runtime Config and Abstraction Layers
We can use runtime config to reference one environment variable inside another. Runtime config can also give us a boolean value instead of a string. It provides an isomorphic configuration experience between the server and the client side: Nuxt adds abstraction layers for config in both browser and server contexts.
We've actually referenced one environment variable inside another environment variable. That's something you can do with runtime config that you can't do by just accessing process.env. Or take this example: imagine you set ENABLE_MIDDLEWARE. If you were just to log process.env.ENABLE_MIDDLEWARE, it's going to be the string "true" — which is always a pitfall, a foot-gun for lots of people. Because if you write "if process.env.ENABLE_MIDDLEWARE, do X, Y, and Z", it might in fact be set to "false", which is also truthy, as a string. Whereas if we do the same thing here — Copilot is very carefully helping me by wanting to test whether it equals the string "true", but in this case I'm just going to log enableMiddleware — what happens is that we actually have a boolean value now. So there are lots of useful things that runtime config enables you to do. Not least — and here's another key difference — it might make very little difference when it comes to the server, because there we can, after all, directly access process.env. I think we might actually affect this a little with Rollup — let's just confirm. I'm going to stringify that value... and that's clearly causing some kind of error, so clearly I can't stringify that value. It might not be that different on the server side, but it is very different on the client side. Normally in a Vue app, you don't have access to process.env. Maybe you do when you're rendering on the server; you might also not, because it's a webpack or Vite context. And so runtime config means that we can deliver an equal, isomorphic configuration experience between the server and the client side. In both cases, you have access to config. And remember, on the client side we pass it via this window.__NUXT__ object — that's how the client gets all the environment variables we wanted it to have access to. You might have more questions to follow up on that.
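The two behaviours above can be sketched in a few lines. Note the `{{NAME}}` syntax and the function names are made up for this illustration; Nuxt's own interpolation and coercion rules differ in detail:

```typescript
// 1. Coerce "true"/"false" strings into real booleans.
function coerce(value: string): string | boolean {
  if (value === 'true') return true
  if (value === 'false') return false
  return value
}

// 2. Interpolate one variable inside another.
function interpolate(value: string, env: Record<string, string>): string {
  return value.replace(/\{\{(\w+)\}\}/g, (_m, name: string) => env[name] ?? '')
}

// The foot-gun: the raw string "false" is truthy.
console.log(Boolean('false'))   // true (!)
console.log(coerce('false'))    // false

const env = { MAGIC_TOKEN: 'abracadabra' }
console.log(interpolate('https://my-api.com?token={{MAGIC_TOKEN}}', env))
// https://my-api.com?token=abracadabra
```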
Do come back to me. "That's all right. Thank you." Yes — so Nuxt is adding abstraction layers for config in both browser and server contexts. And there are some helpful things in there that I haven't even gone into, which we could cover at some point.
7. Nitro Storage Abstraction
We have an abstraction layer in Nitro called storage that allows us to read and write values by key. It supports different backends, like an in-memory cache, file system access, and Redis. We can set up the database by creating a middleware file, mounting a driver to storage, and specifying a prefix. Mounting a driver is similar to mounting a drive to a folder in a file system: it handles all keys that match the specified prefix. We can create a Redis driver using the unstorage library.
I'm going to cover another abstraction layer that I think you're going to find really cool — at least, I think it's really cool. And this one is actually typed. We have something called storage. It's a special import that only works in Nitro — so only in the actual server middleware, API routes, and endpoints. What we have is just a single named export called storage, and we can do some really neat things with it.
So, I'm just going to pull this out so I've got a little bit more going on here. Our storage object — we can do a number of things with it. You can see it has things like getKeys, getItem, getMeta, mount, moveItem, setMeta, setItem. What it is, is a database abstraction layer that lets you read and write values keyed by strings, effectively. And it supports a number of different backends — things like an in-memory cache or file system access — and we've got other drivers as well, such as, for example, Redis.
So, I thought it might be cool to show you how you can set it up with Redis. I'm going to create a middleware file, and we're just going to call it db. Now, middleware is similar to an API route, except it doesn't have any matching — it doesn't run only on certain routes; it runs on all of them. So if I create a middleware here, and we take a request object, I'm just going to log the URL. That's going to log the URL whenever I do anything at all. So if I pull up my terminal now and request a URL that doesn't exist, we encounter some kind of error, for some reason.
What's going on here? This error has occurred more than once. I'm just going to take a little diversion, because it's clearly some kind of problem. It might be a source-map-related thing. Let's see if we can... I've done something else stupid somewhere. Clearly I had. So we are just calling a non-existent page here, /testing. It's being rendered by app.vue, because we don't have any page-level routing set up. And our middleware is running on that route. So this is where we're going to set up our database. We're going to pull in storage, and we are going to mount a driver to it. By the way, this uses a library called unstorage, so you can use it anywhere — it's not just in Nuxt. So we're going to mount to storage, and we can give a prefix. We could do something like mounting just to redis — then any key which begins with redis goes to this driver. We could use fs if we wanted to mount a file system driver specifically. But in this case, we're going to mount it so that everything is handled by this driver, so we're just going to pass a zero-length string. And we're going to create a new driver. "Daniel, what do you mean by mounting?" What we're talking about is... think about a file system. If you happen to have Linux experience, then this will make sense; if you don't, it may be a less helpful example. If you have a nested tree of directories — so you've got your server folder, your server/api folder, your server/api/notes folder, and then another folder called pages with some more content — then when you mount something to a folder, you're basically saying: this driver is going to handle all requests that go into that folder.
So on Linux you might mount a drive to a folder. What it means is that even though these look like siblings in the same directory, the things that start with server are going to be handled by that drive. If you're on a Linux or Unix environment and you run the mount command, you'll see that, in this case, I seem to have a couple of different disks, and each of them has some partitions, and they are mounted to these different directories. So if I go to System/Volumes/Update, it's actually going to open that partition. System/Volumes/Preboot is going to open this partition, which is an entirely different one — a different bit of the drive. If I go to this volume, it's going to open an entirely different disk. But it's all abstracted out, so that on my computer, when I see these directories side by side, I don't have to worry about where they come from. I just have to worry about reading and writing content in them. The same is true with unstorage. If I mount a driver to, for example, meta, then think about it like a folder: everything within that meta folder is going to be handled by whatever driver I passed here. Everything else will be handled by maybe a different driver — the default one, which is in-memory. I might not have made that super clear. Is that helpful? "Yeah, great. Virtual paths." Exactly — there you go. So in this case, we're going to create a Redis driver, and we're going to pull that in from unstorage's drivers. And I said we had a couple of different ones here.
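The mount-point idea can be sketched in miniature — note the real unstorage API is async and far richer, and `resolveDriver`/`memoryDriver` are illustrative names, not its API:

```typescript
// Keys resolve to drivers by longest matching prefix, like mount points
// in a filesystem.
interface Driver {
  get(key: string): string | undefined
  set(key: string, value: string): void
}

function memoryDriver(): Driver {
  const data = new Map<string, string>()
  return { get: k => data.get(k), set: (k, v) => data.set(k, v) }
}

const mounts: Array<{ prefix: string; driver: Driver }> = []

function mount(prefix: string, driver: Driver): void {
  mounts.push({ prefix, driver })
  mounts.sort((a, b) => b.prefix.length - a.prefix.length) // longest prefix wins
}

function resolveDriver(key: string): Driver {
  return mounts.find(m => key.startsWith(m.prefix))!.driver
}

mount('', memoryDriver())        // default driver handles everything…
mount('redis:', memoryDriver())  // …except keys under the redis: prefix

resolveDriver('redis:notes').set('redis:notes', '[]')
console.log(resolveDriver('redis:notes').get('redis:notes')) // []
```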
8. Creating a Redis Driver and Handling Requests
We pull in the Redis driver directly and mount it. To have something to connect to, we create a docker-compose file with a redis service and a port mapping on 6379. Then we test reading and writing: in the notes folder, if we get a posted note, we save it to the database. We handle GET and POST for now — GET displays all of the notes, and POST saves one. To access the body, we use useBody, which is provided by H3, and call storage.setItem with an ID. We define an item interface with an ID, some text, and a title.
We're going to pull in the Redis driver directly. We are going to mount the Redis driver, I believe. Pick some options. So we have a base, which can be nothing, and then a URL, which we'll use. Let's say... We'll use our config, so it's going to be config.redisUrl. And let's set that up. That's only going to be used on the server, so we're going to stick it here. I'm just going to hard-code it for the moment. We'll probably need to set this up. So, I'm just going to create a docker-compose file. And pull in a... I always have to remind myself of the syntax for these things. Everyone does. This is... a docker-compose file I built for something else. Let's see what it says there. OK. It's a Laravel docker-compose file. We don't... It doesn't need to be nearly so complex. You just have to have one service, really. We'll call it... And we'll use redis:latest. And I'm pretty sure we only need a port mapping. So that's just 6379. That should be it, right? Am I missing anything, anyone? And I want less indentation. So if I run docker compose up... It seems to be working, okay. So now what I want to be able to do is actually do some reading and writing to that. Do you need to do any committing? Oh, probably, if you would like to see what's happened so far. Let's close that directory, it's not being used. So this is in progress, right? So it's probably not even gonna work. And I'm gonna commit my .env file. And I don't know, GitHub will probably send me an urgent email saying: beware, beware, a magic token has been committed to a source file. But hey, if it helps anyone — if you're sort of following along and you want to see the code, you should be able to see it at Daniel Roe's Nitro workshop repo. Just test that it exists. Seems to be there. Anyone else? Great. Okay, so where are we now? We're creating this Redis storage. So I guess we want to test if things are working. I'm a little bit jumpy because of this weird error I've been getting so far. We need to fix that, whatever it is.
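The docker-compose file being put together here would look something like this — a single service with just a port mapping (the service name is my guess; only `redis:latest` and port 6379 come from the transcript):

```yaml
# docker-compose.yml — minimal local Redis for the workshop app
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
```

Run it with `docker compose up` and Redis will be listening on localhost:6379.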
It seems to be fine, no crazy errors so far. So let's try doing something with it. So how about, in this notes folder, if we get a posted note, then we're going to save it to the database. And if we... I can't bear things that aren't typed. Okay, so we're going to say our method is just going to equal the request method, and then maybe we're going to do some kind of switch on that. Oh, this is super nice. Okay, so we're going to handle GET, and then maybe POST, just for now. In case of GET, then we'll probably just display all of the notes, and POST should probably save one. So, just return an empty array for now. So what we're going to want to do, obviously, is access the body to start with. So we're going to use something called useBody, which is provided by h3, which is also providing these entry points. And there are lots of other useful functions too. So you can use cookies, you can... There are some that are specific to creating little sub-apps, like useBase, or you can use useQuery. But in this case, we just want the body. So what we're going to say is, I'm going to pull out the body of the request. And actually... Well... Let's say... What are we going to do? So if we've posted, then we are going to want to... storage.setItem. And maybe you want some kind of ID, right? So we'll want to say we have got an Item interface. It's got to have an ID. And maybe some text. Maybe a title. And so we're going to expect that our body is going to...
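The handler being built up here can be sketched in a self-contained way. This is the shape of the logic, not the real h3/Nitro code: `storage` and the request body are stood in by plain in-memory equivalents so the GET/POST dispatch can run on its own.

```typescript
// Sketch of the notes handler: an Item interface, an in-memory stand-in
// for Nitro's storage, and a switch on the request method.
interface Item { id: string; title: string; text: string }

const store = new Map<string, Item>()
const storage = {
  async setItem(key: string, value: Item) { store.set(key, value) },
  async getItem(key: string) { return store.get(key) ?? null },
  async getKeys(base = '') {
    return Array.from(store.keys()).filter(k => k.startsWith(base))
  },
}

// POST saves a note under a namespaced `notes:` key; GET maps over the
// keys to load every stored note back.
async function notesHandler(method: 'GET' | 'POST', body?: Item) {
  switch (method) {
    case 'GET': {
      const keys = await storage.getKeys('notes:')
      return Promise.all(keys.map(key => storage.getItem(key)))
    }
    case 'POST':
      await storage.setItem(`notes:${body!.id}`, body!)
      return null // the real handler would respond with a 204 status
  }
}
```

In the actual Nitro route, the method comes off the incoming event and the body from h3's `useBody`, but the dispatch is the same.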
9. Creating a Redis Driver and Handling Requests
We set items by mapping over the keys returned from getKeys. If we have a GET request, we return the items. If we set the item, we return success. We seem to be unable to connect to the Redis server. Let's check the Redis URL and the docker-compose configuration. We should not have a POST request. Let's investigate the issue further. The item should not be set directly on the Node end. We need to hit an endpoint and save the note under a particular namespaced key. We have the ability to read and write directly to Redis. Now we can pull out and save to a database in our server using different drivers.
So we're going to want to set this item to be body.id. And then body, I think, could probably be the content, so we'll call it item. That seems to make sense. We can also pull out all items. So something like... This is going to be a little bit more complicated. We are going to want to first get the keys of all the items that we have stored in the storage. So getKeys. And that's just going to return us a list of strings. And we're going to say items is going to be keys.map — a getItem for each key. Something like that should do the job. And if we have GET, we'll want to pull them all back. So just return the items. And if we set the item, we're going to just return success. Actually, we can just return, right? And now, if I just manually trigger this, we're going to POST. We're going to have an ID of test. And title is hey. And text is this: there's some longer text. Okay, we seem not to be able to connect to the Redis server. This is probably my mistake. What is our Redis URL, after all? Oh, it doesn't need — no, we're hard-coding it. Yeah, it doesn't seem to be working. Why is that? Probably my lack of docker-compose configuration. Oh, we should never have a POST request here. Is anyone else seeing this? I will just have a think about that for a moment. I just sent a URL, seems fine. Notes — I'm setting an item, item ID. Someone is telling me the solution. It should not be, because I am sending it directly on the Node end. So if this was happening in the browser, that would be a different thing. But instead, I would bet that that might be the issue. Can I just set the item if I... Yeah. Okay. What's happening? Let's... Just in case this is the deal, I'm gonna rename it. I'll return null. What's happening is we went back to the... I could also just call res.end(), maybe. No. And I'd also probably like setting a 204 status code. Yeah, that seems fine. Okay, let's see what the issue was with my getting the items. So we want the keys.
And this should be what I get back when I simply GET from that endpoint. So we drop the POST, and this is a GET request. And this is exactly the issue I obviously created. I think probably what we want to do is hit an endpoint. Probably, actually, we probably want to save this as a particular note. And we probably want to get only the notes. There we go. So the mistake was the namespace — now we have the ability to read and write directly to Redis. So we're sending a note and getting notes back at the other end. So now we have a list. Great question. Yep, that's a good idea. And we probably only need to actually read this when we are in a GET, so let's pull that out. And it's also probably true that the item doesn't exist unless we are in a POST request. Yeah, there we go, exactly. Thanks for the pointer. So now we have the ability to pull out and save to a database in our server. And you'll also likely have loads of other ways of doing that. You might have other databases as well. You can write your own. So you don't have to use this particular driver that I've pulled up for Redis.
10. Custom and HTTP Drivers for Databases
You can create your own driver for the database you're working with. There's also an HTTP driver that allows you to use other endpoints as your database. Focus on the common patterns of interacting with storage and use the appropriate prefixes for different databases.
So you don't have to use this particular driver that I've pulled up for Redis. So if you were to look at unstorage, you'll see that there's a section on how to write a driver. So you can actually just create your own, for whatever database you're working with. There's another cool one I just want to mention. We have an HTTP driver, which basically lets you use other endpoints as your database. So you can actually just treat it as a drop-in database like everything else, but it's gonna make GET requests and POST requests in order to return items and save them. That means you can use it to interact with your existing APIs. So all you need to do at your end is focus on the common patterns of interacting with storage. And again, you can have something like: everything that starts with notes, maybe, is going to be the Redis database. And everything that starts with something else is going to be a different kind of thing. So that would be a perfectly reasonable strategy.
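The driver contract being described is small. As a sketch (method names based on unstorage's documented interface — hasItem, getItem, setItem, getKeys — but treat the details as an approximation), implementing it over a plain in-memory object looks like this:

```typescript
// The rough shape of an unstorage-style driver: four async methods over
// string keys and values.
interface Driver {
  hasItem(key: string): Promise<boolean>
  getItem(key: string): Promise<string | null>
  setItem(key: string, value: string): Promise<void>
  getKeys(): Promise<string[]>
}

// A trivial in-memory implementation of that contract. Swapping the
// object for calls to a database client — or, as with the HTTP driver,
// for fetch calls against an existing API — changes nothing upstream.
function memoryDriver(): Driver {
  const data: Record<string, string> = {}
  return {
    async hasItem(key) { return key in data },
    async getItem(key) { return data[key] ?? null },
    async setItem(key, value) { data[key] = value },
    async getKeys() { return Object.keys(data) },
  }
}
```

Because every driver exposes the same surface, the app code only ever sees `storage.getItem` / `storage.setItem`, regardless of what sits behind a given prefix.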
11. Hash Imports and Unstorage
The hash prefix on the imports marks a virtual import provided by Nitro itself. It's never encountered in actual npm packages. Unstorage is a package that can be downloaded from npm. unplugin simplifies writing transform code once for Vite, webpack, and standard Rollup. unenv creates a single environment for browser and Node. It contains presets, aliases, injections, and polyfills. The proxy created by unenv is a universal proxy that won't error no matter what you do to it.
Before we dive in any further, does anyone have any questions so far? OK, a basic one. The hash before the imports — what's the difference when you had the hash storage versus no hash before the unstorage? So, what we're doing with the hash is: this is a virtual import. So when this is transformed into our server, the storage is actually something that's provided by Nitro itself. That's the hash import, and basically, you'll never encounter that in an actual npm package. It's always gonna be either a subpath import that you've provided for yourself, or an alias that you've pointed to some other existing folder or package, or a virtual import like this one. The same is true for config, except I haven't properly typed it. Whereas unstorage is actually a package you can just download from npm — so if I look in my yarn.lock file, there's just an unstorage package. And if I were to go into node_modules, it's there, so it's just files on disk. In terms of why you would use something like unstorage or something like unplugin — yes, you totally should. unplugin aims to make it really simple to write transform code that you might use in Vite, but have the same code be usable in webpack or in standard Rollup. The others are like unenv, which is maybe one of the coolest. unenv aims to create a single environment that works with browser and Node and in other situations. So it contains a lot of useful things, and it powers a lot of the cool work that we're doing with Nitro and Nuxt. So we have things like presets — we've got a node and a nodeless preset. Those are probably the two that we would work with most. And they contain aliases, injections and polyfills. And those can be consumed in whatever way you want to consume them. So here are some of the kinds of component pieces that we provide. For example, if we were wanting to pull out... So what do you do if you encounter an import of fs?
And you're in a browser? We've created this proxy, which is a universal proxy. This proxy will not error no matter what you do to it. That's what this proxy is for. And we also have little tiny imports, like noop and empty, which you can alias things to if you need to. And we do that. So, for example, in nodeless, you'll see that we alias etag to noop.
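A universal proxy like the one described can be sketched in a few lines. This is not unenv's actual implementation — just a minimal illustration of the behaviour: you can call it, construct it, chain properties off it forever, and coerce it to a string, and it never throws.

```typescript
// A minimal "never errors" mock proxy in the spirit of unenv's:
// every property access, call, and construction returns another proxy.
function createMockProxy(): any {
  const fn = function () {}
  return new Proxy(fn, {
    get(_target, prop) {
      // Make string coercion produce an empty string instead of a proxy.
      if (prop === Symbol.toPrimitive || prop === 'toString') {
        return () => ''
      }
      if (prop === 'then') return undefined // don't look like a thenable
      return createMockProxy()
    },
    apply() { return createMockProxy() },
    construct() { return createMockProxy() },
  })
}
```

Aliasing `fs` (or any Node built-in) to something like this means browser bundles that happen to import it keep working — the calls just do nothing.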
12. unenv and the Default Bundler in Nuxt 3
unenv aims to create a single environment that works with browser and Node. It contains useful things and powers the work done with Nitro and Nuxt. Presets like node and nodeless provide bundles of aliases, injections, and polyfills. A universal proxy is used to import Node built-in modules in a browser environment. npm packages like consola and debug are swapped for lightweight equivalents to save bundle size. h3 is a server framework faster than Fastify, Express, or Connect. The default bundler in Nuxt 3 is Vite, but webpack can also be used.
Because a lot of these kinds of things are just really simple. And if we can have one bit of code that works everywhere, that's much better. The others are like unenv, which is maybe one of the coolest. unenv aims to create a single environment that works with browser and Node and in other situations. So it contains a lot of useful things, and it powers a lot of the cool work that we're doing with Nitro and Nuxt. So we have things like — I'm just gonna turn this into a VS Code-like view — really useful things like presets, so we've got a node and a nodeless preset. Those are probably the two that we would work with most. And they contain bundles of some of the other things that we've produced. So a node environment is going to want to provide browser APIs to Node, and a nodeless environment is probably going to want to provide Node APIs to the browser. They basically contain aliases, injections and polyfills. And those can be consumed in whatever way you want to consume them. Obviously, we consume them in Nitro, but other servers or entry points could do the same, or you can use just the component pieces.
So here are some of the kinds of component pieces that we provide. For example, if we were wanting to pull out... Just expanding some of the options here. So one thing we do in a nodeless environment, you'll see we have a number of things which refer to a mock. In fact, they're not in this nodeless... No, here we are. So we are mocking all of the Node built-in modules, things like fs and path. They don't exist in a browser. You can't just pull them in. I mean, there are some things — Browserify and others have sort of replacements that aim to rewrite them. So what do you do if you encounter an import of fs and you're in a browser? We've created this proxy, which is a universal proxy. This proxy will not error no matter what you do to it. Try and make an error. It will work as a function. It will work as a value. It has all the properties that you could ever want to get off of it. It returns strings. It can be a class. It can be whatever you want. The main thing is it won't error. So you can basically import fs in a browser environment, and whatever you do, it won't do anything — but it won't fail either. So that's what this proxy is for. And we also have little tiny imports, like noop and empty, which you can alias things to if you need to. And we do that. So, for example, in nodeless, you'll see that we alias etag to noop. So it will just do nothing when you call it. These are quite useful utilities. There are some that are more useful, that do more. So npm: we have a number of npm packages which we mock — not mock, but sort of duplicate the functionality of. So consola, for example, is a console logger, but we've produced a package which basically just turns it into the native JavaScript console. And basically, that means for Nitro, we save — literally — on the bundle size.
Because whatever dependencies the user is using which use consola, when they're deployed to the Nitro environment, they'll just use the native console logger, which means you don't have to package the whole consola set of dependencies. And the same is true of debug, which is another package which pulls in lots of dependencies, and fsevents, node-fetch, mime-db — which is enormous, it has loads and loads of data about MIME types — and so on. We've got polyfills. Anyway, this is probably far too much info, but basically, check out some of the unjs repos that we have got on GitHub. They exist because they fill niches that are not being filled by other things. In many cases, that's because we need packages that work in multiple environments — like h3, which works in the browser, not just on Node. h3, by the way, is phenomenally fast. It's faster than Fastify, and definitely faster than Express or Connect or any of the other sort of server frameworks. One of the reasons is that all the functionality is exposed as tree-shakeable utilities. So if you're not using cookie parsing, it's not included in the bundle, and the same is true for anything else it provides. Sorry, I've just gone on and on. Does that answer your question, Andreas?
Okay, super. Any other questions? Daniel, can I ask you a general question about Nuxt 3, or am I off-topic? Go for it. I'm new to Nuxt 3 and Vue 3. I was wondering, what is the default bundling tool in Nuxt 3? Because these days with Vue.js, I've learned it's ready for webpack and for Vite as well, and I was wondering which one works by default? So by default — and this works well, by the way, because I want to go into some Nuxt 3 sides of things, since we want to talk about how we deploy; a couple of people asked at the beginning — the default bundler is Vite. So if you have your Nuxt config, out of the box it's going to be using Vite, and you can see that in the console: the Vite server is being built, and that's what's rendering things for you. If you want to use webpack, you can set vite to false, or you can also pass an object to the vite option to configure it. So if I were to run yarn dev again, now it's going to do it with webpack — which is still phenomenally fast, by the way, compared to webpack 4, although it's difficult to compete with Vite proper for speed. But that's still pretty good.
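As a config fragment, the switch being described would look roughly like this (a sketch — the exact option shape has varied across Nuxt 3 releases, but a top-level `vite` key is what's being referred to):

```ts
// nuxt.config.ts — sketch: fall back to webpack instead of Vite
export default defineNuxtConfig({
  vite: false,
})
```

Passing an object instead of `false` (e.g. `vite: { /* options */ }`) configures Vite rather than disabling it.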
13. Nuxt Project Build and Nitro Server
When building a Nuxt project, the output includes server and client bundles that are transformed into a Nitro server. The Nitro server is created in a two-step process, where the first step compiles the Vue app into a bundle. The server side contains enough to render the Vue components, while the client side contains the JavaScript files needed for rendering and hydration. The Nitro build process copies the browser requirements to a public folder and produces a server with the required node modules. The server entry point includes chunks for server middleware and API routes. The Vue app entry point packages the Vue 3 renderer and handles the rendering of the app. Middleware chunks are loaded dynamically based on the requested routes, and the Vue app rendering is only loaded if there has been no hit in the middleware. Nitro options, such as the timing middleware, can be used to measure the loading time of each chunk and track which chunks have been loaded.
If you were to build your Nuxt project, you'll see this is normal, sort of webpack output. And it produces normal server and client webpack bundles, which are then transformed into a Nitro server. The same is true with Vite. So if you were to build with Vite, it's also going to do the same: it's going to create server and client bundles in the dist folder, then consume those to produce your .output folder.
Let me get into a little bit more detail there. So Nitro, if you didn't know, is a two-step process: building a Nitro server is the second step of a normal Nuxt build. The first step compiles the Vue app — the concept of a page with components, the whole Vue side of your app — into a bundle, which is what we see here in this Nuxt dist folder. And the server side isn't a full server; it just contains enough to render the Vue components of your app to an HTML string. And the client contains the JavaScript files that a browser needs to do the same — to render, and hydrate, that same HTML. The Nitro build process then takes those inputs: it just copies the browser requirements over to a public folder, where they can be served to the browser, and then produces a server. The server will contain, in a Node environment, the node modules that are required to run it. So you'll see it's not a lot of node modules, particularly compared to the number in my node_modules folder. If you have a look at my node_modules folder, it is, let's see, 560 directories. Whereas if you look in the output server node_modules folder, it is 27. So that's a huge, huge difference. We actually trace usage to find out — Vercel have a wonderful library called node-file-trace (NFT) — we actually find out what you're using, and just copy those across. We have a sort of server entry point. So if you were ever to start that Node server, you're going to want to just run it with node; that's the server entry point. And then it's got a number of chunks. It has chunks corresponding to all of the pure server middleware and API routes that we've just created today, for example. So health is very simple — it just returns that status. DB does a little bit more here.
So you'll see that that hash storage import has been converted to a relative import, to import this virtual storage, which is shared between several different chunks. And then the same is true for notes, for example. So you'll see our code there. And index — I'm not really sure what we put in there. Oh, this is the Vue app entry point. So in your app then, which is where we are normally going to look for our Vue application, we have a Vue renderer, which just packages the Vue 3 renderer up. And you'll see, for example, in this render function, it does lots of things. This is the manifest rendering. But basically, it's rolling up quite a lot of other things, like all of the Vue dependencies and so on. So what happens is, when the entry point is hit, we have a very small amount of code which gets run. So this is a loading screen, or perhaps an error screen, which is inlined. And there's another one. And we've got the creation of the server and the registration of each of the different routes that we have registered. These are all dynamic, so that if one of these routes is hit, it gets loaded — but if it's not hit, it doesn't get loaded. So every request will load these two particular middleware chunks, but only the requests that match /api/notes or /api/health will load those particular ones. And we will only load the Vue app render-to-string if there has been no hit so far in the middleware. I can show you what that looks like a little bit with some Nitro options. So we can set an option called timing, which is going to inject timing middleware. I'm just gonna build this now again. I alluded to timing middleware earlier — it will inject some Server-Timing headers for us. I've now enabled it, and in fact, that's already updated — that's why.
So now, if I load the server and hit it, you'll see we have some timings: it's told me how long it's taken to load each chunk. And it's also told us which chunks have been loaded. So if you see the possible options here: these two are middleware, and they've been loaded. And there's Nitro's static middleware, which has been loaded here — that takes care of serving static files, which is needed with the basic Node server. But most targets you'll deploy to — like Netlify, Vercel, Cloudflare, Azure Static Web Apps, or any of these things — will have their own CDN that will handle that. And so this doesn't get bundled; it's their static serving that gets hit instead. We've got some great optimizations on that, by the way, because everything can be done in advance.
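The option being toggled would look something like this in the Nuxt config (a sketch — the exact location of the `timing` flag has moved between releases):

```ts
// nuxt.config.ts — sketch: ask Nitro to inject Server-Timing middleware
export default defineNuxtConfig({
  nitro: {
    timing: true,
  },
})
```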
14. Vite and IDE Type Checking with Nitro
There's an amazing package called vue-tsc that enables type checking for Vue apps via npx vue-tsc. Nitro's static middleware and DB are loaded, but notes and health are not. The code loads in about 10 milliseconds and won't need to be loaded again. Chunks are loaded as needed. Users find this view of Nitro's server helpful.
Oh, just to answer your question about Vite and IDE type checking: there's an amazing package produced by the author of Volar, Johnson Chu, which basically enables you to type check your Vue app. It uses the same mechanism that your IDE does. So all you have to do is run npx vue-tsc, and that's going to type check your Vue application, exactly. In terms of where I am here: so we load Nitro's static middleware. We load DB. DB itself relies on virtual storage and on config, so it loads both of those. But notes and health — we don't match those on this route, so we don't load them at all. Instead, we just go into the app, which needs to render itself. It needs to pull in a client manifest; there are some other things it needs to do there, which it does. So the whole process — if you can see these, we're talking sub... is that about 10 milliseconds? Four, eight — eight milliseconds to load the code. Basically, to respond to that request. And that code won't need to be loaded again. So if I make another request, we don't load it — it's already in memory and we're able to respond. But if I were to make a request that specifically looks at, say, health, it's now going to load that chunk too. So it's as-needed, and it's separated out into chunks. Is this useful, by the way, people? Are you glad to be getting this sort of view of what Nitro does with your server?
15. Running Apps in Different Environments
You can throw the app into Heroku. The project is independent of the environment. It has no runtime dependencies and is super small. It fits well in serverless functions or Lambdas. There are different build targets.
So yes, can you throw it into Heroku? Yes, you can. The whole point is that we want to get rid of the dependencies you might need to run it. So previously, if you built a Nuxt app, you still had to have Nuxt — or there's a lightweight distribution called nuxt-start, which is very lightweight, but it still is a separate thing. Whereas here, you're actually just running native Node code and native node modules. It uses native ESM for Node, which has its own challenges too, because we have to wait for the ecosystem to catch up. But it means it's entirely independent of the environment it's in — oh, and of Nuxt. So Nuxt should be a build-time dependency. This project has no runtime dependencies, technically. What it produces as output contains all of the dependencies it needs to run. So it is super, super small. You should be able to take that .output folder, stick it somewhere else and run it, and it should work. It doesn't need all the scaffolding environment around it in the project. Which means you're talking about a total project size, in this case, of 1.7 MB — which includes all of the server dependencies, as well as all of the generated code for the client side and all of the generated code for the server side. So it's incredibly minimal and really small, and fits very well in a serverless function or a Lambda or something like that. And it's even smaller if you're using a CDN and you don't need to serve the static files from within the Lambda. We've got a number of build targets. Let me show you how they work, because you can write your own too.
16. Nitro Optimization and Static Site Generation
Search engines prioritize fast-loading sites to provide users with a good experience. Slow sites tend to lead to disengagement and are deprioritized by search engines. Nitro is optimized for serverless and edge rendering, loading only what's required. It has a fast start time and response time, making it ideal for dynamic and personalized results. Static site generation in Nuxt 3 is being abstracted into a caching layer, allowing for different strategies for each route or response.
Let me show you how they work, because you can write your own too. Before we go over to the build targets — because you're talking about how fast it is, how does that impact SEO? Is there some sort of relationship there? Yes, yes, there is a relationship. I mean, SEO is a bit magical, so there'll be lots of people who will draw the lines and make the connection in different senses. But my rule of thumb is that search engines are trying to connect users with good experiences. So they want users to get what they're looking for — so that you think: I've had a success today, I found what I was looking for. And that's why search engines tend to deprioritize sites that are slow to load, because slowness tends to mean disengagement. Search engines measure engagement by how long you stay on the page, whether you stay at all, whether you interact, whether you navigate further. And if, as a user, you are loading a page which takes a long time, you're much more likely to close that tab and move on to the next one. Which means all of the data is going to drive search engines — whether or not they explicitly say this to start with — to deprioritize slow sites. And I think in general that's a fair comment. Although, I guess, if you have a slow site which is more targeted to the need of the user, that's a countervailing factor. So in general, you want a site that's as fast as you can possibly make it. In terms of the key metric, it's time to first byte. There are other metrics that really matter too, such as time to interactive: after everything does get to your browser, how long is it going to take to be fully responsive to you as a user? So if you have loads of client-side libraries, that's gonna make a difference too. In terms of Nitro, Nitro is optimized to work really well in conditions like serverless or edge rendering, because it only loads what's required.
So I'm just gonna edit the code of my app here, okay? We're going to do console.time('start') and console.timeEnd('start'). I just wanna see how long it takes to actually just load the server. So, our cold start, then, is five milliseconds. That's how long it took to start the server; it's now responding to requests. If you want to see how long it takes to respond to a request, we can actually just look at the headers, because I've enabled the timing middleware here. So we have a Server-Timing header, which you can inspect — and view more graphically, nicely, in, say, Chrome or Firefox. But it tells us exactly how long it takes to make this happen. So it's basically four milliseconds to respond to that request. Now, obviously, if you start making API requests, that's gonna be your main time spent. Because if you're requesting or connecting to an external database or something like that, it's obviously gonna take time. And particularly if you waterfall it — so you first make one request and wait for a response, and then you make another request — that's gonna delay things for you. But out of the box, we're talking, what, a five millisecond start and a four millisecond response to a request. And that was not an API route; that was rendering a Vue app. It wasn't a very complicated one. If I had looked at just the health check, for example, that's the same four milliseconds. Maybe we just can't get a lot faster than that. Anyway, it's really fast. It's aimed to work really well in edge rendering, in serverless functions where you have a high premium on startup time and cold start — which means you can actually use it more. The cold start: for example, Nuxt 2 starting without Nitro takes over a second to start, which might not seem like much, but that's 100 times more than Nitro. That makes a big difference when you have to initialize. Obviously, it takes even longer to return the response.
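The Server-Timing header mentioned here is a standard format: comma-separated metrics with a `;dur=` parameter, e.g. `total;dur=4.2, db;dur=1.1`. As a small, self-contained sketch, here's a parser that pulls the per-metric durations out of such a header value:

```typescript
// Parse a Server-Timing header value into { metricName: durationMs }.
// Handles the common `name;dur=<ms>` shape; other parameters are ignored.
function parseServerTiming(header: string): Record<string, number> {
  const timings: Record<string, number> = {}
  for (const entry of header.split(',')) {
    const [name, ...params] = entry.trim().split(';')
    for (const param of params) {
      const [key, value] = param.trim().split('=')
      if (key === 'dur') timings[name] = Number(value)
    }
  }
  return timings
}
```

This is what the browser devtools are doing for you when they render the timing breakdown graphically.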
It makes a huge difference. Normally, people wouldn't even think of having a server-side rendered app — you might just have a static one rather than render it in a serverless function — whereas Nitro means you can revisit that assumption and think about maybe serving more dynamic or more personalized results relevant to the user, where before, timing might've ruled that out. I see there's a question about static mode. So I can tell you a little bit about that, but you can't try it out now — I couldn't do this today because at the moment we just build a server. But the plan is — I should say, look, you might still need a warmer, because it depends on your platform; they have their own overhead in terms of starting up. But I completely agree with what Toby has just said, by the way: it is only one part of the thing. In terms of the Lambda, the Nitro side is really fast, but you also do have to consider the fact that if it's in cold storage, it has to be downloaded to the server, and it has to be unzipped, and then it has to be run. So there's all of that overhead too. So it's not instantaneous. Cloudflare Workers does claim to have a zero-millisecond cold start, because of using V8 isolates. So anyway, you should try it out and basically see how it works in your particular environment — what are the trade-offs, and what's the responsiveness. But one benefit you do get with Nitro is that we have one function. So rather than a different function for every route, we have a single function which can render any route. And I think it's basically the sweet spot, because what it means is you're much less likely to have a cold start. Because, for example, if you have a health check, that's gonna warm up only the health serverless function. If you have an API hit, it's gonna warm up only that function. And if you have another one on your main site, it's gonna warm up only the Vue app.
Whereas if you render them all with a single entry point, which dynamically loads what it needs, you don't have the disadvantage of an enormous monolithic entry point, but you do have the benefit of a function which is more likely to be warm. I mean, if you do get a hundred simultaneous requests, you're still gonna need to warm up a hundred function instances, so it's not like it can handle everything. But basically, mathematically, if you've got five functions versus one function, you're something like five times less likely to need to warm up a function. — I also have one question about incremental static site generation, and about its current status, because I know this is both a platform thing and a framework thing. For example, we need to host about 4 million pages, and basically that's too much to pre-generate. — Yeah, and this also gets to the question about static site generation. The reason that we haven't rolled out Nuxt 3 with static site generation out of the box is that we're abstracting it into something that we're calling the caching layer. The concept is that for every route, or every possible response to a request, it's possible to have a different strategy.
17. Caching Strategies and Nitro Entry Points
You can have different caching strategies for different sections of your site, such as HTTP header caching or incremental static generation. The concept of static site generation is now part of the caching strategy. Nuxt Nitro offers different presets for targets, and you can even create your own. The Nitro entry points, like server and Vercel, have different implementations but follow a similar structure. Each entry point handles requests and can use the magic isomorphic fetch. Netlify uses the lambda approach, while Azure's implementation depends on whether it's a static web app or an Azure function.
So you can have, for example, a client-side-only section of the site, which will just return the client bundle and render in the user's browser — maybe your admin pages, say, you might decide to do that. You can have routes, or sections of routes, that have caching strategies. Those might be HTTP headers, which say this response can be cached for X amount of time. It might integrate with a provider: Vercel, for example, has a CDN which can be controlled with caching headers from the endpoint's point of view, so you can return a stale-while-revalidate header that will basically mean the CDN will serve that response and revalidate it in the background. You can also have an incremental static generation type rule, so that we can actually build a page on request and then add it to the build output, so that future requests don't need to render it again. But a key part of that is that you also have to have a way of invalidating it. So your CMS, for example, can say: hey, this content was updated, so the next time a user requests that page, re-render it, because the cached copy is no longer the current version. So basically, all of that is going into a sort of one-ring-to-rule-them-all abstraction, so that you can set up your app and basically say: cache this bit, and the caching strategy is "generate static files". I don't know if that answers your question, but it's all one thing. The concept of static site generation is going away as a separate concept; it's instead a particular caching strategy. Caching at generate time, at build time, is a kind of cache, but there are other kinds of cache too, like incremental static generation or HTTP header caching, and so on. Like so many things that we're doing, we're making it configurable. So the different Nuxt Nitro targets — amazing, thank you guys, that's right — the presets that we offer, you can write your own; you just pass a preset.
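The stale-while-revalidate strategy mentioned above boils down to a Cache-Control header value. Here is a hedged sketch; the helper name and the numbers are illustrative, and Nitro's eventual caching-layer API may express this differently.

```typescript
// Build a Cache-Control value for CDN-level stale-while-revalidate:
// serve from the CDN cache for maxAgeSeconds, then keep serving the
// stale copy while the CDN revalidates it in the background.
function swrHeader(maxAgeSeconds: number, staleWindowSeconds: number): string {
  return `s-maxage=${maxAgeSeconds}, stale-while-revalidate=${staleWindowSeconds}`;
}

// Usage (illustrative): res.setHeader("Cache-Control", swrHeader(60, 600))
console.log(swrHeader(60, 600));
```

The appeal of this design is that the same route-level declaration ("cache this bit, with this strategy") can compile down to a header like this on one platform, or to incremental static generation on another.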
I can show you, if you're interested, the tools that we provide to enable you to do that. So, do you want to see some stuff about deployment targets? Yeah? Okay. So basically, if we have a built Nitro app, that's the same as saying the preset is `server` — that is the default. But there are lots of other presets that you can pick from. In terms of the ones which are available, I'm just gonna show you the directory, but you can also check out the Nuxt documentation. So we've got this entries folder. There's an Azure and an Azure Functions one; there's a CLI one, which is slightly different from — sorry, that's the default one; it just basically runs the server, but you might have a server which is just a function. There's a Cloudflare entry point; a dev entry point, which is what we use when we're developing the site and which uses Node workers in the background to handle requests; a Firebase one; a Lambda one, which is the one you might use if you're building your own; a Netlify builder function; a Node target; server; service worker; and Vercel. And you can make your own version of any of those. So what does one of those look like? Let me pull open — I think the server one is probably the most minimal I could show you. Well, actually it's not the most minimal, but it's pretty minimal. So this is what a Nitro entry point, a preset, looks like. It basically creates a native Node server from `http`, using a handler function which is provided by this entry point here. This might be familiar to you, because it's basically what existed when I opened up the Nitro app. What this does is simply create an app, add middleware to it, and then handle requests. And this bit also handles the creation of the magic isomorphic fetch. So this local call that we're talking about is basically a way of interacting directly with H3 without needing to make a new network request.
So that's a pretty minimal version — it just exports this function. And because it's already built to the Node standard, with request and response objects, we just pass it to the server. We get the port and hostname, pass them to the server via `listen`, and tell people where it's listening. So that's a server entry point. Want to see something like Vercel? Even simpler, because Vercel expects just a default-exported function with request and response objects, so we import it from our server and export it again. Super simple. Netlify, similarly, takes the Lambda approach — you see we're importing the lambda here. A Lambda deals with an incoming event. It doesn't have a request and response; it has a number of properties like headers and body and things like that. So instead, we just import the lambda and export it again. So this lambda here is a handler. We're using local call, which I mentioned before, as a way of making calls that don't go through the network layer — so we're effectively making a network request that never hits the network, just to render the event that Netlify provides us with. It's the same format as an AWS Lambda event. We encode the URL so that it has the query string parameters, which in a Lambda event come separately. And then, effectively, you see we're just transforming one format into a different format, and returning the format that the Netlify function expects: a `statusCode` instead of a `.status`, the headers are the same, but the body is a string rather than an object. So you see we do those kinds of things. Azure differs a little depending on whether we're doing a Static Web App or an Azure Function, and that's because with a Static Web App we have to get the URL from a header — this `x-ms-original-url`. We have to pull it from that, parse it, and then just extract the path name.
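The translation a Lambda-style entry point performs can be sketched with two small helpers: rebuild the URL from the path plus the separately supplied query parameters, then shape the render result the way the platform expects (a string body, not an object). The field names below follow the AWS Lambda proxy format, but this is a self-contained illustration, not Nitro's actual entry code.

```typescript
// Shape of the incoming event (simplified AWS Lambda proxy format).
interface LambdaEvent {
  path: string;
  httpMethod: string;
  headers: Record<string, string>;
  queryStringParameters?: Record<string, string>;
}

// Shape of the result a Lambda-style platform expects back.
interface LambdaResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string; // must be a string, never an object
}

// Re-encode the URL: in a Lambda event the query string arrives
// separately from the path, so join them back together.
function withQuery(path: string, query?: Record<string, string>): string {
  if (!query || Object.keys(query).length === 0) return path;
  return `${path}?${new URLSearchParams(query).toString()}`;
}

// Transform a rendered response into the platform's expected format,
// serializing non-string bodies.
function toLambdaResult(
  status: number,
  headers: Record<string, string>,
  body: unknown
): LambdaResult {
  return {
    statusCode: status,
    headers,
    body: typeof body === "string" ? body : JSON.stringify(body),
  };
}
```

In the real entry point, the bit between these two steps is the local call into H3 that renders the route without touching the network.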
18. Using Nitro with Azure
To use Nitro with Azure, you need to configure a Preset and an Entry. The Azure Preset includes configuration options and hooks for the Nitro build process. You also need to create the required JSON files and ensure they are accurate. Nitro can detect the target environment automatically based on the preset or environment variables. For Azure, you need to explicitly set the preset. The output includes a host.json file and a function definition with an entry point called handle. The Azure Static Web Apps CLI can be used to test the application locally. The CLI serves the static content and the server from the public folder. Opening the specified URL renders the page in an Azure context. Dave has a question.
And then we do the same thing — the local call that we did with Netlify. We pass the URL, headers, method, and body. Then again, the way we respond with an Azure Function is that instead of returning an object, we just set a property on the context: we set `res` to equal the status, headers, and body, and Azure takes that and does its magic with it. That's how it's done. You can write one of these yourself — a preset that will basically set things up for you. If I switch across to the actual code of the framework, rather than just pulling it out of node_modules... here's the Azure one. There are two things: a preset and an entry. The preset basically configures things, and the entry is the actual bit of code that gets bundled. So this Azure preset here — this is the configuration. We pass a path to the entry, which I showed you just a moment ago. In this case, we're saying that we're going to use Node externals, so we don't have to inline everything in the build. We have a custom server directory because Azure expects it. All of this is typed, by the way, so you can see what properties you can pass. And then you can have hooks that hook into different parts of the Nitro build process, so you can actually interact with things before or after the build, modifying the configuration. And then we basically have to write a route JSON file and various other things — lots of JSON files that Azure expects. We have to create those in the right place and make sure they're accurate and point to the right things. So it's all very specific to Azure, and the same would be true if you built your own integration with any platform you want: the complication is only in how complicated the platform is. Basically, you follow the docs, you ensure that you create the things you need to create, and you can then just publish that as a standalone preset if you want.
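The Azure response convention described above, assigning to `context.res` rather than returning a value, can be sketched as follows. The shapes are deliberately simplified for illustration and are not Azure's full typings.

```typescript
// Simplified view of the Azure Functions invocation context: the
// handler signals its response by assigning to context.res.
interface AzureContext {
  res?: { status: number; headers: Record<string, string>; body: string };
}

// A real entry point would first render the route (via the local
// call) and then copy the rendered result onto the context.
function handle(context: AzureContext, html: string): void {
  context.res = {
    status: 200,
    headers: { "content-type": "text/html" },
    body: html,
  };
}
```

This is why the Azure entry differs structurally from the Netlify one: same render pipeline in the middle, different "hand the result back" convention on the way out.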
And then it has an entry point, which is just the same thing I showed you a moment ago, but with some more comments and types. So feel free to ask more questions if there are things that would be useful. If we were to build this site for, say, Azure, then you have to either pass the preset as an environment variable or pass it in here, because otherwise it's just gonna produce the normal Node build. I've seen issues from people just building Nitro and then expecting it to work in different places. It won't, because it hasn't built the connectors for the target environment it needs to go into. So here I've set the preset. In most cases, by the way, you don't need to do this — we try to detect the environment. Nitro has this detect-target function: if there's no explicit preset and no environment variable, we run it, and it can detect whether we're on Netlify, or Vercel, or Azure. So you don't need to set those; that's automatic. If it's in that environment, it will know. And if we've got ways of identifying other environments, we can add them to the zero-config detection, because who doesn't like zero config? But in this case, we're just gonna explicitly say we're building for Azure, because we want to test it in an Azure kind of way. It's done that. So our output is a little different here. We've got a host.json file, which seems to be required. There's a function definition where we say what the entry point is — a function called `handle`, which is the exported `handle`. We've got the triggers. Basically, we just configure it for Azure, but it's the same thing. It's still got the index.mjs, it's just exporting a handle, and the rest is just the plumbing.
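The zero-config detection described above can be sketched as a function that checks the well-known environment variables each provider sets. The variable names here are the commonly documented ones, not necessarily Nitro's exact list, and the function name is illustrative.

```typescript
// Pick a deployment preset from the environment when none was given
// explicitly; returning undefined falls back to the default "server"
// preset.
function detectTarget(
  env: Record<string, string | undefined>
): string | undefined {
  if (env.NETLIFY) return "netlify";
  if (env.VERCEL) return "vercel";
  if (env.AZURE_FUNCTIONS_ENVIRONMENT) return "azure";
  return undefined;
}

console.log(detectTarget(process.env));
```

An explicit preset, whether in the config or via an environment variable, would take priority over this detection, which is exactly why the Azure build in the talk sets it by hand for local testing.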
We've got a command here that we can run which will just run the Azure Static Web Apps CLI. As much as possible, we want to provide those for people, so that you know exactly what you need to run to test things: you can test Firebase locally, you can test Azure. I think for Vercel and Netlify we don't provide a command. But there we go. The Azure Static Web Apps CLI, by the way, didn't exist when we first built this functionality, so we went back a few days ago and added documentation for it. It's going to serve the static content from that public folder, and the server alongside it, which is what we want. And if we then open that 7071 — oh no, sorry, that's the internal URL; the one we meant to access is 4280. So that's just rendered the page for us, exactly as it would in an Azure context. It's hit 4280, realized it needs to render that with the server function, executed the function, and then we see all the timings we're expecting. It's done that and returned a response. So that's an example of how you would do that. Dave has a question.
19. NUXT Magic and Nitro Features
NUXT is a powerful framework, but the magic it employs can be frustrating when things don't work. The team is working on improving documentation and providing more hand-holding. The goal is to make things easier and simpler, with a focus on enhancing the CLI. The current beta version is not recommended for production sites, as it is still undergoing changes and bug fixes. It is advised to wait for a more stable release. There is a Discord server and NUXT JS Twitter handle for support and updates. Nitro, a part of NUXT, offers features like isomorphic fetch, typed API routes, and file system access through assets. TypeScript can be used in NUXT, but type checking needs to be done separately for performance reasons.
Hi Daniel — I'm sure you'll remember from last time, you actually helped me out on this — but one of the things I find with Nuxt is, as somebody mentioned in the comments, "oh great, more magic". The magic I've always found quite frustrating in Nuxt, because when it works, it's brilliant; when it doesn't, it's really difficult to know where to start. Last time we launched on Azure — I launched on Vercel as well, which I found a bit easier — but Azure, for example, has got this massive control panel, and it's actually really difficult to know where to go. So I just wondered what the team's plans are to maybe give people a bit more hand-holding on this next version? — So, basically we've got documentation, which is a work in progress because it's a beta, but if you identify issues with it, I will fix them. We've got our docs, and we've got a deployment section here. So if you're launching on Azure, hopefully this pretty fully answers your questions. If it doesn't, then it's definitely worth raising an issue, because even if it's not a bug in the framework, it's an issue with the documentation. So, in terms of hand-holding, I guess the documentation is basically where we are right now. In terms of, say, a `nuxi deploy` command — where you just run `nuxi deploy` and it asks whatever it needs to know and does whatever it needs to do — that's the kind of thing we'd love to build. So I would just wait and see what comes. But our objective is to make things easier and simpler, and we particularly want to make the CLI really nice, so you can do cool things with it, like adding functionality to your app by pulling in a module and having it auto-configured for you, rather than expecting people to add it, then find the config they need to add, and do that by hand.
And already, hopefully, you can see with some of the type stuff that things should increasingly be there for you automatically. You shouldn't need to know all the functionality, and you shouldn't need to open the docs — you should be able to be guided somewhat as you type. But if you're not encountering that, bug me about it, because it will be something I'll want to change. — Yeah, okay, cool, thanks. I guess it's probably gonna be a while before I deploy something on the next version. But yeah, thanks. — It is worth saying, by the way — exactly, in fact, as Andrea has just said — we are totally in beta right now. So things are changing: every commit is a new release; we're fixing bugs, we're causing new ones, and we're fixing those too. The trend is upward, but it's definitely not something to build a production site on right now, unless you have the time to explore and enjoy it and are happy to fix things when they don't work. That's not gonna go on for that long — I think we're moving pretty quickly — but definitely wait for it to be a little bit more stable before you build something for production on it. Questions?
Okay. Lots of stuff to look at. I really hope, by the way, that if you've got any questions or want to explore and dive in a bit deeper, you please, please do. We've got a Discord server — come join it; I'll send a link. And you can ask questions and bug me with push notifications. Obviously, the Nuxt.js Twitter handle is a great way of finding out about releases and new features, but equally, just stay on top of the Nuxt framework repository — that's where everything gets released. Yeah. Teefo asks: how much magic is there in Nitro? The answer is: there's a lot of magic in Nitro. I don't know exactly what you mean by it, so I don't know how I can answer that question most helpfully. But the things I would point to as being magical in Nitro are the isomorphic fetch, because that enables you to make the local requests; the typed API routes, which basically enable you to know the kinds of responses that come back; and things like assets, config, and storage. I've mentioned storage and config; assets is another one. It effectively lets you access the file system, whatever that looks like — in a Cloudflare Worker, accessing the file system is a bit different because it's key-value storage. But in your API endpoint, you can import stuff from assets, and you've got similar kinds of functions: get keys, read asset, stat asset. So it's basically file-system access to your assets. Let me just stick that in the DB middleware and show you what it does. And we're going to... that would be one of those bugs we were talking about. But basically, assets give you direct access to the file system. I should probably document it, and make sure it's working. Are there other questions? So — you can write TypeScript.
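The "typed API routes" magic mentioned above can be demonstrated with a self-contained stand-in: a route registry maps paths to handlers, so a typed fetch helper can infer each response type from the handler's return type. The names here are illustrative; in Nuxt/Nitro this inference is wired up for you behind `$fetch`.

```typescript
// A tiny route registry: each key is a path, each value a handler.
const routes = {
  "/api/user": () => ({ name: "Ada", admin: true }),
  "/api/health": () => ({ ok: true }),
};

type Routes = typeof routes;

// A fetch helper whose return type is inferred per-path from the
// registry. A real implementation would go over the network (or use
// the local call); here we invoke the handler directly.
function typedFetch<P extends keyof Routes>(path: P): ReturnType<Routes[P]> {
  return routes[path]() as unknown as ReturnType<Routes[P]>;
}

// typedFetch("/api/user").name is typed as string, and a typo in the
// path is a compile-time error rather than a runtime 404.
console.log(typedFetch("/api/user"));
```

This is the same idea that makes the editor autocomplete response properties as you type, without opening the docs.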
Basically, TypeScript as you write it won't be type-checked; you have to do that yourself as a separate step. The reason for that is largely performance, because type checking is phenomenally resource-intensive. We used to encounter issues in Nuxt 2 where people's dev servers would basically be frozen, and it was the TypeScript checker that was freezing them, rather than even webpack 4. So what you want to do is run the `npx vue-tsc --noEmit` command. That's going to type-check your project for you.
20. TypeScript and Type Checking
You can write TypeScript for both your server and app files. Type checking runs as a separate step, for example at build time, for performance reasons; at runtime the types are simply stripped.
I would recommend doing that as part of your build step, for production, rather than running it as you go and relying on IDE autocompletion — though we're open to improving that as well, maybe running it in a separate process or figuring out how to make it more performant. But you can just write TypeScript. So your server entry point — you'll see that my server files are TypeScript — and the same is true of your app. All your Vue stuff and everything imported into your app can be TypeScript as well. The types just get stripped; it's the fastest way of running exactly what you're writing.