Instant websites using Fresh and Deno on the Edge


Any interaction faster than 100ms is imperceptible to users - what if all website interactions, including loading, were 100ms or less? Let's explore strategies and how Fresh & Deno can help.

33 min
24 Oct, 2022

Video Summary and Transcription

The Talk discusses the concept of instant websites, aiming to minimize the time between user interaction and unblocking the user. It emphasizes prioritizing the loading of primary content and delaying the loading of secondary content to improve page loading times. Server-side rendering is highlighted as a faster alternative to client-side rendering, reducing network round trips and improving rendering times. The concept of island architecture is introduced, where only the JavaScript needed for interactive components is shipped to the client. The Fresh web framework is presented as a speed-focused framework for Deno, offering automatic CSS inlining and using Preact for client-side interactivity.


1. Introduction to Instant Websites

Short description:

Welcome to my talk about writing instant websites with Fresh and Deno. I am Luca, a software engineer at the Deno company. I work on the Fresh project and Deno Deploy, and I am involved in standardizing web technologies at TC39, WHATWG, and W3C. I am also a co-chair of the WinterCG community group, which focuses on standardizing behavior across JavaScript server-side runtimes.

Hey everyone, welcome to my talk about writing instant websites with Fresh and Deno. My name is Luca. I'm a software engineer at the Deno company, where I work on the Fresh project and on Deno Deploy, which is our serverless edge compute runtime. We'll talk about that in a second. In addition to that, I'm also a delegate at TC39 working on web standards. TC39, specifically, is the standards body that standardizes JavaScript, but I also work with folks at WHATWG and W3C to standardize things like the Fetch spec, Web Crypto, and other related specifications. At W3C I'm also a co-chair of the WinterCG community group, which focuses on standardizing behavior across JavaScript server-side runtimes, things like Node.js, Deno, or Cloudflare Workers. You want to standardize the behavior between those so you can write code portably and have it work across a bunch of different platforms. That's who I am.

2. Instant Websites and User Interaction

Short description:

The core idea of instant websites is to minimize the time between a user interacting and being unblocked by that interaction. When a user navigates to a page, they expect the content to load quickly. To make a page feel instant, we need to minimize the time between the user interacting and being able to see and act on the primary content they care about, such as a recipe.

Let's get to the actual meat and potatoes of the talk. Before we can do that, we need to figure out what I actually mean by the title of the talk, Instant Websites with Fresh and Deno. What are instant websites? How do I make a website feel instant? The core idea is that we want to minimize the time between a user interacting and the user being unblocked by that interaction. What does that mean? If a user does some interaction, they expect something to happen. For example, when they navigate to your page, they expect certain content to load because they're interested in that content. They want to read or view that content. If we want to make a page feel instant, we want to minimize the time between the user interacting and them being unblocked. To give you an example of this: the user wants to visit a recipe page which shows the recipe for a certain dish. They look up this dish on Google and click on a link. That's the interaction. How long does it take for them to actually be able to look at the recipe and start to understand what's going on? The point at which the user is unblocked is the point at which they can see the recipe and start to act on it. So what's the time we need to minimize? It's the time between them clicking the link and the primary content they care about, the recipe, being loaded.

3. Achieving Instant Interactions

Short description:

To achieve instant interactions, we need to understand how fast they need to be perceived by users. Interactions faster than 100 milliseconds are generally imperceptible, so we aim for a maximum of 100 milliseconds. Visual changes can provide more leeway, as users expect more time for significant changes. However, it's crucial to provide feedback to the user quickly, even if an interaction takes longer. Show a loading indicator or spinner to indicate that something is happening.

How do we actually achieve that? Well, to do that we need to figure out how fast instant actually needs to be. How fast do these interactions need to happen for the user to think that they're instant? Really, we want to make them happen as fast as possible, because there's no reason to make them slower. The user is not going to think, hey, this page is too fast. That doesn't make any sense. The faster, the better, always. But there are practical limits past which the user can no longer tell the difference between fast and even faster. As a rule of thumb, interactions faster than 100 milliseconds are generally imperceptible. That means if you have an interaction which takes 60 milliseconds versus 100 milliseconds, they're going to feel the same to the user. They're going to feel really fast. So we're going to aim for interactions that take at most around 100 milliseconds. That's really difficult, though. So there's a little bit of leeway we have. For example, users are often much more lenient in their perception of instant if the visual change that they see is big. This comes from reality. If a user looks at something in real life and there's a big project going on, like a house being built from the ground up where there was previously nothing, that's a big change. Users expect this to take more time. They translate this into software as well. If there are big visual changes, there's usually more leeway for a user to still perceive something as instant. You obviously want to take this with a grain of salt; you still want to be as fast as possible. But the larger your visual change, the more leeway you have. Also really important is that you give feedback to the user quickly that something is happening at all. If there is an interaction which is going to take longer and which you cannot make as fast as 100 milliseconds, tell the user about it. Show the user that something is happening. Give them something to look at while it is happening. If they click on a button and you know that action is not going to complete within the next 100 milliseconds, or even the next second, show them. Put a loading indicator on it. Put a spinner on it.
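As a small illustration of that feedback pattern, here is a minimal Preact sketch. The component name and the /api/checkout endpoint are hypothetical, and it assumes a project with the automatic JSX runtime configured, as in Fresh:

```tsx
// Hypothetical Preact island: give the user immediate feedback while a slow action runs.
import { useState } from "preact/hooks";

export default function CheckoutButton() {
  const [pending, setPending] = useState(false);

  async function onClick() {
    setPending(true); // flip to a loading state right away, well under 100ms
    try {
      await fetch("/api/checkout", { method: "POST" }); // hypothetical slow endpoint
    } finally {
      setPending(false);
    }
  }

  return (
    <button onClick={onClick} disabled={pending}>
      {pending ? "Processing…" : "Checkout"}
    </button>
  );
}
```

The important part is that the visible state changes immediately on click, long before the slow work finishes.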

4. Making Pages Load Faster

Short description:

Put a timer on it. Prioritize the things that the user actually cares about. Figure out what is actually blocking the user. Load the primary content quickly.

Put a timer on it. Something to indicate to the user that yes, progress is being made: here is approximately how long it is going to take, this is how far we are. Give them something to look at. It is going to make your page feel much faster than a button which you click and then nothing happens, still nothing happens, and at some point something finally pops in. Not a great user experience. But how do we actually do this?

This seems like a very difficult problem to solve. For example, let's focus specifically on page loads for now. How do you make your page load happen so quickly that the user does not notice it happening? Well, we really have to take a lot of variables into consideration, like how fast is the user's device, how fast is the network, what's their round trip time to the server? But there are some general rules we can follow which apply to whatever network and whatever device the user is on, and which will always make the site feel faster, whether they're on a really high-end device or a really low-end device.

The main idea that we always want to have in the back of our mind is that we want to prioritize the things that the user actually cares about. We want to prioritize the things that the user is actually waiting for. We want to prioritize the things that are blocking the user, because the primary goal in making a page feel instant is to minimize the time between interaction and unblocking the user. So if we prioritize unblocking the user over other auxiliary functions of a page, we may be able to improve performance. Let me demonstrate what I mean by that.

So the first thing in figuring out how to make stuff faster is to figure out what we even want to make faster: figure out what is actually blocking the user. For most pages this is relatively doable, because you can split most pages up into different types of content: primary content, secondary content, or even tertiary content. What are these different types of content? Primary content is the content the user is actually there for, the thing they came to your page to see: on a news page the article itself, on a cooking site the recipe, on an e-commerce store the product listing. These are the things the user needs to see to be unblocked.

The user does not come to an article page on a news site to view related articles, to view ads, to view the navigation banner, or to view some sponsored post. No. They came there to read the actual article. So the important thing that's going to unblock the user is loading the things the user actually cares about: the primary content. You can still load secondary and tertiary content, but don't let loading that content slow down the primary content.

5. Optimizing Page Loading

Short description:

There are multiple ways to optimize the loading time of a webpage. By prioritizing the loading of primary content, such as the main hero section, and delaying the loading of secondary content, like a search button, we can significantly improve page loading times. For example, on a DSL connection, this optimization can save up to 0.3 seconds, resulting in a 15 to 20% improvement in loading times. This is especially important for mobile devices with high latency. By optimizing for worst-case scenarios, we can provide a faster user experience.

There are multiple ways you can do this. Here are some real-world examples from real websites. The first one is the Deno homepage itself, the landing page for the Deno project. You can split it into primary and secondary content. The primary content is the hero section, with a title, a subtitle, installation instructions, and the current version, and then a bunch of talking points underneath which explain what Deno is and why you'd want to use it. And then there's secondary content, like the search button in the navigation bar. The search button is useful, but it's not the reason most people go to the Deno homepage. People go to the Deno homepage to learn more about Deno, not to immediately search for something. So the search is secondary content. The rest of the page, the things the user actually wants to see, is primary content. So what we can do is make sure the primary content loads first, and only after the primary content is loaded do we start loading the secondary content, like the search button. And you can actually see this in action. I have a little gif here showing the loading of the Deno homepage, and if you look at the red arrow, you can see that the search button actually flashes in slightly after the rest of the page is loaded. To put some real numbers behind this: the actual page loads at 1.3 seconds, but the search button only loads at 1.6 seconds, so there's a 0.3-second window in which the page is loaded but the search button is not yet. This is obviously greatly exaggerated in slowness, because what I'm showing you is a DSL connection at 1.5 megabits per second, and most desktop users in North America are not on connections that slow anymore. But it really illustrates the problem you're trying to solve, because a lot of people actually still are on connections like this, not on their desktop but on their mobile devices. It's very common for mobile devices to have very high latency; even if the maximum throughput is relatively high, latency can still be quite high a lot of the time. So you want to optimize for the worst case, and that's automatically going to make the best case better as well. Through this optimization, where we load the primary content first and the secondary content, the search button, later, we actually save about 0.3 seconds on this DSL connection. That's a 15 to 20% improvement in page loading time, just by lazily loading this search button after the primary content. This unblocks the user faster.
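Here is one way such a deferral could look in a Preact setup, as a sketch rather than the actual deno.land implementation: the wrapper renders a plain placeholder immediately and only pulls in the real search widget's code after first paint. The DeferredSearch and SearchButton names are hypothetical.

```tsx
// Sketch: defer the secondary search widget so it never blocks primary content.
import { useEffect, useState } from "preact/hooks";
import type { ComponentType } from "preact";

export default function DeferredSearch() {
  const [Search, setSearch] = useState<ComponentType | null>(null);

  useEffect(() => {
    // The dynamic import runs after the component has mounted (i.e. after first paint),
    // so the search code downloads without delaying the rest of the page.
    import("./SearchButton.tsx").then((mod) => setSearch(() => mod.default));
  }, []);

  // Plain, non-interactive placeholder until the secondary code arrives.
  return Search ? <Search /> : <button disabled>Search</button>;
}
```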

6. Example of Secondary Content

Short description:

I have a different example of this where it's less obvious what's happening. The fresh home page for the fresh project has primary and secondary content. The secondary content is an animation that is not critical for the user to view the page. It's loaded lazily after the initial page load.

I have a different example of this where it's less obvious what's happening, which can be good, right? You don't want to make it super obvious that your page is loading slowly, in pieces. If you can hide that, that's cool.

So this is the Fresh homepage, for the Fresh project, and it also has primary and secondary content. But if you look closely, there's nothing that flashes in on this page after the initial load. The initial load is design-wise complete. It loads all the components of the page, and if you didn't know any better, you wouldn't actually notice what the secondary content here is.

So let me tell you: the secondary content here is actually an animation. It's not a specific component on the page; it's an animation which is a nice thing to have, but it's not critical for the user to view this page. So it's not something we block the page rendering on, because it's not something that actually blocks the user. When the user goes to this page, they want to view information about the Fresh project. The animation is nice to have, but it's not something that's actually blocking them. If you actually want to see the animation, it's right up here in the hero banner of the page: the Fresh logo drops down a little drip that splashes on the rest of the page. It's a really nice animation, but it's not critical for the page to load, so we load it lazily after the initial page has loaded.
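One way to do that kind of deferral, sketched here rather than taken from the actual fresh.deno.dev code (the selector, class name, and startDripAnimation function are hypothetical), is to wait for the load event and an idle moment before touching the decorative bits:

```ts
// Sketch: kick off a purely decorative animation only after the page has loaded
// and the browser is idle, so it never competes with primary content.
function startDripAnimation(): void {
  document.querySelector(".hero-logo")?.classList.add("animate-drip");
}

addEventListener("load", () => {
  if ("requestIdleCallback" in window) {
    requestIdleCallback(() => startDripAnimation());
  } else {
    setTimeout(startDripAnimation, 200); // fallback where requestIdleCallback is unavailable
  }
});
```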

7. Benefits of Server-Side Rendering

Short description:

Server-side rendering is often much faster than client-side rendering because it minimizes reliance on network round trips. Client-side rendering typically requires at least three network round trips before rendering, while server-side rendering reduces the number of sub-requests. By fetching data during the initial HTML request, server-side rendering avoids additional network round trips from the client to an API. This is beneficial because servers usually have better latency to other servers, resulting in reduced latency overall.

Again, this is very slow because it's on a DSL connection, just to illustrate the point. The next thing you want to do is you probably want to be server-side rendering. I'm at a React conference here, and you're all probably very familiar with React, and React is very client-rendering heavy, but nonetheless you want to be server-side rendering, because server-side rendering can be, and often is, much faster than client-side rendering. Let me explain why.

One of the biggest cost factors for your application's loading speed is network round trips. Every time you make a network request, there is a certain fixed amount of time that it just has to take, due to the network and the switching equipment between the client and the server. There's a maximum speed at which data can travel between your client and the server. This is related to your ISP, your device, your connection type, a bunch of different things, but it's something you don't have influence over. It's essentially fixed; you cannot change it very much. So what you want to do is minimize your reliance on this fixed time that every request takes.

One way to do this is to make sure you don't have the scenario where you load one request, and once that's done loading, it starts two or three more requests, and once those are done loading, they start even more requests, and only after all those requests are done do you actually render the page. Ideally, you want to do all the requests at once, in parallel, because then the latency doesn't matter as much. The thing you're primarily waiting on is no longer the network round trips, because you only pay for one of them. If you're doing everything in parallel, the only thing you care about now is how big the assets are.
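To make the round-trip math concrete, here is a small sketch (the URLs and data shapes are hypothetical) contrasting a request waterfall with parallel fetches:

```ts
// Waterfall: each await adds a full network round trip before the next request starts.
async function loadSequentially() {
  const user = await (await fetch("/api/user")).json();
  const cart = await (await fetch(`/api/cart?user=${user.id}`)).json();
  const recs = await (await fetch(`/api/recommendations?user=${user.id}`)).json();
  return { user, cart, recs };
}

// Parallel: all requests go out at once, so the total latency is roughly one
// round trip plus the slowest transfer, not three round trips stacked up.
async function loadInParallel(userId: string) {
  const [user, cart, recs] = await Promise.all([
    fetch(`/api/user/${userId}`).then((r) => r.json()),
    fetch(`/api/cart?user=${userId}`).then((r) => r.json()),
    fetch(`/api/recommendations?user=${userId}`).then((r) => r.json()),
  ]);
  return { user, cart, recs };
}
```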

The problem with client-side rendering is that very often you're in a scenario where you have at least three network round trips before you render anything. That is because when you do client-side rendering, you often ship an empty HTML document to the client in the initial render, which then has a few sub-resources, usually some CSS and, with client-side rendering, always some JavaScript. You have to download that JavaScript: that's one more network round trip. You then have to execute that JavaScript, and usually it makes an API call or some other call to fetch data from the server: that's another network round trip. Only after that third round trip has finished can you actually render the page. Whereas if you do server-side rendering, you can get away with far fewer sub-resources and sub-requests. With server-side rendering, you can do data fetching during the initial request for the HTML, which means you don't need that extra network round trip from the client to an API, for example. You can make that request from the server, which is very beneficial because servers usually have much better latency to other servers than clients have to servers. Servers are connected to high-performance networks, they peer directly with other data centers, and maybe your database is even running in the same data center that you're server-side rendering from, so you can have much reduced latency here.
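Here is a minimal sketch of that idea in plain Deno, with no framework and a hypothetical recipes API: the data is fetched server-to-server, and the browser gets renderable HTML back after a single round trip.

```ts
// Sketch: server-side render with server-side data fetching, one response to the client.
Deno.serve(async (_req: Request) => {
  // Server-to-server fetch: typically much lower latency than a client-to-API call.
  const recipe = await (await fetch("https://api.example.com/recipes/42")).json();

  const html = `<!DOCTYPE html>
<html>
  <body>
    <h1>${recipe.title}</h1>
    <p>${recipe.description}</p>
  </body>
</html>`;

  // The browser receives meaningful content after one round trip; no client-side
  // data fetch has to complete before first paint.
  return new Response(html, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
});
```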

8. The Benefits of SSR for Web Performance

Short description:

Once the HTML is done downloading, you may only need a single other sub request, such as fetching CSS, which cuts out an entire network roundtrip. This can result in a 30% improvement in rendering times and is beneficial for the user's battery on mobile devices.

Which means that once the HTML is done downloading, you may only need a single other sub-request, for example something like fetching CSS. You've cut out an entire network round trip here. If you have a 50 millisecond network round trip, which is pretty fast, that means you've now cut your rendering time by at least 50 milliseconds. If the entire thing takes 150 milliseconds and you cut it down by 50, that's a 30% improvement. That's very good. Additionally, client-side rendering has the downside of using a lot of CPU cycles on the client's device, which, especially on mobile devices that are battery-powered, can result in additional battery drain. That's not something we want. If we do server-side rendering, there's less work to do on the client, which is good for the user's battery, and they're going to be much happier.

9. Optimizing Sub Requests for Rendering

Short description:

You can inline your CSS into the HTML to prevent additional network roundtrips and improve the initial rendering time. By minimizing sub requests, you can significantly speed up page rendering.

And you can actually take this even a step further. This is diving a little bit deeper into the sub-requests that are required for rendering. This is a network timeline of the Fresh homepage that I showed you earlier. What you can see here is that the Fresh homepage can render and show meaningful content to the user as soon as the first request is done. It requires no sub-requests to be able to show content to the user. And you might ask, how does this work? Don't you always need some CSS, for example? Well, you can do things like inlining your CSS into the HTML so that your initial render doesn't require an additional network round trip. This can be very beneficial even on very fast connections. For example, in this case the HTML is done downloading at 0.45 seconds, and the yellow-green line here at 0.52 seconds is when the page actually renders for the first time, so rendering happens within about 55 milliseconds, which is very fast. If we had had to wait for an additional sub-request, another network round trip, to complete, the first one completes at 0.65 seconds. So the page rendering would have slowed down by at least 100 to 150 milliseconds, which in this example is roughly a 30 percent slowdown. Very significant. So you really want to minimize your sub-requests. If you can at all, try to inline your CSS into your HTML to avoid an additional network round trip for downloading the CSS. This can be very beneficial.
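As a rough sketch of what that looks like when done by hand (Fresh can do this automatically; the criticalCss string and renderPage helper here are hypothetical):

```ts
// Sketch: inline critical CSS so the first render needs no extra round trip.
const criticalCss = `
  body { font-family: sans-serif; margin: 0; }
  .hero { padding: 4rem 2rem; }
`;

function renderPage(body: string): string {
  return `<!DOCTYPE html>
<html>
  <head>
    <!-- inlined styles: no separate stylesheet request before first paint -->
    <style>${criticalCss}</style>
  </head>
  <body>${body}</body>
</html>`;
}
```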

10. Client-Side Rendering and Selective JavaScript

Short description:

We sometimes need client-side rendering for interactive features. Existing frameworks like Next.js and Remix often client-side render the entire page, even for static content that hasn't changed since server-side rendering.

And finally, we sometimes need client-side rendering to do certain interactive things. If we do need client-side rendering and we need to ship some JavaScript to the client for it, we want to make sure that this JavaScript only client-side renders the piece of the page that actually requires being interactive. A lot of existing frameworks like Next.js and Remix will client-side render not just the components that are actually interactive, but the entire page. They server-side render the page on the server the first time, then bundle up the entire rendering code that was required to server-side render it, ship all of that to the client, and redo all of that rendering on the client. This is very painful, because a lot of the content you're re-rendering on the client really hasn't changed since the server-side render.

11. Shipping JavaScript Only for Interactive Components

Short description:

To achieve efficient client-side rendering, it is important to only ship JavaScript to the client for components that are interactive. For example, on the merch store landing page, only the JavaScript required for the cart button is sent to the client, rather than the rendering code for the entire page. This approach minimizes the weight and loading time of the application.

As an example, think of a blog with an article where the article is written in Markdown. To be able to render that, you need to parse the Markdown and turn it into HTML. You can do all of that on the server, but if you also need to do it on the client, you need to ship the entire Markdown parser, the Markdown itself, and the Markdown-to-HTML converter to the client. This can add significant weight to your application and slow down loading significantly, for no real benefit, because the Markdown hasn't changed since the server-side render, right? There's no benefit to doing that again on the client.
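A minimal sketch of keeping that work on the server; markdownToHtml here stands in for whatever real Markdown library you would use, it is not a specific API:

```ts
// Sketch: convert Markdown to HTML on the server and ship only the resulting HTML.
type MarkdownRenderer = (md: string) => string;

function renderArticle(markdown: string, markdownToHtml: MarkdownRenderer): string {
  const articleHtml = markdownToHtml(markdown); // heavy parsing stays on the server
  return `<!DOCTYPE html>
<html>
  <body>
    <!-- static HTML only; neither the parser nor the raw Markdown reaches the client -->
    <article>${articleHtml}</article>
  </body>
</html>`;
}
```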

So what you want to do is make sure you client-side render only the components that are actually interactive. You want to render very selectively. A lot of your page is static. You do not need to re-render this static content on the client. You only need to re-render things that are actually interactive. So make sure to only ship JavaScript to the client for components that are actually interactive.

I'll give you a quick example of this. This is the merch store. You know, the merch store. You can find it at merch.deno.com. And this is the landing page. The landing page has a bunch of links to different products that are available in the merch store, and those are just regular <a> tags. They don't need any JavaScript. When you click on them, they navigate to a different page. But there is actually one bit of JavaScript that is required on this page, which is to power the cart button at the top. When you click it, this cart button opens a dialog from the side, which you can use to look at your shopping cart, remove items from it, press checkout, stuff like that. This requires some JavaScript to function. In a lot of traditional frameworks, that would mean we now need to ship the rendering code for the entire page, including the product previews, the header, the little icon in the top left, the footer, all of that, to the client, and re-render it on the client. But what we do instead is only ship JavaScript to the client for the things that are actually interactive. So we only ship the JavaScript required to render that little button and to perform the right actions when you click on it, the event listener that gets invoked when you click on that button. I can give you some other examples from this store. This is the product page, where you can view information about a given product. It has a cart button again, which requires some JavaScript, but there are other pieces of this page which also require JavaScript.
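As a hedged sketch of what such a cart-button island might look like (this is illustrative, not the actual merch store code), the only JavaScript shipped for the landing page is the state and click handler for the drawer:

```tsx
// Hypothetical cart-button island: the rest of the page stays static HTML.
import { useState } from "preact/hooks";

export default function CartButton() {
  const [open, setOpen] = useState(false);

  return (
    <>
      <button onClick={() => setOpen(true)}>Cart</button>
      {open && (
        <aside class="cart-drawer">
          {/* cart contents, remove buttons, and a checkout link would live here */}
          <button onClick={() => setOpen(false)}>Close</button>
        </aside>
      )}
    </>
  );
}
```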

12. Island Architecture and Selective Hydration

Short description:

The image viewer, size selector, and add-to-cart button all require JavaScript to work. This concept is known as islands architecture, where only the JavaScript needed for interactive components is shipped to the client. The idea is to render pages on the server and selectively hydrate parts of them with client-side JavaScript. Static content remains untouched. To learn more, read the blog post 'Islands Architecture' by Jason Miller, the creator of Preact.

For example, the image viewer has left and right buttons that let you view different images of the product. That requires some JavaScript to power it, and there's also a size selector and an add-to-cart button which require some JavaScript too. All of these things are shipped to the client individually, and the JavaScript is constrained to only the elements that actually need to be interactive. You could, for example, not do any of this and ship the renderer for the entire page to the client. But again, this is slow. You would now need to ship something to the client that can render the product description, which is probably written in Markdown, and render it there. That's not something you want. You only want to ship the components, and the JavaScript, for the components that are actually interactive. This whole idea of shipping only the JavaScript that you need to make some components interactive is called islands architecture. The idea behind it is that you render your pages on the server and then hydrate specific parts of that markup with client-side JavaScript. You don't touch the static content of the page. Page content that is completely static is never touched by client-side JavaScript. You only hydrate the components that actually need interactivity. If you want to learn more about islands architecture, after this talk you can check out the blog post Islands Architecture by Jason Miller. Jason Miller is the person that originally created Preact. It's a great blog post. It has some great diagrams. I really urge you all to read it.

13. Introduction to Fresh Web Framework

Short description:

Fresh is a new web framework built for Deno that focuses on speed and ease of use. It renders pages on the server, with no client-side rendering or JavaScript shipped by default. Client-side interactivity is achieved by selectively shipping components as islands. Fresh also provides automatic CSS inlining and uses Preact, a faster and more customizable alternative to React. It offers familiar features like file-system routing and uses Deno, making it feel familiar to developers with browser experience.

So, that was the first part of the talk title, Instant Websites, but the talk title in its entirety was Instant Websites with Fresh and Deno. So let's get to Fresh. Fresh is a new web framework we built for Deno, with the idea in mind that everything should be really fast and that you should be able to build instant websites really easily. It takes a lot of the ideas I previously talked about in this talk and builds them right into the framework, making them really easy for you to use.

For example, Fresh renders all of your pages just in time on the server. There's no client-side rendering by default, which also means there's no JavaScript shipped to the client by default. Instead, if you want some client-side interactivity, you can have certain components be islands, which are shipped to the client. But you have to be very selective about this: you don't send the entire page render to the client; you only ever send specific components to the client to be rendered there. And with Fresh we try to make it very easy for you to do the right thing. For example, I told you inlining CSS can be a great performance win. We inline your CSS automatically if you use our Tailwind plugin, for example. So we try to make it really easy to build fast websites and really easy to stay on the fast path.

So what is it actually like building sites with Fresh? Well, if you've used Next.js or Remix before, it's going to feel very familiar, because pages and islands are written in JSX. They're going to look a lot like the React components you've built for your Next.js or Remix sites. The difference is that they're actually not rendered by React; they're rendered by Preact. Preact is an alternative framework to React. It's much smaller, much faster, and really much more customizable than React. It's a really great alternative; you should all check it out at preactjs.com. There are also other ways in which Fresh feels a lot like Next.js. For example, routing is done through file-system routing, essentially identical to Next.js, which a lot of people are familiar with, and it's a good way to do routing nowadays. And Fresh uses Deno, right? It's a Deno framework, which means that if you've built anything for the browser in the last ten years, it's going to feel very familiar. If you want to do an HTTP request, use the Fetch API. Requests and responses are the same requests and responses that you have in browsers through the Fetch API or service workers. It's going to feel very familiar.

14. Features of Fresh Web Framework

Short description:

Fresh inherits cool features from Deno like no build step, fast iteration times, and TypeScript support out of the box. It makes it easy to turn static components into interactive islands and handle data fetching for server-side rendering. To get started with Fresh, download the Deno CLI, run the bootstrap command, and visit the Fresh home page for more information.

You have Web Crypto at your disposal if you need to do things like hashing, encryption, or decryption. And some other really cool features of Fresh, which it sort of inherits from Deno, are that there's no build step, which means you have really, really fast iteration times, and you don't need any configuration. It all just works out of the box, no configuration necessary. And TypeScript also works completely out of the box with no configuration necessary, just like it does in Deno.

Let's look at some code real quick, just to illustrate my point here. This is a very simple, basic route, just like you would find in probably most projects. To create a route, you put a TSX or JSX file into the routes folder in your project. In this case, I have a route called routes/about.tsx, which, due to the file-system routing, is available at /about on my web server. This is just a regular JSX component. It returns some HTML markup, which is server-side rendered for each request. And that "for each request" is important, because if you have dynamic routes which contain parameters, for example this route, /greet/[name], you want to be able to change the output of your route depending on the input parameters. In this case, you want to greet the specific user that requested that route.
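The slides themselves aren't reproduced in the transcript, so here is a hedged reconstruction of the two routes being described. The $fresh/server.ts specifier assumes the import map generated by the Fresh project template.

```tsx
// routes/about.tsx — file-system routing serves this at /about.
// A plain component, server-side rendered on every request; no JavaScript is shipped for it.
export default function AboutPage() {
  return (
    <main>
      <h1>About</h1>
      <p>This page is rendered on the server for each request.</p>
    </main>
  );
}
```

```tsx
// routes/greet/[name].tsx — the [name] segment becomes a route parameter.
import { PageProps } from "$fresh/server.ts";

export default function GreetPage(props: PageProps) {
  // The output changes per request depending on who is being greeted.
  return <p>Hello, {props.params.name}!</p>;
}
```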

And I think one of the coolest features of Fresh is how it deals with interactive islands and how easy it makes it to take a static component and turn it into an interactive island. To turn a static component into an interactive island, the only thing you need to do in Fresh is put it into the islands folder: there's a folder called islands, you put a JSX component in there, you import it from your route, use it in your route, and Fresh will do the rest. It'll make sure the component is sent to the client and hydrated there. It'll also make sure that anything which is static, for example this route's static title and paragraph, is not sent to the client as JavaScript; it's only sent to the client as static markup. And data fetching is also something that's very important nowadays, especially if you're doing server-side rendering. This is inspired a lot by Remix. If you've used Remix before, with its data handlers and fetch functions, it'll feel very familiar. You have a function that runs for each request, in which you can do data fetching, and then you pass some data to the page component, or the route component rather, where that data is rendered into some HTML. And you can even pass this data into the props of an island, and it'll be serialized and hydrated on the client automatically.
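Here is a hedged sketch in the shape of the island and handler pattern just described. The endpoint URL and data shape are hypothetical, and the $fresh/server.ts specifier again assumes the project template's import map.

```tsx
// islands/Counter.tsx — interactive, so it ships JavaScript and is hydrated on the client.
import { useState } from "preact/hooks";

export default function Counter(props: { start: number }) {
  const [count, setCount] = useState(props.start);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```

```tsx
// routes/index.tsx — the handler runs on the server for every request.
import { Handlers, PageProps } from "$fresh/server.ts";
import Counter from "../islands/Counter.tsx";

interface Data {
  likes: number;
}

export const handler: Handlers<Data> = {
  async GET(_req, ctx) {
    // Server-side data fetching: the client gets ready-to-render HTML in one round trip.
    const res = await fetch("https://api.example.com/likes"); // hypothetical API
    const { likes } = await res.json();
    return ctx.render({ likes });
  },
};

export default function Home(props: PageProps<Data>) {
  return (
    <main>
      {/* static markup: sent as HTML only, never as JavaScript */}
      <h1>Welcome</h1>
      {/* island: its props are serialized and hydrated on the client automatically */}
      <Counter start={props.data.likes} />
    </main>
  );
}
```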

This was really fast; I'm sorry, we don't have a super large amount of time. If you want to get started with Fresh yourself, download the Deno CLI. You can do that from deno.land. Then you want to run the bootstrap command, deno run -A -r https://fresh.deno.dev, which also happens to be the Fresh home page, so if you want to learn more about Fresh you can go there. Specify a directory to generate a project in, then enter that directory and run deno task start. And then you can view your new website at localhost:8000.

15. Improving Page Performance with Deno Deploy

Short description:

To drastically improve page performance, reduce network roundtrip time by moving the server closer to the user. With Deno Deploy, you can run your code in 34 different regions worldwide, achieving deployment within two to five seconds. Deno Deploy is within 100 milliseconds of most internet users in the developed world. Check it out at deno.com/deploy. Netlify Edge Functions is powered by Deno Deploy.

So that's Fresh. I want to point out one other really cool thing here. We talked a lot about server-side rendering and network round trips today, and one way to drastically improve your page performance is to just reduce the network round-trip time, which sounds really difficult; I said earlier that it's nearly impossible to change. But there's actually one way you can affect it a lot, which is to move your server closer to the user. If your server is in US East 1 and a user is in Tokyo, they need to do a network round trip from Tokyo to US East 1. That's like 500 milliseconds. That is very slow, so you want to avoid it. You want to have a second server in Tokyo. Or what if you don't have just a second server, but 30-plus servers all across the world, really close to all of your users? This is something you can do with Deno Deploy. Deno Deploy is our hosted Deno offering. You give us your Deno code, and we'll run it in 34 different regions across the world. It has a GitHub integration, so you can push something to your GitHub repository and we'll instantly deploy it to all of these 34 regions within two to five seconds. And we're within 100 milliseconds of essentially all internet users in the developed world. You can check this out at deno.com/deploy. And if you want to look at some real-world products built with Deno Deploy, Netlify Edge Functions is powered by Deno Deploy. So if you've ever used Netlify Edge Functions, you've actually already used Deno Deploy.
