The Rise of the Dynamic Edge

Over the last few years, the JS web app community has seen a growing awareness of and focus on performance & scalability. The days when prominent production sites served entirely-blank pages while waiting on monolithic bundles of megabytes of JavaScript are (mostly!) behind us.

A big part of that has been deeper integration with CDNs; after all, round-trip latency is one of the primary determinants of performance for a global audience. But frameworks, and the companies that support them, have different approaches to how they can be used, and the operational complexities their strategies introduce have real consequences.

But what if, instead of a dynamic origin sending instructions to a static CDN, you could run your application directly on the edge? As it turns out, that doesn't just improve performance, it also vastly simplifies our deployment and maintenance lives.


Transcript


Okay. Hello, my name is Glen, and my talk today is The Rise of the Dynamic Edge, or, another way to put it, the past, present, and future of front-end hosting.


About Glen


If you don't know me, my name is Glen Maddern, that's me on Twitter, and that's probably the easiest way to get in touch. I've done a couple of open-source projects in the React space, a couple on styling: CSS Modules and styled-components.

More recently, a couple of years ago, I switched gears and started thinking about production performance and deployment, and started a project called Frontend Application Bundles, or FABs, which is at fab.dev, as well as a product around deployments called linc.sh. Fairly excitingly, last year Linc was acquired by Cloudflare and I joined the Workers team. I've only been there a couple of months, but now I get to approach the same problem from the point of view of an entire global platform, which is pretty exciting.


How CDNs became a part of Frontend Apps


[01:12] So today I wanted to drill into something that I've found really interesting over the last few years of getting into this stuff, which is how we've come to depend on CDNs and how they've become a part of our front-end app workflows.

So just to recap, a traditional CDN architecture has the CDN in between your users and your origin server, your actual host; requests flow through and responses flow back. The CDN will keep copies of those responses depending on some algorithms and some directives, but your origin server is the ground truth.

So why do people use CDNs? Well, they're everywhere, right? This is Cloudflare's network, it's over 200 locations. But it might be a little bit surprising to see just how important that geographical distribution is. Why do they need to be in so many locations? So I wanted to start today's talk by looking over something I'd actually looked at a couple of years ago, which is the impact of latency.

[02:17] This was an experiment I ran for a web series I was doing called Front End Center, where I ran a download speed test from Melbourne, where I was living at the time, against three different locations: Sydney, San Jose, and London. Now, Sydney is only 15 milliseconds away, San Jose is on the other side of the Pacific, and London is 280 milliseconds away at the speed of light, or, now that I live there, a much longer plane ride, let me tell you. So when you have a small file you get total download times pretty much exactly as you'd expect; it's just one single round trip to the server, so the further away the server is, the longer it takes for the file to download.

But what might be surprising is that when you have a fast connection to a local box, and this is between two data centers so there are really no bandwidth constraints here at all, even a 250 kilobyte file still downloads in a fraction of a second. But when you add some latency into this picture, things start to look pretty different: at 200 kilobytes you're now looking at two seconds, in the best-case scenario, to download that file, and if you double the latency the effect is doubled.

Now, this might be surprising, because those servers are only 100 or 200 milliseconds further away and yet the download times are taking 10 times longer, or 30 times longer in some cases. And those steps are actually the latency between hops, so each jump on the graph is 160 milliseconds, and each jump on the red line is 280. This is because of the way TCP, the protocol underneath everything else, works: it starts slow and ramps up as it detects that the network conditions are good enough. This means the first 100 kilobytes cost a lot, and every 100 kilobytes from then on can increasingly cost you performance, much more than you might think, so being local is really important.
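To make that slow-start effect concrete, here's a toy calculation, not the talk's measured data: it assumes a typical initial congestion window of roughly 14 KB that doubles every round trip, so the total time is dominated by round trips rather than bandwidth.

```js
// Toy model of TCP slow start: the server sends ~14 KB in the first
// round trip and doubles its window each round trip after that.
function estimateDownloadMs(sizeKB, rttMs, initialWindowKB = 14) {
  let sentKB = 0;
  let windowKB = initialWindowKB;
  let roundTrips = 0;
  while (sentKB < sizeKB) {
    sentKB += windowKB;   // deliver one window's worth of data
    windowKB *= 2;        // slow start: window doubles per round trip
    roundTrips += 1;
  }
  return roundTrips * rttMs;
}

console.log(estimateDownloadMs(250, 15));  // Sydney-ish RTT:   ~75 ms
console.log(estimateDownloadMs(250, 160)); // San Jose-ish RTT: ~800 ms
console.log(estimateDownloadMs(250, 280)); // London-ish RTT:   ~1400 ms
```

The exact numbers differ from a real network, but the shape is the point: the same 250 KB file takes a fixed number of round trips, so every extra millisecond of latency is multiplied several times over.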

This was actually in a video called Why Latency Matters, which was episode 10 of Front End Center; it's also out on YouTube. If that information is new to you, I encourage you to check it out, as it goes into the TCP stuff as well.

[04:31] But for now I'm going to change gears slightly and talk about how this is relevant to us today. Because if you're a JavaScript application developer like myself, then when Webpack came on the scene you probably noticed that it was automatically generating these URL schemes for you. Now, Webpack wasn't the first one to do it, I was doing it in Gulp before then, but Webpack certainly made it the default, and everything since Webpack has used it too. This is really important because it puts a hash of the content inside the URL, meaning the content can't change unless the URL does; that's perfect for a CDN. So setting this cache-control immutable header means that everybody knows that once they've seen that file once, they never have to check the origin server again.
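As a sketch, this is roughly what that fingerprinting looks like in a Webpack config; the "[contenthash]" placeholder is the relevant piece, and exact options vary by Webpack version.

```js
// webpack.config.js (sketch): fingerprinted filenames mean the URL
// changes whenever the file's contents do, so it can be cached forever.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
  },
};
```

A file named this way can then be served with "Cache-Control: public, max-age=31536000, immutable", and no browser or CDN ever needs to revalidate it.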

[05:17] And if you're building client-side apps, that's kind of enough, right? Because you can serve an empty page and then a whole lot of JavaScript, and then the app will run in the browser, and if you do bundle splitting, every time something changes you just update the bits that changed.

Now, not every app needs to do better than that, right? If you're building an app that people sit there using for hours at a time, it doesn't really matter how many seconds it takes to start, and if you're building, say, a banking app, then it doesn't really matter how slow it is because you'll still have people's money and they'll walk over broken glass to get it. But for the rest of us, any time you have to start dipping into more performance-sensitive stuff, that isn't enough; you have to start to embrace a global network. And I would even go so far as to say that unless you make CDNs a part of your deployment process, you can't build a fast website.


Here's what you can do with CDNs


[06:11] So I wanted to talk about a couple of things, because this is a performance talk and it's very tempting to say, hey, you have to throw away everything you've done before in order to get better performance. Whereas the truth is, CDNs are actually really great, so if you've got one in front of your site, there might be a couple of things you can do with it.

The first one is the s-maxage header, which is a personal favorite of mine. Now, max-age just means: if this copy is not older than this, and that number means one year, then you don't have to come back to the origin to check for a new version. Max-age zero here means the browser never trusts its old version and always comes back to check, but s-maxage tells the CDN: you can hold this copy for a year, you can always handle these requests. So you get the browsers checking in, but they're only going as far as their nearest CDN node.
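A minimal sketch of what that looks like from a Node server; Express here is just illustrative, since any server that sets the header behaves the same way.

```js
const express = require('express');
const app = express();

// Browsers must revalidate every time (max-age=0), but the CDN may keep
// serving its cached copy for a year (s-maxage) without hitting this server.
app.get('/index.html', (req, res) => {
  res.set('Cache-Control', 'max-age=0, s-maxage=31536000');
  res.send('<!doctype html><h1>Hello</h1>');
});

app.listen(3000);
```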

The trick is that you then need some way of telling the CDN that the content has changed when it changes. Now, this isn't for those fingerprinted assets; this is for your index HTML, anything that changes rapidly. But CDNs are really good at updating what's cached, so sending them a command as part of your deployment to say, hey, these URLs have now changed, gives you really, really good performance and very low load on your origin.
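As an illustration, a deploy script might purge the changed URLs like this. The endpoint shape follows Cloudflare's cache-purge API, but other CDNs have equivalents, and the ZONE_ID and API_TOKEN variable names are assumptions for the sketch.

```js
// Hypothetical deploy step: tell the CDN exactly which URLs changed.
// (Node 18+ ESM assumed: global fetch and top-level await.)
const { ZONE_ID, API_TOKEN } = process.env;

await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    files: ['https://example.com/', 'https://example.com/index.html'],
  }),
});
```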

[07:27] The next version of this is the stale-while-revalidate header. This tells the CDN that while the version it has cached might be expired and it needs to go and get a new one, it can keep sending the old one for this length of time, so for a year it can serve the old version. But the s-maxage of 60 means that once a minute it needs to go and check for a new version. And this sort of inverts the process: the CDN is now pulling from the origin server, but at a really low rate. All of your users are still just hitting their local CDN node, so they're getting really fast responses, but your content is never more than 60 seconds out of date.
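The only change from the earlier sketch is the header value:

```js
// Same Express-style sketch as before; only the header changes.
// The CDN revalidates at most once every 60 seconds (s-maxage=60) and may
// keep serving the stale copy for up to a year while it does so.
app.get('/index.html', (req, res) => {
  res.set('Cache-Control', 's-maxage=60, stale-while-revalidate=31536000');
  res.send('<!doctype html><h1>Hello</h1>');
});
```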

The other advantage of this is that you can have millions of people accessing your website, but your origin is still only seeing one request every 60 seconds from each node across the network. I love using this and I think it's great, and I'd expect it to have become a much bigger part of people's deployments if it weren't for part two, which is the rise of JAMstack.


JAMstack & Integrated Edge


[08:34] JAMstack is one of the bigger trends in recent web deployments, and I think it's popular because, well, it hides a lot of the detail around CDNs and gives you something much simpler to understand.

So traditionally you had this CDN that sat between your users and your origin, but you needed all three to be there; you needed your origin to be live, and the CDN was only an enhancement. JAMstack has the same structure but changes that: now the CDN is your host. Your CDN is responding to every request, and in order to change content you tell the CDN, here is the new copy of the file. It's a subtle change, but it really increases the stability of your site, and I think stability is the best thing about JAMstack. Because you're only deploying static files, and you're doing it globally, your resilience to load is much increased; there's no route that's going to hit your origin server because you don't really have an origin server anymore, and so you can sleep easy because serving static files is really simple.

[09:37] In terms of performance, it can be a lot better than what you're doing at the moment, but I will say that there are definitely some things that are difficult to do in a static deployment, which means you end up leaning on client-side JavaScript a fair bit more. And so you can end up with worse performance in some cases, even going to a static site.

The same sort of story applies with complexity, where deployments are now super simple because you're just throwing static files across the wire, but generating a perfect static snapshot of your build takes a bit of extra effort, and then you have to figure out what to do with the dynamic pieces and how to run them efficiently in the browser.

[10:17] But one of the things I dislike the most about JAMstack is its inefficiency, because in the traditional sense every change to your site needs to regenerate the entire site afresh, which means that on a site with 10,000 blog posts, changing one route can take 30 minutes, or an hour, to rebuild. This is a real impediment to iterating quickly on your code, and it's something that needs to be addressed.

The way it is being addressed is that the same companies who are pushing JAMstack as a methodology are also offering platforms that solve some of these pain points, in what I'm calling the integrated edge. So they don't just take the simple JAMstack approach; they also have strategies around rebuilding or publishing only the things that have changed, or, in Vercel's case, strategies that do something very similar to stale-while-revalidate, running your build as a kind of server in the background. These are all really good, but for me it's not quite as good as what we had before with the purely standardized HTTP world.

[11:35] There are a few things that I think weaken JAMstack's appeal, and the first is just how locked-in all of these vendors make you. Everything that's made easier at every step in a JAMstack world has more and more baggage associated with it. For example, here are a couple of snippets that go into a Netlify config file for doing simple redirects or setting a few headers on a few routes. This is something you could have done with a few lines of code if you had a Node server, or any kind of web server, that you now have to put into platform-specific config. It's not the worst thing in the world, but every time you reach for this to solve a problem, you're getting further away from something you can pick up and run elsewhere.
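For comparison, here's a sketch of the same kind of rules, a redirect plus a header, written as ordinary portable server code; the routes themselves are made up for illustration.

```js
const express = require('express');
const app = express();

// A permanent redirect, equivalent to a platform redirect rule.
app.get('/old-blog/:slug', (req, res) =>
  res.redirect(301, `/blog/${req.params.slug}`)
);

// A header applied to a set of routes, equivalent to a headers rule.
app.use('/admin', (req, res, next) => {
  res.set('X-Frame-Options', 'DENY');
  next();
});

app.listen(3000);
```

The point isn't that this code is better than the config file; it's that it runs anywhere a Node server runs, rather than on one vendor's platform.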

[12:24] The other thing is there's been a tendency to look beyond JAMstack to this idea of a hybrid framework, where some pages are static and some pages are dynamic. But something I'd like people to be a bit more aware of is that when you're thinking about performance and about moving from static to dynamic, there can be a huge performance disparity between the two. I actually ran a benchmark this morning on a NextJS app running on Vercel; the data here I'll skip over, it's more for the slides later. But you can see the difference between a static and a dynamic route here, where dynamic in the best case is still a few hundred milliseconds worse than static, but as you get to the higher percentiles, so 50% or 25% of your audience, the response times get slower, up to about a second. And this is the time to first byte, right? This is before the rest of your app has even loaded; this is just on top of everything else.

[13:26] Now, this is a synthetic benchmark, right? And I encourage you never to trust a synthetic benchmark, or any benchmark that you haven't run yourself and understood the constraints of, so do run these tests yourself. The only point of the numbers here is to show you that there's a huge difference between the rendering pipelines of the static and dynamic halves of Vercel's platform, and that's something that might not be obvious considering how easy it is to move from one to the other.


The Dynamic Edge


[13:57] All that brings us to the future, which is also the present, just not evenly distributed: the dynamic edge. So rather than looking at the CDN architecture and saying, well, let's make a static edge and send updates to it like we do in JAMstack, if we have a dynamic edge then we don't really need an origin server at all. In fact, the whole point of a CDN was to run close to our users.

But what if this was where we hosted our application? What if we hosted it everywhere in the world? I mean, it's just front-end code after all, it's not massive databases and all those sorts of things, so this is a new category of tool. Now, I'm going to be talking about Cloudflare Workers because Workers is the leader in this space. But this is a category that's going to keep growing, and while some of these competitors' products will do some of these things but not all of them, I expect them all to get better over time. And you shouldn't think that you're just opting into one vendor, because this is a whole new way of running code at the edge.

For Workers there are a few things to know, right? You have a one-megabyte script limit, because it's deployed to every location that Cloudflare has, but on the other hand it is running in all 200 locations. It runs in a V8 isolate, not NodeJS, so it's JavaScript but you need a new mental model. And there are some tighter CPU and RAM limits, although for general-purpose stuff it should be okay. One of the exciting things is that there's no cold-start impact, because of this different architecture. So when we saw that one-second bump on the Vercel graph, that was because AWS Lambda, under the hood, was doing a cold start for a new request. On Workers that cold start is so quick it happens in the background while your TLS handshake is underway. So while your browser is negotiating with its nearest server to encrypt the connection, the Worker is spinning up in the background, and by the time the request body actually comes through, your Worker is ready to go, effectively making cold starts invisible.
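For a sense of the mental model, this is roughly the smallest possible Worker in the original service-worker syntax: the whole "server" is a single fetch handler, deployed to every edge location.

```js
// A minimal Cloudflare Worker: respond to every request from the edge.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { pathname } = new URL(request.url);
  return new Response(`Hello from the edge! You asked for ${pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}
```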

[16:13] Now, one version of this future is that these new edge frameworks come out, and these are all really cool; NuxtJS hasn't quite released its edge serverless version yet, but it will quite soon. These new frameworks offer cool new things, you switch your app across to them and have a great time with really great performance, and that's one version of the future.

But for me that's not quite good enough; I think we have an opportunity to do better. Because we're going from a model where the CDN is only capable of caching, and therefore only capable of static things, to one where it can run your entire application. So why do we need to lock ourselves into only the frameworks that were designed for that?

I've been trying to bring about this future, where anyone can take their code and pull it across to running at the edge, with the project Frontend Application Bundles. The idea of the FAB architecture is that any application, using any framework, no matter how static or dynamic it is, should be able to be compiled down to a FAB.

[17:24] A FAB is a single server entry point and a directory full of assets. The key is that FABs can be deployed to brand-new global edge locations as well as anywhere else, basically anywhere that can run JavaScript, anywhere that can run Node; it's completely backwards compatible. The idea is that you can test whether your application right now is compatible with FABs, maybe use it internally, maybe test it on the edge and see how it performs before deciding to really adopt it, because there might be parts of your app that you want to rewrite in order to get the most performance, and it's great to be able to verify that things are actually running really well.
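To give a feel for the shape, here's a hypothetical sketch of a FAB's server entry point; the exact signature is defined by the spec at fab.dev, and the names here are illustrative only.

```js
// Hypothetical sketch of a FAB's server entry: one function that takes a
// Request and may return a Response. Static files live alongside it in an
// assets directory. (Illustrative only; see fab.dev for the actual spec.)
export const render = async (request, settings) => {
  const { pathname } = new URL(request.url);
  if (pathname === '/api/time') {
    return new Response(JSON.stringify({ now: Date.now() }), {
      headers: { 'content-type': 'application/json' },
    });
  }
  // Returning nothing lets the request fall through to the static assets.
};
```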

Now, I don't have too much time to go into this, but when a FAB gets deployed in a traditional way, it does run differently to when it runs on the edge; some of the performance characteristics are much better at the edge. In a traditional deployment we've tried to design it so that it runs as well as it can, but the real gains come when you start running on Workers.

[18:30] And I did another benchmark this morning, of NextJS running inside a FAB on Cloudflare Workers. Again, these numbers are just for the slides; it's easier to see on the graph. Static performance is pretty good, dynamic performance is okay. NextJS is actually kind of the worst case for FABs at the moment; it's the most bloated, or the most chunky I should say, of the frameworks. It's really designed for NodeJS, it's really not designed to run inside a FAB, so we have to do a lot in the compiler to make it compatible, which is why you see such a big disparity between static and dynamic performance here. It's something we'll be actively working on and trying to improve, but if you use FABs with other frameworks you should see even better performance than this.

[19:20] And just to compare: we're seeing extremely similar static performance to Vercel, but the dynamic performance is clearly a lot better. While there is still some long-tail slowness at the 75th percentile, you're looking at a worst case of 400 milliseconds instead of a worst case of 1.5 seconds. And with that, I encourage you to check it out. Go to fab.dev or to GitHub and have a look, try compiling your app to it, see if it will run on Workers; Workers is available at workers.dev. If you go to fab.dev there's a Discord there, you can get in touch with me and I'll help you if I can. But other than that, thanks for having me, and time for questions.


Questions


[20:05] Mettin Parzinski: Well, Glen, that was fab, amazing. Thank you for that great talk, and sorry for this lame joke. Glen, can you join me on stage so we can look at the results of your poll?

Glen Maddern: Great.

[20:20] Mettin Parzinski: Hey, good to see you again. So you asked the people: how much have you done with CDN/dynamic edge services? And well, 43% have said: I've never used one, and the biggest runner-up is: I use JAMstack, so my site can be built statically and served from the edge. How do you feel about those results? Is this what you were expecting?

[20:45] Glen Maddern: Yeah, pretty much. Although I am surprised that we've got 8% deploying full apps to the edge, that's great. I think a couple of years from now that'll be higher and higher but 8% already is awesome.

[20:45] Mettin Parzinski: Yeah. And it's fairly new; before this talk I had never even heard of it, so 8% out of our audience, that's pretty amazing.

Glen Maddern: Yeah.

[21:09] Mettin Parzinski: So hats off to you, I guess. I would like to remind everyone that you can ask your questions in the Discord channel, and we're going to jump to the questions right now. The first one is actually what I was curious about when I was watching your talk. I see this as a great open-source solution that's not backed by a big company, but what's the business model behind it?

[20:45] Glen Maddern: Yeah. Well, it basically started as a way to solve a problem for the company I was working on, Linc, which was that we needed a way for people to deploy apps and for us to build previews of people's apps. And nothing out there fit, right? Not Docker, not anything else, not just a ZIP file full of HTML and JavaScript. With a static directory you obviously can't do any server rendering, and I really wanted to see the future of server-side rendering come about.

And Docker is really heavy. I mean, think about how big a Docker image is just for a few hundred K, a couple of megs of HTML, CSS and JavaScript; it's using a sledgehammer to crack a nut. So it basically came out of solving that problem: how do we get customers onto Linc in a way where we can build every commit and give them a URL to that commit forever, without locking them into static? And from there it's grown to deploying basically anywhere, compiling to it from any framework; these are all just extensions of that first idea, and it's kind of grown from there.

[22:57] Mettin Parzinski: Awesome. And then, is the team just you now, or is it more people? Is there a team or is it you?

[23:05] Glen Maddern: It's mainly me, but we're getting more contributors all the time. I've been working with somebody recently to add Flareact to it, and part of the reason there is that even though Flareact can already deploy to Cloudflare Workers directly, that's what it's designed to do, it's basically NextJS designed for Workers, being able to put it into a FAB means it's much easier to test locally, it's much easier to review, you can put it through Linc, you can deploy it to other infrastructure, and it makes that project much more portable. So I'm happy to see someone joining to try to bring that about. I've also got lots of collaboration on the NextJS compiler, which is one of the more difficult targets for us. So it's really being led by me, but we're picking up contributors all the time.

[23:52] Mettin Parzinski: Awesome. Always nice to hear that people are willing to help out. I have a question from Mike Kahel: are there also stats available for Angular, not AngularJS, and VueJS apps deployed using fab.dev?

[24:10] Glen Maddern: No, not too much. Basically because, well, Vue and server-side rendering is something I've looked into, but I don't have a very successful story there just yet. NuxtJS, which is a framework built on Vue with server-side rendering, was having a bit of a rework before their version three, which I think is the version coming out, and which is going to be directly compatible with edge rendering like Workers. And so in the Vue ecosystem we've largely paused on that development until those sorts of changes come out. That seems like it's really on the horizon, so I expect to get back into it soon.

As for Angular, it's not an area I've got a lot of experience with, but every Angular project I've seen has been compiled statically, and anything static is very, very fast with FABs. It's extremely optimized for serving from the edge, but that part is also kind of easy for anything; you can use pretty much any CDN to do that. FABs really come into their own when you want to start sprinkling in a little bit of server logic, so maybe you want to reverse proxy /api to your backend and then you want that to point somewhere different in staging and production. FABs make all of those use cases really easy, but the Angular part of your code should perform just fine.

[25:37] Mettin Parzinski: Awesome. Just a little plug from my side: do you have any connections with the Next team if you need help? Because one of my coworker's sisters is a core developer there, I can hook you up.

[25:46] Glen Maddern: Oh, that'd be great. Actually, I have a chat room on Discord where we occasionally fire messages back and forth, but I haven't had a chat with them for a few weeks, so I'd be very keen to reopen those lines of dialogue.

[25:58] Mettin Parzinski: All right, I will get you in contact with him on Discord. Next question, from Mark See: what advice or guidance would you have for leveraging Akamai's edge workers with NextJS? We are hosting our static assets on S3 and would love to see how we can integrate it with edge workers.

[26:18] Glen Maddern: That would be fantastic to talk about afterwards, because that's a target that I haven't tried to deploy to yet. So FABs should be perfectly compatible, but I haven't had a test case for it; we didn't have any customers at Linc who were looking to do that yet, so there was no real impetus to do it. NextJS compiling to a FAB is something I'm working on at the moment that is rapidly improving and gearing up for a proper release in the next week. But the edge worker stuff should be completely compatible as far as I understand it, so yeah, it'd be great to collaborate on.

[26:55] Mettin Parzinski: Awesome. So Mark can contact you.

Glen Maddern: Yeah.

Mettin Parzinski: Awesome.

[27:03] Glen Maddern: So if you go to fab.dev there's a Discord link on the homepage; you can join our Discord and DM me there.

[27:11] Mettin Parzinski: All right. Awesome. Chrissy is asking: does FAB work with Node Express for supporting dynamic routing on front-end apps?

[27:18] Glen Maddern: Yeah. So FAB, when you're working on it locally, ships with an Express server of its own, which is fab serve. Inside that there's a little sandboxed VM that's used to isolate the FAB runtime and simulate, basically, serverless environments so that the FAB can't break out. There are certain restrictions on a FAB that make it more secure and make it possible to deploy to something like Workers. So you can use those packages from your own Express server, or call into them directly, or just use fab serve on the command line to spin one up on its own.

[27:54] Mettin Parzinski: All right, great. Next question: what are the biggest performance challenges in modern web solutions that you're facing?

[28:06] Glen Maddern: Yeah, well, the biggest thing is that FABs build on top of a standard where we've kind of left NodeJS behind, right? So if you look at a project like NextJS as an example: it's a perfectly fast web framework, but everything that's contributed to NextJS assumes that you have a NodeJS environment, assumes that you have Express, assumes that you don't really need to worry about bundle size. And converting that to a FAB means you end up having to shim out a lot of those modules. So you end up having to provide JavaScript implementations of Node APIs so that they'll run within the FAB; it's quite similar to compiling stuff to run in the browser. That means that even for kind of simple stuff you can find that the performance is not as good as it should be.
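To illustrate the kind of shimming involved, here's a sketch of what that can look like in a Webpack 5 config; path-browserify is one real example of such a shim, and which modules need replacing depends entirely on the framework being compiled.

```js
// webpack.config.js (illustrative): replace Node built-ins with pure-JS
// shims so a Node-oriented bundle can run outside NodeJS, e.g. at the edge.
module.exports = {
  resolve: {
    fallback: {
      path: require.resolve('path-browserify'),
      fs: false, // no filesystem at the edge; fail loudly if something needs it
    },
  },
};
```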

I mean, in the example I was showing you, with the two graphs of static versus dynamic things running in NextJS on Workers, those two lines should be pretty much identical, but because of the different code paths through a FAB and the extra work you have to do just to make NextJS compatible, you end up incurring extra performance hits. Long-term we'll be able to solve a lot of those, but you'll find that some projects just won't be able to perform at the same level as others, because they'll have deeper connections to, and deeper assumptions about, running on Node, and therefore require more shim code, more scaffolding, to get executing in a FAB.

[29:43] Mettin Parzinski: All right, thank you. We have time for one more question. The question is from Austin: can we use FAB as a reverse proxy to target environment APIs, with a single build across the different environments?

[29:59] Glen Maddern: Yeah, that's it. So a single build produces a FAB; the FAB can have as little or as much server-side JS logic as you want, it can include a bunch of assets, and then, dynamically, the server component can be booted up with different environment variables to point it at different places. So the idea is that a single project could have what's called a backend-for-frontend. It could be an API, it could be a set of proxy routes, it's maybe not your entire backend, but it's the stuff that makes it possible to write your front-end, and the two can iterate together. So every time you deploy a commit, you're deploying your backend-for-frontend as part of the project, as well as all the static assets you built for that commit.
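A minimal sketch of that backend-for-frontend idea; the handler shape and the API_ORIGIN setting name are illustrative here, not FAB's actual API.

```js
// One build, pointed at different APIs per environment at boot time.
// settings.API_ORIGIN might be https://staging-api.example.com in staging
// and https://api.example.com in production.
export async function handleRequest(request, settings) {
  const url = new URL(request.url);
  if (url.pathname.startsWith('/api/')) {
    // Reverse proxy: forward the request to this environment's backend.
    return fetch(`${settings.API_ORIGIN}${url.pathname}${url.search}`, request);
  }
  return new Response('Not found', { status: 404 });
}
```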

[30:46] Mettin Parzinski: Awesome, that sounds really powerful, Glen. Thanks for your hard work and for sharing this with us. Let's get that 8% up, right?

Glen Maddern: Yeah, absolutely.

[30:57] Mettin Parzinski: All right, thanks for being with us, and hope to see you again soon. Bye-bye.

Glen Maddern: Okay. Thank you.
