1. Introduction to the Rise of the Dynamic Edge
Okay. Hello. My name is Glen. My talk today is the rise of the dynamic edge, or another way to talk about it would be the past, present, and future of frontend hosting. If you don't know me, my name is Glen Maddern, that's me on Twitter, that's probably the easiest way to get in touch. I've done a couple of open source projects in the React space, a couple on styling: CSS Modules and styled-components. More recently, a couple of years ago, I switched gears and started thinking about production performance and deployment, and started a project called frontend application bundles, or FABs, which is at fab.dev, as well as a product around deployments called linc.sh. Fairly excitingly, last year Linc was acquired by Cloudflare. I've only been there a couple of months, but now I get to approach the same problem from the point of view of an entire platform, an entire global platform, which is pretty exciting.
2. The Impact of Latency on CDN Usage
Today, I will discuss how CDNs have become an integral part of our front end app workflows. CDNs are widely used due to their geographical distribution, which plays a crucial role in reducing latency. I conducted an experiment comparing download speeds from different locations and found that even a small increase in latency can significantly impact download times. This is because of the way TCP works, where the initial data transfer is slower and gradually ramps up. Therefore, being local to the server is essential for optimal performance.
So today I wanted to drill into something that I've found really interesting over the last few years of getting into this stuff, which is how we've come to depend on CDNs and how they've become a part of our front-end app workflows. So just to recap, a traditional CDN architecture has the CDN in between your users and your origin server, your actual host. Requests flow through and responses flow back. The CDN will keep copies of those responses, depending on some algorithms and some directives. Your origin server is the ground truth.
So why do people use CDNs? Well, they're everywhere, right? This is Cloudflare's network: it's over 200 locations. But it might be a little bit surprising just how important that geographical distribution is. Why do they need to be in so many locations? So I wanted to start today's talk by looking over something I'd actually looked at a couple of years ago, which is the impact of latency. This was an experiment I ran for a web series I was doing called Frontend Center, where I ran a bandwidth test, or a download speed test, from Melbourne, where I was living at the time, against three different locations: Sydney, San Jose, and London. Now, Sydney's only 15 milliseconds away. San Jose is on the other side of the Pacific. And London is 280 milliseconds by speed of light, or, as I live there now, it's a lot longer by plane, let me tell you.
So when you have a small file, you get download speeds, or total download times, pretty much exactly what you'd expect. It's just one single round trip to the server, so the further away the server is, the longer the file takes to download. But what might be surprising is what happens when you have a fast connection to a local box, and this is between two data centers, so there are no bandwidth constraints here at all, really. For a 250 kilobyte file, we're still at a fraction of a second. But when you add some latency into this picture, things start to look pretty different. At 200 kilobytes, you're now looking at 2 seconds in the best-case scenario to download that file. And if you double the latency, the effect is doubled. Now this might be surprising, because those servers are only, you know, 100 or 200 milliseconds further away, and yet the download times are taking 10 times longer, or 30 times longer in some cases. And these steps are actually the latency between those hops: each jump on the graph is 160 milliseconds, each jump on the red line is 280. This is because of the way TCP, the protocol underneath everything else, works: it starts slow and ramps up as it detects that the network conditions are good enough. This means the first 100 kilobytes cost a lot, and every 100 kilobytes from then on can cost your performance more than you might think. So being local is really important.
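The slow-start effect described above can be sketched with a toy model: the congestion window starts at roughly 10 segments and doubles each round trip, so for small files the total time is dominated by round trips times RTT, not bandwidth. This is a deliberate simplification, not a faithful TCP simulation; the constants are illustrative.

```javascript
// Toy model of TCP slow start: the sender's congestion window starts
// around 10 segments (~14 kB) and doubles every round trip until the
// file is fully sent. Total time = round trips * RTT.
function downloadTimeMs(fileBytes, rttMs) {
  const segmentBytes = 1460; // typical TCP segment payload
  let cwndSegments = 10;     // initial congestion window (per RFC 6928)
  let sentBytes = 0;
  let roundTrips = 1;        // the request itself costs one round trip
  while (sentBytes < fileBytes) {
    sentBytes += cwndSegments * segmentBytes;
    cwndSegments *= 2;       // slow start: window doubles per round trip
    roundTrips += 1;
  }
  return roundTrips * rttMs;
}

// A 250 kB file needs the same number of round trips everywhere, so the
// 15 ms server (Sydney) vs the 280 ms one (London) differ by ~19x:
console.log(downloadTimeMs(250_000, 15));  // 90 in this model
console.log(downloadTimeMs(250_000, 280)); // 1680 in this model
```

The point the model captures is that adding latency multiplies total time for every one of those early, small windows, which is why the measured downloads were 10x to 30x slower rather than just a couple hundred milliseconds slower.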
3. Relevance of Latency and CDN Usage
This was actually in a video called Why Latency Matters, which was episode 10 of Frontend Center. It's also out on YouTube. If that information is new to you, I encourage you to check it out. I go into TCP stuff as well.
So I wanted to talk about a couple of things, because this is a performance talk, and it's very tempting to say, hey, you have to throw away everything you've done before in order to get better performance. Whereas the truth is, CDNs are actually really great. So if you've got one in front, there are a couple of things you can do with it. The first one is the s-maxage directive. If the cached copy is not older than this, and that number means one year, then the CDN doesn't have to come back to the origin to check for a new version. The max-age=0 here means the browser never trusts its old version and always comes back to check. But s-maxage tells the CDN: you can hold this copy for a year and keep handling these requests. So you get browsers checking in, but they're only going as far as their nearest CDN node. The trick is that you then need some way of telling the CDN that the content has changed, when it changes. Now, this isn't for those fingerprinted assets.
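As a concrete sketch of the directives just described, here is a hypothetical helper choosing a Cache-Control value per path. The route check and asset path are made up for illustration; only the directive semantics come from the talk.

```javascript
// Hypothetical helper: pick a Cache-Control value per path.
// max-age=0: the browser never trusts its old copy, always revalidates.
// s-maxage=31536000: the shared CDN cache may serve its copy for a year
// without touching the origin -- as long as your deploy purges changed URLs.
function cacheControlFor(path) {
  if (path === '/index.html') {
    return 'max-age=0, s-maxage=31536000';
  }
  // Fingerprinted assets never change, so everything can cache them forever.
  return 'max-age=31536000, immutable';
}

console.log(cacheControlFor('/index.html')); // max-age=0, s-maxage=31536000
```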
4. The Impact of CDNs and Jamstack
This part discusses the use of CDNs for rapidly changing content and the benefits of the stale while revalidate header. It then introduces Jamstack as a trend that simplifies web deployments and increases site stability. However, Jamstack has limitations in terms of performance and complexity, particularly when dealing with dynamic content. The inefficiency of rebuilding the entire site for every change is also highlighted. Companies are addressing these challenges with integrated edge platforms that offer strategies for publishing only the changed content. Despite its advantages, Jamstack has some weaknesses compared to the traditional standardized HTTP world.
This is for your index HTML, anything that rapidly changes. But CDNs are really good at updating what's cached. So sending them a command to say, hey, these URLs have now changed as part of your deployment gives you really, really good performance and very low load on your origin.
The next version of this is the stale-while-revalidate directive. This tells the CDN that while the version it has cached might be expired, and it needs to go and get a new version, it can keep serving the old one for this length of time. So for a year, it can serve the old copy. But the s-maxage of 60 means that once a minute it needs to go and check for a new version. And this sort of inverts the process. The CDN is now pulling from the origin server, but at a really low rate. All of your users are still just hitting their local CDN, so they're getting a really fast response, but your content is never more than 60 seconds out of date. The other advantage is that you could have millions of people accessing your website, but your origin only sees one request every 60 seconds from each node across the network. I love using this, and I think it's great, and I'd expect it to have been a much bigger part of people's deployments if it wasn't for part 2, which is the rise of Jamstack.
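Spelled out as a header, the combination described above might look like the following. This is a sketch of the pattern, not taken from any particular deployment.

```javascript
// s-maxage=60: the CDN's copy counts as fresh for at most a minute.
// stale-while-revalidate=31536000: while the CDN refetches in the
// background, it may keep serving the stale copy for up to a year,
// so users never wait on the origin.
const header = 'max-age=0, s-maxage=60, stale-while-revalidate=31536000';
console.log(`Cache-Control: ${header}`);
```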
5. The Future: Dynamic Edge and Cloudflare Workers
Vendors in the JAMstack world make you more locked in with every step. Moving from static to dynamic can have a significant performance disparity. Synthetic benchmarks show a huge difference between the rendering pipelines of static and dynamic. The future lies in the dynamic edge, where we don't need an origin server. Cloudflare Workers lead in this space, but it's a growing category. There's a one-megabyte script limit for Workers.
And the first is just how locked in all of these vendors make you. Everything that's made easier at every step in a JAMstack world has more and more baggage associated with it. For example, here are a couple of snippets that go into a Netlify config file, where you're doing simple redirects or setting a few headers on a few routes. This is something you could have done with a few lines of code if you had a Node server or any kind of web server, but now you have to put it into platform-specific config. It's not the worst thing in the world, but every time you reach for this to solve a problem, you're getting further away from something that you can port elsewhere.
The other thing is that there's been a tendency to look beyond JAMstack towards this idea of a hybrid framework, where some pages are static and some pages are dynamic. But something I'd like people to be a bit more aware of is that when you're thinking about performance, and you're thinking about moving from static to dynamic, there can be a huge performance disparity between the two. I actually ran a benchmark this morning on a Next.js app running on Vercel. The data here I'll skip over, it's more for the slides later, but you can see the difference between a static and a dynamic route. A dynamic route, in the best case, is still a few hundred milliseconds worse than a static one, but as you get to the higher percentiles, so 50% or 25% of your audience, response times get slower, up to about a second. And this is time to first byte. This is before the rest of your app is even loaded; this is just on top of everything else. This is a synthetic benchmark, and I encourage you never to trust a synthetic benchmark, or any benchmark that you haven't run yourself and understood the constraints of. Do run these tests yourself. The only point of the numbers here is to show that there is a huge difference between the rendering pipelines of the static and dynamic halves of Vercel's platform. And that's something that might not be obvious, considering how easy it is to move from one to the other.
All that is to bring us to the future, which is also the present, just not evenly distributed: the dynamic edge. So rather than looking at the CDN architecture and saying, well, let's make a static CDN, a static edge, and send updates to it like we do in Jamstack, if we have a dynamic edge, then we don't really need an origin server at all. In fact, the whole point of a CDN was to run close to our users. Now, what if this was where we hosted the app? What if we hosted it everywhere in the world? I mean, it's just front-end code, after all. It's not massive databases and all those sorts of things. So this is a new category of tool. Now, I'm going to be talking about Cloudflare Workers, because Workers is the leader in this space. But this is a category that's going to keep growing. And while some competitors' products will do some of these things but not all of them, I expect them all to get better over time. You shouldn't think that you're just opting into one vendor, because this is a new way of running code at the edge. For Workers, there are a few things to know. You have a one-megabyte script limit, because your code is deployed to every location that Cloudflare has.
6. Running FABs on the Edge and Performance Benefits
But on the other hand, running in all 200 locations, in a V8 isolate, not Node.js, brings a new mental model. There is no cold-start impact with Workers, as they spin up in the background while the TLS handshake is underway. The future is not limited to frameworks designed for caching static content. The frontend application bundles (FABs) project enables any application, regardless of how static or dynamic it is, to be compiled into a FAB and deployed to global edge locations. Testing compatibility and performance before adoption is crucial. Deploying FABs on the edge provides better performance characteristics than a traditional deployment, especially with Cloudflare Workers.
Now, I don't have too much time to go into this, but when a FAB gets deployed in a traditional way, it runs differently to when it runs on the edge. Some of the performance characteristics are much better at the edge. In a traditional deployment, we've tried to design it so that it runs as well as it can, but the real gains come if you start running it on Workers. And I did another benchmark this morning, of Next.js running inside a FAB on Cloudflare Workers. Again, these numbers are just for the slides; it's easier to see on the graph.
7. FABs Performance and Call to Action
Static performance is pretty good. Dynamic performance is okay. Next.js is actually kind of the worst use case for FABs at the moment. It's kind of the most bloated, or the most chunky, I should say, framework. It's really designed for Node.js; it's not designed to run inside a FAB. So we have to do a lot in the compiler to make it compatible, which is why you see such a big disparity between static and dynamic performance here. It's something that we will be actively working on and trying to improve.
But if you use FABs with other frameworks, you should see even better performance than this. And just to compare, we're seeing extremely similar static performance to Vercel, but the dynamic performance is clearly a lot better. While there is still some slowness in the long tail, at the 75th percentile, you're still looking at a worst case of 400 milliseconds instead of a worst case of 1.5 seconds.
And with that, I encourage you to check it out. Go to fab.dev or the GitHub and have a look. Try compiling your app to a FAB, see if it will run on Workers. Workers is available at workers.dev. If you go to fab.dev, there's a Discord there; you can get in touch with me and I'll help you if I can. But other than that, thanks for having me, and time for questions. Wow, Glen, that was fabulous. Amazing. Thank you for that great talk, and sorry for this lame joke. Glen, can you join me on stage so we can look at the results of your poll? Great. Hey, good to see you again.
8. CDN/Dynamic Edge Usage and Business Model
43% have never used a CDN/dynamic edge service. 8% deploy full apps to the edge. The business model behind the open source solution started with solving a problem for the company Linc, enabling app deployment and previews. The team is mainly the speaker, but there are increasing contributors, including someone working on porting Flareact to the solution.
So you asked the people: how much have you done with a CDN or dynamic edge service? And well, 43% have said, I've never used one. And then the biggest runner-up is: I use Jamstack, so my site can basically be built statically and served from the edge. How do you feel about the results? Is this what you were expecting?
Yeah, pretty much. Although I am surprised that we've got 8% deploying full apps to the edge. That's great. I think a couple of years from now, that'll be higher and higher. But 8% already is awesome.
Yeah. That's fairly new. Before this talk, I had never even heard of it. So 8% out of our audience, that's pretty amazing.
Yep. So hats off to you, I guess. So I would like to remind everyone, you can ask your questions in the Discord channel. And we're going to jump to the questions right now. And the first one is actually what I was curious about. When I was watching your talk. And yeah, I see this is a great open source solution that's now backed by a big company. But what's the business model behind it?
Awesome. And then, the team: is it just you, or is it more people?
It's mainly me, but we're getting more contributors all the time. I've been working with somebody recently to port Flareact to it. Part of the reason there is that even though Flareact can already deploy to Cloudflare Workers directly, that's what it's designed to do, it's basically Next.js designed for Workers, being able to put it into a FAB means it's much easier to test locally, it's much easier to review, you can put it through Linc, you can deploy it to other infrastructure, and it makes the project much more portable.
9. Collaboration, Compatibility, and Integration
Happy to see collaboration on the Next.js compiler. No stats available for Angular and Vue.js apps on fab.dev yet. Vue.js and server-side rendering are being reworked for compatibility with edge rendering. Angular projects compiled statically perform well with FABs. FABs excel in adding logic, reverse proxies, and previewing different environments. I can connect you with the Nuxt team. Akamai Edge Workers integration with Next.js is untested, but FABs should be compatible. Next.js compiling to a FAB is rapidly improving. Contact me on the FAB.dev Discord for more information.
So, happy to see someone joining to try and bring that about. I'd love more collaboration on the Next.js compiler, which is one of the more difficult targets for us; it's really being led by me, but we're picking up contributors all the time. Awesome, always nice to hear that people are willing to help out.
I have a question from Mike: are there any stats available for Angular, not AngularJS, and Vue.js apps deployed using fab.dev? No, not too much. Basically because, well, Vue and server-side rendering is something I've looked into, but I don't have a very successful story just yet. Nuxt.js, which is a framework built on Vue and server-side rendering, was getting a bit of a rework before their version 3, which I think is the next version coming out, and which is going to be directly compatible with edge rendering, like Workers. So in the Vue ecosystem, we've largely paused that development until those changes have come out. That seems like it's really on the horizon, so I'm expecting to get back into it soon.
As for Angular, it's not an area that I've got a lot of experience with, but every Angular project I've seen has been compiled statically, and anything static is very, very fast with FABs. It's extremely optimized for serving from the edge, but that part is also kind of easy; you can use pretty much any CDN to do that. FABs really come into their own when you want to start sprinkling in a little bit of logic: you want to do reverse proxies of /api to your back end, and then you want to preview that differently in staging and production. FABs make all of those use cases really easy, but the Angular part of your code should perform just fine.
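That "sprinkling of logic" can be sketched as a tiny routing function: anything under /api gets proxied through to a back end, everything else is served as a static asset. The backend origin here is invented for illustration.

```javascript
// Hypothetical edge routing logic: reverse-proxy /api/* to the back
// end, serve everything else as a static asset.
function routeRequest(pathname) {
  if (pathname.startsWith('/api/')) {
    // Proxy through: the user only ever talks to the edge.
    return { action: 'proxy', url: 'https://backend.example.com' + pathname };
  }
  return { action: 'serve-static', path: pathname };
}

console.log(routeRequest('/api/users').url); // https://backend.example.com/api/users
```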
Awesome, just a little plug from my side. Do you have any connections with the Nuxt team? If you need help, one of my co-workers is a core developer there; I can hook you up. Oh, that'd be great. I have a chat room in Discord that we occasionally fire back and forth in, but I haven't had a chat with them for a few weeks. I'd be very keen to reopen those lines of dialogue again. All right, I will get you in contact with him on Discord. Next question from Mark C: what advice or guidance would you have to leverage Akamai EdgeWorkers with Next.js? We are hosting our static assets on S3 and would love to see how we can integrate it with EdgeWorkers. That would be fantastic to talk about afterwards, because that's a target that I haven't tried to deploy to yet. So FABs should be perfectly compatible, but I haven't had a test case for it. We didn't have any customers at Linc who were looking to do that yet, so there was no impetus to do it. Next.js compiling to a FAB is something I'm working on at the moment; it's rapidly improving and gearing up for a proper release in the next week. But the EdgeWorkers stuff should be completely compatible, as far as I understand it. It'd be great to collaborate on. Awesome, so Mark can contact you. Yeah. Awesome. If you go to fab.dev, there's a Discord link on the homepage.
10. FABs and Performance Challenges
You can use FAB with Node Express for dynamic routing on frontend apps. FAB ships with its own express server called FAB serve, which includes a sandbox VM to isolate the FAB runtime. FABs face performance challenges as they build on a standard that has moved away from Node.js. Converting frameworks like Next.js to FABs requires shim code and extra work, resulting in performance hits. FABs can be used as a reverse proxy to target environment APIs with a single build for the front part, allowing for a back-end for front-end approach.
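The "single build, many environments" idea summarized above can be sketched as a server factory: the same compiled logic is booted with different settings, so each environment points at its own API without rebuilding. The setting names and URLs are invented for illustration.

```javascript
// Hypothetical sketch of booting one compiled server component with
// per-environment settings, the back-end-for-front-end pattern.
function bootServer(settings) {
  return function handle(path) {
    if (path.startsWith('/api/')) {
      // Proxy API routes to whichever origin this environment uses.
      return `proxy -> ${settings.API_URL}${path.slice('/api'.length)}`;
    }
    return `static asset: ${path}`;
  };
}

// Same build, two environments:
const staging = bootServer({ API_URL: 'https://api-staging.example.com' });
const production = bootServer({ API_URL: 'https://api.example.com' });
console.log(staging('/api/me')); // proxy -> https://api-staging.example.com/me
```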
You can join our Discord, and you can DM me there. All right, awesome. Chrissy is asking: does FAB work with Node Express for supporting dynamic routing on frontend apps? Yeah, so FAB, when you're working on it locally, ships with an Express server of its own, which is fab serve. Inside that, there is a little sandboxed VM that's used to isolate the FAB runtime and simulate, basically, serverless environments, so that the FAB can't break out. There are certain restrictions on a FAB that make it more secure and make it possible to deploy to something like Workers. So you can use those packages from your own Express server, or you can call into them directly, or just use fab serve on the command line to spin one up on its own.
All right, thank you. We have time for one more question. The question is from Aston: can we use FAB as a reverse proxy to target environment APIs with a single build for the front part? Yeah, that's right. So a single build produces a FAB. The FAB can have as little or as much server.js logic as you want. It can include a bunch of assets, and then dynamically the server component can be booted up with different environment variables to point it at different places. So the idea is that a single project can have what's called a back-end-for-front-end. So it could be an API, it could be a set of proxy routes; it's maybe not your entire back end, but it's the stuff that makes it possible to write your front end. And that can iterate together. So every time you're deploying each commit, you're deploying your back-end-for-front-end as part of this project, as well as all the static assets you built for that commit. Awesome, that sounds really powerful, Glen. Thanks for your hard work and sharing this with us. Let's get that 8% up, right? Yeah, absolutely. Thanks for being with us, and hope to see you again soon. Bye-bye.