The Rise of the Dynamic Edge


Over the last few years, the JS web app community has seen a growing awareness of and focus on performance & scalability. The days when prominent production sites served entirely blank pages while waiting on monolithic bundles of megabytes of JavaScript are (mostly!) behind us.

A big part of that has been deeper integration with CDNs; after all, round-trip latency is one of the primary determiners of performance for a global audience. But frameworks, and the companies that support them, take different approaches to how CDNs can be used, and the operational complexities their strategies introduce have real consequences.

But what if, instead of a dynamic origin sending instructions to a static CDN, you could run your application directly on the edge? As it turns out, that doesn't just improve performance, it vastly simplifies our deployment and maintenance lives too.

32 min
01 Jul, 2021

Video Summary and Transcription

The talk discusses the rise of the dynamic edge and the past, present, and future of frontend hosting. It emphasizes the impact of latency on CDN usage and the relevance of CDNs in JavaScript application development. The use of CDNs for rapidly changing content and the benefits of the Jamstack approach are explored. The future of the dynamic edge lies in platforms like Cloudflare Workers. The talk also highlights the performance benefits of running Frontend Application Bundles (FABs) on the edge and the challenges faced in achieving optimal performance.


1. Introduction to the Rise of the Dynamic Edge

Short description:

Hello. My name is Glen. My talk today is the rise of the dynamic edge, or another way to talk about it would be the past, present, and future of frontend hosting. I've done a couple of open source projects in the React space. More recently, I started a project called frontend application bundles, or FABs, which is at FAB.dev, as well as a product around deployments called link.sh. Last year Link was acquired by Cloudflare Workers. Now I get to approach the same problem but from the point of view of an entire platform, an entire global platform, which is pretty exciting.

Okay. Hello. My name is Glen. My talk today is the rise of the dynamic edge, or another way to talk about it would be the past, present, and future of frontend hosting. If you don't know me, my name is Glen Madden; that's me on Twitter, and that's probably the easiest way to get in touch. I've done a couple of open source projects in the React space, a couple on styling: CSS modules and styled-components. More recently, a couple of years ago, I switched gears and started thinking about production performance and deployment, and started a project called frontend application bundles, or FABs, which is at FAB.dev, as well as a product around deployments called link.sh. Fairly excitingly, last year Link was acquired by Cloudflare Workers. I've only been there a couple of months, but now I get to approach the same problem from the point of view of an entire platform, an entire global platform, which is pretty exciting.

2. The Impact of Latency on CDN Usage

Short description:

Today, I will discuss how CDNs have become an integral part of our front end app workflows. CDNs are widely used due to their geographical distribution, which plays a crucial role in reducing latency. I conducted an experiment comparing download speeds from different locations and found that even a small increase in latency can significantly impact download times. This is because of the way TCP works, where the initial data transfer is slower and gradually ramps up. Therefore, being local to the server is essential for optimal performance.

So today I wanted to drill into something that I've found really interesting over the last few years of getting into this stuff, which is how CDNs have become a part of our front end app workflows and how we've come to depend on them. So just to recap, a traditional CDN architecture has the CDN in between your users and your origin server, your actual host. Requests flow through and responses flow back. The CDN will keep copies of those responses, depending on some algorithms and some directives. Your origin server is the ground truth.

So why do people use CDNs? Well, they're everywhere, right? This is Cloudflare's network. It's over 200 locations. But it might be a little bit surprising to see just how important that geographical distribution is. Why do they need to be in so many locations? So I wanted to start today's talk by looking over something I'd actually looked at a couple of years ago, which is the impact of latency. This was an experiment I ran for a web series I was doing called Frontend Center, where I ran a bandwidth test, or a download speed test, from Melbourne, where I was living at the time, against three different locations: Sydney, San Jose, and London. Now, Sydney's only 15 milliseconds away. San Jose is on the other side of the Pacific. And London is 280 milliseconds by speed of light, or, as I live there now, a lot longer by plane, let me tell you.

So when you have a small file, you get download speeds, or total download times, pretty much exactly what you'd expect. It's just one single round trip to the server, so the further away the server is, the longer it takes for the file to download. But what might be surprising is what happens when you have a fast connection to a local box, and this is between two data centers, so there are really no bandwidth constraints here at all. For a 250 kilobyte file, we're still at a fraction of a second. But when you add some latency into this picture, things start to get pretty different. At 200 kilobytes, you're now looking at 2 seconds in the best case scenario to download that file. And if you double the latency, the effect is doubled. Now this might be surprising, because those servers are only, you know, 100 or 200 milliseconds further away, and yet the downloads are taking 10 times longer, or 30 times longer in some cases. And these steps are actually the latency between those hops. Each jump on the graph is 160 milliseconds; each jump on the red line is 280. This is because of the way TCP, the protocol underneath everything else, works: it starts slow and ramps up as it detects that the network conditions are good enough. This means that the first 100 kilobytes cost a lot, and every 100 kilobytes from then on can cost your performance more than you might think. So being local is really important.
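To make that concrete, here's a rough back-of-the-envelope model of TCP slow start (my own sketch, not the talk's actual experiment; the initial window size is an assumption):

```js
// Rough model of TCP slow start: the congestion window starts small
// (assume ~10 segments of ~1460 bytes) and doubles each round trip,
// so for small-to-medium files, latency dominates download time.
function downloadTimeMs(fileBytes, rttMs, initialWindowBytes = 10 * 1460) {
  let delivered = 0;
  let windowBytes = initialWindowBytes;
  let roundTrips = 1; // one round trip to establish the connection
  while (delivered < fileBytes) {
    delivered += windowBytes;
    windowBytes *= 2; // slow start: the window doubles every RTT
    roundTrips += 1;
  }
  return roundTrips * rttMs;
}

// The same 250 KB file, three very different latencies from Melbourne:
for (const [city, rtt] of [['Sydney', 15], ['San Jose', 160], ['London', 280]]) {
  console.log(`${city}: ~${downloadTimeMs(250 * 1024, rtt)} ms`);
}
```

Even this crude model reproduces the shape of the experiment: the same file takes roughly 6 round trips regardless of distance, so total time scales with latency, not bandwidth.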

3. Relevance of Latency and CDN Usage

Short description:

In this part, I discuss the relevance of latency in today's JavaScript application development and the importance of using CDNs. CDNs with cache control immutable allow for efficient delivery of client-side apps, especially when utilizing bundle splitting. While not every app requires such optimization, embracing a global network and integrating CDNs into the deployment process is crucial for building a fast website. I also highlight the benefits of using the s-maxage header to reduce origin server requests and the need for a mechanism to notify the CDN of content changes.

This was actually in a video called Why Latency Matters, which was episode 10 of Frontend Center. It's also out on YouTube. If that information is new to you, I encourage you to check it out. I go into TCP stuff as well.

But for now, I'm gonna change gears slightly and talk about how it's relevant to us today. If you're a JavaScript application developer, like myself, then when Webpack came on the scene, you probably noticed that it was automatically generating these URL schemes for you. Now, Webpack wasn't the first one to do it. I was doing it in Gulp before then. But Webpack has certainly put it in from day dot. This is really important, because this has a hash of the content inside the URL, meaning that the content can't change unless the URL does. That's perfect for a CDN.
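If you're on Webpack, the relevant output option looks something like this (a minimal sketch; entry names and paths are whatever your project uses):

```js
// webpack.config.js (sketch): [contenthash] puts a hash of the file's
// content into the filename, so the URL can only change when the content does.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].chunk.js',
  },
};
```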

So, setting this Cache-Control: immutable means that everybody knows, once they've seen that file once, they never have to check the origin server again. And if you're building client-side apps, that's kind of enough, right? Because you can serve an empty page and then a whole lot of JavaScript, and then the app will run in the browser. And if you do bundle splitting, every time something changes, you just update the bits that changed. Now, not every app needs to do better than that. If you're building an app that people sit there using for hours at a time, it doesn't really matter how many seconds it takes to start. And if you're building a banking app, then it doesn't really matter how bad it is, because you still have people's money and they'll walk over broken glass to get it. But for the rest of us, any time you have to start dipping into more performance-sensitive stuff, this isn't fine. You have to start to embrace a global network. And I would go even so far as to say: unless you make CDNs a part of your deployment process, you can't build a fast website.
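Going back to the fingerprinted assets for a moment, serving them with that immutable header might look like this (an Express-style sketch with illustrative paths):

```js
// Fingerprinted assets can be cached forever: a content change always
// means a new URL, so the browser never needs to revalidate them.
const express = require('express');
const app = express();

app.use('/static', express.static('dist', {
  maxAge: '1y',     // Cache-Control: max-age=31536000
  immutable: true,  // ...plus the immutable directive
}));

app.listen(3000);
```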

So I wanted to talk about a couple of things, because this is a performance talk, and it's very tempting to say, hey, you have to throw away everything you've done before in order to get better performance. Whereas the truth is CDNs are actually really great, and if you've got one in front, there might be a couple of things you can do with it. The first one is the s-maxage header. If this thing is not older than this (that number means one year), then the CDN doesn't have to come back to the origin to check for a new version. The max-age=0 here means the browser never trusts its old version and always comes back to check. But s-maxage is telling the CDN: you can keep this copy for a year, you can always handle these requests. So you get the browsers checking in, but they're only going as far as their nearest CDN node. The trick with this is that you then need to include some way of telling the CDN that this content has changed when the content changes. Now, this isn't for those fingerprinted assets.

4. The Impact of CDNs and Jamstack

Short description:

This part discusses the use of CDNs for rapidly changing content and the benefits of the stale-while-revalidate header. It then introduces Jamstack as a trend that simplifies web deployments and increases site stability. However, Jamstack has limitations in terms of performance and complexity, particularly when dealing with dynamic content. The inefficiency of rebuilding the entire site for every change is also highlighted. Companies are addressing these challenges with integrated edge platforms that offer strategies for publishing only the changed content. Despite its advantages, Jamstack has some weaknesses compared to the traditional standardized HTTP world.

This is for your index HTML, anything that rapidly changes. But CDNs are really good at updating what's cached, so sending them a command as part of your deployment to say, hey, these URLs have now changed, gives you really, really good performance and very low load on your origin.
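Put together, the pattern being described might look something like this (a sketch; the purge call uses Cloudflare's cache purge API as one example, the zone ID and token are placeholders, and the deploy snippet assumes a fetch-capable runtime like Node 18+):

```js
// index.html: browsers always revalidate (max-age=0), but the CDN may keep
// serving its copy for a year (s-maxage), until the deploy purges it.
res.set('Cache-Control', 'max-age=0, s-maxage=31536000');

// At deploy time, tell the CDN exactly which URLs changed:
await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    files: ['https://example.com/', 'https://example.com/index.html'],
  }),
});
```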

The next version of this is the stale-while-revalidate header. This tells the CDN that while the version it has cached might be expired and it needs to go and get a new version, it can keep serving the old one for this length of time. So for a year, it can serve the old version, but the s-maxage of 60 means once a minute it needs to go and check for a new one. And this sort of inverts the process. The CDN is now pulling from the origin server, but at a really low rate. All of your users are still just hitting their local CDN, so they're getting a really fast response, but your content is never more than 60 seconds out of date. The other advantage of this is that you could have millions of people accessing your website, but your origin is still only seeing one request every 60 seconds from each node across the network. This I love using, and I think it's great, and I'd have expected it to be a much bigger part of people's deployments if it wasn't for part 2, which is the rise of Jamstack.
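For reference, the header combination described above would look something like this (a sketch matching the numbers in the talk):

```js
// The CDN must check the origin for a new version after 60 seconds
// (s-maxage=60), but may keep serving the stale copy for up to a year
// while it revalidates in the background, so every user gets a fast
// local response and content is at most ~60 seconds out of date.
res.set('Cache-Control', 'max-age=0, s-maxage=60, stale-while-revalidate=31536000');
```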

Jamstack is one of the bigger trends in recent web deployments, and I think it's popular because, well, it kind of hides a lot of the details around CDNs and gives you something much simpler to understand. Traditionally, you had this CDN that sat between your users and your origin, but you needed all three to be there; you needed your origin to be live, and the CDN was only an enhancement. Jamstack, while it has the same structure, changes that: now the CDN is your host, your CDN is responding to every request, and in order to change content, you tell the CDN, here's the new copy of the file. It's a subtle change, but it really increases the stability of your site, and I think stability is the best thing about Jamstack. Because you're only deploying static files and you're doing it globally, your resilience to load is much increased. There's no route that's going to hit your origin server, because you don't really have an origin server anymore, and so you can kind of sleep easy, because saving static files is really simple. In terms of performance, it can be a lot better than what you're doing at the moment, but I will say that there are definitely some things that are difficult to do in a static deployment, which means you end up leaning on client-side JavaScript a fair bit more, and so you can end up with worse performance in some cases even by going to a static site. The same sort of story applies with complexity, where deployments are now super simple, because you're just throwing static files across a wire, but generating a perfect snapshot of your build takes a bit of extra effort, and then you have to figure out what to do with the dynamic pieces and how to run them efficiently in the browser.

But one of the things I dislike the most about Jamstack is its inefficiency. Every change to your site needs to regenerate the entire site afresh, which means that on a 10,000-post blog, changing one route can take 30 minutes to an hour to rebuild. This is a real impediment to iterating quickly on your code, and it's something that needs to be addressed. The way it is addressed is by the same companies who are pushing Jamstack as a methodology offering platforms that solve some of these pain points, in what I'm calling the integrated edge. They don't just take the simple Jamstack approach; they also have some strategies around rebuilding or publishing only the things that have changed, or strategies that do something very similar to stale-while-revalidate by running your build as a server in the background. These are all really good, but for me it's not quite as good as what we had before with the purely standardized HTTP world. There are a few things that I think weaken Jamstack's appeal.

5. The Future: Dynamic Edge and Cloudflare Workers

Short description:

Vendors in the Jamstack world make you more locked in with every step. Moving from static to dynamic can have a significant performance disparity. Synthetic benchmarks show a huge difference between the rendering pipelines of static and dynamic. The future lies in the dynamic edge, where we don't need an origin server. Cloudflare Workers lead in this space, but it's a growing category. There's a one-megabyte script limit for Workers.

And the first is just how locked in all of these vendors make you. Everything that's made easier at every step in a Jamstack world has more and more baggage associated with it. For example, this is a couple of snippets that go into a Netlify config file, where you're doing simple redirects or setting a few headers on a few routes. This is something you could have done with a few lines of code if you had a Node server or any kind of web server, but that you now have to put into platform-specific config. It's not the worst thing in the world, but every time you reach for this to solve a problem, you're getting further away from something that you can port elsewhere.
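For illustration, a redirect of the kind described here in Netlify's config format, next to the equivalent few lines on a plain Express-style server (both sketches with made-up routes):

```toml
# netlify.toml (sketch): platform-specific config for a simple redirect
[[redirects]]
  from = "/old-path"
  to = "/new-path"
  status = 301
```

```js
// The same thing as portable code on any Express-style server:
app.get('/old-path', (req, res) => res.redirect(301, '/new-path'));
```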

The other thing is that there's been a tendency towards looking beyond Jamstack into this idea of a hybrid framework, where some pages are static and some pages are dynamic. But something I'd like people to be a bit more aware of is that when you're thinking about performance, and you're thinking about moving from static to dynamic, there can be a huge performance disparity between the two. I actually ran a benchmark this morning on a Next.js app running on Vercel. The data here I'll skip over, it's more for the slides later, but you can see the difference between a static and a dynamic route here. A dynamic route, in the best case, is still a few hundred milliseconds worse than static, but as you get to the higher percentiles, so 50% or 25% of your audience, response times get slower and slower, up to about a second. And this is time to first byte. This is before the rest of your app is even loaded; this is on top of everything else. This is a synthetic benchmark, and I encourage you never to trust a synthetic benchmark, or any benchmark that you haven't run yourself and understood the constraints of. Do run these tests yourself. The only point of the numbers here is to show that there is a huge difference between the rendering pipelines of the static and dynamic halves of Vercel's platform. And that's something that might not be obvious, considering how easy it is to move from one to the other.
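In that spirit, here's a minimal sketch of the kind of time-to-first-byte benchmark you could run yourself (my own script, not the one from the talk; the URL is a placeholder):

```js
// Measure time-to-first-byte for a URL n times and report percentiles.
const https = require('https');

function ttfb(url) {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    https.get(url, (res) => {
      res.once('data', () => {
        // Time until the first body byte arrives, in milliseconds.
        resolve(Number(process.hrtime.bigint() - start) / 1e6);
        res.destroy();
      });
    }).on('error', reject);
  });
}

async function run(url, n = 50) {
  const samples = [];
  for (let i = 0; i < n; i++) samples.push(await ttfb(url));
  samples.sort((a, b) => a - b);
  const pct = (q) => samples[Math.floor(q * (samples.length - 1))].toFixed(1);
  console.log(`p50=${pct(0.5)}ms p75=${pct(0.75)}ms p95=${pct(0.95)}ms`);
}

run('https://example.com/some-dynamic-route');
```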

All that is to bring us to the future, which is also the present, just not evenly distributed: the dynamic edge. So rather than looking at the CDN architecture and saying, well, let's make a static edge and send updates to it like we do in Jamstack, if we have a dynamic edge, then we don't really need an origin server at all. In fact, the whole point of a CDN was to run close to our users. So what if this was where we hosted the app? What if we hosted it everywhere in the world? I mean, it's just front-end code, after all. It's not massive databases and all those sorts of things. So this is a new category of tool. Now, I'm going to be talking about Cloudflare Workers, because Workers is the leader in this space, but this is a category that's going to keep growing. And while some of these competitors' products will do some of these things but not all of them, I expect them all to get better over time. You shouldn't think that you're just opting into one vendor, because this is just a new way of running code at the edge. For Workers, there are a few things to know. You have a one-megabyte script limit, because it's deployed to every location that Cloudflare has.
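For a sense of the programming model, this is roughly what a minimal Worker looked like at the time (the service-worker-style API; the routing is illustrative):

```js
// A complete Cloudflare Worker: this code runs in every edge location,
// so there is no single origin server behind it unless you add one.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname === '/') {
    return new Response('<h1>Rendered at the edge</h1>', {
      headers: { 'Content-Type': 'text/html' },
    });
  }
  // Anything else can fall through to an upstream fetch if you have one.
  return fetch(request);
}
```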

6. Running FABs on the Edge and Performance Benefits

Short description:

But on the other hand, running in all 200 locations, in a V8 container, not Node.js, brings a new mental model. No cold start impact with workers, as they spin up in the background while the TLS handshake is underway. The future is not limited to frameworks designed for caching static content. The project front-end application bundles (FABs) enable any application, regardless of its static or dynamic nature, to be compiled into a FAB and deployed to global edge locations. Testing compatibility and performance before adoption is crucial. Deploying FABs on the edge provides better performance characteristics than traditional deployment, especially with cloudflare workers.

But, on the other hand, it is running in all 200 locations. It runs in a V8 container, not Node.js. So it's JavaScript, but you need a new mental model. There are some tighter CPU and RAM limits, although for general-purpose stuff, it should be okay. One of the exciting things is that there's no cold start impact, because of this different architecture. When we saw that one-second bump on the Vercel graph, that was because of Amazon Lambda under the hood doing a cold start for a new request. On Workers, that cold start is so quick that it happens in the background while your TLS handshake is underway. So while your browser is negotiating with its nearest server to encrypt an SSL connection, the worker is spinning up in the background. By the time the request body actually comes through, your worker is ready to go, effectively making cold starts invisible.

Now, one version of this future is that these new edge frameworks come out, and these are all really cool. Nuxt.js isn't quite released for its edge serverless version yet, but it will be quite soon. These new frameworks offer cool new things, you switch your app across to them, have a great time and really great performance, and that's how the story goes. But for me, that's not quite good enough. I think we have an opportunity to do better. Because we're going from a model where the CDN is just capable of caching, and therefore just capable of static things, to being able to run your entire application. So why do we need to lock ourselves in to only frameworks that are designed for that? I've been trying to bring about this future, where anyone can take their code and port it across to running at the edge, with the project front-end application bundles. The architecture of a FAB is that any application, using any framework, no matter how static or dynamic it is, should be able to be compiled down to a FAB. The FAB is a single server entry point and a directory full of assets. The key is that FABs can be deployed to brand new global edge locations as well as anywhere else. So anywhere that can run JavaScript, anywhere that can run Node, is completely backwards compatible. The idea being that you can test whether your application right now is compatible with FABs, maybe use it internally, maybe test it on the edge and see how it performs, before deciding to really adopt it. Because there might be parts of your app that you want to rewrite in order to get the most performance, it's great to be able to verify that things are actually running really well.
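As a sketch of that shape (based on my reading of the spec at fab.dev; the exact file layout and render signature may differ):

```js
// A FAB is roughly: one server entry point plus a directory of
// fingerprinted assets, zipped together.
//
//   fab.zip
//   ├── server.js     <- runs anywhere JavaScript runs
//   └── _assets/      <- immutable static files, safe to cache forever
//       └── main.abc123.js

// server.js (sketch)
export const render = async (request, settings) => {
  const url = new URL(request.url);
  // Static assets are served directly by whatever hosts the FAB;
  // everything else gets a dynamic response generated here.
  return new Response(`<h1>Hello from ${settings.ENVIRONMENT || 'production'}</h1>`, {
    headers: { 'Content-Type': 'text/html' },
  });
};
```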

Now, I don't have too much time to go into this, but when a FAB gets deployed in a traditional way, it does run differently to when it runs on the edge. Some of the performance characteristics are much better running at the edge. In a traditional deployment, we've tried to design it so that it runs as well as it can, but the real gains come if you start using something like Workers. And I did another benchmark this morning of Next.js running inside a FAB on Cloudflare Workers. Again, these numbers are just for the slides. It's easier to see on the graph.

7. FABs Performance and Call to Action

Short description:

Static performance is pretty good. Dynamic performance is okay. NextJS is actually kind of the worst use case for FABs at the moment. It's kind of the most bloated or the most chunky, I should say, framework. But if you use FABs with other frameworks, you should see even better performance than this. And just to compare, we're seeing extremely similar static performance to Vercel, but the dynamic performance is clearly a lot better. And with that, I encourage you to check it out. I encourage you to go to fab.dev or the GitHub and have a look. Try compiling your app to it and see if it will run on Workers. Workers is available at workers.dev. If you go to fab.dev, there's a Discord there. You can get in touch with me and I'll help you, if I can. But other than that, thanks for having me. And time for questions.

Static performance is pretty good. Dynamic performance is okay. NextJS is actually kind of the worst use case for FABs at the moment. It's kind of the most bloated, or the most chunky, I should say, framework. It's really designed for Node.js; it's really not designed to run inside a FAB, so we have to do a lot in the compiler to make it compatible, which is why you see such a big disparity between static and dynamic performance here. It's something that we will be actively working on and trying to improve.

But if you use FABs with other frameworks, you should see even better performance than this. And just to compare, we're seeing extremely similar static performance to Vercel, but the dynamic performance is clearly a lot better. While there is still some slowness out in the long tail, at the 75th percentile you're looking at a worst case of 400 milliseconds, instead of a worst case of 1.5 seconds.

And with that, I encourage you to check it out. I encourage you to go to fab.dev or the GitHub and have a look. Try compiling your app to it and see if it will run on Workers. Workers is available at workers.dev. If you go to fab.dev, there's a Discord there. You can get in touch with me and I'll help you, if I can. But other than that, thanks for having me. And time for questions. Wow, Glen. That was fabulous. Amazing. Thank you for that great talk. And sorry for the lame joke. Glen, can you join me on stage so we can look at the results of your poll? Great. Hey, good to see you again.

QnA

CDN/Dynamic Edge Usage and Business Model

Short description:

43% have never used a CDN/dynamic edge service. 8% deploy full apps to the edge. The business model behind the open source solution started with solving a problem for the company Link, enabling app deployment and previews. The team is mainly the speaker, but there are increasing contributors, including someone working on porting Flareact to the solution.

So you asked the people: how much have you done with a CDN slash dynamic edge service? And well, 43% have said, I've never used one. And then the biggest runner-up is: I use Jamstack, so my site can basically be built statically and served from the edge. How do you feel about the results? Is this what you were expecting?

Yeah, pretty much. Although I am surprised that we've got 8% deploying full apps to the edge. That's great. I think a couple of years from now, that'll be higher and higher. But 8% already is awesome.

Yeah. That's fairly new. Before this talk, I had never even heard of it. So 8% then out of our audience, that's pretty amazing.

Yep. So hats off to you, I guess. So I would like to remind everyone, you can ask your questions in the Discord channel. And we're going to jump to the questions right now. The first one is actually something I was curious about when I was watching your talk. And yeah, I see this is a great open source solution that's now backed by a big company. But what's the business model behind it?

Yeah, well, it started as basically a way to solve a problem for the company I was working on, Link, which was that we needed a way for people to be able to deploy apps and for us to build previews of people's apps. And everything out there, like Docker or anything else, or just a zip file full of HTML and JavaScript, didn't fit right. With a static directory, you obviously can't do any server rendering, and I really wanted to see the future of server-side rendering come about. And Docker is really heavy; I mean, think about just how big a Docker image is for a few hundred K, a couple of meg, of HTML, CSS, and JavaScript. It's using a sledgehammer to crack a nut. And so, basically, it came about to solve that problem: how do we get customers onto Link in a way where we can build every commit and give them a URL to that commit forever, but without locking them into static? And from there it's grown to deploying it anywhere, compiling to it from any framework. These are all just extensions of that first idea, and it's kind of grown from there.

Awesome. And then, is the team just you now, or is it more people? Is there a team, or is it you?

It's mainly me, but we're getting more contributors all the time. I've been working with somebody recently to port Flareact to it, and part of the reason there is that even though Flareact can already deploy to Cloudflare Workers directly (that's what it's designed to do; it's basically Next.js designed for Workers), being able to put it into a FAB means it's much easier to test locally, it's much easier to review, you can put it through Link, you can deploy it to other infrastructure, and it makes that project much more portable.

Collaboration, Compatibility, and Integration

Short description:

Happy to see collaboration on the Next.js compiler. No stats available for Angular and Vue.js apps on fab.dev yet. Vue.js and server-side rendering are being reworked for compatibility with edge rendering. Angular projects compiled statically perform well with FABs. FABs excel in adding logic, reverse proxies, and previewing different environments. I can connect you with the Nuxt team. Akamai Edge Workers integration with Next.js is untested, but FABs should be compatible. Next.js compiling to a FAB is rapidly improving. Contact me on the FAB.dev Discord for more information.

So, I'm happy to see someone joining to try and bring that about. I'd love collaboration on the Next.js compiler, which is one of the more difficult targets for us; it's really being led by me, but we're picking up contributors all the time. Awesome, always nice to hear that people are willing to help out.

I have a question from Mike: are there also any stats available for Angular (not AngularJS) and Vue.js apps deployed using fab.dev? No, not too much. Basically because, well, Vue and server-side rendering is something I've looked into, but I haven't got a very successful story just yet. Nuxt.js, which is a framework built on Vue and server-side rendering, was having a bit of a rework before version 3, which I think is the version coming out next, and which is going to be directly compatible with edge rendering, like Workers. So in the Vue ecosystem, we largely paused that development until those sorts of changes have come out. That seems like it's really on the horizon, so I'm expecting to get back into it soon.

As for Angular, it's not an area that I've got a lot of experience with, but every Angular project I've seen has been compiled statically, and anything static is very, very fast with FABs. It's extremely optimized for serving from the edge, though static is also kind of easy for anything; you can use pretty much any CDN to do that. FABs really come into their own when you want to start sprinkling in a little bit of logic: you want to do reverse proxies of slash API to your back end, and then you want to preview that differently in staging and production. FABs make all of those use cases really easy, but the Angular part of your code should perform just fine.
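The reverse-proxy case might look something like this inside a FAB's server entry point (a sketch with illustrative names; the settings would be injected per environment at boot):

```js
// One build, many environments: /api is proxied to whichever backend this
// environment's settings point at (e.g. a staging vs production API URL).
export const render = async (request, settings) => {
  const url = new URL(request.url);
  if (url.pathname.startsWith('/api/')) {
    const target = settings.API_URL + url.pathname + url.search;
    return fetch(new Request(target, request)); // keeps method/headers/body
  }
  // Everything else falls through to the static assets for this commit
  // (how fall-through is expressed is an assumption in this sketch).
  return undefined;
};
```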

Awesome, just a little plug from my side. Do you have any connections with the Nuxt team? If you need help, one of my co-workers is a core developer there; I can hook you up. Oh, that'd be great. I have a chat room in Discord where we occasionally fire back and forth, but I haven't had a chat with them for a few weeks. I'd be very keen to re-open those lines of dialogue again. All right, I will get you in contact with him on Discord. Next question from Mark C: what advice or guidance would you have to leverage Akamai Edge Workers with Next.js? We are hosting our static assets on S3 and would love to see how we can integrate it with Edge Workers. That would be fantastic to talk about afterwards, because that's a target that I haven't tried to deploy to yet. So FABs should be perfectly compatible, but it's untested; I haven't had a test case for it. We didn't have any customers at Link who were looking to do that yet, so there was no impetus to do it. Next.js compiling to a FAB is something I'm working on at the moment that is rapidly improving and gearing up for a proper release in the next week. But the Edge Workers stuff should be completely compatible as far as I understand it. But yeah, it'd be great to collaborate on. Awesome. So Mark can contact you. Yeah. Awesome. If you go to FAB.dev, there's a Discord link on the homepage.

FABs and Performance Challenges

Short description:

You can use FAB with Node Express for dynamic routing on frontend apps. FAB ships with its own express server called FAB serve, which includes a sandbox VM to isolate the FAB runtime. FABs face performance challenges as they build on a standard that has moved away from Node.js. Converting frameworks like Next.js to FABs requires shim code and extra work, resulting in performance hits. FABs can be used as a reverse proxy to target environment APIs with a single build for the front part, allowing for a back-end for front-end approach.

You can join our Discord, and you can DM me there. All right, awesome. Chrissy is asking, does FAB work with Node Express for supporting dynamic routing on frontend apps? Yeah, so FAB, when you're working on it locally, ships with an Express server of its own, which is FAB serve. Inside that, there is a little sandbox VM that's used to isolate the FAB runtime, basically simulating serverless environments so that the FAB can't break out. There are certain restrictions on a FAB that make it more secure and make it possible to deploy to something like Workers. So you can use those packages from your own Express server, or you can call into them directly and just use FAB serve on the command line to spin one up on its own.

All right, great. Next question is: what are the biggest performance challenges in modern web solutions that you're facing? Yeah. So the biggest thing is that FABs are building on top of a standard where we've kind of left Node.js behind. If you look at a project like Next.js as an example, it's a perfectly fast web framework, but everything that's contributed to Next.js assumes that you have a Node.js environment, assumes you have Express, assumes that you don't really need to worry about bundle size. And converting that to a FAB means that you end up having to shim out a lot of those modules. You end up having to provide JavaScript implementations for Node APIs so that they'll run within the FAB, quite similar to compiling stuff to run in the browser. That means that even for kind of simple stuff, you can find that the performance is not as good as it should be. I mean, in the example I was showing you, with the two graphs of static versus dynamic things in Next.js on Workers, those two lines should be pretty much identical. But because of the different code paths through a FAB and the extra work that you have to do just to make Next.js compatible, it ends up incurring extra performance hits. Long term, we'll be able to solve a lot of those. But you'll find that some projects just won't be able to perform at the same level as others, because they have deeper connections and deeper assumptions that they're running on Node, and therefore require more shim code, more scaffolding, to get executed in a FAB.
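The kind of shimming described here is similar to what bundlers already do when targeting browsers. A hypothetical sketch (this is standard webpack configuration, not FAB's actual compiler internals):

```js
// webpack.config.js (sketch): substitute worker-safe implementations for
// Node built-ins so framework code can run inside a V8 isolate.
module.exports = {
  target: 'webworker',
  resolve: {
    alias: {
      path: require.resolve('path-browserify'), // pure-JS path implementation
      buffer: require.resolve('buffer/'),       // Buffer polyfill for non-Node
    },
    fallback: {
      fs: false, // there is no filesystem at the edge
    },
  },
};
```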

All right, thank you. We have time for one more question. The question is from Aston. Can we use FAB as a reverse proxy to target environment APIs with a single build for the front part? Yeah, that's it. So a single build produces a FAB. The FAB can have as little or as much server.js logic as you want. It can include a bunch of assets, and then dynamically the server component can be booted up with different environment variables to point it at different places. So the idea is that a single project can have what's called a back-end for front-end. It could be an API, it could be a set of proxy routes; it's maybe not your entire back-end, but it's the stuff that makes it possible to write your front-end. And that can iterate together. So every time you're deploying each commit, you're deploying your back-end for front-end as part of this project, as well as all the static assets you built for that commit. Awesome. That sounds really powerful, Glen. Thanks for your hard work and sharing this with us. Let's get that 8% up, right? Yeah, absolutely. Thanks for being with us and hope to see you again soon. Bye-bye.
