Micro-Frontends Performance and Centralised Data Caching


Common myths about Micro-Frontends hold that they are bad for performance, or that developers adopting this architectural style care more about developer experience and organizational issues than about the user experience. The reality is altogether different. Micro-Frontends are not inherently bad for performance and, as is often the case in software development, making the best use of the technology depends on correct implementation. This talk demonstrates how Micro-Frontends can make your applications faster and more resilient while keeping the benefits of independent deployments.

27 min
22 Oct, 2021

Video Summary and Transcription

Micro-frontend architecture can help scale applications by applying microservices principles to the front-end. Micro-frontends can be good for performance, depending on the implementation. They can reduce bundle sizes, avoid duplicated dependencies, and preserve independent deployments. Module Federation's shared API is a powerful feature for managing dependencies. Micro-frontends can improve developer experience and address organizational and scaling issues without sacrificing user experience.


1. Introduction to Performance and Micro-Frontends

Short description:

Hello, everybody! Today's topic is performance. I am a senior engineer at American Express, and I work on the micro-frontend framework called OneApp. It's open-source and used by about 2,000 developers. With millions of users, performance is crucial. Micro-frontend architecture can help scale applications by applying microservices principles to the front-end. Independent deployments are a key benefit. However, there are many misconceptions and myths around micro-frontends.

Hello, everybody, and welcome to my presentation. Today's topic is a very interesting one: performance. My name is Ruben. I am a senior engineer at American Express, and that is my Twitter handle if you want to give me a follow.

What do I do at American Express? Well, I am part of a team that maintains a micro-frontend framework called OneApp. It's like a meta-framework that we use at American Express. This framework is open-source and used by about 2,000 developers, so there is a 2,000-developer team at Amex using this framework, and our applications are used by customers worldwide.

Now the question is, when you have so many users, what is the first thing that comes to your mind? Well, performance. We need to make sure that all those millions of users get great performance. Now, before we start with the performance section, I'm just going to get this out of the way, because we get this quite a lot. I've seen this tweet so many times: are micro-frontends a thing yet? Or the other one: why can't we just use components? Let me answer that very briefly before we get into the performance section. Well, micro-frontends, yes, they are a thing. And they've been around for a while; it's not like they are brand-new technology. Basically, what they can do is help you scale your applications by applying the same principles that you have for microservices to the front-end.

Now, a quick disclaimer here. I am not going to try to convince you that micro-frontends are great and that you should start using them tomorrow. That is not why I'm here today. But if you use micro-frontends, you will get really nice benefits. The main one, in my opinion, is independent deployments. So, if you have a large team or multiple teams, they can deploy independently. They can have their own repositories, have everything set up their own way, and deploy different parts of the web application independently. The thing with OneApp is that we don't have to restart the server: you can deploy a new version and it automatically updates itself without a server restart. If I didn't convince you with this, I brought some pizza, if anyone wants some pizza. I'm not trying to convince you here! Now, there is a problem with micro-frontends, a big, big problem. The problem is that there are many misconceptions and myths around micro-frontends.

2. Micro-Frontends and Performance

Short description:

The biggest myth about micro-frontends is that they are bad for performance. This misconception stems from the belief that micro-frontends involve mixing multiple frameworks on the same page. However, this is a false myth. While it is possible to use multiple frameworks on the same page, it is not necessarily a good idea. The Strangler pattern is a recommended approach for migrating from an old application to a new one, allowing for incremental transformation of the UI. Micro-frontends can be beneficial for performance, depending on the implementation.

The biggest myth by far is that they are bad for performance. Yes, that is the biggest myth about micro-frontends, and let me just... there's one reason for this, and I've found why people think that micro-frontends are bad for performance. The reason is that people think micro-frontends are all about mixing libraries on the same page, so we have React, and Angular, and Vue, and Svelte, and Infinity Dash on the same page. Is that what micro-frontends are about? Well, let me tell you this is a false myth, and that's why people think that micro-frontends are bad for performance.

The first thing people find about micro-frontends is that it's about React, Angular, and everything else on the same page. Let me just ask: is that a good idea? My friend Ken thinks it's a good idea. It's not a good idea. You can use React, and Angular, and all these frameworks on the same page, but just because you can doesn't mean you should. Although, there is one case, one specific use case, where it might be a good idea. It's not great, but it is a valid use case for having multiple frameworks on the same page. And this is the Strangler pattern.

The Strangler pattern is the best thing you can do if you are migrating from an old application, like a legacy application, to a new one. How many of you here had to rewrite your old AngularJS application in React? Me! It's very common. So this is a very common use case: for the last five, six, seven years it's been, AngularJS is not good, let's replace it with React. What do we do? Do we completely stop development and say to the product managers, I'm sorry, we can't do any more features because AngularJS is bad and we can't maintain it, we need to replace it with React? The first thing they will tell you is: what? No! You can't do that! I mean, probably some people do it. What the Strangler pattern helps you do is incrementally transform your application. You don't have to do a big-bang rewrite and release of the whole application. What you can do is start applying the micro-frontend pattern and the Strangler pattern to uplift different pieces of the UI.

The obvious question is, well, can we just start route by route? Yes, you can, but sometimes you have multiple pieces on the same page that you need to change, and sometimes one page is a lot of work; it's like the entire application is one page. With micro-frontends, what you can do is start with a very small part of that page that is going to use React while the rest keeps using Angular, and then at some point, and this is the key, you remove Angular. You don't keep both on the same page, because performance! Yes, we shouldn't do that. So this is the only case where it might be useful to have multiple frameworks on the same page. So micro-frontends can be good for performance, with a disclaimer there, like a star, fine print: it depends on your implementation. As with everything in architecture, it depends on how you implement it.
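
To make the Strangler pattern concrete, here is a minimal, hypothetical sketch of mounting a single React micro-frontend inside a page that the legacy AngularJS app still owns. The element id and the OffersWidget component are made up for illustration; this is not OneApp's API or the talk's demo code.

```js
// Strangler pattern sketch: the legacy app renders the page, and only one
// small widget is replaced by a React micro-frontend mounted into a
// placeholder node. Ids and component names here are hypothetical.
import React from 'react';
import { createRoot } from 'react-dom/client';
import { OffersWidget } from './OffersWidget';

const mountPoint = document.getElementById('new-offers-widget');
if (mountPoint) {
  // Only this island is React; the rest of the page is still AngularJS.
  createRoot(mountPoint).render(<OffersWidget />);
}
```

Once every widget on the page has been uplifted this way, the legacy framework can be removed entirely, which is the step that keeps performance in check.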

3. Benefits and Considerations of Micro-Frontends

Short description:

Micro-frontends can help reduce bundle sizes by being independent units with their own JavaScript bundles and code splitting. Avoid duplicated dependencies, including multiple copies of React on the same page. Use Module Federation and its shared API so micro-frontends only load their own copies of dependencies when rendering in isolation.

So if you have an incorrect implementation of micro-frontends, they're going to be really bad for you. If you have a good implementation, they're going to be good for performance. So what I'm going to do next is show you first how they can be good, and also what to avoid and how to solve the performance issues that you might encounter if you use micro-frontends.

Great. So the first thing is a nice freebie. This one is for free, kind of. Micro-frontends can help you reduce your bundle sizes, and yes, we still worry about bundle sizes. We shouldn't ship so much JavaScript to our users if they don't need it. Because micro-frontends are independent units, their own thing, they will have their own JavaScript bundles, and you get code splitting out of the box. You don't have to worry about code splitting and where to split, because every micro-frontend will have its own bundle. Those bundle sizes, for example, we try very, very hard to keep small for performance reasons. So, this is a free one.
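
As a rough illustration of the "code splitting out of the box" point, here is a hedged sketch of a host page lazy-loading one micro-frontend's bundle on demand. The module specifier 'header/Header' is a made-up Module Federation remote, not the OneApp loading mechanism.

```js
// Each micro-frontend ships as its own bundle, so loading it on demand is
// just a dynamic import. 'header/Header' is a hypothetical federated remote.
import React, { Suspense, lazy } from 'react';

const HeaderMicroFrontend = lazy(() => import('header/Header'));

export function AppShell() {
  return (
    <Suspense fallback={<div>Loading header…</div>}>
      <HeaderMicroFrontend />
    </Suspense>
  );
}
```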

One thing to avoid, and that's next, is duplicated dependencies. How many of you have had this problem where all of your applications have dependencies that need to be shared, and you've done things like externals to make sure you can access the same code and keep your bundle sizes small? We should avoid duplicated dependencies: if everything is using the same library, why does each one carry its own copy? This is especially bad, again going back to bundle size, for people with slow internet connections and mobile devices; we all know this. We should avoid duplicated dependencies, and also React: we can't have React duplicated on the same page. We can't have two copies on the same page. I don't know why. Actually, I do. It's something to do with the scheduler, where the copies clash with each other, and all that really clever stuff. We shouldn't have multiple copies of React on the same page. But if the micro-frontends are independent, how do we do this? There are many ways you can do it, but this one is awesome. There is a new thing called Module Federation, and it has a really nice API called the shared API. When I saw this, I was like, wow, I've been waiting for this for so long, because I want my micro-frontends to get their own copy of a dependency if they are rendering in isolation. If you're rendering in isolation, you need your own copy, because you don't have any container, any application, providing the dependency, so you need your own copy of React. Now let's say we have a second module, and that second module, or micro-frontend, also needs React and my shared library or whatever.

4. Shared API and Module Federation

Short description:

When rendering micro-frontends in isolation, each one can request its own copy of React. However, if multiple micro-frontends are rendering in the same context, the shared API will ensure that only one copy of React is loaded and shared among them. Module Federation, especially the shared API, is a powerful feature that allows for independent deployments and avoids dependency clashes. Check out the documentation for Module Federation to learn more.

What happens is, if I'm rendering in isolation, fine, give me my copy of React. If there's another micro-frontend rendering in the same context, what the shared API does is say, hold on a minute, I've already loaded this copy of React, let me reuse it for any other micro-frontend or module that requests the same dependency. So Module Federation is really good, even just for this API. If you don't want to adopt the whole thing, and you don't want to over-complicate things with micro-frontends, this is a really cool feature of Module Federation to take a look at. It's the shared API. It's like externals, but beyond. And there is also something about scopes, where you can have a scope A and a scope B, and the dependency will have a scope so you don't have clashes. Take a look at the documentation for Module Federation. It's brilliant.
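
Here is a hedged, minimal webpack 5 sketch of the shared API being described. The remote name, exposed module and version ranges are invented for the example; this is not the OneApp or Amex configuration.

```js
// webpack.config.js: minimal Module Federation sketch (hypothetical values).
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'userModule',
      filename: 'remoteEntry.js',
      exposes: { './User': './src/components/User' },
      shared: {
        // singleton: load one copy of React at runtime and share it with any
        // other micro-frontend on the page, avoiding duplicate schedulers.
        react: { singleton: true, requiredVersion: '^18.2.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.2.0' },
      },
    }),
  ],
};
```

When this module renders in isolation it falls back to its own copy of React; when a host has already provided a compatible copy, the shared scope hands that one out instead.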

5. Distributed Data Fetching and Micro-Frontends

Short description:

Now, in the second part of my talk, I will discuss distributed data fetching. Micro-frontends are mini applications that load their own data, which keeps them independent and avoids coupling. This ensures portability and reusability. However, loading data individually in each micro-frontend can lead to multiple API requests and poor performance. To address this, we implemented a distributed shared cache that eliminates redundant API calls, built as a basic fetching library.

Now, the second part of my talk is distributed data fetching. And let me just stop here. I know what you're thinking. You're looking at this like, what is this? We are used to doing this: I load my data on my page, I'm using Next.js or any other meta-framework with its getServerSideProps or equivalent, that does all the server-side rendering, and I pass the data down to my components. You could also fetch data in your components. But this is more or less the pattern.

Now with micro-frontends... oh, what's this? This is different. Every single micro-frontend is loading its own data. And remember, you can deploy these independently; they're like mini applications. Before you shoot me and say this is not right, let me explain why each micro-frontend loads its own data: because they need to be independent. They need all the data to render, and they need to avoid having data passed down from other micro-frontends, because that will cause tight coupling. Rule number one, if you don't want to have a hard time: do not couple your micro-frontends. I have a really nice blog post about best practices, and that's the biggest one. If you start coupling your micro-frontends, you lose all the benefits and end up with the hated distributed monolith, which is a mess; nobody knows what it does, and if you change something, it breaks everywhere else. That is completely the opposite of what micro-frontends are about. So we keep them independent, which means they load their own data, so we can reuse them everywhere, and that's one of the benefits of micro-frontends: they are portable and reusable. We can take one, put it anywhere, and it loads all the data it needs. It doesn't need to worry about context, or where the data is coming from, or whether a container has to provide it, because it loads its own data.

Now, what's wrong with this? If every single micro-frontend is loading its own data, what is going to happen? Anybody? We're making so many API requests, look! The user micro-frontend renders the user data in the header. The dashboard micro-frontend loads the user data again, so we have three network requests. Is this bad for performance? Of course it is! So we came up with a solution at Amex, which was: okay, let's do a distributed shared cache. How is this different from a normal cache? Well, as with the Module Federation example, if you're rendering your micro-frontend in isolation it gets its own cache, but there is a shared place, so if any of the micro-frontends has already requested the same data, I'm not going to make that API request, I'm going to give you the cached response. Pretty standard caching stuff. So, we created this very, very basic fetching library.

6. Implementing a Simple Solution

Short description:

The solution we implemented is a simple wrapper around fetch with React hooks and a caching layer. It was the simplest solution for our needs and resolved the issue of multiple API calls on the same page.

It's very, very... it's not basic, it's simple. It's basically a wrapper around fetch, it's got React hooks, and that's it. Well, actually that's not quite it: it also has the caching layer. It was the simplest solution we could find; we could have over-engineered it and tried many different solutions. This is not the only solution, by the way; this is just what we did. I can feel that some of the questions will be, oh, can you use this or that library instead? Yes, you can. But at Amex we felt we needed something very simple, so we wrapped fetch in hooks and put a shared cache on top. It fixed our problem with multiple API calls on the same page.
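
To give a feel for what such a library can look like, here is a hedged sketch of a hook that wraps fetch and reads from a cache shared across micro-frontends. This is not the actual Amex library; the hook name and the assumption that a single copy of this module is shared (for example via Module Federation's shared API) are mine.

```js
// A minimal "wrapper around fetch with hooks and a caching layer".
// Assumes one copy of this module is shared by all micro-frontends, so the
// module-level Map acts as the shared cache. useCachedFetch is hypothetical.
import { useEffect, useState } from 'react';

const cache = new Map();

export function useCachedFetch(url) {
  // If another micro-frontend already fetched this URL, start with its data.
  const [data, setData] = useState(() => cache.get(url));

  useEffect(() => {
    if (cache.has(url)) {
      setData(cache.get(url)); // cache hit: no second network request
      return undefined;
    }
    let cancelled = false;
    fetch(url)
      .then((res) => res.json())
      .then((json) => {
        cache.set(url, json);
        if (!cancelled) setData(json);
      });
    return () => {
      cancelled = true;
    };
  }, [url]);

  return data;
}
```

A real implementation would also dedupe in-flight requests and handle errors and invalidation, which this sketch deliberately leaves out.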

7. Demo of Micro-Frontend Setup

Short description:

Now, I have a demo to show you. We have a basic micro-frontend setup with a user micro-frontend and a header micro-frontend. Micro-frontends are commonly used for the header and footer. They don't have to be complicated and can even start with iframes. In our setup, the user micro-frontend is also rendered inside the films micro-frontend, without passing data down. Let's see what happens when we make a request: we have five different API requests.

Now, this is the part where I always say, people, before we go to the conclusion, I have a demo, but it wasn't working 20 minutes ago. So, if it breaks... I always tell people you should record your demos, live coding is no good, and here I am doing live coding! Okay.

So, can you all see? Yep, that's perfect. We have a nice, very basic micro-frontend setup here. We have our user micro-frontend, which usually renders in the header, or you have a header micro-frontend. By the way, the most common use case for micro-frontends that I've heard of is header and footer. You might not need them for anything else; that's probably your case. There are more advanced cases for large-scale companies. If your company has a team doing the header, and the header is so complicated, and the footer is so complicated, they have so many different things, and you want to make sure that all your other teams have the same version of the header and the footer, micro-frontends would be a good idea, and then keep it simple. It doesn't have to be complicated. This is another myth about micro-frontends: they don't have to be complicated. You can even start with iframes. I'm not saying go and use iframes, but it can be just iframes. So we have this very basic setup here with the header with some user data, and I have some films, and you can see that the user micro-frontend is rendering inside the films micro-frontend again, because why not? I want to reuse it there, and I cannot pass the data down from here to there, because that would tightly couple my user micro-frontend with my films micro-frontend. We all know that's bad.

Let's see what happens. If I make this request... and it doesn't work. Oh, dear. Hold on. Let's restart the server. This is what I was worried about. Yes, I think it's the Wi-Fi. There we go. Let me just do that again. Clear the network. What is that? That's no good. We have one, two, three, four, five different API requests.

8. API Requests and Server Side Rendering

Short description:

We have multiple API requests for the same data in different micro-frontends. To address this, we use a React context that works across multiple environments, ensuring a single true context for caching. With the caching provider in place, the API requests drop to two. The main strength of our framework is server-side rendering, which is challenging for micro-frontends. Enabling server-side rendering involves making the same API call on the server and the client.

We have one, two, three, four, five different API requests. Yeah. I don't want that, because all my micro-frontends are loading the same data and making separate requests on the client.

Let's go to VS Code. The first thing we have is a provider. I don't want to go into details; this is just a demo. It's basically a React context. But the difference with this React context is that it works across multiple environments, so even though different micro-frontends have their own context, we make sure that there's only one true context that keeps all the caching in sync.
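
As a rough, hypothetical sketch of that "one true context" idea: the cache instance is parked on a global, so even if several independently built bundles each carry a copy of this provider module, they all read and write the same store. The names DataCacheProvider and __ONE_TRUE_CACHE__ are invented; this is not the demo's provider.

```js
// One shared cache instance for every micro-frontend in the same runtime,
// exposed through a React context. All names here are hypothetical.
import React, { createContext, useContext } from 'react';

const sharedCache =
  globalThis.__ONE_TRUE_CACHE__ || (globalThis.__ONE_TRUE_CACHE__ = new Map());

const DataCacheContext = createContext(sharedCache);

export function DataCacheProvider({ children }) {
  return (
    <DataCacheContext.Provider value={sharedCache}>
      {children}
    </DataCacheContext.Provider>
  );
}

export function useDataCache() {
  return useContext(DataCacheContext);
}
```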

So what happens when I do that? Let's clear the fetch network tab. That's much better. Okay. We only have two API requests. So we're going to get the user, we're going to get the films. I already have the user data so just give me the cache. That's good. Now this is basic stuff. The main strength of our framework is server side rendering.

And one thing that is really hard with micro-frontends is server-side rendering. That is really, really hard. Getting server-side rendering working with micro-frontends is absolutely hard; even the creator of Module Federation has been trying to make it work with Next.js, and it's really hard. I think he managed to do it. So let's enable server-side rendering. Again, I'm not going to explain this in detail. This is just our version of getServerSideProps, where you get your data on the server. I'm making the same API call on the server and on the client. Let's see what happens. Oh, nothing happened! Hold on. Actually, we aren't expecting anything to happen, because there is no fetch on the client.

9. Benefits of Micro-Frontends

Short description:

If you don't believe me, let's do the proper test. Our micro-frontend framework keeps the cache between the server and the client in sync, allowing for efficient data retrieval. Micro-frontends can help improve developer experience by addressing organizational and scaling issues. Fixing bugs in production becomes easier because the application is decoupled and isolated. Micro-frontends can benefit both organizational issues and developer experience without compromising user experience.

If you don't believe me, let's do the proper test. This is how you check the served HTML. There it is: server-side rendering. Here's Luke Skywalker. It's working. And the trick is, our micro-frontend framework keeps the cache between the server and the client in sync, so when the server fetches the data, the client says, oh, the data is already there; I already have my cache, I don't need to do a fetch request, let's just use the cached data.
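
A hedged sketch of how that server/client cache sync can work, continuing the hypothetical cache from the earlier snippets: the server serializes whatever it fetched into the HTML, and the client seeds its cache from it before hydrating, so the browser never repeats the request. The window.__DATA_CACHE__ key is made up; this is not OneApp's mechanism.

```js
// server.js: after rendering the app to a string (appHtml), embed the cache.
// (A real implementation must escape this payload to avoid XSS.)
const serializedCache = JSON.stringify(Array.from(cache.entries()));
const html = `
  <div id="root">${appHtml}</div>
  <script>window.__DATA_CACHE__ = ${serializedCache};</script>
`;

// client.js: seed the shared cache before hydration, so useCachedFetch
// finds the data already present and skips the network request.
for (const [url, data] of window.__DATA_CACHE__ || []) {
  cache.set(url, data);
}
```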

The best thing about this is that you can switch between front-end and back-end data fetching, which is a really cool thing. That was my demo. Now, a conclusion. When someone asks me why all this micro-frontend stuff is so complicated and says they don't want to do it, my answer is that micro-frontends solve a very particular problem. If you don't have that problem, why are you trying to fix it? Fine. If you do have this problem, well, micro-frontends can help you improve developer experience.

Now, there is a counter-argument. Hold on, so you're saying this is just about developer experience? Yes, sort of: organizational issues and scaling issues. But at the same time, if you fix your developer experience and your organizational issues so your teams can deploy independently, they'll be more efficient. It means that if you have to fix a bug in production, you don't have to worry that the whole application is going to break; you just need to make sure that all the tests pass. You just apply that patch to your micro-frontend, and because it's isolated and decoupled, you can deploy it and fix the bug while nobody else even knew there was a bug. All the other teams carry on doing their own thing. So it helps with organizational issues as well as developer experience. But, as we saw, micro-frontends don't have to be bad for user experience.

Q&A

User Experience and Shared Dependency Updates

Short description:

They can also be really good for user experience if we make them work for us and make all these performance issues go away. Would you like to grab a seat and let's go through some questions. I'm going to pick the top three rated questions in the slider. If you update something like a component library, maybe after rebranding, how can you ensure all teams update to a shared dependency at the same time? You can fix that problem by having a team that can deploy the shared library and by using multiple versions with micro-frontends. Micro-frontends have their own versioning and can be deployed independently.

They can also be really good for user experience if we make them work for us and make all these performance issues go away.

This is me. Thank you very much. Yes. Awesome. Thank you so much.

Would you like to grab a seat, and let's go through some questions. I'm going to pick the top three rated questions in the slider. There are many questions. Yeah, there were so many, and we won't be able to get through all of them. So here's the thing: if you're here, make sure you grab him afterwards. And if not, go to the spatial chat and you'll be able to chat later in one of those rooms.

I'm going to go with the top rated question. It's from Adam Turner: if you update something like a component library, maybe after rebranding, how can you ensure all teams update to the shared dependency at the same time? Okay. So you can fix that problem. Basically, what you should do is have a team that can deploy that shared library, and you can have multiple versions with micro-frontends. This is something where people are like, what? Micro-frontends have their own versioning. So you have 1.2.2, and if my orchestration says I want 1.2.2 and I'm not ready to update to 1.2.3 yet, that's fine. They are kind of like dependencies, but they are not dependencies, because you can deploy them at runtime. So you don't have to go and npm install micro-frontend 1.2.3 and make sure it passes. No, you don't have to do that.

Loading and Maintaining Micro-Frontends

Short description:

1.2.3 is ready for production. How do you maintain coding style and conventions over multiple micro front-ends and teams? It depends on your company. Have a DevOps team to take care of component libraries and ensure coding practices are followed. Each micro front-end can deploy independently. Performance issues with multiple frameworks on the same page may still apply with compiled frameworks like Svelte.

1.2.3 is ready for production, it's been tested, and now I want to load it into my page. Done. We could go into more detail... That's a great answer.
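
To illustrate the runtime-versioning idea from that answer, here is a purely hypothetical module map: the host reads it at runtime to decide which version of each micro-frontend to load, so shipping 1.2.3 is an entry update rather than an npm install and redeploy. The shape, names and URLs are invented and are not OneApp's module map format.

```js
// moduleMap.js: hypothetical runtime module map consumed by the host shell.
export const moduleMap = {
  header: {
    version: '1.2.2',
    url: 'https://cdn.example.com/header/1.2.2/remoteEntry.js',
  },
  'user-profile': {
    version: '1.2.3', // promoted after its own tests pass; host stays untouched
    url: 'https://cdn.example.com/user-profile/1.2.3/remoteEntry.js',
  },
};
```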

The next question is from Levi: how do you maintain coding style and conventions over multiple micro-frontends and the teams that are working on them? This is an organizational one, and it is a good one, because, again, micro-frontends are an answer to organizational issues. There is a technical answer, but the organizational answer is that it depends on your company. I recommend your company have what we call a DevOps team that takes care of the component libraries, makes sure that everybody follows the same style, and creates templates, so every single micro-frontend comes from a generator or template. Someone is making the decisions, and you just make sure that you follow the same coding practices that you do with anything else. The only difference is that these teams can do their own thing and deploy independently; they don't have to talk to anybody to deploy a new version.

And last but not least, this one's from Jaefen: you mentioned performance issues with multiple frameworks on the same page. Would you say this still applies with compiled frameworks like Svelte? Oh, I'm not a Svelte expert. I've tried it, but I'm not sure. The main reason there is a problem with having multiple frameworks on the same page is that you are obviously loading a lot more JavaScript. So if you load Angular and Vue... With Svelte, if it's not loading as much JavaScript, well, we'd need to look at it; I'm not sure how they're going to clash.

Optimizing Angular and Svelte

Short description:

You can optimize an old Angular application when upgrading. If Svelte doesn't add client-side code that increases application size, it can be a potential solution. Thank you for attending the talk and participating. The speaker's favorite question was about versioning, which is not commonly asked. If it was your question, please come to the front or contact us online for a chance to get a T-shirt.

It's not like you can't do it. I mean, you can. You could do some optimisations to get around it, and as I mentioned, there is a use case for it. So when you have Angular as an old application, well, you need to optimise for that. There is no way around it because you are upgrading. So yeah, you could optimise it and if Svelte doesn't have any client-side stuff that is going to make your application huge, then potentially, yes.

Thank you so much. I really enjoyed that talk. I'm a big fan of live coding, so I'm happy that you did it as well. Give him a round of applause. But one more thing before you leave the stage: you have to pick your favourite question, so that person can come to the front, or maybe find us online, and get a T-shirt. Which was your favourite question? Okay, I think the versioning one is interesting, because people don't ask that one very often. Which one? The versioning one. So if that was your question, come and find us at the front at the next break, or if you're online, we will contact you. Thank you so much.

Check out more articles and videos

We constantly curate articles and videos that might spark people's interest, skill us up, or help build a stellar career.

React Advanced Conference 2022
25 min
A Guide to React Rendering Behavior
Top Content
React is a library for "rendering" UI from components, but many users find themselves confused about how React rendering actually works. What do terms like "rendering", "reconciliation", "Fibers", and "committing" actually mean? When do renders happen? How does Context affect rendering, and how do libraries like Redux cause updates? In this talk, we'll clear up the confusion and provide a solid foundation for understanding when, why, and how React renders. We'll look at:
- What "rendering" actually is
- How React queues renders and the standard rendering behavior
- How keys and component types are used in rendering
- Techniques for optimizing render performance
- How context usage affects rendering behavior
- How external libraries tie into React rendering
Remix Conf Europe 2022
23 min
Scaling Up with Remix and Micro Frontends
Top Content
Do you have a large product built by many teams? Are you struggling to release often? Did your frontend turn into a massive unmaintainable monolith? If, like me, you’ve answered yes to any of those questions, this talk is for you! I’ll show you exactly how you can build a micro frontend architecture with Remix to solve those challenges.
React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Top Content
Too much JavaScript is getting you down? New frameworks promising no JavaScript look interesting, but you have an existing React application to maintain. What if Qwik React is your answer for faster applications startup and better user experience? Qwik React allows you to easily turn your React application into a collection of islands, which can be SSRed and delayed hydrated, and in some instances, hydration skipped altogether. And all of this in an incremental way without a rewrite.
React Summit 2023
23 min
React Concurrency, Explained
Top Content
React 18! Concurrent features! You might’ve already tried the new APIs like useTransition, or you might’ve just heard of them. But do you know how React 18 achieves the performance wins it brings with itself? In this talk, let’s peek under the hood of React 18’s performance features: - How React 18 lowers the time your page stays frozen (aka TBT) - What exactly happens in the main thread when you run useTransition() - What’s the catch with the improvements (there’s no free cake!), and why Vue.js and Preact straight refused to ship anything similar
JSNation 2022
21 min
The Future of Performance Tooling
Top Content
Our understanding of performance & user-experience has heavily evolved over the years. Web Developer Tooling needs to similarly evolve to make sure it is user-centric, actionable and contextual where modern experiences are concerned. In this talk, Addy will walk you through Chrome and others have been thinking about this problem and what updates they've been making to performance tools to lower the friction for building great experiences on the web.

Workshops on related topic

React Summit 2023
170 min
React Performance Debugging Masterclass
Top Content
Featured Workshop (Free)
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured Workshop (Free)
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant start-up performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project, how routing works with layouts, how the demo application fetches data and presents it to the user in an editable form, and finally how to use authentication: all of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
Workshop (Free)
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
JSNation Live 2021
113 min
Micro Frontends with Module Federation and React
Workshop
Did you ever work in a monolithic Next.js app? I did and scaling a large React app so that many teams can work simultaneously is not easy. With micro frontends you can break up a frontend monolith into smaller pieces so that each team can build and deploy independently. In this workshop you'll learn how to build large React apps that scale using micro frontends.
React Advanced Conference 2023
148 min
React Performance Debugging
Workshop
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
Workshop (Free)
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, know that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies of loading web fonts without causing big layout shifts.
Table of Contents:
- What's CLS and how is it calculated?
- How can fonts cause CLS?
- Font loading strategies for minimizing CLS
- Recap and conclusion