React Microfrontend Applications for TVs and Game Consoles

DAZN is the fastest-growing sports streaming service, and in this talk we will look at our experience building TV applications (for PS5, Xbox, LG, Samsung and other targets) with a microfrontend architecture. You can expect some time travel to see how it started and what we have at the moment, what makes TV development different from the web, and the techniques that allow us to share code between targets. You will also learn how we test our application with an in-house remote testing lab, as well as our deployment and release process.

25 min
17 Jun, 2022

Video Summary and Transcription

This talk discusses the architecture journey of a sports engagement platform, transitioning from a monolith to a microfrontend architecture. The Alib microfrontend architecture is introduced to manage the complexity of the catalog domain and enable efficient deployment and version management. The deployment and release process involves deploying packages with Alib and updating release metadata in the deployment dashboard. The React integration uses the Alib package hook, giving packages independent lifecycles. Shared code is used across different targets, and testing is done in a remote virtual lab. Focus management and key-moment detection in sports are also addressed.


1. Introduction and Architecture Journey

Short description:

First of all, let me introduce myself. I'm Denys, a principal engineer at DAZN. We change every aspect of how fans engage with sports, from content distribution to creating our own documentaries. We are available on various devices, including web, mobile, smart TVs, game consoles, and set-top boxes. Today, we'll focus on the HTML5/React targets. In 2016, we started with a monolith architecture for our Samsung app, but as we grew, we transitioned to a microfrontend architecture with clear domain boundaries. If you're interested in our journey from monolith to microfrontends, I recommend watching a talk by my colleague Max Gallo. We also introduced a deployment dashboard for our new microfrontends, allowing independent releases for each domain.

Cool. First of all, let me introduce myself. My name is Denys, and I'm a principal engineer at DAZN. If you want to check the code samples I'm going to refer to, you can use my GitHub account, and if you want to find any of my contacts, just follow my website.

At DAZN, we change every aspect of how fans engage with sports, from content distribution to creating our own documentaries and fully extended, augmented experiences with amazing features we've built that work in real time. What's interesting as well is that we are available on dozens of devices: web and mobile, of course, but also smart TVs, game consoles and set-top boxes, and it's those last three we're going to talk about over the next 20-25 minutes or so.

Before we continue, I just want to share the split we have at DAZN. We have two groups of targets. One group is HTML5, or we could call it the React targets, and the other one is Bespoke. Today we will be focusing on the first one. The other one is for native languages, and as you can see it covers many different targets: Samsung, LG, PlayStation, et cetera. There are lots of them.

Now I'd like to take all of you on an adventure and show how our architecture journey started, how we iterated over it and what we currently have. As you can imagine, it started with our Samsung app back in 2016, when DAZN had just launched. The application was actually created by a third-party company, and of course it began as a monolith, because a monolith is the obvious choice at that stage: it helps you grow fast at a small scale, and it works really well while your development team and the features in your application remain relatively small.

Later on, we stepped into a rapid growth phase where we have hundreds of engineers, and at this scale one of the most important things is to give teams autonomy. This is where we stepped in as an engineering company: instead of the third-party application, we rebuilt the application completely from scratch. It is a microfrontend architecture where we implemented a vertical split of the domains. Just to give you an idea, a domain is something that, at the time, we considered to have clear boundaries. For example, we have the authorization domain, which is responsible for the sign-in, sign-up and password recovery flows. We have the catalog domain, which is responsible for browsing the content. We have the landing page domain, and so on. I believe you get the idea.

If you're interested in the journey of how we iterated from the monolith to the microfrontends, I really recommend the talk my friend and colleague Max Gallo gave, I believe, last year. A really interesting journey. At the same time we also introduced a deployment dashboard for our new microfrontends, so each domain could be released independently; at that time, a single team was responsible for an entire domain. And everything was fine. It was a really big step forward from what we had previously.

2. Complex Catalog and Microfrontend Architecture

Short description:

We have a complex catalog with features like the player, key moments, and a panel with mini-apps developed by various teams. Managing this complexity becomes challenging as multiple teams share deployable artifacts and face release congestion. To address this, we introduced the Alib microfrontend architecture, which keeps the vertical domain split and complements it with a horizontal, feature-based split. This architecture enables features to be fetched on demand, improving deployment efficiency and version management.

But we continued to grow and reached the point where we have several hundred engineers, and some of the domains became way more complex than they were initially. Take the catalog: yes, it's the place where you browse the content, but it also has quite a lot of features. For example, the player: a complex, feature-rich package with adaptive bitrate, digital rights management and other things it needs to support. Later on, we added key moments. You see these dots right in the player? They represent the interesting moments of the game. We have this for various sports: football, boxing, MotoGP, and recently we introduced it for baseball. I highly recommend you check it out. We apply various techniques, including machine learning, to detect key moments in real time and plot them correctly on the timeline against the video, so you know exactly when a moment happens. As I said, it works for VOD content and for live content.

We also have the panel here, which is a sort of mini-app where lots of small things are integrated, all developed by various teams. One example is Formations: again a live feature, fully in sync with the video. If any formation change happens on the pitch while you're watching the game, it reflects that change. As you can imagine, we needed something different to manage all of this, because the catalog no longer belongs to one team; lots of teams now sit there. So we were getting the same problems. Multiple teams share a single deployable artifact, and we were getting release congestion: when you promote a change between your test environments (say staging, pre-prod and prod) and the player team tries to release their changes but gets stuck or finds an issue, every other team is blocked, because they have to wait until the issue is resolved or someone takes those changes out of the code, which is tricky in many cases. We also had opaque release statuses, even with the nice deployment dashboard I just demoed, where you deploy an entire chapter. You know the version of the entire chapter, but the version of a package inside it? Well, good luck finding that out. You need to either maintain something manually or develop a custom solution for it. Quite tricky.

So we iterated over it and introduced a new microfrontend architecture, which we call Alib, standing for Asynchronous Library loading. We still keep our vertically split domains, but we complement them with a horizontal, feature-based split, where features can be fetched on the parent's demand, and I'm going to demo now exactly how it works. Let's consider it first from the web perspective: what happens when you visit DAZN. The first thing, as you enter our website, is that we load the bootstrap. Bootstrap is a module responsible for fetching all further chapters. It also prepares our runtime environment, checks your auth status to know which chapter to load, and does some other things.
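To make that flow concrete, here is a minimal sketch of what a bootstrap module like this might look like. The URL shape, chapter names, and `mount` contract are all assumptions for illustration, not DAZN's actual code:

```ts
// A sketch of a bootstrap module: check auth status, then fetch the right
// chapter on demand. Every name here is hypothetical.
type Chapter = 'auth' | 'catalog';

const chapterUrl = (chapter: Chapter): string =>
  `https://cdn.example.com/chapters/${chapter}/index.js`;

async function checkAuthStatus(): Promise<boolean> {
  // Hypothetical: a real implementation would validate a session token.
  return Boolean(window.localStorage.getItem('session'));
}

export async function bootstrap(): Promise<void> {
  const isSignedIn = await checkAuthStatus();
  const chapter: Chapter = isSignedIn ? 'catalog' : 'auth';
  // Chapters are separate deployables, fetched only when they are needed:
  const mod = await import(/* webpackIgnore: true */ chapterUrl(chapter));
  mod.mount(document.getElementById('root'));
}
```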

3. AliB Deployment and Release Process

Short description:

As soon as bootstrap is done, it fetches the chapter, for example the catalog. Alib plays a crucial role in this process, with two steps: deployment and release. Deployment alone does not guarantee consumption; a release is necessary. Teams update metadata in the deployment dashboard, and the catalog fetches this metadata at runtime to determine what should be consumed.

Next, as soon as bootstrap is done, it fetches the chapter; let's say the catalog is fetched, and this is where Alib comes on stage. To better understand how Alib works, let's consider it from two sides: developer experience and runtime. On the developer experience side, teams have full autonomy to develop and deploy their packages whenever they're ready. And there is one very special thing: in Alib, we have two steps, deployment and release. If you have deployed something, it doesn't mean anyone will consume it; nothing is consumed until a release happens. Teams do the release by updating specific metadata from the front-end deployment dashboard. In the runtime, that means the catalog fetches the metadata about a package first, to see what has been released and what should be consumed right now, and then initializes it on demand.
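As a rough sketch of that runtime side, the resolution step might look something like the following; the `version.json` name, URL shape and metadata fields are assumptions for illustration:

```ts
// Hypothetical shape of the release metadata a team updates from the dashboard.
interface PackageMetadata {
  name: string;
  releasedVersion: string; // deployed versions exist on the CDN, but only this one is consumed
  url: string;             // location of the released bundle
}

// The consuming chapter resolves what to load at runtime, not at build time.
async function resolveReleasedPackage(name: string): Promise<PackageMetadata> {
  const response = await fetch(`https://cdn.example.com/packages/${name}/version.json`);
  if (!response.ok) throw new Error(`No release metadata for package "${name}"`);
  return (await response.json()) as PackageMetadata;
}
```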

4. Integration to React and TV Development Process

Short description:

The React integration uses the Alib package hook: you specify the desired package and options. This gives packages independent lifecycles and allows teams to own their minor and patch releases. The microfrontend architecture on TV devices is similar to the web, but with differences in the runner and the need for native modules. Each target has independent infrastructure, and changes can be promoted independently based on testing. The development process on TV starts in the browser, with options like storybooks and sandboxes for package development.

The integration on the React side looks like this slide. Basically, there is a hook, useAlibPackage. You specify the package you are interested in, and you specify some options, including the major version you want to pin to, to ensure that all the integrated changes are non-breaking for your current state. By that, we ensure that packages have a completely independent lifecycle. Teams have full ownership over their minor and patch releases. Major releases need integration changes: they will need an update to the catalog, and that's enforced at the code level. It fully supports the idea of autonomy and the "you build it, you own it" principle, which we truly believe in.
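As an illustration, a hypothetical rendering of that hook's usage might look like this; the exact signature, option names and the player package shape are assumptions, not the real API:

```tsx
import React from 'react';

// Hypothetical signature for the hook described in the talk.
declare function useAlibPackage<T>(
  name: string,
  options: { majorVersion: number } // pin the major; minors and patches flow in freely
): { pkg: T | null; error: Error | null };

type PlayerPackage = { Player: React.ComponentType<{ assetId: string }> };

function Watch({ assetId }: { assetId: string }) {
  // A major bump requires changing this integration code, enforced at the code level.
  const { pkg, error } = useAlibPackage<PlayerPackage>('player', { majorVersion: 3 });

  if (error) return <p>Something went wrong</p>;
  if (!pkg) return <p>Loading…</p>;
  return <pkg.Player assetId={assetId} />;
}
```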

Surprise, surprise: we have completely the same microfrontend architecture on TV devices. What's different on TV is that we have our runner (or APK, or call it whatever you like), which we try to keep as tiny as possible. In the best cases it's just a URL, in good cases it's just a manifest, and in some cases it also includes native modules, because if you need native modules, they have to be integrated there. Later on, bootstrap is loaded, then chapters, and then packages again, as you can see on the screenshot. But compared to the web, on TVs and game consoles all targets are different, and they differ by more than just the browser engine: there are different lifecycle events, different UI events and a different runtime. Something that's available on one target can simply be impossible on another. That's why, on TV, we delegate some extra responsibility to the bootstrap layer compared to the web one. In addition to the normal functionality, it also handles various target-specific configuration, including key mapping, because the left, up, right, down events are all different across targets; they're different on Xbox and completely different again on PlayStation. Also important: each target has fully independent infrastructure. That's crucial because, as you remember, they're all different. We maintain all our infrastructure as code, and if you're interested in how to build production-ready infrastructure for your front end with TypeScript and Terraform, I gave a talk at the last React Advanced, and there is a link to a sample similar to the infrastructure we have. There is a recording as well, so you can check it out.
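As a small illustration of the key-mapping problem, a normalization table might look like the sketch below. The Samsung and LG "back" key codes are the commonly documented platform values, but treat the whole table as an assumption rather than DAZN's actual mapping:

```ts
type Direction = 'up' | 'down' | 'left' | 'right' | 'enter' | 'back';

// Per-target key-code tables, normalized to one set of directions.
const KEY_MAPS: Record<string, Record<number, Direction>> = {
  browser: { 38: 'up', 40: 'down', 37: 'left', 39: 'right', 13: 'enter', 27: 'back' },
  samsung: { 38: 'up', 40: 'down', 37: 'left', 39: 'right', 13: 'enter', 10009: 'back' }, // Tizen back key
  lg:      { 38: 'up', 40: 'down', 37: 'left', 39: 'right', 13: 'enter', 461: 'back' },   // webOS back key
};

function normalizeKey(target: string, keyCode: number): Direction | null {
  return KEY_MAPS[target]?.[keyCode] ?? null;
}

// Usage: document.addEventListener('keydown', e => {
//   const dir = normalizeKey(currentTarget, e.keyCode);
//   if (dir) handleNavigation(dir);
// });
```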

Similar to the previous point, teams should have the autonomy to release to the targets where their changes have been tested, right? Because the targets are all different, it's possible to promote changes completely independently: you don't need to go to all targets at once, you can release just where you've already tested. But what about the development process on TV? Development always starts in your browser, so you can use Chrome, Firefox, your choice. For most of the packages we have storybooks, for some we have our own sandboxes. It's up to each team to decide what suits their development needs.

5. Shared Code and Remote Virtual Lab

Short description:

We share state and UI code between targets, even when the UI looks different. React contexts let us use components in a common code base and enforce their interface with TypeScript. Feature flags and assigned audiences help toggle components on or off based on conditions. We also have target-specific modules and legacy module swapping. Testing is crucial, as shared code may not work out of the box. We have a remote virtual lab with a web app and end-to-end tests, accessing devices through an API. A Raspberry Pi controls the camera, the TV and the remote. The interface allows occupying a device and controlling it via a web interface or a remote. The devices are located in different locations, not in your home.

We also have state and UI code mostly shared between targets. We even have cases where code is shared between web and TV targets: even though UI-wise they look completely different, state-wise they can be very similar, and you may want to leverage that. In order to share, we use different techniques.

One of these is React contexts, which are very powerful and straightforward. The idea is that the common code base uses whatever components are available in its context. On the target level, you define which components to pass in; your common code stays agnostic to whatever you pass, and it can enforce the interface of those components with TypeScript.
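A minimal sketch of this technique, assuming hypothetical names (TargetComponents, useTargetComponents) rather than DAZN's real ones:

```tsx
import React, { createContext, useContext } from 'react';

interface TargetComponents {
  // The common code only knows this interface, enforced by TypeScript;
  // each target decides which concrete component implementations to pass in.
  Button: React.ComponentType<{ label: string; onPress: () => void }>;
}

const TargetComponentsContext = createContext<TargetComponents | null>(null);

export const TargetComponentsProvider = TargetComponentsContext.Provider;

export function useTargetComponents(): TargetComponents {
  const components = useContext(TargetComponentsContext);
  if (!components) throw new Error('TargetComponents context is not provided');
  return components;
}

// Shared code stays agnostic to the target-specific implementation:
export function PlayButton({ onPress }: { onPress: () => void }) {
  const { Button } = useTargetComponents();
  return <Button label="Play" onPress={onPress} />;
}
```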

Feature flags with assigned audiences are another popular and very powerful technique. With a feature flag you can toggle a component on or off, straightforwardly; with audiences, complementary to this, you can define precisely under which conditions it should be toggled. Say you have an incident because you hadn't spotted a memory leak in your fancy new feature on certain targets: you can specify exactly which targets, and even which versions, to toggle it off for. For sure, we also have target-specific modules. As a legacy, we also have module swapping. I'm not a big fan of this approach, but module swapping has been with us for a while and it's still there. Its main downside is that you need to provide a module with exactly the same interface, without any help from TypeScript or anything else.
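A sketch of audience-based toggling; the flag shape and the exact-version matching here are assumptions for illustration, not a real flag client:

```ts
interface KillAudience {
  targets: string[];   // e.g. ['ps5', 'lg']
  versions?: string[]; // specific app versions to switch the feature off for
}

interface FeatureFlag {
  enabled: boolean;
  killFor?: KillAudience; // audience for which the feature is forced off
}

function isFeatureEnabled(
  flag: FeatureFlag,
  runtime: { target: string; appVersion: string }
): boolean {
  if (!flag.enabled) return false;
  const kill = flag.killFor;
  if (!kill) return true;
  const targetMatches = kill.targets.includes(runtime.target);
  // When no versions are listed, the kill switch applies to all versions of the target.
  const versionMatches = !kill.versions || kill.versions.includes(runtime.appVersion);
  return !(targetMatches && versionMatches);
}
```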

And I just want to remind you: on TVs all targets are different, so although we share lots of code, there is no guarantee the shared code will work out of the box, so we need to test it. Well, using a cinema room like the one on this slide is kind of cool, but it requires lots of space at home, and you'd at least need good air con over the summer. To be fair, we started with this approach: we had sort of cinema rooms, and we still have those in our office, but we came up with something better for remote working. On this slide you see a very simplified, high-level version of the architecture of our remote virtual lab, with two entry points: one is a web app and the other is end-to-end tests. The web app is really useful for exploratory and manual testing, while end-to-end tests can access our remote devices through the API. The API layer is responsible for authorization, queuing, and proxying requests to the Raspberry Pi service. The Raspberry Pi has a shared responsibility: it controls the camera in front of the TV to record it, controls the TV itself (toggling it on and off, restarting it), and controls the remote. Let me show you what the interface looks like. We start on the page and occupy the device; the device is booked for us, so while I'm testing, no one else can use it. As you can see, I can use the web interface, and I can use this fancy remote to control it. For some of the targets we even implemented a debugger. For those who worked with Cordova many years ago, before Chrome introduced the remote debugger, this plugin may be familiar; it has been deprecated for a while, and we maintain our own version of it. I also want to bring your attention to the fact (it's not loaded... here we go) that those devices are located in different places. Some of them are in Poland, others in the UK, and definitely not in your home.
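To give a feel for the test-side entry point, an end-to-end test might drive a lab device through the API layer with a call like this; the endpoint path, payload and token handling are pure assumptions:

```ts
type RemoteKey = 'up' | 'down' | 'left' | 'right' | 'enter' | 'back';

async function pressRemoteKey(deviceId: string, key: RemoteKey): Promise<void> {
  // Hypothetically, the API layer authorizes the request, queues it, and
  // proxies it to the Raspberry Pi service attached to the device.
  const response = await fetch(`https://lab.example.com/api/devices/${deviceId}/remote`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LAB_TOKEN}`,
    },
    body: JSON.stringify({ key }),
  });
  if (!response.ok) throw new Error(`Key press failed: ${response.status}`);
}
```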

6. Testing Changes and Smart TV Focus Management

Short description:

To ensure flexibility when testing changes, we implemented an overlay for non-prod environments that allows fetching specific package versions using GitHub run IDs. Teams integrate these IDs into their push pipelines, and Alib respects the overrides and fetches the corresponding packages. We also leverage template projects and generators to simplify project setup. When developing for smart TVs, focus management can be challenging. We have an in-house solution, and there are open-source projects that address this. Options include manually maintaining a focus map, a distance-based and priority-based solution, and a declarative approach using higher-order components. Pointer navigation is also supported, with slightly different behavior on TVs with pointer input methods.

We need the flexibility to test our changes: we can't just reinstall a new APK every time we make a change, because that would take ages. So for non-prod environments, we implemented a really useful overlay, which you open with a specific key combination. It lets you type the GitHub run ID of your specific push, and Alib will respect that run ID for the package and fetch exactly that version. It's possible because, on the Alib side, we have an extra step for non-prod environments: before Alib fetches the version JSON to determine which package should be downloaded, it checks whether there is a local storage override, and if there is, it respects it. Teams are responsible for wiring the GitHub run ID into their push pipeline, which uploads their package to the dev infrastructure, and Alib is responsible for respecting those overrides and fetching the packages on demand.
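A sketch of that override check; the storage key format and URL shapes are assumptions for illustration:

```ts
async function resolvePackageUrl(name: string, isProd: boolean): Promise<string> {
  if (!isProd) {
    // The overlay writes the GitHub run ID typed by the tester into local
    // storage; it takes precedence over the released version.json.
    const runId = window.localStorage.getItem(`alib-override:${name}`);
    if (runId) return `https://dev-infra.example.com/packages/${name}/${runId}/index.js`;
  }
  const meta = await fetch(`https://cdn.example.com/packages/${name}/version.json`)
    .then((response) => response.json());
  return meta.url as string;
}
```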

As you can see, there are a lot of things teams need to remember, and you may wonder how to start a new project with such a complex setup. For this, we are, of course, leveraging the power of template projects. We have generators created specifically for it, which spin up your project with the pipelines already set up and the best techniques we share.

When developing for smart TVs, there is one special thing about them: input control. You don't have a mouse or touch; you may have a pointer, but that's a different story. Typically, focus changes in reaction to key events: up, down, left, right. For this we have an in-house solution, which is not yet open-sourced, but I want to share the ideas behind addressing focus management challenges, plus some open-source projects you can use if you're interested in the topic. One of the most straightforward approaches is a manually maintained focus map (sketched below). It doesn't scale at all, but it's very suitable for small apps: you create an object that is a linked list and iterate over it imperatively, switching the active node. Another approach, which Netflix uses, is a distance-based and priority-based solution, where you compute the closest nodes based on priorities (left-to-right, right-to-left, and so on) to decide what should be focused next. And the one we use is more declarative: you compose higher-order components like vertical list, horizontal list and grid, and you make a node focusable by just adding a hook, useFocus in our case, which tells you whether the node is focused and whether it is selected; that's all. All these solutions can be combined, so check them out and try implementing your own. For some TVs, like LG, there is also another input method: the pointer, via the Magic Remote. Pointer navigation should be familiar to most of you, as it's quite similar to a mouse, but on TV it behaves slightly differently, because users can move the pointer and scroll at the same time, and we need to support that as well.
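Here is the manually maintained focus map mentioned above, as a minimal sketch; the node ids and shape are assumptions for illustration:

```ts
type Direction = 'up' | 'down' | 'left' | 'right';

interface FocusNode {
  id: string;
  neighbors: Partial<Record<Direction, string>>;
}

// A small hand-written map: each node links to its neighbors per direction.
const focusMap: Record<string, FocusNode> = {
  playButton: { id: 'playButton', neighbors: { right: 'infoButton', down: 'rail-0' } },
  infoButton: { id: 'infoButton', neighbors: { left: 'playButton', down: 'rail-0' } },
  'rail-0':   { id: 'rail-0',   neighbors: { up: 'playButton' } },
};

let activeId = 'playButton';

// On each key event, imperatively switch the active node.
function moveFocus(direction: Direction): string {
  const next = focusMap[activeId].neighbors[direction];
  if (next) activeId = next; // stay put when there is no neighbor that way
  return activeId;
}
```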

7. Focusable Nodes and Key Moments Detection

Short description:

You can combine the different solutions for making nodes focusable and implement your own. Pointer navigation on TVs with magic remotes behaves slightly differently, so designers need to account for it. For performance: avoid unnecessary paint and layout work, render only what is needed, optimize resource delivery, introduce a CDN and priority caching, and use virtualization. Denys then explains the detection of key moments in sports, noting that different techniques are used depending on the sport. For football, finding the exact moment the game started is crucial, and data providers help in plotting the data.

With pointer navigation, if you want to support horizontal scrolling, you even need to think about a different UI, so ask your designers not to forget about it, as you can see on the slide. It's quite tricky.

Performance-wise, I've given a full talk on performance separately, but here is some general advice, because devices such as smart TVs are really low on memory and CPU in most cases, so we need to think hard about rendering and startup performance. Rendering performance matters during user interaction, when your app is in front of the user, and one of the most obvious pieces of advice is to avoid paint and layout work when you can. That sounds easy, but it often isn't. Render only what is really needed. Nadia Makarevich gave the talk right before me, so I think you all saw it: avoid useless and heavy renders. For startup: optimize your resource delivery, introduce a CDN, be closer to your customers, introduce priority caching, do asset optimization and load only what you need. I almost forgot virtualization: it is very powerful, so render only what you need on screen at the moment. Thank you so much, I hope you enjoyed it. If you have any questions...
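As a quick illustration of virtualization, the core of any windowing approach is computing which rows intersect the viewport and rendering only those; the numbers here are illustrative:

```ts
// Compute the window of rows that intersect the viewport (fixed row height).
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  total: number
): { first: number; last: number } {
  const overscan = 2; // render a couple of extra rows for smoother scrolling
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(total - 1, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { first, last };
}

// e.g. 10,000 rows of 80px in a 720px viewport scrolled to 4,000px:
// visibleRange(4000, 720, 80, 10000) → { first: 48, last: 61 } — 14 rows rendered, not 10,000.
```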

Thank you so much, Denys, that was amazing. So much content. We had a little discussion about some of your topics, and even as I was listening I thought: I'm definitely going to go and watch some of the other talks you referenced, because it sounded like so much amazing content. Why don't we jump in? This is one I was wondering about as well, because you talked a little bit about it. There was some functionality you showed on DAZN, those moments inside the match. Yes, key moments. So how do you detect those? Do you use a library, or is something else doing that? You mentioned machine learning or AI? Exactly. It depends on which type of sport we're talking about, because we use different techniques. For example, football is more straightforward, and it was actually the first sport for which we introduced key moments. On the football side, we just need to find the exact moment the game started, because we need the zero position, and all the streams are different: we have different broadcasters and different partnerships, even though the game is the same. Then we use a data provider that gives us the data, and we can plot it.

8. Key Moments Detection

Short description:

We use machine learning to detect key moments in sports like baseball and synchronize them correctly. For boxing, we not only detect the start of each round but also detect fighter names, which are not always revealed beforehand. It's quite interesting and impressive.

It's more or less straightforward. With baseball, for example, we do the detection ourselves with machine learning, identifying the moments and synchronizing them so they're plotted correctly. That's amazing. That also sounds like such a complex problem to figure out. Boxing, for example, is really interesting: we detect the start of every round, but we also detect fighter names, because fight cards are not always revealed prior to the fight, I mean for the entire night. It's quite interesting. That's impressive. I'll remember that the next time I watch a boxing match. Thank you.
