React Microfrontend Applications for TVs and Game Consoles


DAZN is the fastest-growing sports streaming service, and in this talk we will look at our experience building TV applications (for PS5, Xbox, LG, Samsung and other targets) with a micro frontend architecture. Expect some time travel: how it started and what we have at the moment, what makes TV development different from the web, and the techniques that allow us to share code between targets. You will also learn how we test our application with an in-house remote testing lab, as well as our deployment and release process.

Transcription


Cool. First of all, let me introduce myself. My name is Denis, and I'm a principal engineer at DAZN. If you want to check any of the code samples I'm going to refer to, you can use my GitHub account, and if you want to find any of my contacts, just follow my website.

At DAZN, we change every aspect of how fans engage with sports, from content distribution to the creation of our own documentaries and fully extended, augmented experiences with amazing features we built that work in real time. What's interesting as well is that we are available on dozens of types of devices: web and mobile, of course, but also smart TVs, game consoles and set-top boxes, and it's these last three we're going to talk about for the next 20-25 minutes or so.

Before we continue, I just want to share the split we have at DAZN. We have two groups of targets. One group is HTML5, or we can probably call them React targets, and the other is bespoke. Today we will be focusing on the first group; the other is more for native languages. As you can see, it covers many different targets, such as Samsung, LG, PlayStation, et cetera; there are lots of them.

Now I'd like to take all of you on an adventure and show how our architecture journey started, how we iterated over it, and what we currently have. As you can imagine with something new, back in 2016, when DAZN had just started, it was a web application created by a third-party company, and of course it started with a monolith architecture, because a monolith is the obvious choice for something new: it helps you grow fast enough at a small scale. It works really well while your development team and the features in your application are relatively small. But later on, we stepped into a rapid-growth phase where we had hundreds of engineers, and at this scale, one of the most important things is to give teams autonomy. This is where we stepped in as an engineering company: instead of relying on the third-party company, we rebuilt the application completely from scratch with a micro frontend architecture based on vertically split domains.

Just to give you an idea of the domains: a domain is something with clear boundaries, or at least that's what we thought at the time. For example, we have the authorization domain, which is responsible for the sign-in, sign-up and password-recovery flows. We have the catalog domain, which is basically responsible for browsing the content. We have the landing page domain, which is responsible for seed pages, and so on; I believe you get the idea. If you're interested in this journey, how we actually iterated from the monolith to the micro frontend, I really recommend the talk my friend and colleague Max Gallo gave, I believe, last year. A really interesting journey.

At the same time, we also introduced a deployment dashboard for our new micro frontends, so the domains can be released independently. At that time, a single team was responsible for an entire domain, and everything was well; it was a really big step forward from what we had previously. But we continued to grow. We reached the point where we have several hundred engineers, and some of the domains became way more complex than they were initially. The catalog itself, yes, is the place where you browse the content, but it also has quite a lot of features. For example, the player.
The player is quite a complex feature and package in itself: adaptive bitrate, digital rights management, and the other things it has to support. Later on, we got key moments. You see these dots right in the player? They represent the interesting moments of the game. We have this for various sports, for football, for boxing, for MotoGP, and recently we introduced it for baseball; I highly recommend you check it out. We apply various techniques, including machine learning, to detect the moments in real time and plot them on the timeline correctly according to the video, so you know exactly when a moment happens. As I said, it works for VOD content and for live content. We also have the match panel here, which is a sort of mini app with lots and lots of mini features integrated, all developed by various teams. Formations, for example: again a live feature, fully in sync with the video, so if any formation change happens on the pitch while you're watching the game, it reflects that change.

As you can imagine, we needed something different to manage all of this, because the catalog no longer belongs to one team; there are lots of teams that now sit there. So we were getting the familiar problems. Multiple teams share a single deployable artifact, and you get release congestion: if you're promoting changes between your test environments (you probably have staging as pre-prod, and a prod environment) and, let's say, the player team tries to release their changes but gets stuck or finds an issue, every other team is effectively blocked with their release, because they have to wait until the issue is resolved or someone takes those changes out of the code, which is tricky in many cases. We also had opaque release statuses: even though, as I just demoed, we have a nice deployment dashboard where you can deploy a chapter, an entire chapter, you know the version of the entire chapter, but what version of a package is inside it? If you'd like to find out, you need to either maintain something manually or develop a custom solution for it. Quite tricky.

So we iterated over it and introduced a new micro frontend architecture, which we call ALEP, standing for Asynchronous Library Loading. We still keep our vertically split domains, but they are complemented with a horizontal, feature-based split, where features can be fetched by the parent on demand, and I'm going to demo now exactly how it works.

Let's consider it first from the web perspective: what happens when you visit DAZN? The first thing, as you enter our website, is that we load the bootstrap. The bootstrap is a module responsible for fetching all further chapters; it also prepares our runtime environment, checks your auth status to know which chapter to load, and does some other things. Next, as soon as the bootstrap is done, it fetches the chapter. Let's say the catalog is fetched, and this is where ALEP comes on stage.

To better understand how ALEP works, let's consider it from two sides: the developer experience and the runtime. On the developer experience side, teams have full autonomy to develop and deploy their packages whenever they're ready. And there is a very special thing: in ALEP we have two separate steps, deployment and release. If you have deployed something, it doesn't mean anyone will consume it; nothing is actually consumed until a release happens.
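To make that deployment/release split concrete, here is a minimal sketch of what such per-package release metadata could look like. The shape and field names are my illustration, not DAZN's actual schema:

```ts
// Illustrative only: many builds can be deployed, but consumers resolve
// only the version that has been explicitly released.
type PackageMetadata = {
  name: string;
  // every artifact a team's pipeline has uploaded
  deployed: { version: string; artifactUrl: string }[];
  // pointer flipped from the deployment dashboard, keyed by major range,
  // e.g. { "2": "2.4.1" }; nothing is consumed until it points somewhere
  released: Record<string, string>;
};

// Releasing is then just a metadata update, not a re-deploy:
function release(meta: PackageMetadata, version: string): PackageMetadata {
  if (!meta.deployed.some((d) => d.version === version)) {
    throw new Error(`${version} was never deployed`);
  }
  const major = version.split(".")[0];
  return { ...meta, released: { ...meta.released, [major]: version } };
}
```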
Teams can do a release by updating specific metadata from the frontend deployment dashboard, which means that at runtime the catalog first fetches metadata about the package to see what has been released and what it should actually consume, and the package is then initialized on demand. On the React side, the integration looks like this slide: there is a hook, useAlepPackage, where you specify the package you're interested in and some options, including the major version up to which you're interested, to ensure that all the changes pulled in will be non-breaking for your current state (a sketch of what such usage could look like follows at the end of this section). By that, we ensure that packages have a completely independent lifecycle. Teams have full ownership over their minor and patch releases; major releases need integration changes, so they require an updated catalog, and that is enforced at the code level. It fully supports the idea of autonomy and the "you build it, you own it" statement, which we truly believe in.

And now, surprise, surprise: we have exactly the same micro frontend architecture on TV devices. So what's different on TV? We have our runner, or APK, or call it whatever you like, which we try to keep as tiny as possible. In the best cases it's just a URL; in good cases it's just a manifest; in some cases it also includes native modules, because if you need native modules, they have to be integrated there. Later on, the bootstrap is loaded, then the chapters, and then again the packages, as you can see on the screenshot, through ALEP.

But compared to the web, on TVs and game consoles all targets are different, and they differ in more than just browser engines: there are different browser engines, different lifecycle events, different UI events, and different runtimes. Something that's available on one target can simply be impossible on another. That's why, on TV, we delegate some extra responsibility to the bootstrap layer compared to the web one. In addition to its normal functionality, it also supports various transformations, such as the configuration of various things including key mapping, because the left, up, right, and down events are all different across targets: it's one thing on Xbox and completely different on PlayStation.

Also important: each target has fully independent infrastructure. That's crucial; as you remember, they're all different. We maintain all our infrastructure as code, and if you're interested in how to build prod-ready infrastructure for your frontend with TypeScript and Terraform, I gave a talk at the last React Advanced. There is a link to a sample similar to the infra we have, and there is a recording as well, so you can check it out. Compared to the previous setup, teams have the autonomy to release to exactly those targets where the change has been tested, because, again, they're all different. So it's possible to promote changes completely independently; you don't need to go all in immediately on all targets, you can release just where you've already tested.

But what about the development process on TV? Development, from the very beginning, always starts in your browser: Chrome, Firefox, your choice. For most of the packages we have Storybooks; for some, we have our own sandboxes. It's up to each team to decide what they like for their development needs.
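Here is the promised sketch of the React integration. useAlepPackage itself isn't open source, so the endpoint paths, metadata shape, and helper below are my hypothetical reconstruction of the mechanism: resolve the released version for a pinned major version at runtime, then load the bundle on demand.

```ts
import { useEffect, useState } from "react";

// Hypothetical loader: ask the metadata endpoint what has been *released*
// for this major range, then dynamically import that bundle by URL.
async function loadAlepPackage<T>(name: string, opts: { major: number }): Promise<T> {
  const res = await fetch(`/alep/${name}/versions.json`);
  const meta: { released: Record<string, string> } = await res.json();
  const version = meta.released[String(opts.major)];
  // The bundle is fetched and evaluated only when someone asks for it.
  return import(/* webpackIgnore: true */ `/alep/${name}/${version}/index.js`);
}

// Hypothetical hook mirroring the idea behind useAlepPackage.
export function useAlepPackage<T>(name: string, opts: { major: number }): T | null {
  const [pkg, setPkg] = useState<T | null>(null);
  useEffect(() => {
    let cancelled = false;
    loadAlepPackage<T>(name, opts).then((loaded) => {
      if (!cancelled) setPkg(loaded);
    });
    return () => {
      cancelled = true;
    };
  }, [name, opts.major]);
  return pkg;
}

// Usage in a chapter: minors and patches flow in without redeploying the
// catalog; a new major requires changing this integration point.
// const player = useAlepPackage<PlayerModule>("player", { major: 3 });
```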
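And on the key-mapping responsibility mentioned above, a minimal sketch of the kind of normalization a bootstrap layer could do, assuming a per-target table of raw keyCodes. The values are examples (Tizen's back key is commonly reported as 10009 and webOS's as 461); always confirm against each target's documentation.

```ts
type LogicalKey = "up" | "down" | "left" | "right" | "enter" | "back";

// One raw-keyCode table per target; the values here are examples only.
const KEY_MAPS: Record<string, Record<number, LogicalKey>> = {
  tizen: { 38: "up", 40: "down", 37: "left", 39: "right", 13: "enter", 10009: "back" },
  webos: { 38: "up", 40: "down", 37: "left", 39: "right", 13: "enter", 461: "back" },
  // ...a table per target (PlayStation, Xbox, ...)
};

// Installed once by the bootstrap; the rest of the app only ever sees
// logical keys, never target-specific keyCodes.
export function normalizeKeys(target: string, onKey: (key: LogicalKey) => void): void {
  const map = KEY_MAPS[target] ?? {};
  window.addEventListener("keydown", (event) => {
    const key = map[event.keyCode];
    if (key) {
      event.preventDefault();
      onKey(key);
    }
  });
}
```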
We also have state and UI code mostly shared between targets. We even have cases where code is shared between web and TV targets, even though UI-wise they look completely different; state-wise they can be very similar, and you may want to leverage that. To share, we use different techniques. One of them is React context (createContext), which is very powerful and straightforward. The idea is that the common code base uses components which are available in its context, so at the target level you can define which components to pass in; your common code stays agnostic to whatever you pass in, and you can enforce the interface of those components with TypeScript (I'll show a sketch of this a bit later). Feature flags with assigned audiences are another popular and very powerful technique: with feature flags you can toggle your component on or off, straightforward, and with audiences, complementary to this, you can gradually define the conditions under which you want to toggle it on or off. Let's say you have an incident, say you hadn't caught a memory leak in your fancy new feature on certain targets; you can specify even the versions and targets for which you want to toggle it off. For sure, we have target-specific modules. And as a legacy, we also have module swapping. I'm not a big fan of this approach, but module swapping has been with us for a while and it's still there. The main downside is that you need to provide a module with exactly the same interface, without any help from TypeScript or anything else.

And I just want to remind you: on TVs, all the targets are different. So we have lots of shared code, but that doesn't guarantee the shared code will work out of the box, which means we need to test. Utilizing a cinema room like the one on this slide is kind of cool, but it requires lots of space at home, and at the very least you'll need good air conditioning over the summer. To be fair, we started with this approach: we had sort-of cinema rooms, and we still have those in our office, but we came up with something better for remote working.

On this slide you see a very simplified, high-level version of the architecture of our remote virtual lab, with two entry points: one is a web app and the other is end-to-end tests. The web app is really useful for exploratory and manual testing, while the end-to-end tests can access our remote devices through the API. The API layer is responsible for authorization, queuing, and proxying requests to the Raspberry Pi service. The Raspberry Pi, in turn, has the shared responsibility of controlling the camera in front of the TV, to record it, and controlling the TV itself: toggling it on and off, restarting it, and controlling the remote.

Let me show you what the interface looks like. We start on the page and occupy the device; now the device is booked for us, so while I'm testing, no one else will be using it. As you can see, I can use just the web interface, and I can use this fancy remote to control it. For some of the targets we even implemented a debugger. For those who worked with Cordova many years ago, before Chrome introduced its remote debugger, this plugin may be familiar; it's been deprecated for a while, so we maintain our own version of it. I also want to bring your attention to the fact... why is it not loaded? Here we go.
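Here is the sketch of the context technique mentioned above. The names are illustrative, not our actual API; the point is that shared code renders whatever the target injects, and TypeScript enforces the contract:

```tsx
import React, { createContext, useContext } from "react";

// The contract the shared code relies on; every target must satisfy it.
type TargetComponents = {
  Button: React.ComponentType<{ label: string; onPress: () => void }>;
};

const TargetContext = createContext<TargetComponents | null>(null);

function useTargetComponents(): TargetComponents {
  const components = useContext(TargetContext);
  if (!components) throw new Error("TargetContext.Provider is missing");
  return components;
}

// Shared, target-agnostic code: it has no idea which Button it renders.
export function PlayButton({ onPress }: { onPress: () => void }) {
  const { Button } = useTargetComponents();
  return <Button label="Play" onPress={onPress} />;
}

// At a target's entry point (say, the LG build), the concrete
// implementations are injected once:
// <TargetContext.Provider value={{ Button: LgButton }}>
//   <App />
// </TargetContext.Provider>
```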
Those devices are located in different places. Some of them are in Poland, others in the UK, and definitely not in your home, and we need the flexibility to test our changes. We can't just reinstall the APK every time we make a change; that would take ages. So for non-prod environments we implemented a really interesting overlay, which you open with a specific key combination, and it lets you type the GitHub run ID of your specific push. ALEP will then respect this run ID for the package and fetch that exact version. This is possible because, on the ALEP side, we have an extra step for non-prod environments: ALEP fetches the versions JSON to determine which package version should be downloaded, and it checks whether there are any local storage overrides; if there are, it respects them. So teams are responsible for integrating the GitHub run ID into their push pipeline, which uploads their package to the dev infra, and ALEP is responsible for respecting those overrides and fetching the packages on demand (a sketch of this resolution step follows at the end of this section).

As you can see, there are a lot of things teams need to remember, and you may worry about how to start a new project with such a complex setup. For this, we are of course leveraging the power of template projects: we have generators created specifically for it, which spin up your project with the pipelines already set up and the best techniques we share.

Observability-wise, I'm not going to go deep, because we have almost run out of time, but I really want to recommend the talk Konstantinos from DAZN gave today. If you missed it, just watch the recording; it's a really good one, and observability is a crucial thing as well.

Now let's switch to the focus manager. When developing for smart TVs, there is one special thing about them: input control. You don't have a mouse or touch. You may have a pointer, but that's a different story. Typically, focus changes in reaction to key events: up, down, left, right. For this we have an in-house solution, which is not yet open source, but I want to share the idea of how you can address focus management challenges, plus some open source projects you can use if you're interested in this topic. The most straightforward approach is probably a manually maintained focus map. It doesn't scale at all, but it's very suitable for small apps: you just create an object which is a linked list, and you iterate over it in a really imperative way, switching the active node (see the sketch just below). Another approach, which Netflix utilizes, is a distance-based and priority-based solution, where you compute the closest node based on priorities (left-to-right, right-to-left, and others) to decide what should be focused next. And the one we utilize is more declarative: you specify higher-order components like vertical, horizontal, and grid lists, and you make a node focusable just by adding a hook, useFocus in our case, which tells you whether it's focused, selected, and so on. All these solutions can be combined, so check them out and try to implement your own.
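Here is the manually maintained focus map idea as a sketch. Our in-house focus manager is not open source, so this is only the concept: a linked structure of nodes walked imperatively on every key event.

```ts
// Each focusable node links directly to its neighbours.
type FocusNode = {
  id: string;
  focus: () => void; // e.g. add a "focused" class or call element.focus()
  up?: FocusNode;
  down?: FocusNode;
  left?: FocusNode;
  right?: FocusNode;
};

let active: FocusNode | null = null;

export function setInitialFocus(node: FocusNode): void {
  active = node;
  node.focus();
}

// On every (already normalized) key event, follow the link and switch the
// active node. Purely imperative: it doesn't scale, but it's fine for
// small apps.
export function handleKey(key: "up" | "down" | "left" | "right"): void {
  const next = active?.[key];
  if (next) {
    active = next;
    next.focus();
  }
}
```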
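And the promised sketch of the non-prod version resolution with overrides. The storage key and paths are hypothetical; the mechanism is what matters: a run ID written by the hidden overlay wins over the released version from the versions JSON.

```ts
// Non-prod only: the debug overlay writes a GitHub run ID into local
// storage, and the loader prefers it over the released version.
async function resolvePackageUrl(pkg: string, major: number): Promise<string> {
  const override = localStorage.getItem(`alep-override:${pkg}`); // run ID or null
  if (override) {
    // Push pipelines upload every build to dev infra keyed by its run ID.
    return `/dev-infra/${pkg}/${override}/index.js`;
  }
  // Otherwise fall back to the normal flow: whatever versions.json says
  // has been released for this major range.
  const meta = await fetch(`/alep/${pkg}/versions.json`).then((r) => r.json());
  return `/alep/${pkg}/${meta.released[String(major)]}/index.js`;
}
```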
For some TVs, like LG, there is also another input method, a pointer: the Magic Remote. Pointer navigation should be familiar to most of you, as it's quite similar to a mouse. But with pointer navigation we get slightly different behavior on TV, because users can move the pointer and scroll at the same time, and we need to support that as well. Another thing: if you want to support horizontal scrolling with the pointer, you need to think about a different UI, so ask your designers not to forget about it, as you can see on the slide. It's quite tricky.

Performance-wise, I've given a full talk about performance separately, so this is just general advice: on devices such as smart TVs, we're really low on memory and CPU in most cases, so we really need to think about rendering and startup performance. Rendering performance matters during user interaction, when you get frame drops. One of the most obvious pieces of advice here is to avoid paint and layout when you can. It sounds easy, but it isn't in many cases. Render only what is really needed; Nadia Makarevich gave a talk right before me, so I think you all saw it. Avoid useless and heavy renders. For startup: optimize your resource delivery, introduce a CDN, be closer to your customers, introduce priority caching, do asset optimization, and load only what you need. Oh, and I almost forgot about virtualization. It is very powerful: render only what you need on screen at the moment.

And thank you so much. I hope you enjoyed it. If you have any questions...

Thank you so much, Denis. That was amazing, so much content. I know we had a little discussion about some of your things, and even as I was listening, I thought, I'm definitely going to go and watch some of the other talks you referenced, because it sounds like so much amazing content. Why don't we jump in? This is one I was wondering about as well, because you talked a little bit about it. It wasn't a talk as such, but some functionality you showed on DAZN: the sort of moments inside the match.

Yes, key moments.

Key moments. How do you detect those? Do you use a library, or is something else doing that? You talked about machine learning.

Exactly. It depends on which type of sport we're talking about, because we leverage different techniques. For football, for example, everything is more straightforward, and it was actually the first sport for which we introduced key moments. On the football side, we just need to find the exact moment the game started, because we need the zero position, and all the streams are different; we have different broadcasters and different partnerships, even though the game is the same. Then we utilize a data provider that gives us the data, and we can plot it. It's more or less straightforward. With baseball, for example, we do the detection ourselves with machine learning, to identify the moments and synchronize them, plotting them correctly.

That's amazing. That also sounds like such a complex...

Yeah, exactly.

...problem to figure out.

Boxing, for example, is really interesting: we detect every round start, but we also detect the fighter names, because fight cards are not always revealed prior to the fight, I mean, for the entire night. It is quite interesting.

It's impressive. I'll remember that the next time I'm watching a boxing match. Thank you. And let's talk about some of those different environments.
One thing you showed was the HTML5 environments versus some of the more native ones on the other side, if I remember.

Yes. That slide was about targets, I believe, not environments.

So let's talk about how you program for each of those different places. Do you use something to translate between them?

Say it again?

How do you program for each of these different environments, well, these different platforms that the app will eventually be consumed on? Talk about translating, or the nuances between those platforms.

Yes, of course. There are lots of nuances; actually, the entire talk was about how we share code between those environments and what we replace. There are components which are developed completely from scratch for a certain environment, and there are components which work across different environments. But to ensure that something you develop will work across all the environments, you need to go through all of them and test it there. And there are techniques we leverage to share the code: as I mentioned, one of them is React context, another is target-specific components, feature toggling, et cetera, which allow you to share and to be sure that if the code is supported, then it's fine. But every target has its own infrastructure, so we can deploy completely independent code to each one. Every target definitely has a different version of DAZN at any given point in time. They are never fully in sync, I would say, because something always ships earlier for some targets.

That makes sense. I was so impressed with the Raspberry Pi thing that you showed.

Yes.

When you wanted to test on so many different platforms, I thought, that was genius that someone thought about it. And talking about testing and QA: since all of these different teams are deploying and working on their products, how do you do QA if another team is still deploying at the same time?

This is a good question, because there are basically two parts to the testing. The first is feature-based testing, and the other is integration testing. For feature-based testing, teams are usually responsible themselves, as the team that owns the feature. In most cases this isn't strictly gated every single time, because if you make just a minor or patch change to your feature, and you have full ownership of it, that's the whole idea behind ALEP: you have the autonomy to release when you're ready, and you are independent. But if you make a major change and you want an integration update between the chapter and your feature, other teams may be affected as well. Then we do what we call UAT, which is testing that happens on the final pre-prod environment, in our case staging.

Nice, nice. And speaking about feature flags, when do you delete them, and what is the right time for it?

It always depends. We never delete the main feature flags; we always keep control over them. I think this question refers more to A/B testing, which is a separate thing, but we also use those flags a lot. A/B testing flags we delete as soon as a test shows a positive or negative effect; if it's just neutral, then product makes the final decision together with the data team.

Nice. Honestly, what you've just shown us is very impressive.
I can't wait to check it out. Let's give it up for Denis one more time.
25 min
17 Jun, 2022
