What we can see above here is that we do have a check that's supposed to happen: if somebody tries to divide by zero, it will alert a divide-by-zero error and return zero. But those two lines have zero hits, so they did not execute. As I'm going through, I like to add comments. So we can say here: this line is supposed to be hit but is not. Then anybody else who looks at this replay will be able to click here, go directly to that line in the code, see the comment, and start to debug it.
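As a minimal sketch of the kind of guard being described (the real calculator's code may differ; the function name and alert text here are assumptions), the two lines inside the `if` are the ones showing zero hits in the replay:

```ts
// Hypothetical sketch of the divide-by-zero guard under discussion.
function divide(a: number, b: number): number {
  if (b === 0) {
    alert("Divide by zero error"); // expected to hit, but shows 0 hits
    return 0;                      // also shows 0 hits
  }
  return a / b;
}
```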
You can also inspect React components. So if I come back here and go to my app, I can search for a file, app.js, and find my App component. I can see the state of my application: my React state is total, next, and operation, which corresponds to the values shown here, and that changes at different points in the replay. So when I'm here, my app state shows a total of 8 with nothing else set. But when I'm here, a bit further along, the state shows 8, division, 8. So I'm able to evaluate React state at different points in time. I'm also able to evaluate CSS at different points in time, and I can see the network requests. This is a local UI application, so there's not a lot happening on the network, but you can see the request headers, the response body, the stack trace, and the timings, and you can also add comments directly to network requests.

So that's a quick tour of what Replay is. This is the manual process, using the Replay browser to report a bug. It's helpful if you encounter a bug during development, if you're doing manual QA, or if you're on a support team; those are the use cases that we see. But you can also replicate this experience using automated tests.
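For reference, this is roughly the shape of the state being inspected; the field names come from the walkthrough, while the types are assumptions:

```ts
// Hypothetical shape of the calculator's React state as seen in Replay.
interface AppState {
  total: string | null;     // running total, e.g. "8"
  next: string | null;      // operand currently being entered
  operation: string | null; // pending operator, e.g. division
}
```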
So if I go back to my slides here, we can go ahead and minimize the browser. The next topic is recording automated tests, and the tinyurl on the slide points to the Replay automated tests docs. We have forks for Playwright and Puppeteer. We also have a very basic proof of concept for Jest, and a Cypress fork, but those are still being worked on, so they're very, very alpha. Playwright and Puppeteer are the supported ones, and we're going to be using Playwright today.

Essentially, what it does is swap out the regular browser you would use to run your tests. When you run your tests on your machine, Playwright opens a browser, steps through the test code, and executes it in that browser. Instead of using Chrome or Firefox for that, we use the Replay version of that browser. That way a recording is created and you have that asset after the fact, so if a test fails, you have all that helpful information about what happened during the execution of that test. That's what we're going to be working on in the next section.

I'll walk through what the process looks like. Again, we're going to be recording Playwright, and we'll be using a fork of an existing setup, so you won't have to go through all of these steps yourself. Replay for Playwright supports Firefox on Mac and Linux, and Chrome on Linux. When I was talking earlier about how there can be dependency discrepancies between operating systems, this is an example: if you tried to use it on a Windows machine, it would not work.

There is a @replayio/playwright package that gives you access to a Replay devices array, which you use in your Playwright configuration in order to use the Replay-enabled browser instead (see the sketch below). Once you've done that, it will use the Replay-enabled browser to run the tests and record them.

We also have a GitHub Action, which is what we'll be using today, that does this automatically for you. It records your failed Playwright tests with Replay and automatically uploads them to your Replay library. Your Replay library is essentially where all your replays are stored. If I go to my test team, we can see that I ran some tests this morning, and those failed recordings are all here in my library. That happens automatically as a result of the GitHub Action. Then, when a failure occurs, I can come in here and review the recordings.
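Here is a sketch of what that Playwright configuration can look like, assuming @replayio/playwright exposes a devices map keyed by names like "Replay Firefox"; check Replay's docs for the exact keys and options, and the project name here is just an illustration:

```ts
// playwright.config.ts -- a minimal sketch, not a definitive setup.
import type { PlaywrightTestConfig } from "@playwright/test";
import { devices as replayDevices } from "@replayio/playwright";

const config: PlaywrightTestConfig = {
  projects: [
    {
      // Run tests in the Replay-enabled Firefox build instead of stock Firefox.
      name: "replay-firefox",
      use: { ...(replayDevices["Replay Firefox"] as any) },
    },
  ],
};

export default config;
```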
Just to show you what that looks like, we're going to be working with the Floating UI fork here. I'll go ahead and pop that in the chat and also in Discord. As you can see, this morning I updated the readme just to add a note that we're using the Replay browser for this fork, and everything passed: linting and type checking passed, my unit tests passed, my functional tests passed. Then we created a pull request with a change, and it's a breaking change, so now our tests did not pass. If we go in here, I can see that my unit tests and my functional tests failed. If I go to my functional tests, I can see at the end, "upload finished, view your replays," and it has a link to the replays that were generated for the failed tests. It creates a replay for each individual failed test, added to my library, and I can also click the link here from the GitHub Action to access them.

This is similar to what we looked at with the Cypress example, where Cypress gives you screenshots and video reporting. This takes it one step further and gives you a debuggable replay with all the information from the test run. If we take a look at one from this morning, I'll just click on this one here and go to the viewer. I can watch my test run, and I can see the floating element is supposed to be lined up, but it is not, and that is why it's failing: the test is "arrow should not overflow floating element, left end," and the arrow is overflowing. So I can come in here, I have access to my code, I can see what code executed and what didn't, and start to debug. I could add comments to it. But what we're going to do is essentially recreate this process.
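To give a rough idea of the kind of functional test failing here (this is not the real Floating UI test; the URL, selectors, and assertion below are assumptions, and only the idea of checking arrow overflow comes from the walkthrough):

```ts
// Hypothetical sketch of a Playwright check that an arrow stays inside
// its floating element.
import { test, expect } from "@playwright/test";

test("arrow should not overflow floating element (left end)", async ({ page }) => {
  await page.goto("http://localhost:1234/arrow"); // assumed local dev URL

  const arrowBox = await page.locator("[data-testid=arrow]").boundingBox();
  const floatingBox = await page.locator("[data-testid=floating]").boundingBox();

  // The arrow's left edge should not extend past the floating element's
  // left edge; in the failing run it does, which is what the replay shows.
  expect(arrowBox!.x).toBeGreaterThanOrEqual(floatingBox!.x);
});
```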