Introduction to JS Functional Testing at Scale


In this workshop we will take a look at the JavaScript framework landscape for automated functional testing, including Puppeteer, Playwright, Cypress, and others. We will explore how they differ and how to choose the right tool for your project. Then we will look at various scenarios, from basic functional test concepts to complex visual and frontend performance tests, and complete the session by scaling up our solution with a cloud vendor like Sauce Labs.

197 min
05 Jul, 2021

Video Summary and Transcription

This workshop on JS functional testing at scale covers browser automation, the evolution of automation frameworks, setup and configuration, troubleshooting, extending the setup with the Allure reporter, reporters and custom services, creating custom services and the Allure report, CI/CD integration, cloud integration and running tests in the cloud, Sauce Labs integration, speed and parallelization in testing, network mocking, WebDriver debugging, page objects and test abstraction, and performance testing with Lighthouse and Speedo.

1. Introduction to the Workshop and Speakers

Short description:

Thanks everyone for joining our workshop on JS functional testing at scale. I'm Christian, working at the Open Source Program Office at Sauce Labs. I help standardize the WebDriver protocol and assist our engineering team. I'm Nikolay, a Senior Solutions Architect at Sauce Labs, passionate about testing and automated testing. I have over 10 years of experience and work with customers worldwide to improve test automation strategies.

Again, thanks everyone for joining our workshop, an intro into JS functional testing at scale. Thank you everyone for joining, from Argentina to Europe to Japan. It's a pleasure. Let me introduce ourselves first before we get into the weeds. Starting with myself, I'm Christian. I'm currently working at the Open Source Program Office here at Sauce Labs. Many people maybe know me as a maintainer of WebdriverIO. I have a lot of other hats as well: I'm helping the W3C standardize the WebDriver protocol for cross-browser automation, and I'm working at Sauce Labs to help our engineering team be a good open source citizen. And yeah, that's about me. Nikolay, do you want to go ahead? Yeah, thanks so much, Christian. My name is Nikolay Advolodkin. I'm a Senior Solutions Architect here at Sauce Labs. All that title ultimately means is that I am a geek that's passionate about testing and automated testing in general. I started my career in manual testing over 10 years ago, then transitioned to automated testing, and I've been doing it ever since. And because I love it so much, I even started a blog where I talk about test automation in my spare time, because I have nothing better to do. And now I work for Sauce Labs with customers all over the world, helping advise on different kinds of test automation strategies. Whether that's writing more unit tests, API tests, or UI tests, it doesn't matter; it's about creating an overall strategy to help teams move faster, more efficiently, and do better testing.

2. Introduction to Browser Automation and Evolution

Short description:

Today, we will start with an introduction to browser automation and the evolution of JavaScript and automation testing frameworks. We will focus on WebdriverIO and cover topics such as setting up the test suite, using reporters and services, network mocking, and integrating with CI/CD. Additionally, we will explore visual and performance testing approaches. Browser automation has a rich history, starting with Selenium and WebDriver, which eventually merged into Selenium WebDriver. The development of web standards and the emergence of frontend frameworks like React and Angular have influenced the evolution of automation tools. We categorize these tools into conventional (e.g., WebdriverIO, Selenium) and non-standard (e.g., Cypress, Puppeteer) based on their automation strategies. There are three ways to automate browsers: JavaScript APIs, browser APIs, and the WebDriver protocol. Each approach has its advantages and limitations. Sauce Labs has developed the Sauce Test Runner Toolkit to provide a more efficient way to set up and maintain test automation using frameworks like Cypress, Playwright, TestCafe, and Puppeteer.

Awesome. Our agenda today is as follows. We will start with a little introduction presentation to set the tone and give you an idea of how browser automation has evolved over the years. It's kind of a light version of my talk that's going to happen tomorrow. A really light version, but it's a good introduction when we talk about JavaScript and automation testing in general, and about all these frameworks that are out there: where they come from, the reasons for their existence, and how everything came together as it is right now.

So then we have a GitHub repository where we have prepared a bunch of tasks and challenges for you to start working on, and we will support you along the way. We'll start with setting up our test suite. In this workshop we will mainly focus on WebdriverIO, as it really helps us dig into all the areas of automation testing. We would have loved to use all the frameworks for all areas, but that's sometimes not possible, so we use the one that can be used for all the topics we've prepared. If you're looking for the Cypress workshop, that was yesterday and the recording should be on YouTube soon; and next week on February 3rd there will be a workshop on Puppeteer and Playwright, where you can learn about those tools in more depth. But in this workshop we will focus on WebdriverIO. Then we start with our test suite setup. We will look into reporters and services, that is, how we can add reporting and how we can encapsulate our automation code much better into services. We look into network mocking, something that WebdriverIO now supports as well: we will mock API endpoints so that frontend engineers don't have to deploy a whole application end to end and can just focus on the frontend application. Then we want to bring this into CI/CD and ultimately scale it up, which is the main part of our workshop. And lastly, we have some extras where we look into some visual and performance testing approaches, which are exciting as well.

So I want to start with a little overview of what has happened in the last decade. Browser automation has been around for a long time already, and it has been interesting how it has evolved given all the developments that happened in the web ecosystem. It all started with Jason Huggins, who wanted to automate an expense tool at ThoughtWorks. He needed a tool to automate that. Some of his engineers were all using the latest Firefox, while a lot of his business colleagues were using Internet Explorer, so there was a challenge of keeping the application healthy on both browsers. So Jason created a tool, or rather a project, called Selenium. Almost a year later, another person came around, Simon Stewart, who created a new automation tool called WebDriver. This idea of browser automation evolved over time, even to a point where Jason thought it would be a good idea to found a company, which was Sauce Labs. Over the years it became more and more apparent that each of these projects, the Selenium project and the WebDriver project, worked fine but had limitations. Selenium, back then, was running similar to Cypress, inside the browser. It had limitations around certain browser automation methodologies, for instance changing the size of the browser window, things like that which you cannot do when you run inside the browser. On the other side, Simon with WebDriver had some problems automating elements on the website. So both got together and thought it would be a good idea to merge these projects and call the result Selenium WebDriver from there on.
In addition, a couple of years later it became even more apparent that this project, Selenium WebDriver, was not able to maintain all these different browser drivers anymore, and that for true cross-browser automation that works the same way on all browsers, it was necessary to create a web standard around it. So David Burns, who now works at BrowserStack, initiated a working group at the W3C together with Simon to create a standard to make sure that a click in Chrome, for instance, works the same way as a click in Safari. This created a lot of trust in the ecosystem, which was the reason why a lot of new automation frameworks came around. One is, for instance, WebdriverIO, whose first commit was in 2011. We also see Protractor and the official Selenium bindings being released in the npm ecosystem. It's also interesting that we see other projects popping up, like the Appium project, which brings the same concept to the mobile space. Then, however, something interesting happens. Around that time, we see a lot of new frontend frameworks popping up in the space. We have React, Vue, Angular version 2, and Svelte, which ultimately changed the way we build frontend applications these days. What had been a static web server serving static applications for most of the time now becomes a full-fledged, JavaScript-only single-page application with a lot of JavaScript that needs to be automated. That also changed the requirements frontend engineers had when they wanted to test their applications, and it created a new kind of drive in this space. We see companies like Cypress.io get founded, building new kinds of tools that use new approaches to browser automation to address all these new requirements frontend developers have.
In addition to Cypress.io, we see other releases like TestCafe, Puppeteer, and, last year, Playwright, which all take a different approach to automation than people were used to. However, the web standard whose initiative was created in 2012 was still being developed during all that time. And as it is with standards, once you work on a standard, it is difficult to rewrite it from scratch. So the goal of the working group at the W3C was to finish up the specification, which happened in 2018, and immediately start work on a new one to address the issues that we've seen in the ecosystem. That's what is happening today, where we see a new standard being developed by all the browser vendors; it addresses those issues, takes the capabilities that make tools like Cypress, Puppeteer, and Playwright successful, and brings them into a standard. So with that, we can group the tools that are out there into two categories. I call them the conventional and the non-standard tools. On the conventional side, we have tools like WebdriverIO, Selenium, and Nightwatch. On the other side we have Cypress, Puppeteer, Playwright, and TestCafe. And they all use different automation strategies. We have Cypress and TestCafe on the one side, which use web APIs to automate the browser. It's essentially JavaScript running next to your application in the browser that triggers clicks, finds an element, reads the text of an element, all these sorts of commands. Everything is triggered within the website, within the browser. We have Puppeteer and Playwright, which use the browser APIs. This used to be done by the WebDriver project as well, but these days we have much more extensive and exhaustive browser APIs that allow much more automation in a reliable way. And lastly, we have Selenium and WebdriverIO, which still rely on the WebDriver protocol.
Interestingly enough, WebdriverIO and Cypress both use a mixture of approaches. They both use the browser APIs alongside some workarounds: Cypress, for instance, to save screenshots, and WebdriverIO uses browser APIs to allow performance testing or PWA tests as part of your end-to-end tests. So to sum this up, there are essentially three ways you can automate a browser: through JavaScript APIs, browser APIs, and the WebDriver protocol. The JavaScript APIs that tools like Cypress and TestCafe use are actually the first generation of browser automation, as they were already used by Selenium back in 2004. They provide a lot of control but come with the limitation of only running within the browser, so there are some constraints you need to consider when you pick such a framework. On the other side, we have the browser APIs used by Puppeteer and Playwright, which might be called the second generation of browser automation, since WebDriver took a similar approach back in the Internet Explorer days through common interfaces and browser extensions that triggered automation in the browser. Today, these browser APIs have a really high number of commands and events that you can listen to; however, they are different for each browser, which means you can only automate one browser at a time. Luckily, we see some good initiatives, for instance between Chrome and Firefox, to settle on one browser API and prepare the ground for the new WebDriver protocol. That's why Puppeteer can now also automate Firefox Nightly, the latest Firefox builds, as the protocols have become more and more similar. But ultimately browser APIs are different for each browser, and for some browsers, like Safari, they are not even accessible, by choice.
And lastly, we have the WebDriver protocol, the third generation, as an official standard for automating browsers cross-browser. It has the advantage that WebDriver-based tools can automate all browsers, even mobile; however, they have limitations too. The first design of the protocol was not really suited to automating frontend applications that rely heavily on React, Angular, Svelte, and Vue. The design principle of the protocol, which was ultimately to do everything that the user can do, is failing the people who have been building those heavy JavaScript applications. So in the future, the new WebDriver protocol will pay particular attention to those needs, letting you not only do what the user can do but also what the developer would like to do, and making it possible to inspect a lot more things in the browser. For a long time, cloud vendors like Sauce Labs have supported only the WebDriver protocol. Moving forward, everyone wants something different, so there is a challenge in how these new frameworks get supported in these different infrastructures, not only at Sauce Labs but also at other vendors.
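To make the WebDriver protocol concrete: it is a REST-like HTTP protocol that every driver (ChromeDriver, geckodriver, safaridriver) implements, which is why one client can talk to all browsers. Below is a hedged sketch of the requests a client library sends under the hood; the session ID `abc123` and element ID `elem42` are illustrative placeholders, while the long `element-6066-...` key is the web element identifier defined by the W3C specification:

```
POST /session
{ "capabilities": { "alwaysMatch": { "browserName": "chrome" } } }
→ 200 { "value": { "sessionId": "abc123", "capabilities": { "browserName": "chrome" } } }

POST /session/abc123/element
{ "using": "css selector", "value": "#login" }
→ 200 { "value": { "element-6066-11e4-a52e-4f735466cecf": "elem42" } }

POST /session/abc123/element/elem42/click
{}
→ 200 { "value": null }
```

Tools like Selenium and WebdriverIO generate exactly these kinds of HTTP calls, which is what makes them work against any W3C-compliant driver, local or in a cloud grid.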

Nikolay will now go into the solution that Sauce Labs has been building. Yeah, thanks so much Christian. So, as Christian alluded to, that's why we came up with the Sauce Test Runner Toolkit. The Sauce Test Runner Toolkit is basically a more efficient way to set up and maintain your test automation that will allow you to run your framework of choice: Cypress, Playwright, TestCafe, and Puppeteer as well. We'll actually show you some examples of that throughout our workshop.

3. Workshop Introduction and Activities

Short description:

In this workshop, you'll learn how to easily download a Cypress Docker image and execute it in our Sauce Labs Cloud. We'll cover the benefits of simplified setup and maintenance, parallelization, and the need for a single dashboard to display test results. We'll guide you through the workshop activities step-by-step, answer any questions, and provide hands-on coding experience. We'll also address concerns about installing WebDriverIO CLI and discuss the role of WebDriver in relation to Cypress. Our goal is to advocate for open web standards and ensure that the web can be tested comprehensively.

You guys will get to see how you can easily download, for example, a Cypress Docker image and then execute it. And Christian, if you could just go to the next slide for me. And execute it in our Sauce Labs Cloud, which provides a few benefits, such as simplified setup and maintenance. The second thing that you get out of the box is parallelization, right? You'll be able to run multiple tests in parallel, whether through CI or on your local machine, and we will show you those approaches in our workshop as well; you'll get hands-on to make that happen.

But then of course, the nice thing that we are noticing throughout the industry, being test automation experts working with customers all over the world, is the need for everybody to put data into a single dashboard that can display results for the entire team. Testing is no longer only about the QA team; testing is now an organization-wide effort, from the developers all the way to the QA team, shifting left and shifting right. And so these tools, the Test Runner Toolkit, allow us to bring all of that into a single dashboard where we can see all of the results and decide which actions we want to take on the different outcomes of the tests.
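As an illustration of how the Test Runner Toolkit is driven, here is a hedged sketch of a saucectl `.sauce/config.yml` for a Cypress project. The schema has evolved over time, so treat every field name and value here as an assumption to check against the saucectl documentation for your version:

```yaml
# .sauce/config.yml (illustrative sketch, v1alpha-era schema)
apiVersion: v1alpha
kind: cypress
sauce:
  region: us-west-1
  concurrency: 2          # how many suites run in parallel in the cloud
cypress:
  configFile: cypress.json
  version: 6.6.0
suites:
  - name: "smoke tests"
    browser: "chrome"
```

With a file like this in place, a single `saucectl run` executes the suites locally in Docker or in the Sauce Labs cloud, and the results land in the shared dashboard described above.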

So with all of that said, let's go ahead and actually jump into the workshop. We will take you through a bunch of different activities, covering everything Christian already showed you in the plan. We will take you step by step, and we're here to answer any questions as you go through it. Christian will pull up the workshop repository, so you can navigate there, and we'll take you step by step through every single activity. We'll have you code hands-on, and we are here to answer any questions along the way. Take it over, Christian.

Hey Christian, someone is asking how to install the WebdriverIO CLI. That was the name of the package, I guess.
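For anyone who missed it in the repo instructions, the setup boils down to a few commands. This is a sketch assuming Node.js and npm are already installed; the directory name is just an example:

```shell
mkdir wdio-workshop && cd wdio-workshop
npm init -y              # create a default package.json
npm install @wdio/cli@6  # the WebdriverIO test-runner CLI (v6 was current at the time)
```

After that, `npx wdio config` launches the interactive setup wizard; with `-y` it just takes the default answers, as shown later in the workshop.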

Oh yeah, nice, thank you Christian. You're on top of it. Interesting, vulnerability warnings. I think those are messages that always come up with npm, right? Well, a well-maintained package should not have any, to be honest.

4. Dependency Updates and Setup Errors

Short description:

The project usually automatically updates dependencies and should not have any vulnerabilities. I'm running a check and will release a new version after the workshop to fix any issues. I'm cleaning up the wording to make the chapter objectives clearer. The setup wizard for WebDriver is helpful, but it's a heavy installation. Some people are experiencing errors during setup.

That concerns me too, because the project usually automatically updates dependencies and should not have any vulnerabilities. And it doesn't matter whether vulnerabilities are in frontend apps, frontend packages, or testing packages; both should be paid attention to. So I'm just running a check. Otherwise I will just release a new version right after the workshop and get this fixed.

Hey Danielle, let me look into that to see what you're saying. Sorry, Danielle, the wording might be a little bit confusing. Really, all that wording means is: the objective of the chapter is to do those four points. I hope that makes it clearer. Yeah, I'm going to clean that up right now. Thank you for that. There you go, I cleaned it up so everybody else doesn't run into the same issue.

How's everyone doing? Has anyone installed WebdriverIO already? The CLI setup wizard is super helpful; hopefully it should make it very easy for you all. It's a pretty heavy installation. Yep, with WebdriverIO the approach is that many packages do one thing. So if you have a reporter, there will be a package; if you have a service, there will be a package. And WebdriverIO itself has a lot of sub-packages for individual tasks. But I guess that's typical for big npm projects that come with a lot of sub-packages. Hey, Christian, did you see there's a few people getting that error about setup?

5. Setup and Configuration

Short description:

Whether WebDriver will supersede Cypress; Cypress won't be replaced. The base URL is react-redux-realworld.io. Start with the default setup and add packages later. ChromeDriver is recommended. The instructions for the base URL will be fixed. The npm install may be slow. Update Chrome to version 88. You can use Jasmine with WebdriverIO. The ChromeDriver version should match your installed Chrome version.

Hey, Christian, did you see there's a few people getting that error about setup? That's a good question, whether or not WebDriver will supersede or replace Cypress. My opinion: I come from the standards direction. I'm advocating for open web standards, and so I'm always rooting for the WebDriver protocol to ultimately address the concerns and the needs of frontend developers today. But I don't think Cypress will ever be replaced in any way. I think eventually Cypress might adopt the WebDriver protocol, or they'll continue working on their own automation solution. For me personally, it's important that the web can be tested in all areas where the web can exist. That makes the approach that Cypress takes right now not ideal; it might work, I'm not sure. But I'd rather advocate for a web standard that browser vendors agree on, that we can all agree on, to move forward and work on together. That's my take on it. I'm sure it will not be replaced or anything.

For the base URL, you can take the URL from the workshop, the React Redux RealWorld one, but without the hash, because the base URL is everything up to the hostname and the domain. Yes, Stavros, don't use TodoMVC. I mean, I guess you can if you're familiar with it, you can code your own test, but it might get you into trouble that we don't expect. Hey, Christian, did you see there's a question on top? There's Vasilis, he's facing an error. Something went wrong during the setup, and it looks like Ryan is having the same error too. That's interesting. I would need to know if you picked any packages; I just ran the setup and it was fine for me with the default settings. It's likely that a plugin you picked has a problem. If there were any choices made, like additional plugins, I would recommend starting with the basic setup, so with the default answers. Don't experiment too much in the beginning, just start with something really small. Then during the workshop we are going to add more things and experiment with other packages. Yeah, ChromeDriver is definitely something we should pick up, because it's installed as the driver to automate Chrome. Eventually we will extend that to run on Sauce Labs, but ChromeDriver is good to add; that's also why it's the default choice. What is the base URL option in the setup markdown? It says to add the TodoMVC for Vue, but then it says the React Redux RealWorld app. Oh yeah, right: the base URL can be the React Redux RealWorld example, but then, in your test, the browser.url command should start with a slash to actually build on this base URL. Hey guys, FYI, Brian is having some issues setting up.
So, Brian, first, once you have your directory, as you see in the setup there, we created the directory (call it, say, your workshop directory), then you want to do npm init -y, which initializes the package.json, and then you want to install the WebdriverIO v6 CLI package, as you see in the command there, and that should get you into the wizard to set up WebdriverIO. I just saw that someone mentioned that the ChromeDriver service and the spec reporter are not mentioned; indeed, we will update that. Hey Christian, I'm thinking, do you want to just pull up Visual Studio Code and show them the install? Live? I can do that. Yeah, I mean, that's simple enough, everyone can just follow along, and I can tackle the questions in the chat if there are any. Let me prepare my window. I think we also have time. Okay. Paloma, thank you for pointing out that URL. That is incorrect and that's our fault. It should be react-redux-realworld.io; I'm going to fix that in the documentation as well. This is the base URL, guys, I just threw one in there. That should be our base URL. I know the instructions are incorrect, and I'm going to fix that now. And Christian will also show you as well. So we create a new directory and make a default npm package; that's why we have a package.json in our directory. And then we can install the WebdriverIO CLI package and save it to our dependencies. Paloma, thanks so much for pointing that out. Everyone, I fixed the base URL in the markdown file. For people who have joined in the meantime, I will share the link to the workshop repository again. I cannot share it; can you share it again? I got you, Christian. Thank you. It's also shared on Discord. Please read it anyway. The npm install is quite slow. The ChromeDriver being installed only supports Chrome version 88. Is the Chrome version on your computer up to date?
Can you open up Chrome, click About, and make sure that you get the update? I am updating it right now. Oh, okay. Fantastic. Let us know if that resolves your issue. Yeah, the ChromeDriver version being installed is the latest, and it needs to match the Chrome version that you have installed on your machine; that's why that could be problematic. That's true. Hey, Richard, yeah, you can definitely use Jasmine as well if that's what you're comfortable with and you know how to set it up. We want to try to get everybody on the same fundamental level in terms of tech, so that if anyone runs into issues, they're kind of similar. But if you're comfortable with Jasmine and you can troubleshoot it yourself, you definitely can. Yeah, WebdriverIO comes with an assertion library that is similar to the one Jasmine uses; it's actually the one that Jest uses, with some extra commands on top. So whether you pick Mocha, Jasmine, or Cucumber, it doesn't matter, you have the same assertion library across all the frameworks. So by default, in your package.json, you see the dependencies that are being installed, and you see ChromeDriver being installed as latest; this is right now 88. You can adjust that by removing the caret here, changing the ChromeDriver version, and then removing the dependencies and reinstalling them. That could fix it. So I have installed the default packages using the -y parameter. That gives you the local runner, the Mocha framework, the spec reporter, the ChromeDriver service to start from, and wdio-sync. It also comes with some tests prepared. So here we already have some test specs, and we can look into them. Essentially, they open the example page and do a login test there, and we will need to rewrite that. But we should be able to run it already without any problems.
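Pinning ChromeDriver, as Christian describes above, looks like this in `package.json`: remove the caret so npm installs exactly the major version that matches the Chrome on your machine. The version numbers here are illustrative, not prescriptive:

```json
{
  "devDependencies": {
    "@wdio/cli": "^6.12.1",
    "wdio-chromedriver-service": "^6.0.4",
    "chromedriver": "87.0.0"
  }
}
```

After editing, delete `node_modules` and run `npm install` again so the pinned driver version actually gets installed.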

6. Troubleshooting and Toolkits

Short description:

To resolve the Fibers install errors, try switching to Node version 12 instead of 14. Node 12 has been successful for many people. If you encounter issues with Fibers, you can write your tests asynchronously without Fibers, but it's easier if Fibers is installed. To switch to Node 12, use nvm, the Node version manager. Many participants have been able to successfully run their WebdriverIO tests. You can also install the Sauce Test Runner Toolkit to play around with Cypress, Puppeteer, Playwright, and TestCafe on Sauce Labs at scale. The toolkit runs on Docker and allows you to automate these frameworks in a stable way.

So there are, ideally, three npm commands to get up and running; we're working on getting this down to one to make it even simpler.

So the test works, with the spec reporter and all this output. Hey, Daphne, are you still getting failures about the Fibers install script? Yeah, I do. Let me read the exact thing: failed at fibers 4.0.3 install script, blah, blah, blah. Something went wrong during setup. Which Node version are you on? 14; I upgraded for the workshop. Can you try... I actually said 14, right? I am on 14 as well, which confuses me. Are you on Linux or Windows? I'm on Mac. On Mac, yeah, it's interesting. I'm on 14.15.0 and it works for me; I'm on patch 4 of 14.15. Yeah, maybe upgrading might help, but it should be the same, because Fibers comes pre-bundled; you should not need to compile it. I'll try what Martin said. Let's try this. Oh yeah, okay, good point, I will copy that. Hey Vasilis, I see you're having an error; can you put the error message in the chat for us, please? Maximiliano, which error are you facing? Yeah, when you guys talk, because we get a lot of chats in here, if you can put the specific error from the console, that will help us a lot to troubleshoot. Okay, cool. Christian, it looks like multiple people are having Fibers install errors. Vadim said that the test didn't work for him on Node 14, but worked on Node 10. Okay, cool. Okay, Jeremy, are you on Linux or Windows? Windows; okay, so there are a few people on Windows getting Fibers install errors. Let's see here. Yeah, switching to a different Node version, like 12, ideally should work; this is sometimes difficult with Fibers. You can also go without Fibers; it's not required, but then, instead of writing the test synchronously like this, you would need to async/await, because every command is asynchronous. So you would need to write async/await in front of every one of those commands.
That's what we're after, but it's easier if Fibers is installed and everything runs synchronously. Hey, Christian, do you want to maybe show people how to switch to Node 12, if they want to try it with Node 12 instead of 14? Sure. So the best way of doing that is using nvm, which is the Node version manager. We'll share that in the chat. And... You can try n too. I think it's a brew or npm global install. n, just n. And that's... Yeah. Just run that and you can say, how do I do this? nvm use 14 or 12. Node 12 works for me. Nice. Yeah, from the chats it seems like Node 12 has been pretty successful for people versus Node 14. It's interesting, I never had issues installing Fibers with Node 14 or 12. Yeah, same here. I remember I had Fibers issues back in the day, maybe with Node 12, I think actually, or maybe Node 10. Just for those Node versions, like 14, the LTS ones, Fibers comes prebuilt and does not need node-gyp, so it doesn't need to be compiled, but it's interesting. I have a fresh Catalina, like I got this computer three days ago. Oh, interesting. So this is as clean as it gets. But you solved it with the xcode-select one, right? No, it didn't work. I had to change versions from Node 14.15.4 to 12. Okay. Hey, guys, just curious, who has been able to successfully run their WebdriverIO test? Maybe just throw that into the chat. Nice. Christian, do you want to proceed to the Sauce Labs Testrunner Toolkit and have them try and install that? A good amount of people, looks like they got it running. Yeah. Okay, so, just for information purposes as we continue with WebdriverIO.
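To make the Fibers discussion above concrete, here is a minimal sketch of the async/await style that is required when Fibers is not installed. The `browser` object below is a stand-in that mimics the shape of WebdriverIO's API; it is not the real driver, and the selectors and values are made up.

```javascript
// Sketch of why Fibers matters: every WebdriverIO command actually returns
// a promise. With @wdio/sync (Fibers) you can write commands as if they
// were synchronous; without it, each one needs an explicit await.
// The "browser" object below is a stand-in, not the real driver.
const browser = {
  async url(path) { this.lastUrl = path; },
  async setValue(selector, value) { this.lastValue = value; },
  async getText(selector) { return `text of ${selector}`; },
};

// async/await style (required when Fibers is not installed):
async function loginSpec() {
  await browser.url('/#/login');
  await browser.setValue('#username', 'tomsmith');
  return await browser.getText('#flash');
}

loginSpec().then(console.log); // prints "text of #flash"
```

With Fibers, the same spec drops every `await` and reads top to bottom like synchronous code, which is what the workshop setup relies on.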
But, if you'd like to play around with Cypress, Puppeteer, Playwright, TestCafe on Sauce Labs at scale as well, you can install that tool. It's also available via npm and it allows you, I'll just run quickly through it in this directory. So, I think I have it installed. Let's see which version. Yeah, the only thing you guys will need is your free Sauce Labs account. I'm putting the URL in there. So, if you haven't signed up yet, just sign up there and you'll be able to use the solution. I don't know if you actually need it to run locally. I think you do, right Christian? Yeah, but it runs on Docker, so it uses Docker to automate these frameworks for you in a stable way. And so it would need to download a Docker image, which might take some time; it's already installed for me. So you can just show... Okay, yeah. Test my new Docker image with... Oh, okay, they have updated recently. But yeah, usually I would need to go back to the documentation to see how it gets started. But this would allow you to run those Docker tests locally, those Cypress or TestCafe tests, and later on expand them into the cloud to run on Sauce. Yeah, guys, if you're following along, I'd say try Cypress. So when you do the command, it'll take you through an installation wizard; pick Cypress. I know that that solution is more up to date with features than the other ones.

7. Extending Our Setup with the Allure Reporter

Short description:

The upcoming chapter will focus on extending the setup by adding reporters, starting with the Allure reporter. The Allure reporter allows you to create an HTML report and is popular in the WebdriverIO ecosystem. Other reporters like the Mochawesome reporter can also be explored. The documentation provides instructions on how to integrate the Allure reporter and generate an Allure results directory. The base URL can be set in the WebdriverIO config to parameterize the target application's URL. The next chapter will automate the generation of HTML reports using the Allure CLI. WebdriverIO can be extended with various reporters to meet different reporting requirements. While there is no tool for migrating from another framework to WebdriverIO, there are tutorials and a community channel available for assistance. The need for Java for running Allure is uncertain.

I know that that solution is more up to date with features than the other ones. So yeah, just... It will be less buggy, you know. Yeah. I've been working on that for a couple of months now and we will GA this tool soon; it will be available for everyone for free in the basic version, and people can start running their Cypress tests on Sauce as well. So it's still a beta but soon to be stable.

Okay. So our third chapter is gonna be extending our setup. And for now, we're gonna start adding reporters as the first step. And here we're gonna start with the Allure reporter, which is a popular tool in the WebdriverIO ecosystem. It allows you to create an HTML report. Allure is pretty popular. And that report allows you to look into the test failures and the amount of tests that have been flaky. I can show how it looks as a screenshot. It's a nice report. There are also other reporters that you can try during this chapter. I mainly use the Allure reporter to have an HTML report, but there's also the Mochawesome reporter and some others that you can try out as well. But for this chapter, we want to focus on the Allure reporter. And I would just take a look at the documentation for the reporter. It tells you how to integrate it easily. And the goal of it is that your test run afterwards should produce an Allure results directory that you can serve statically with any HTTP server downloaded from npm. You can also, if you run those tests on Jenkins or other CI/CD tools later on, point your CI/CD server to those HTML reports and have it render them next to the builds. So that's quite nice. I've seen a lot of people using it together with Jenkins.

Allure has a Jenkins integration that's nice to work with, where next to all your runs, you can see a link to the Allure report. I'll show them the solutions just in case people are trying to catch up on the previous chapter. Yeah, we have the solutions in here. We hope you guys don't cheat. Of course you can look ahead if you want to. We can't stop you, but if you want to, you know, after you fix your errors, you can see how anything was implemented there in the solutions folder. And you can see it per chapter. Yeah, the modified test does not use the base URL. So it has the complete URL in the URL parameter, but it would work the same way if you would just have a slash route, hash bang and then login. If you have the base URL set, that should work too. I use the attributes, this is a CSS selector, to select the inputs and just call the setValue command on them. addValue works as well. The difference is that setValue clears the input before setting it. So if you know that that input is empty, addValue does the trick as well. And it's even a little bit faster, so it saves one command. And then we click on the Submit button. There used to be a submit command in the old JSON Wire protocol that has been removed; there are form elements in HTML, but the ideology of forms is difficult to standardize. So clicking on the Submit button does it. And then we use the expect-webdriverio assertion library to expect that a link that has the test.js submit in it exists. Emmanuel, you don't have to put the full URL if you set the base URL in the WebdriverIO config. One, two, one, two. If you want to change the version of ChromeDriver, just modify it in your package.json; there's one package for that. So I'll move this one. So here we have ChromeDriver set to version 87, which was the case a couple of days ago when I worked on this.
You can hard code the version; you can set the strict version to 87 and that should get you the correct ChromeDriver version. For all other WDIO packages, it's always recommended to use the latest if possible. Yeah, ideally, why do we have to put the full URL if we have set the base URL? Yeah, my version, my solution is not really correct because I haven't set the base URL in my WDIO config, but if you have, even better, because the advantage of having a base URL instead of the full one is that you can configure the URL of the target application in your configuration, so that your tests here work for the staging environment in the same way as in production or on development, because you can parameterize your base URL. So let me fix that. So I start with just a slash, so then it looks for the base URL, and that I need to adjust in the WDIO config. Doing it here. And here we have the base URL, which by default is set to localhost. Thank you for letting us know. Nope. I mean, I don't currently see people that want to join, I've seen them before. Yeah, I just let them in, Christian. Okay, perfect. So now we can see we have the base URL here in our config, and now when we, for instance, run WebdriverIO, can I write this somewhere? Just see, like, we can do npx wdio run, the config file, and say baseUrl equals and then, for instance, the development version of our app. So that allows you to easily switch or parameterize the test execution. Okay. Hey everyone, I'm curious, has anyone set up the Allure reporter? I did, but I'm getting only XML files when I run the test. So there's no serving them. So for that, let's have a look at the, I don't know where's the third point here. You can use the Allure CLI. And in the next step, we are going to automate that so that the HTML is automatically generated. Allure just creates XML files, which are the test results.
And in the configuration, in the documentation, you can see here that if you install the Allure command line tool, which is just an npm wrapper (npm install allure-commandline --save-dev), you can generate the HTML once your tests have been executed. And at first it looks like an annoying extra step to create the HTML report, which is true. So that's why our next chapter will be to automate that, to make it part of our setup. Looking forward to it. Thanks. That's an interesting challenge. When you build frameworks like that, if you really limit yourself to the capabilities and really focus on doing it for one specific environment, you will have fewer challenges with these integrations. So for WebdriverIO, which wants to run in every environment, we get a lot of bug reports for mobile, for web, for any kind of environment people run the tests in. The same for reporters: if you want a rich ecosystem of reporters, there are often challenges to integrate them well, but that's why WebdriverIO can easily be extended by all these reporters here, so that this job does not become your job anymore. I think even with version seven of WebdriverIO, which will be released next month, there are even more reporters available. Yeah, we have a bunch more. A Markdown reporter, some of those will be added. So everyone who has different kinds of interests and requirements for generating reports can easily create their own reporter, which is also something that we look into later. Is there a migration guide from either framework to WebdriverIO? I'm sure there are blog posts about that. I don't think there is a tool that you can just run and it will replace your existing setup with a WebdriverIO one. Unfortunately not, but I think there are a lot of tutorials and there's a community channel where you can always reach out if you have any issues with that.
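For reference, the manual steps being described would look something like this on the command line, assuming the `allure-commandline` npm package (which, as comes up a moment later, requires Java 8 or higher):

```
# install the npm wrapper around the Allure CLI
npm install allure-commandline --save-dev

# after a test run has filled ./allure-results with XML result files:
npx allure generate allure-results --clean   # writes ./allure-report
npx allure open                              # serves the HTML report locally
```

The next chapter automates exactly these generate/open steps inside a custom service so nobody has to run them by hand.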
Hey, Vadim, I see you are getting a JAVA_HOME error. Okay, you're still getting the same error. You should not need Java for running all this. It looks like it's about Allure. Do you know if Allure needs Java, Christian? Oh, I don't know about that, but it would be weird if you needed that.

8. Reporters and Custom Services

Short description:

For those without Java, the Mochawesome reporter or a different HTML reporter can be used. Personal preference may lead to not using reporters in test automation. HTML reports are often used to share results with bosses or teams. XML reports are commonly used and can be displayed in dashboards. The complexity of reporters may not be worth it, but feeding data into a dashboard can be useful for large organizations. Majestic is a UI for the Jest runner, but CLI data disappears quickly. Moving on to network mocking and the next chapter on creating custom reporters and services. WebdriverIO provides information throughout the test life cycle, and the Sumo Logic reporter allows for creating Sumo Logic dashboards. Custom services can automate the Allure reporter setup. Services help extract code from the config and can be shared between teams.

I mean, I have it installed, it's a good question. I mean, yeah, it looks like, according to Vadim, he tried to do allure generate and allure open and it's complaining about Java. Yeah, it looks like maybe Allure needs Java from what I'm seeing from people. Oh yeah, Allure requires Java 8 or higher. Okay, for the people that don't have Java installed and don't want to install Java, we can do the same task with the Mochawesome reporter. I think that one is a little bit outdated. I'm not sure how updated that one is, but it should still work, I think. Or there's a different HTML reporter here that can be used. That should be.

Okay, you want to show an HTML reporter. I was gonna say just a spec reporter could do too. Yeah, that comes out of the box, but this one is about adding one. But yeah, you can try multiple reporters for different purposes. Hey Christian, do you know one that won't require Java and that we can just set up quickly and easily for everyone so they can see? Let me try with the Mochawesome reporter, even though I'm not sure how recent that one is, but I can try with the example that I had. So you switch the screen. You share. Okay.

Personal preference of mine, and this is one of the reasons, is I actually never use reporters in my test automation, because you can see one of the challenges they pose: it's just an extra barrier to entry. I personally prefer just logging all my outputs to the console, and then when we set up a CI pipeline, usually, as you guys will see today, it can handle all the reporting that we need. So I haven't used reporters in probably at least five years. What is their use case even? Like, why would I want a reporter, not in my CI environment, but in my development environment? I think, not for me, which is a good question, Dettner. People can correct me if I'm mistaken. I think people like to use HTML reports to send to their bosses or to their teams. I think that's the typical use case. Let me know if I missed anything, guys. We use them a lot to keep track of analytics around testing results. Mike, are you feeding those analytics through an API? Yes. But then what are you actually doing with the HTML reports? Well, I misspoke. We don't actually use the HTML report. We use an XML, xUnit-style report that comes out. And then you guys pop those into a dashboard that you display? Correct, yes. Yeah, that's a typical thing I've seen, but usually, yeah, I don't think you're using any kind of HTML report, right? Not typically, because most of the things are consumed by machines, and then the visual dashboards provide the human element to it, the way the HTML reports otherwise would. So if you don't wanna read the XML, xUnit stuff, you can look at HTML if need be. I hope that answered your question. It gives me another answer as to why not to use reporters, but yeah, altogether. Yeah, for me the extra complexity is not worth it. I do like the idea of feeding data into a dashboard that you can display somewhere.
I think if you have a large organization, that might be more relevant. We had a pilot at my work using a tool called Majestic, which is like a Jest runner reporter thing. You can also run tests live and there's a nice UI, and it held for about three, four days before people returned to the CLI, just because they don't wanna stare at the tests while they're running. Yeah. That's fair. Depending on the frequency of your builds, the CLI data disappears quickly. And so you want to make sure to have that data stored somewhere. That's why we use the API to pump it to another dashboard that people can go view, which has time elements associated with it. Cool. That makes sense. Curious, everyone, has anyone been able to set up any reporter with their WebdriverIO? I know we're getting some XML reports out. Okay, cool. That's fine. Yeah. At least we went through the exercise. I'm looking for how Mochawesome generates, right? Okay. Yeah. To be honest guys, we don't have to get this 100% right, because we have a lot more fun, cool stuff along the way anyway. So if you don't get the reporter fully down, again, the solutions are there for you all afterwards, and then you can give it a shot on your own if you feel like the reporter is super relevant too. Okay. So I think I'm on a good path with the Mochawesome results. Yeah. I think we can move forward. Hey Christian, do you want to move on to network mocking? Yeah. I think it's fine. So again, I put the URL to the solutions in the chat. So of course afterwards you can always reference them. Even while you're coding you can reference them. But we'll move on to the next chapter. I mean, for people with Java installed, the services one would be the next chapter, where it's about creating a custom reporter and a custom service. The custom reporter is pretty straightforward. It should show you the name of the tests and the duration time. So you can look up how custom reporters are created in the docs.
But you can always, if you are interested in something in a different chapter, go ahead. All these chapters build on top of each other until, I think, the middle of the workshop, where we go into CI/CD. But you can also go ahead and look into the network one, if that's more exciting to you. But if you're interested in building your custom reporter based on test information that you get, because you have a specific requirement, then you can build it yourself. WebdriverIO reports a lot of information throughout the test life cycle that you can use to even push information up to a server somewhere in your environment. There's, for instance, the Sumo Logic reporter that reports to Sumo Logic, so it allows you to create Sumo Logic dashboards with your test data. For the people that were complaining about the Allure reporter being so difficult to set up with the HTML: another task in this chapter is to write a custom service that can extract the logic of automating this through the Allure command line. You can see here an example of how it would work in a hook. I think some of you might have that implemented already. I can surely do that in a collaborative way. And the goal is to now move this code out of our config. We cannot just use the hooks in our config, because then our config would become massive. And so with services, you can nicely extract this out into your own npm packages that can be maintained in your company and shared between teams.
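As a taste of that exercise, here is a minimal sketch of the custom reporter idea. Note that a real WebdriverIO reporter would extend `WDIOReporter` from `@wdio/reporter` and be registered in the `reporters` array; this dependency-free stand-in only models the shape of the per-test data (name and duration) that the runner hands to the hooks, and the class name is our own.

```javascript
// Hypothetical stand-in for a custom reporter. A real one extends
// WDIOReporter and registers via the `reporters` array in wdio.conf.js;
// here we just model the onTestEnd hook and the stats it receives.
class DurationReporter {
  constructor() {
    this.lines = [];
  }

  // the runner invokes hooks like this with a test stats object
  onTestEnd(test) {
    this.lines.push(`${test.title}: ${test.duration}ms`);
  }

  report() {
    return this.lines.join('\n');
  }
}

// simulate the runner feeding results into the hook:
const reporter = new DurationReporter();
reporter.onTestEnd({ title: 'should login with valid credentials', duration: 1234 });
console.log(reporter.report()); // prints "should login with valid credentials: 1234ms"
```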

9. Creating Custom Services and Allure Report

Short description:

You can create your own custom service and publish it to npm. We have many people who have created services for various things; the ChromeDriver service was created by someone else. We have integrations with different vendors and tooling that make the ecosystem rich and powerful. I'll show you how this works with the Allure report. First, I need to install the Allure reporter and move the output to allure-results. Then, I create a directory to keep all my services. We export a class called AllureService that gets access to the hooks defined in the config. We are interested in generating the HTML when everything is complete. We add the service to the list of services and can parameterize it for configuration. Now, we have the Allure report in a new reports folder. We can write this functionality once in our company and share it across developing teams. Moving forward to network mocking and sharing the code. I could have used async/await instead of promises, but wrapping in a promise is better for handling the exit code and timeout. Some people prefer using async/await for services, hooks, and tests.

You can even create your own custom service and publish it to npm to be used by anyone else in the ecosystem. That's awesome, too. We have a lot of people that created services for a bunch of things; the ChromeDriver service was created by someone else. And we have a lot of integrations with different kinds of vendors and tooling that make the whole ecosystem pretty rich and powerful. So I can show how this works here with the Allure report, since it is working for me. Let me just redo real quick the steps that I had to make for chapter three in my example. And then I can share my screen to show how I would do that in VS Code. So first I need the Allure reporter. Right now we have the Mochawesome one. That's great. I need the Allure one. So I'm gonna install this. I moved the output to allure-results. That should do it. And the next step is explained a little bit further down there. So we installed allure-commandline. And then, for this example, I just create a directory where I keep all my services, shared within one project. Again, we can create npm packages out of this to share them across the company; that can be a really useful service. And so here we export a class that is called AllureService. The AllureService, or any service that you create, gives you access to all these hooks that are also defined in your config. And that gives you access to multiple moments during the test life cycle. So after command, before test, after test, and so on. And so we are interested in generating the HTML when everything is complete. Okay. I think we don't need all of that. We are not interested in that. And we can almost copy-paste what we see from the docs. So when you return a promise, it will wait until the promise is resolved. And what we need here is the allure... I'm assuming it's... All right.
allure-commandline, let me double check though. Yeah. So this now, if I'm correct, should create it. So it runs my tests, and the test run creates the allure-results. It didn't create the Allure report because I haven't added the service to my config yet; that's what I'm gonna do now. So I import it, services, AllureService, and all I need to do is just add our service to the list of services. And I think that's it. We can also parameterize this so we can apply some configurations that can be overwritten, and then in the constructor of the service, you can read these configurations and make, for instance, the target directory configurable. So we ran this: unexpected identifier, that's great, okay. Okay, if we do this this way, let's do it the right way, okay. So now I have a new reports folder that has the index HTML. I still see all the results. But now if I go into the Allure results, we can see, can change the screen again, to a new browser. Okay, there's one directory, one second. I want to see the XML. I want to see the, oh, perfect, okay. So, I need to reuse a newer folder, you know. Okay, one second. We got the same one. One second. It's heavily caching my session. But now, in the refreshed window, we see that we have the Allure report. We have the one test that has been running. It has all the request information in there so that you can share it with your teammates, and again, we have now moved this into a service, so you can write this functionality once in your company and share it across your developing teams. So that can be really powerful. So that was that. How are we on the custom service? And we put aside chapter four, I think. Yeah. Okay. Moving forward to network mocking. I think we have a break at some point. But Dima's asking if you can share the code. You can share the code of what you just wrote.
It's in the solutions. So here we see... Yeah. The solution URL. Oh yeah, we can share it. I'll share it. I'll drop it in the chat. Okay. Nice. And then we had another question for you, Christian: Herman was asking why you're using promises instead of async/await. Yeah, I think I could also have used async/await. So let me just rewrite that. That's a good point. Let me see. I don't need that now, so I could just do, yeah. The thing is that this function does not return the promise. So I can technically do this though. So I have an await here and continue to return something else or do something different. But the Allure command line does not support promises. So it's better to wrap this in a promise to properly handle the exit code and the timeout. You can surely use async/await, and not only for your services and hooks, but also for your tests. Some people prefer it. I know a lot of people that come from the Java world, or languages that use synchronous coding, and so it's much easier for them to write tests without async/await rather than with, because at some point it can get complex and you always have to keep in mind which function runs synchronously and which asynchronously.

10. Network Mocking and CI/CD Integration

Short description:

The network mocking part will only work locally and will not work in CI/CD. Browser vendors may require you to upload or provide your test to be executed next to the browser in the cloud. The access to browser APIs is too big for running tests at scale in CI/CD, as it can potentially DDoS your own server. We will save the network mocking for later and focus on CI/CD integration using GitHub actions and workflows. GitHub workflows provide pre-configured CI pipelines for easy setup, and we will use it to set up a CI pipeline for functional tests. We will go through the steps of setting up the pipeline and running tests using NPM install and NPM run test. Additionally, we will add the necessary browser configuration using the puppeteer container.

I personally prefer synchronous execution. Cool. Let's take a break. So who has already looked into the network mocking section? I find that one of the definitely interesting ones, and here, let me share my screen with the network mocking. We have the case where WebdriverIO is leveraging the browser APIs to do those kinds of things. So as you might know, there's the Chrome DevTools Protocol, which is the browser API for Chrome. And it has a lot of capabilities to not only listen to network events, but also to modify them, in the fetch domain. And this protocol is specific to Chromium-based browsers. So you will be able to automate Chromium-based browsers like Google Chrome, as well as Opera. I think Opera, if people still use it, is also still based on Chromium. And so when you connect locally to Chrome, it can connect to that browser because it runs next to your test. The problem with that is that if you run it at scale and in the cloud, browser providers or browser vendors won't be able to give you access to this protocol because, yeah, it would require access to the VM that runs in the cloud, which is a challenge for security purposes. So what I see, or what we see in the market, is that browser vendors will likely ask you to upload or provide your test so that it can be uploaded into the cloud, to the cloud vendor directly, where the test can be executed right next to the browser. But we can also see directions where you can run your tests everywhere as long as the test is close to the browser. And I think with the new WebDriver protocol, that trend will be continued.
So the idea of running a test somewhere in CI and having the browser run on the completely opposite side of the world does not really work anymore in the future, because of the access to these browser APIs that are really useful and important for a lot of testers and frontend developers. The information exchange is just too big: if you were to run the same scale of tests, let's say hundreds of tests in your CI/CD, using the Chrome DevTools Protocol with, let's say, Puppeteer, you are likely to DDoS your own CI/CD server. With a WebDriver command, where you just send one command and get one response, it works over the cloud perfectly, but with the Chrome DevTools Protocol and the future WebDriver protocol, this will likely change. So that's why this network mocking part will only work locally; when we move to CI/CD, this will unfortunately not work anymore. But as we just discussed with Nikolai, due to time limitations, Nikolai will now continue with the CI/CD integration part, and once we're done with that, we will go back and look into the network mocking. Forgot the chat.
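For orientation, the kind of CDP-backed network mocking being discussed looks roughly like the sketch below in WebdriverIO's API (`browser.mock`, available from v6 with the DevTools integration). It only works against a local Chromium session, which is exactly the limitation described above, so treat it as illustrative; the URL pattern and stub payload are made up.

```javascript
// Illustrative sketch only: requires a *local* Chromium session, since
// browser.mock goes through the Chrome DevTools Protocol fetch domain.
describe('network mocking', () => {
  it('serves a stubbed API response', () => {
    // intercept all GET requests matching the (made-up) endpoint
    const mock = browser.mock('**/api/users', { method: 'get' });

    // answer them with canned data instead of hitting the real backend
    mock.respond([{ id: 1, name: 'Test User' }], {
      headers: { 'Content-Type': 'application/json' },
    });

    browser.url('/users');
    // ...assertions against the stubbed data would follow here
  });
});
```

Because the mock rides on CDP, the same spec would not work unchanged against a remote cloud grid, which is why this chapter is kept local.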

Okay. Cool. Are you okay for me to take over, Christian? Yes. Go for it. Sure. Thanks, man. Yeah. So as Christian said, we are a little bit behind on time, so we want to make sure we show you guys everything. So we'll save network mocking for the end, if we have extra time. In the meantime, let me just share my screen. We're going to do a CI/CD integration. So basically that's this chapter right here. That's basically taking everything that we did so far, which is really only writing one test in WebdriverIO (and if you had the chance, the same test in Cypress), and we're going to integrate it into CI/CD. Let me drop the link for you all into the chat for that. That's this chapter here. So for our CI/CD, we're going to use GitHub Actions and workflows. Why are we going to use it? Because it's really simple to use, one of the simplest CI/CD tools I've ever used, and it's also free. So we all love free stuff, right? So I'm just going to show you all, there are instructions on, well, there are no instructions here. I'm just going to show you all how to set it up and you all can follow along with me. Feel free to push your code to your GitHub account and then we'll all set up a CI pipeline. I'll try to go a little bit slower so that everybody has time to catch up. So you'll go here to the Actions tab in your GitHub account, and you'll say new workflow, and the really nice thing about GitHub workflows is that it automatically comes with a bunch of already pre-configured CI pipelines that show you how to set stuff up. For us, since we're working with Node, this one seems extremely relevant. So we can just go ahead and select this one. And then as you can see, it already has a bunch of steps in here. So we can give it an appropriate name. For you all, maybe since we have just one test in there, we can do something like functional tests, or whatever you want to name the YAML file. Let's call it that.
That's okay for me. And so then here we'll see a bunch of steps that are already configured. So here is the name of the pipeline that'll appear whenever it runs in GitHub. So why don't we just say functional tests. And here it says that on a push and a pull request to the master branch, we're going to execute this pipeline. Sounds excellent for us. This is all okay. Of course you can configure it all; for our demonstration purposes, it's not necessary. Here they're also configuring it to run on multiple Node versions, which we actually don't need, so we are just going to get rid of this step right here. We could of course do that, but it's unnecessary at this point. And instead here, we're just going to put the Node version instead of a variable, which is going to be 14.x as they have there. And here we'll just rename this as well, 14.x. All right. So, here you can see we have a bunch of steps: check out the code, use Node version 14, and then run npm ci. We don't need any of these. So let me pull up, if you remember, with WebDriver.io all we needed to do was an npm install. Actually, here, let's do this. We just need to do an npm install, and then you can do npm run test, if you configure that in the package.json. And of course we want to cd to our directory, wherever that was. So in my directory, which is the GitHub directory here, I'm going to navigate to the solutions folder, and then I'm just going to copy and paste this path here. So we're navigating to the directory, running an npm install and an npm run test. One thing that's missing here: because we don't have a browser on the VM, we need to provide one. So we are going to also add that here. We'll say uses, and by the way, the GitHub Workflows documentation is super nice, so if you want to look into it further, you definitely can.
This is Ian Walter's Puppeteer container.
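For anyone following along, the finished workflow described above might look roughly like this. Treat the container image, folder path, and file name as assumptions from the walkthrough; adjust them to your own repo:

```yaml
# .github/workflows/functional-tests.yml (sketch)
name: Functional Tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  test:
    runs-on: ubuntu-latest
    # A container that ships with Chromium, so the tests have a browser in CI.
    container: ianwalter/puppeteer
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js 14.x
        uses: actions/setup-node@v2
        with:
          node-version: 14.x
      - name: Install and run tests
        run: |
          cd solutions/webdriverio   # path from the walkthrough; adjust to yours
          npm install
          npm run test
```

Pushing this file to master should trigger the pipeline immediately, exactly as in the demo.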

11. CI/CD Integration and Job Execution

Short description:

Let's give this a shot and see what happens. We have a bunch of jobs already set up for this demonstration. Let's open the Functional Tests job and see it in progress. We can also check previous jobs that have executed. The setup process is easy to understand and provides robust results. The steps are well-documented, and the logs give us all the necessary information. We can troubleshoot any errors that occur. Let's make some adjustments to improve the setup and give it another try. The process may be slow, but it allows you to set up your own environment. The mistakes made are intentional to help you understand the process better.

OK, so this will have a browser that we can run our test against, because we're in CI and we need a browser, right? So I think this looks good to me. Let's give this a shot and see what happens. Worst case, it fails and we'll troubleshoot the errors.

Let me just make sure: install, run test. Cool, everything looks good. Yeah, this default message is good enough for us, and we'll commit a new file. Give it a second to start up. It's funny, I've noticed that during demos or presentations GitHub always seems to be extra slow for me; sometimes it takes a few minutes to actually start up. Anyway, while that starts to execute, I can show you all here. You can see it gets this yellow dot, because now there's a job in progress. We have a bunch of jobs that I've already set up for this demonstration that I'll take you all through. But let's see if it has our new one here. See, it's called Functional Tests. This is the one that we just set up, so let's open it up. And here you can see it's actually going to be doing stuff. Here it's building the container, and it'll do a bunch of other stuff. While we let it run, I'll take you guys through another one that executed previously. This is one I set up for the workshop, same exact thing. This one's also in progress. Give it a second. Did that run? Let me see if there are any that have already finished... Here's one. No, this is WebDriver.io Cloud. I don't even remember which one this is. This is WebDriver.io Cloud CI/CD. Here's a CI/CD one that's already executed previously. So you can see it gets each of the steps: it does our actions here, we install the dependencies and execute. Here you can see it does all the appropriate stuff, pulling the image. And this is actually with Cypress, so that'll confuse you guys. Oh yeah, that was the Cypress CI/CD integration. Where is my... oh, here, WebDriver.io Cloud. Sorry, too many pipelines. Ah, here. And let's see this run tests. Yeah, perfect. So here you can see our run tests step: navigating to the WebDriver.io folder, doing some setup, and then finally running the tests.
I'm trying to show you guys the command where it executes: install Node, and then basically run the tests. It does a post-checkout and completes. And the nice thing, as you guys saw, I hope, is that it seems pretty easy to set that up. And you get all the output. This was one of the reasons why we had that discussion about reporters and logging: once you have it set up with the appropriate tools, it's pretty robust and gives me at least everything that I need to see if something goes wrong. And you can see here's the one we built live; everything succeeded. Here are all the steps that happened. I wonder why it did it twice? We can take a look at that after. Oh, I didn't even run tests. So this one didn't run any tests, just the checkout. Interesting. Let's take a look to see what's going on with it. So that goes in your .github/workflows folder here. Here is the one that we just created. So let's see. We've got steps: the checkout, the use-Node-version step. To be honest, we don't even need this one, so we can just get rid of it. Going to hide this. We'll use this Puppeteer container, and then we'll give this step a name. Let's change this to that. There you go. Let's give this another shot and see if that works better. Hopefully that's slow enough for you guys to set up your own as well. I think the mistakes on my end are deliberate slowness so you guys can get up to speed with this as well. So here they are, all running again. Let's take a look at the details. So here it's getting the container and running the actions. It's so interesting, I don't know why it's doing it twice. That's one push. That's so weird.

12. GitHub Workflows and Caching

Short description:

When using GitHub workflows, there can be issues with running multiple steps of getting Puppeteer containers. Caching can speed up the process, but it may cause unexpected behavior. The cache operation can be configured in the GitHub action YAML file. Common configurations can be placed at the top of the YAML file to be used in multiple jobs or steps. Examples of using environment variables and running tests with WebDriver IO and Cypress are provided.

It's always, I swear, always when I do demos with GitHub workflows, funky stuff happens. Isn't it supposed to still be in beta, technically? No, I don't think so. Not that I've heard of. Unless I'm missing it, I don't think so either. It should be stable by now. Yeah, I know we use it all over the place for our software. What is the issue again? It's running two steps of getting two Puppeteer containers, I don't know why. It's building one, and then running it. Yeah, good call. That's exactly what's happening. Probably the build is being cached, and then when you run it, it's really quick every time. It does that to me when I use a cache; I cache my npm installs, and it has like three steps for caching. Oh, nice. OK. Perfect. Yeah, thanks. And see, you see how it ran the functional test that we just defined; we gave it a name. And then here, it's performing the operations, and then it's going to run our test. Here, it's executing the one spec, you can see. Give it a moment. It runs pretty quick, which is super nice. But Dean, you don't need to cache anything, you just set it up and it does its own thing. Cool. Nikolaj, how do you do the cache operation? I'm sorry? How would you do the cache operation? I think there's a parameter that you can do in your GitHub Action, right? Oh yeah, that's a good question. Yeah, exactly, I'm sure there's a parameter that you can pass. Dean, I've never used the cache operation, but I'm sure that you can if you take a look at the documentation. You can also, if you wanted to get more complex, start configuring the YAML if you have multiple steps. I'll actually show you guys at the end a huge CI pipeline that does a bunch of stuff. But if you have common stuff, you can start putting it on top, and it'll be used in a bunch of jobs or a bunch of steps.
You guys will get to see that towards the end of the presentation. Actually, I can show you guys now, but we'll see it run later since there's a question about that. Here's for all tools for examples. So you can see, for example, at the top, we start using environment variables everywhere. So those are at the very top. And then there's a bunch of jobs like using WebDriver I.O., using Cypress. You see it's also running visual tests as well. And so that's how you put it all together.
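Since the question about caching came up: with a setup like the one above, one hedged way to cache npm downloads is the built-in `cache` input of `actions/setup-node` (available from v2 onward), which handles the cache keying on your lockfile for you:

```yaml
# Sketch: swap this in for the plain setup-node step to reuse the npm
# download cache between workflow runs (keyed on package-lock.json).
- name: Use Node.js 14.x
  uses: actions/setup-node@v2
  with:
    node-version: 14.x
    cache: 'npm'
```

Alternatively, the `actions/cache` action can be used directly when you need finer control over paths and cache keys.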

13. Setting up CI Pipelines

Short description:

Has anyone been able to set up their CI pipelines and get them to run? If anyone is having trouble, let us know. The output of GitHub workflows is super useful. The approach works similar to other CI, CD pipelines. There is documentation available on how to set this up in the web.io docs.

OK, cool. Has anyone been able to set up their CI pipelines and get them to run? Yeah, we're getting plus ones in the chat. Awesome. If anyone is having trouble, let us know and we can see what's going on. The output of GitHub workflows is super useful, and we can usually pretty easily figure out what's going on. And the approach works similarly in other CI/CD pipelines, like CircleCI, and I feel like Jenkins as well, where you provide the Docker image that is being pulled and then run your tests based on that Docker image. It's just a matter of how you define your workflows, and that is different from one CI environment to the other. But there is also some documentation about how to set this up in the webdriver.io docs.

14. CI/CD Integration with SauceCTL

Short description:

If you have Cypress installed with the Sauce Test Runner Toolkit, add it to your CI pipeline. Create a brand new YAML file and add the Sauce Test Runner Toolkit. Installation is relatively simple. After installation, create a Cypress folder with a config.yaml file and your tests in the integration folder. saucectl provides parallelization capabilities and allows data from all tests to be aggregated in a single place. It is still in beta and open source. The vision of saucectl is to be part of the entire application development process, offering additional testing capabilities like API and security testing.

Cool. So if you are able to set it up, I've got a little challenge for you all that you have to try on your own, but of course we're here to help. Try to do it with Cypress as well. If you have Cypress installed with the Sauce Test Runner Toolkit, just add it to the CI pipeline that you just created. Actually, you can create a branch and just build your CI pipeline there, or you can create a brand new pipeline, which is simply just a brand new YAML file. You can see how we have many; give it another name, and then simply add the Sauce Test Runner Toolkit there. And I gave you a hint of how to do that, so it should be relatively simple. Let us know if you're struggling with anything along the way; we're here to help you. And if you, by chance, were not able to install the Sauce Test Runner Toolkit, let me show you again where that documentation is. It's actually relatively simple: you install it, scaffold a brand new test, and at that point you can pick Cypress. And then that's it, you just execute. You can execute by pointing it at a config.yaml that comes along the way, or you can do it exactly like I showed you in the challenge, which was to just cd to the correct directory and then do npm install and saucectl run after you have the Cypress code in there. And I can show you all how that might look. Give me a second. After you do the installation of the Sauce Test Runner Toolkit, if we look here, you could have a Cypress folder, for example, like this. You have a Cypress folder that would have this config.yaml that configures Cypress.
And then you will have your tests in the integration folder. When you do saucectl new, it'll come with a test automatically. And so then it's a matter of just adding it into your YAML config file. I won't show you that solution because that's your challenge. Let me know when somebody has that going, or if anyone is confused. It's hard; I'm trying to look at the videos in Zoom and I don't know if you guys are super focused on doing something or just super confused. It's the same look, you know. Yeah. I'm seeing the wdio.sauce.conf.js, very long name. But the docs only talk about integration with Cypress and Playwright and all those with Sauce, and not WebDriver.io. WebDriver.io is different here because it's a WebDriver-based framework, so it doesn't need saucectl to run. Integrating with Sauce is as easy as having the user and key defined as options in your wdio.conf. The chapter we're on right now is the CI/CD integration; running on Sauce comes with the cloud integration chapter. We're going to run the Sauce Test Runner Toolkit on Sauce in the next chapter. Right now we will do CI/CD, and then we'll integrate into the cloud in the next one. To run WebDriver.io in the cloud on Sauce, BrowserStack, or the other platforms, you just need to provide a user and a key, and sometimes the host name and port as well. For the vendors where WebDriver.io can detect the username and access key (it recognizes the format of access keys that are specific to a specific vendor), it can detect them automatically and you don't need to do anything else. And then instead of sending the commands to a ChromeDriver, which in our case currently runs in the GitHub Action, it sends the requests to the cloud.
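If you are attempting the saucectl challenge, the scaffolded layout looks roughly like this. saucectl was in beta at the time of this workshop, so every field name below is an assumption that may differ in your generated file; treat whatever `saucectl new` emits as the source of truth:

```yaml
# .sauce/config.yml (sketch of an early-beta saucectl Cypress config)
apiVersion: v1alpha
kind: cypress
sauce:
  region: us-west-1
  concurrency: 1
cypress:
  configFile: cypress.json   # standard Cypress config file
  version: 7.1.0
suites:
  - name: "Functional tests"
    browser: "chrome"
    config:
      testFiles: ["**/*.spec.js"]
```

With something like that in place, the CI step from the challenge reduces to cd into the folder, npm install, and saucectl run.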
And as I mentioned before, that is currently possible with the WebDriver protocol, as one command represents one request and there's one response to it. But Cypress, Puppeteer, and Playwright all run on different protocols that require much more communication, and that doesn't scale. It works maybe for one test, but it doesn't scale for more tests. So that's why there's saucectl as part of the Sauce Test Runner Toolkit that we are developing, so that it handles the test infrastructure for you with Docker and dockerization. That container either runs in the GitHub Action or in your CI/CD pipeline, so your browser runs in that pipeline for the tests where Chrome is sufficient, but you will eventually be able to run them in Sauce Labs, in our device farm, in our data center, to access all the other browsers as well. Yeah, Vasily asked a similar form of the question, saying why would you use saucectl for test running in CI/CD, or even locally, versus Cypress standalone. The main reasons for Cypress with saucectl, for example, are the parallelization capabilities you get, so you can run in as many containers as you want, hooked up to the Sauce Labs cloud as well. You get video and logs out of the box, which also comes with Cypress standalone, but you're kind of limited there. And then also, the data from all of the tests can go into a single place that you can then aggregate and take action on. To be honest, it's still in beta, so we are definitely taking feature requests and improvements, and it's totally open source. Yeah, Vasily, it's exactly like you're saying, pretty much like Zalenium, but of course for all the common JavaScript frameworks: Cypress, Playwright, Puppeteer, and TestCafe. And it's in beta, so feel free to suggest any features, but you'll get to see that in the next chapter of how we integrate into the cloud.
And maybe that will also give you more of a visual answer to your question. I know that Cypress.io has their dashboard thing that does the same. How is this better? Currently I'm using the dashboard with Cypress.io. If I want to keep my tests in Cypress but move to Sauce, why would I want to do that? Well, if you just use the dashboard capabilities to get your results, you would not need to change, because you see the same results on Cypress.io as on Sauce. However, we have a lot on our roadmap in terms of features, and we will not only show you the test results. Because we run our own Docker containers, and we prepare our own Docker containers, we will instrument them in a way that allows you other capabilities with Cypress: things like API testing, security testing, all these kinds of things that will be possible on the Sauce infrastructure. I'm not sure if Cypress.io has planned similar things. Whatever works and is cheaper, depending on your requirements. But the vision of Sauce is ideally to be part of the whole development of your application, not only testing it at the end, but also during development. We want to understand a lot of what's going on in your application. We try to add more testing capabilities, not only functional testing, but also API testing and security testing, and we have visual testing as part of that. It will be all in one, giving you the most capabilities of them all. Okay, so it's a price thing and a future investment. Correct. And eventually, saucectl will also take on WebDriver-based frameworks like WebDriver.io and Selenium to run in our cloud, because once the WebDriver protocol has evolved and allows more capabilities, we won't be able to use the current infrastructure, the current way we execute tests, anymore. So we use saucectl to provide you the infrastructure in one command, so you don't need to set all of this up again.
You know, all this hassle of reporters and services that will slightly...

15. Speed and Parallelization in Testing

Short description:

Tests are inherently slow, especially end-to-end tests. However, Sauce Labs can help in parallelizing test execution, reducing test times from days to minutes. WebDriver-based frameworks like Selenium can also benefit from this approach. Sending commands to Sauce Labs or a cloud provider can take up to 200 milliseconds per command, but with saucectl, tests can be executed faster by moving the browser closer to the test environment.

step by step, will go away. And saucectl provides you that; think about it as a Kubernetes for testing, in that sense. I know that the biggest pain we have with Cypress and with testing in general is speed. Tests are inherently very slow if they're end-to-end. Is there anything Sauce can help with, except for the parallelization? So directly, if you run Cypress in a Docker container versus Cypress locally, there will not be any improvements there. We can definitely help in the parallelization process and parallelize all your test execution. We have customers that run 1,000 tests in parallel and have a test execution time of five minutes, where if they did this one test at a time, it would sometimes take days. So you can drastically reduce that. The same is true for WebDriver-based frameworks like Selenium, which are also known to be slow. This is all because sending a command to Sauce Labs or to a cloud provider can take up to 200 milliseconds. With saucectl, moving the browser close to where the test is being executed will make those tests much faster, similar to how Puppeteer tests tend to be faster when you run them locally right now. This will all apply to WebDriver-based frameworks too.
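To make the parallelization concrete in WebdriverIO terms: the fan-out is controlled by a single config option, `maxInstances` (a real WebdriverIO option; the numbers and paths below are illustrative):

```javascript
// wdio.conf.js (fragment): WebdriverIO runs each spec file in its own worker
// process, so raising maxInstances is the knob for local or cloud parallelization.
const config = {
  specs: ['./test/specs/**/*.js'],
  // Up to 10 spec files run at the same time; on Sauce Labs this is
  // effectively capped by your account's concurrency limit.
  maxInstances: 10,
};

exports.config = config;
```

Combined with the atomic-test strategy discussed here, this is what turns a long serial run into a few minutes of wall-clock time.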

16. Cloud Integration and Running Tests in the Cloud

Short description:

Parallelization is one of the easiest ways to handle the slowness of tests. Breaking up large end-to-end tests into smaller, more atomic tests and running them in parallel can significantly reduce execution time. Cloud integration offers the ability to scale and run tests in parallel, as well as access to video, screenshots, and logs. It also provides cross-device and cross-browser capabilities, extra analytics, and a single dashboard for actionable insights. In this chapter, we will tweak the local setup to run tests in the cloud, integrate with CI, increase browser coverage, and run tests on multiple desktops and mobile platforms. To use Sauce Labs, you will need a username and access key, which can be obtained by signing up for a free account. Additionally, Sauce Labs offers free licenses for open source projects. The final solution allows running tests on different browsers, real devices, and virtual cloud tests, providing video, screenshots, and logs without extra code. Contact the sales team for more information on the differences between Sauce Labs and other testing platforms.

Okay, thank you. Thank you. Yeah, I think at this point, especially with up-to-date JavaScript frameworks like Cypress and WebDriver.io, tests, if you write small ones, are already really fast; I mean, they're running in a few seconds. So parallelization is one of the easiest ways to handle the slowness. And then the other way that we recommend to customers, which is more challenging but also more technically correct, is breaking up large end-to-end tests into smaller, more atomic tests. If you can run in parallel, instead of waiting for one five-minute test to finish, you can break it up into multiple tests that run in parallel and execute in a minute versus five minutes. I helped one customer break down 18 large end-to-end tests that were running in about 24 minutes into about 180 tests that we ran in parallel, and that ran in under three minutes. It was more work, but those are the kinds of strategies I think we now need to apply for getting faster results. Okay, so just more best practices. Yeah, yeah. Okay, cool. I think let's move on; it sounds like everybody's okay. Was anyone able to get the YAML file set up for Cypress? Let me see here in the chat. Oh, nice, perfect, awesome. Yeah, it should have been super easy, especially if you did the first one. Awesome, let's move on to the next one, cloud integration. So Chapter 7. Here we will do some cloud integration. Some of the advantages of cloud integration are the ability to scale and run in parallel, as we already talked about. If you have the need, you can do web and mobile testing. You also have access, without any extra code, to video, screenshots, and logs. Some tools like Cypress, for example, are very comparable to that locally, but of course they don't provide cross-device, cross-browser capabilities.
Of course, it depends on your needs, and of course access to extra analytics: our goal is ultimately to have everything from the entire DevOps toolchain going into a single dashboard to provide you actionable insights on your software and bug fixing. So, the objectives of this chapter: we're going to tweak what we did locally and make it run in the cloud, then we'll integrate it into CI in that manner as well, and we will increase browser coverage and run on multiple desktops and some mobile platforms. For all of that, you guys will need a Sauce Labs username and access key. So you can just go to Sauce Labs and sign up for free. Here's the URL; you just sign up for free, and let me drop this into the chat. Thanks. Also, since I'm working in the Open Source Program Office, if you ever have an open source project that you want test capabilities for, just let me know. There's an open source page where you can sign up your project, and then we give away free licenses you can test your application with. Is there somehow I can join the initiative? What do you mean by that? Like joining the open source thing, writing tests for our open source projects. Yeah, we'll send out the link, and there's a form where we receive your request, and then we give you a free license on your account. No, I want to write tests, not get tests. Or did I misunderstand? Yeah, so we give you free licenses so you can use Sauce Labs for free, basically to a capacity of, I think, five VMs at a time. You can run five tests in parallel, and while a free trial account usually runs out after two weeks, that open source account stays with you forever. Okay. Okay, cool. So all of your challenges are right here, as you can see. Just give it a shot, follow the instructions, and at the end, you're going to be able to run your tests in the cloud.
So while you do that, I'll show you guys the potential final solution. Let me see. You potentially have a folder that contains all of your WebDriver.io tests, and then you might create a separate config, or keep your own; it's up to you. I have a separate one here, okay? And let me show you all my package.json for that one. I'm just configuring my package.json to do npx wdio and then pointing it to the config. And so, of course, we can run that command, and that's going to run multiple tests on different browsers. And if we take a look in here, you'll see I have some real devices running tests and also some virtual cloud tests, which are just browser tests that executed, you can see, less than a minute ago. So this is what those tests look like. It's the same test that we've written previously. All of these are using WebDriver.io here, just to not confuse anyone. So that's one of the advantages you get with Sauce out of the box: you get all of this, video, screenshots, logs, without having to write any extra code; the test here is exactly what we've written before, just a simple write test. And you can see it executed across multiple different operating systems and browsers. But then of course the cool thing is also to run on real devices, if you want to test mobile web, for example. So here it's running on a Google Pixel; here's an iPhone 11 Pro, an iPhone XS. You can take a look at one of these: it pulls up on an actual real device, we can play our test, and it just performs those operations there. This is the iPhone XS. And that's what the final solution looks like. Oh, and I think I showed you all, by the way, if you want to access your username and access key, that's down here in user settings. Once you open it up, you'll see your username and access key in there.
I'm just not opening mine so you guys don't have access to my access key and start running tests on my platform. And let us know if you all are struggling. I see ConnieKorn is saying they're using BrowserStack right now and it's very similar. I would say just talk to one of our sales people; they might be able to provide you some really good information there. Me and Christian, we're obviously the technical people, so I can tell you in terms of technical features where it's different. And by the way, I haven't used BrowserStack in a long time; I know they're similar. But I know that at Sauce Labs we can do stuff like visual testing and performance testing, and API testing, which we actually just acquired like a month ago. But we're too technical to answer this question; you can contact the sales team. They'll probably even contact you if you create your free account, and then you can talk about that there. Vasily said that nothing beats the joy of manually testing on a phone for eight hours straight. Yeah, that sounds like a dream, Vasily. I feel sorry for you, man. Oh, Vadim, it's up to you if you want to create a new WDIO config; you can, or you can just configure the one that you have.

17. Configuration and Saucelabs Integration

Short description:

You can have multiple configurations for different environments. Specify the environment you want to run in with the WDIO command. The Sauce Labs service provides integration and additional capabilities. To run tests on Sauce Labs, remove unnecessary configurations and add the Sauce Labs service and capability. The Sauce Labs service offers features like updating test job names and starting Sauce Connect. If you encounter credential errors, try hard-coding them directly. Remove the Sauce Labs service for now and check if the capabilities are compatible with your account. Make sure you have the necessary packages and configurations. If you're still facing issues, share the error message for assistance.

It's up to you. Obviously we have all the solutions in here, so it's not like you're going to lose the local configuration; it might be easier for you to just configure the current one. The best practice here, though, is that you might want to have multiple configurations for multiple environments. So there will always be one WDIO config file where you have your main configuration for all your environments located, like your framework and some of the main hooks that you use for all environments. But then you have a specific Sauce configuration that requires the main configuration file and modifies the properties in there. Then, with the WDIO command, you can specify which environment you want to run in, and that config file has the tests for that specific environment specified, with the capabilities for that specific environment. So let's say for your local tests you have a dedicated config that defines the ChromeDriver service and the Chrome capability, and then next to it is your Sauce configuration file that has the Sauce service, for when you integrate with Sauce, and a lot more capabilities, because you run more devices or more browsers on Sauce. And the same works for testing environments, so not only execution environments, but also testing environments. Let's say you have local development, a local app, or you run against the post-deployment test suite. This is a good way to modify your configurations with specific files.
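The layered-config pattern Christian describes might be sketched like this. The file names are assumptions, and the inlined base object stands in for `require('./wdio.conf.js').config`; WebdriverIO's docs suggest the same pattern, typically with the `deepmerge` package instead of the hand-rolled spread below:

```javascript
// wdio.sauce.conf.js (sketch): layer Sauce-specific settings over a shared base.
const baseConfig = {
  // stand-in for require('./wdio.conf.js').config
  specs: ['./test/specs/**/*.js'],
  logLevel: 'error',
  capabilities: [{ browserName: 'chrome' }],
};

const sauceOverrides = {
  user: process.env.SAUCE_USERNAME,   // from your Sauce Labs user settings
  key: process.env.SAUCE_ACCESS_KEY,
  logLevel: 'info',                   // more verbose logs for cloud runs
  services: ['sauce'],                // the WebdriverIO Sauce Labs service
  capabilities: [
    { browserName: 'chrome', platformName: 'Windows 10' },
    { browserName: 'firefox', platformName: 'Windows 10' },
  ],
};

// A shallow merge is enough here: Sauce-specific keys win over the base ones.
const config = { ...baseConfig, ...sauceOverrides };
exports.config = config;
```

You would then run this environment with something like `npx wdio wdio.sauce.conf.js`, keeping the local config untouched.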

Yeah, thanks for bringing that up, Christian. We actually have a public repo here with our best-practice examples, specifically for functional testing with WebDriver plus others. And here's what Christian was talking about: having different configs, one for local, here's the shared config with everything common, and then here would be a Sauce Labs config, and you just point to the appropriate one based on your environment. Adi, you can do it like this, your username and access key in the config. But you won't have that config, because this is coming from the repo. Let me show you the actual solution; that'll be more efficient instead of throwing you off. Let me show you a quick demo. It's part of this config, I think. So for everyone working along, I can show you all that configuration. You'll probably have something similar to this from before, and then we are just adding some Sauce credentials. You'll need the WDIO Sauce Labs service. And at that point, we can start configuring our capabilities for different browsers and browser versions. You all maybe just want to start with something small like this, get that going, and then we can scale out to more browsers and operating systems. And that really should be all there is to it. And if you're curious what the Sauce config looks like, it's just this; you guys don't actually even need it. I was just putting a build name in there for extra verbosity, but you don't even need that, honestly. Just to mention, the Sauce service that WebdriverIO provides is not necessary to run on Sauce, but it provides a nice integration, because it automatically updates the test job name and status and adds annotations to your dashboard there. And it allows you to simply start things like Sauce Connect, which is our secure proxy. If you test your local application on localhost, or if you don't want to...
publish your application to the public and have it on your own local network, the Sauce Connect proxy is the way to go, and it helps you to set all these things up for you. So I'm curious here, has anyone been able to run their test on Sauce?

Working on it. And of course, if you're facing any issues, feel free to throw them in. We are here to help you all. So, maybe to speed this up and help you all along, let's show you all the solution for the config. That's what ultimately your config will look like. And as you can see, at the beginning of the config we merge our main config, our local config, into that. So this is our base, and then we just modify the things that are specific to that environment: maybe a more verbose log level, the user and key defined to connect with the cloud, and then the environment-specific services and capabilities.

Yeah, thanks for pointing that out, Christian. For anyone a little bit confused about that, just to make sure, because I know we have so many different levels here: probably the simplest thing is, you guys will not have a local config, you are probably just modifying the one WDIO.conf that you have. And so then you really only have this one section here and you're just basically adding this stuff for Sauce Labs. So you're adding this, you're adding the sauce service, and you're adding just one capability for now. We'll expand some more later. And so that's really it. I'm getting that my credentials, my secret is incorrect even though it's copy-pasted from user settings. Oh, interesting. It's possible that the IDE is not reading them sometimes. Why don't you just hard code them here directly for now. That's what I did to try and fix it and I'm still at the same problem. Oh, really? Yeah. You're saying invalid, your credentials are invalid. And is it displaying the credentials in the log, in the output? Well, a bit of them, it's displaying like the end. It's like XXXXX, and then the four last characters are displayed. Can you drop the error in the chat? Oh, there. Yeah, well, yeah. Just the error message, you don't even have to put your credentials in there as well. Well, actually the credentials aren't displayed, right? Okay. Oh, incorrect Testing Bot credentials. Oh, you might be using a TestingBot service. I am not using anything specific. I removed the Chrome service and added the sauce service, exactly how the markdown says. Hey, Christian, are you familiar with that error? Testing Bot credentials. Do you have the sauce service set? Yes. Do you have it in your package JSON? Yes. Can you remove the sauce service for now? Can it be that some of the capabilities that you have set are RDC (real device) capabilities? I'm not sure if users that have just signed up can use RDC. Michael, do you know that?
No, but I bet with that, I think you're just running on Chrome Windows 7? Yes. I actually copy-pasted from the markdown so I won't miss anything. Ah, good, good call. So maybe just here, let me show you guys: you want to just remove everything and only leave this one thing and give that a shot. Delete — so let me show you what to remove exactly so I'm clear. Remove all of this below, so, yeah, remove up to here, right? You'll want to keep the square bracket. Get rid of all of that, because I bet your guys' free accounts don't get access to real devices, so you might be encountering an error. But you can go- I have only Chrome latest on Windows 10, only that one. And that still fails, the same error? Yeah. Yeah, yeah. Are you comfortable sharing your screen and showing us? Sure. Oh, yeah, okay. Yeah, yeah, this is my secret code, you can steal it if you want. You can reset it after the workshop. So here's the user and key. There's like the services here, only sauce.


Troubleshooting Sauce Labs Integration

Short description:

In this section, we discussed issues with credentials and access keys for running tests against Sauce Labs. We explored the importance of storing credentials securely and the challenges that may arise during CI/CD integration. We also identified a UI bug that caused errors when the API key did not contain dashes. Overall, it was a valuable learning experience for everyone involved. Moving forward, we will focus on integrating cloud services into CI pipelines, specifically with GitHub workflows. The process is similar to previous setups, with the addition of setting environment variables for the username and access key.

And it can type. I see it's there on line 50. Yeah, only this. Could you remove the sauce service for now to see if that service maybe does something funky with your configuration. It's running. I'm actually getting the same error. Wait, wait. Yeah, same error. That's very interesting. Invalid Testing Bot credentials. Okay, what's your username? Can you paste it in the chat? I can look up the username. Has anyone else similar issues? This is my credentials up there. Jay, yeah, another person is having the same issue. Hey, I think you can stop sharing if you want. Yeah, you guys are asking some good questions, so... The username and access key don't need to be in environment variables. However, when we move into CI/CD it could pose some challenges for you, because I don't know if you all work on teams, or let's say we're going to be committing into GitHub, right? You don't want your username and access key going out into the GitHub world and being shared with everybody, so that's why people store them in environment variables. So, when we do CI/CD integration, it actually just works. And I was going to show you guys how to set up secret environment variables in GitHub workflows. So, if you're okay sharing your credentials with the world and GitHub and committing them, that's okay. If not, you should place them in environment variables. I was not able to run this test, but it maybe has to do with the user. Did you just sign up? Or have you had this account for longer? Did you ask me, or...? I signed up at the start of the workshop. Can you sign in? Right now, we're trying to change the access token and see if that helps. I generated a new one. It looks actually vastly different. Like it has these dashes. Oh, you didn't have dashes in your API key before? No. Oh that's...
Yeah, that might be it. You should have dashes in your API key. And that's a UI bug. So interesting. Curious, friends, has anyone else been able to run tests against Sauce Labs? OK, there's progress. That did change the situation. I'm getting a completely different error. Nice. I'm getting a forbidden. Or 401, not forbidden, unauthorized. Oh, nice. OK, that's good. I see... Maybe there's something wrong with the copy function, but usually the access key should have dashes. Hey, Vasilis, you're getting your error because you did not... Oh... "But could not be connected to a known cloud service." Hey, Vasilis, I wonder if you have the sauce service here, as well as your username and access key provided. Seems like that would be the issue. Something was wrong with the credentials, either from copying from the website or... But usually, if that error comes, it means that the credentials are actually invalid. Can you double-check your access key? I would note that I am pointing to US West, even though I'm in the East. That should be fine. That should be fine. And I can show you how such an access key should look like. I'm just modifying a couple of values. So, this should be an access key. It has four dashes in it. Okay, good call. If that is the case, we can forward that to the team. Wow, so interesting. Good to know, guys. Thank you all. Thank you all for doing pre-testing for us. Yeah, as Christian showed. Hey guys, in the interest of time, we have about 26 minutes. We still have one or two more, one and a half more topics to talk to you all about. So of course, you can pull down the code, pull down the solution, and just keep working from there. That will put everybody on the same spot in terms of the code base. So, with the cloud integration, I wanted to show you all how to integrate that into the CI pipeline.
Instead of doing it all together, just in the interest of time, I'll just show you all how to make that happen. So it's just another GitHub workflow. Right. And so, for example, here's how we would do it for WebDriver IO. It's everything the same exact way, oh I'm showing the wrong one. Sorry. Everything the same exact way as before. The only difference is you're setting your environment user name and access key.
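A workflow along the lines Nikolai describes might look like the following sketch (file name, job name, and secret names are assumptions; the secrets must match what you create under Settings > Secrets, and they are mapped into `env` so the WDIO config can read them via `process.env`):

```yaml
# .github/workflows/sauce.yml — sketch of a WDIO-on-Sauce workflow
name: cloud tests
on: [push]
jobs:
  wdio-sauce:
    runs-on: ubuntu-latest
    env:
      SAUCE_USERNAME: ${{ secrets.SAUCE_USERNAME }}
      SAUCE_ACCESS_KEY: ${{ secrets.SAUCE_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npx wdio run wdio.sauce.conf.js
```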

Cloud Integration and Visual Testing

Short description:

If you did not hard code your credentials and stored them in your environment variables, you can access them through the secrets object in GitHub workflows. The credentials can be added in the settings > secrets section of your repository. Cloud integration with CI is simple and can be done with WebDriverIO or Cypress. Visual testing ensures that your application looks exactly as intended by comparing screenshots to baselines. It has evolved to be smarter and faster, allowing you to render entire pages and perform hundreds or thousands of validations with just a few lines of code. Visual end-to-end testing is useful for cross-browser and cross-platform testing. The workflow involves running tests, rendering screenshots, comparing them to baselines, and manually evaluating any changes detected. Screener is a visual end-to-end testing tool that can be used for this purpose. Once configured, you can navigate to a URL, use the visual init command to capture snapshots, and give them names. These visual tests are different from functional or unit tests and require the page to be fully rendered.

Of course, if you hard coded this you don't need this, but it will be public in GitHub. But if you did not hard code it and stored it in your environment variables, this is the value that we're going to be passing to the keys of the environment variables, and we're accessing it through the secrets object in GitHub workflows. Let me show you guys where that's available. So, that's in Settings, Secrets. And then in here you would add all of your credentials. So, you can see I created an access key and a sauce username, and then you just click new repository secret, enter your key, provide its value, and save it. And that's really it. And then you just reference it in your YAML file. And so then when you have that, everything just runs like before. Let's see an example of CI/CD. Here's an example of WebDriver IO cloud CI/CD. Nothing different, but this one, of course, is just running many more tests. You can see it's running all of these — it's the same spec across multiple different browsers and operating systems — and then providing the results here. And of course, all of that shows up here in the Sauce Labs dashboard for you to see. Oh cool. James said that he just regenerated his access key and the new one has dashes. The tests are now running. That's awesome. So we might have a race condition where, for some reason, when you first sign up, you don't get dashes in your access key. Okay. So that's cloud integration. Super, super simple in terms of CI. If you guys want to do that with Cypress, for example, we have a config for that as well. Same thing, just putting your Sauce username and access key at the top, and then you're just running your saucectl run command with your Cypress code base — ours is obviously in this directory here — and then just doing npm install and run. And then it'll run. In fact, I can show you guys that here. Oh, this won't run because I have just the file here.
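For the Cypress-via-saucectl path, the runner reads a `.sauce/config.yml`. The sketch below follows the 2021-era schema from memory, so treat the exact field names and versions as assumptions to check against the saucectl docs:

```yaml
# .sauce/config.yml — sketch for running Cypress tests through saucectl
apiVersion: v1alpha
kind: cypress
sauce:
  region: us-west-1
cypress:
  configFile: cypress/cypress.json
  version: 7.1.0
suites:
  - name: "chrome suite"
    browser: chrome
    config:
      testFiles: ["**/*.spec.js"]
```

With credentials in `SAUCE_USERNAME`/`SAUCE_ACCESS_KEY`, `saucectl run` in that directory would pick this config up.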
Oh, that'll... here, I'll show you guys here. You can see we have a Cypress cloud workflow, and that's that. And it looks just the same as pretty much all the other ones. Here's installation of the dependencies and execution. And you can see it pulls down the Docker image, then just runs the tests and provides you some reporting. And then this also goes into Sauce as well. So if you want to see the jobs: details, here's that job. And you can see it executes really fast. What's up? Okay, cool. So with that — a few people still are having issues, I'll let Christian help you guys troubleshoot them — we'll move on to another topic, which is more expansion of automated testing. We'll move on to visual end-to-end testing. I do have to give a quick and boring high-level overview of it. Hopefully it's not so boring. I was just joking. Can you all see that? Okay, the presentation. Yes. Cool. So I'm curious for the audience, anyone know what visual testing is? Visual regression testing? Yeah, it could be regression, it could be non-regression. It's comparing the actual picture that comes out when you go to a page. Nice, that's a pretty good definition of it. So, yeah, it's basically ensuring that your application, at a certain point in time — a screenshot of a page, or it could even be a component — looks exactly how you want it to look. For example, think of google.com: you can capture that screenshot on Chrome and Safari, and that saves it as your baseline. And then every execution at a future point will compare your new test execution to your baseline, and it'll check for visual differences. But visual testing has gotten a lot smarter over the last, I would say, five or more years, where before it used to be a pixel-to-pixel comparison. Now almost nobody does that. People now do it in a much smarter way. And so I'll show you guys that solution, but it's a really good way to do visual checks of either entire pages or components.
Before, what we used to see was a lot of individuals would maybe pull up a page and start checking whether URLs exist, whether buttons exist, and stuff like that. You no longer need to do that because you can now just open a page and basically render the entire page and do hundreds or even thousands of validations with a few lines of code. Which is why visual testing is really nice. It's also really fast because, again, you don't have to execute so many functional tests. Instead, you just check the baseline to the actual results and then decide at that time whether the new changes are bugs or are they requirements changes. Here's the workflow that is typical for Visual End-to-End Tests. So a test will run, the commands are called, the screenshots are rendered across different browsers and resolutions. So one other thing Visual End-to-End Testing is really good for is cross-browser, cross-platform testing, let's pretend you have a responsive web app. Visual testing is really good for that because you can just open up multiple pages and then render them on all of your most important devices, browsers, operating systems, and make sure that everything looks as it's supposed to look. So then the screenshots are compared against the baseline. If changes are detected, as a user, this is a manual step that we have to perform. We either have to decide if those changes are relevant and are a bug, or if they're not relevant, then they're potentially a requirements change or we can ignore certain changes. And so that's what we're doing here, and then it repeats. I'll show you guys how to do this, and I think that's pretty much it. Let's get back to the fun part of the coding. 
So since we are giving you guys so much stuff and we're shorter on time, I recommend you just continue working through our pulled-down repository here, and then we'll take a look at this chapter, Visual End-To-End Testing, Chapter 8, and we'll continue working with the same exact test that we set up in Chapter 2, but now we will add Screener to it. So, Screener is our visual end-to-end testing tool that you can access here. I believe that today it's actually going GA — it was in beta up until today — so you can sign up for notifications. Right now, I know it's unfortunately a manual process for every user to get signed up for Screener, but if you guys sign up and stay posted, you'll get access to it. In the meantime, I'll just show you guys how it would work. So, the configuration for it is all here, and once it's ready, there are only a few little changes required to make it work. We still navigate to a URL like we used to do before with our WebDriver IO test. This is one of the reasons why we're showing you guys WebDriver IO primarily: this only works for WebDriver IO or any other programming language supported by Selenium WebDriver. There's also visual component testing, which works for any Angular or React applications and is a little bit different than visual end-to-end testing. So then, once you open up your URL, you'll use the visual init command that tells the test that it's a visual test, and then you'll capture a snapshot of the page whenever it's rendered and give it a name. So, I'll show you guys that live as well. Let's take a look here. So, here's that example of exactly the test that we've been working with, and let me close all of this so it's not so messy. So, here's the exact test that we've been working with. I'm putting a pause here because it's really good to make sure that our page actually loads here.
Because you gotta think of these as different tests from functional tests or unit tests and so on. These are visual tests and so here, the most important thing is to make sure that a page is fully rendered and yeah, so this does DOM snapshot yeah, it automatically captures snapshots of the DOM when we pull it up. Yeah, I'll show you guys in a moment. And then here just saying this one is my React Redux app, right? And this page is called the global feed page. Here I'm pulling up another page, capturing a snapshot called the sign page, and then here we're going to the register URL and capturing another snapshot.

Responsive Testing and Configuration

Short description:

We can run the test across multiple devices and resolutions. We set baselines for acceptable changes and can programmatically ignore irrelevant elements. Visual end-to-end tests and visual component testing are both valuable approaches. Suggestions were made to improve configuration and device presets. WebDriver IO and Screener integrate with Storybook for component testing. Feedback and questions were addressed, including GitHub secrets and the applicability of the testing solution.

Okay, so that's one test. And then the very powerful thing about it again, right? It's a responsive web app that we want to see in multiple different browsers and configurations. And so we can point it at multiple different devices, right? Like here, for example, I'm pointing it at Chrome on this specific viewport size, because this is 28% of the usage in the world. So the number one most-used viewport size for browsers, then number two, and so on and so forth, right? And then I get into the iPhone X, iPhone 6 through 8, and then we get into the Android resolutions. And so that's just the one test, and we can run it across all of those platforms. So let's do that. Next up, we run the tests. We just do npm test like normal. Let me show you guys my package.json — that's what npm test does. Yeah, that's all it does. And now, to actually look at the test results, we go back to the UI — ten accepted. These are the ones that we've previously executed. If you're running it for the first time, you would need to decide, for example, what do you want to do with this screen on Chrome 88, Windows 10, this resolution? Is this acceptable to you or not? And if it is, you set it as a baseline. But here are some of the changes that are a little bit more interesting to look at. So this has changed, and here it shows the previously accepted and the current. And you can see it's highlighting some changes here that we can see, and if we come back here, here's the DOM snapshotting that actually happened. So here it's saying that there used to be this tag that was now added. Here, for example, it's saying there used to be this text that was now replaced with another text, and so on and so forth.
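The device matrix from that demo, as a fragment of the visual-test configuration — the exact property names depend on the service version, so treat the shape as a sketch; the viewport sizes follow the narration (most-used desktop resolutions first, then iPhone and Android sizes):

```javascript
// browsers/resolutions section of the visual test config (sketch)
browsers: [
  { browserName: 'chrome', resolution: '1366x768' },  // ~28% of worldwide usage
  { browserName: 'chrome', resolution: '1920x1080' }, // next most common desktop
  { browserName: 'chrome', resolution: '375x812' },   // iPhone X viewport
  { browserName: 'chrome', resolution: '375x667' },   // iPhone 6–8 viewport
  { browserName: 'chrome', resolution: '360x640' },   // common Android viewport
],
```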
And so with these changes, this website is actually really nice for this because it's super dynamic. So you can decide what actually matters here. You can decide through the UI, for example, that all of these random changes don't matter, or you can decide that they are bugs and report them to your team. And you can also do this programmatically, where you ignore certain areas that are not relevant. For example, maybe the number of pages here would constantly keep increasing, right? Or maybe you have a date field that is constantly updated. You can programmatically just ignore that entire element, and then Screener will not worry about whether it's changed or not. So that's a part of the configuration that you kind of have to play around with for your application. And ultimately, let's say that this change is acceptable for us and we say that everything here that's changed is a requirement — we're okay with it. So you accept it, and now every future execution of this test on this configuration will check against this new baseline, which is this one. And so then, that's the nice, powerful thing about it: you can see we've had the one test. It opened up multiple URLs, right? It's checking our global feed, it's checking our sign-in page, it's checking our sign-up page, and it's doing it across multiple different resolutions. So here are all the different devices and resolutions that we ran it against. I know that was a lot. You guys have some questions?

What Nikolai just showed is visual end-to-end tests, which is one way to use visual comparison to test your web apps, but a different way that is also pretty common in the market is to do visual component testing. Someone mentioned Chromatic, which is almost similar to what Sauce Visual also provides, where there is a test runner that goes to all your Storybook components and makes screenshots of them. So you can test your individual components, directly for every build that you have. So that is the different way of how you can use Sauce Visual, by using the former Screener runner, which detects your Storybook components and runs the tests automatically for you. Yeah, Dapner, you also pointed out a great thing: it would be really nice to, instead of specifying the resolutions, just say a device name, right? iPhone X, Samsung Galaxy S7. Maybe like presets, you know: I set a browser, a platform, and so on, so I can just — in the sauce options maybe, or I'm not sure how it works behind the scenes, how much you can touch — somewhere just define the type of configurations I'd want, that is a bit more universal, so I won't mess up. Yeah, 100% agree with that. So we used to have a solution before this — version one of this service was called Screener Runner — and it used to be able to do that, where we supplied the device name, for example iPhone X, and then it would just run. I don't know if we can do that now, but that's a fantastic suggestion that I took down, and I'm going to tell our product owners to make it happen. Any other questions?

Yeah, we will take that feedback for sure. Does anyone else have questions, concerns about anything we showed? I have a question. This is, from what I've seen, only for end-to-end tests. Is this applicable for integration tests or even some fancy unit tests? Yeah, that's a good point, Dan. So what I showed was an end-to-end testing solution, right, using WebDriver I.O. and pulling up the entire page. We also do have the visual component testing, which will plug into Screener. Sorry, which will plug into Storybook. I said Screener. Sorry, too long without any food. My brain is running out. So this will integrate into Storybook as well. So if you have, you know, your React app or Angular app, and you're using Storybook, it will automatically check all of your components for you. [inaudible] And if you guys even want to reach out afterwards, you can reach out to us as well. And the recording, I think, will be available. It will be sent out by the conference team, I think via email, or it will be sometime on YouTube. Thank you. I do see one technical question. I missed the moment where you placed the user key when I set up GitHub. Okay, that's super easy in GitHub. The secret keys are in here: Settings, Secrets. And then you just add a new repository secret, put your key and value here, and then you save it. And then you reference it in the GitHub workflows with this secrets object.
So secrets dot and then your key. Secrets dot your other key. And then it's like a magic string or do I have to reference it somewhere in my code? Oh, excellent question, Danna. So this is our key that we defined in our WebDriver IO config.

Network Mocking and Custom Responses

Short description:

You can use the mock command to mock specific URLs and customize responses. This feature is currently only available for local tests and can be used to filter requests based on headers, status codes, and more. You can even overwrite images in your test suite. By defining mocks before opening the application, you can check if specific endpoints have been requested and make assertions on the network level. This is useful for testing integrations with third-party services. Modifying responses allows you to simulate different scenarios and ensure the proper functioning of your application.

So remember when we were doing this, that matches this here. Make sense? So we're basically saying, the key, this sauce underscore username key, its value equals this thing that's coming from GitHub actions and same thing for the other one. Does that make sense?

Yes. You need to define which secrets you want to propagate in your workflow. Like which secret should be accessible for your environment there, and if you define the environment at the root scope, like Nikolai did, then the variables will be available for all steps that you have defined and for all the workflows, but you can also set the environment property for individual steps if you don't want to propagate them into other steps that you might have.
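The root-scope versus per-step `env` distinction Christian describes looks like this in a workflow file (step and secret names are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    # Root-scope env: available to every step in this job
    env:
      SAUCE_USERNAME: ${{ secrets.SAUCE_USERNAME }}
    steps:
      - uses: actions/checkout@v2
      - name: Run cloud tests
        run: npx wdio run wdio.sauce.conf.js
        # Step-scope env: only this step can read the access key
        env:
          SAUCE_ACCESS_KEY: ${{ secrets.SAUCE_ACCESS_KEY }}
```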

And I think, I don't know about you, Christian — I think I have another demo to do today, like in one hour, but I might be able to hang out a little bit in that channel to answer your questions if you have anything after as well. Yeah, I can maybe also go real quick. I mean, we still have time on the Zoom. I can still go over the network mocking tests, since some people have already started — I don't think we get kicked out. Oh, not, yeah. Yeah, go for it. So just let me share my screen. So I will just go over the solutions and tell what exactly happens here. And again, these network mocking tests will currently only work for local tests. So the ones that you run directly in CI/CD. For Sauce, we are working on making this happen as well with WebdriverIO, but right now it's only working for local. And the idea is that for those tests, you have the mock command, and you can specify a glob pattern here. So in this case, I want to match all the URLs that include slash api slash tags. And you can actually specify even more details on what you want to mock. And with mocking, it's not that you completely mock the request — it's just that you have a special URL that you want to treat differently. And so you can filter even more, saying I want to mock all requests with a specific header, with a specific status code, with a specific whatever you have; I think the documentation has a nice example of what is possible here. So yes, you can filter on status code, headers, response headers, and even post data, if you send something over in a form. And then this mock — if you just define it like this, nothing will happen. The browser will continue to work as normal, but you can say, okay, I want you to respond with a custom response. And here we have an article response, which is directly coming from a JSON file you can define in your test suite as a fixture.
And so this article response — you can check that out — contains the custom response that is defined by the API. If you use OpenAPI, you can even auto-generate those; there are tools that allow you to auto-generate custom example responses. There's a feature request in the project that will allow you to record all the API endpoints, if you want. So this will be coming soon, I hope, but ideally this will be then plugged into the response so that the browser responds with the custom response. And that doesn't only work for API requests — you can also overwrite an image. So here we take, from our custom response, the image URL of the author. So this is this one, the admin avatar, and what we do in our next step is to return the cat, I think. Yeah. So instead of showing the custom image of our custom response, it responds with a cat image. So here we open our URL after we have defined our mocks, which is important: you first have to define what you want to respond with, and then you open the application under test. And then you can see the amount of articles that are shown — not like 10 or whatever you usually have here in the global feed — it's now exactly the amount of what we have in our article response. And there's a different mock. Oh yeah. This mock that we have defined before — the tags one — is actually more correctly referred to as a spy, because with this mock we don't modify the response. We just mock it because we want to check if the browser has requested that endpoint. And we can go fancier and say, okay, I expect this endpoint to be requested with certain properties. So we have here an expect in our assertion library for the network, toBeRequestedWith. So we can have assertions — it's similar to what Jest provides you for executed functions, but here it's on the network level.
You say, okay, I want to check if my mock has been requested with this specific URL, or with a POST request method, or with a specific status code — that it has had a specific status response. I expect specific request headers, specific response headers, post data, and a specific response. And that can be very useful if you have, for instance, an integration into third-party services, like Google Analytics, and you want to make sure in your post-production tests that these integrations are in place and that you actually do a call to Google Analytics when you do a certain action. I can maybe run this real quick. I have this available test JS, going into the solution for network mocking. And here, as you can see, I have one WDIO config that actually should not be called WDIO config, I think. Let's see — it runs locally, it has ChromeDriver; it's just a long name for it. So we move this and call it local conf. No — WDIO conf is right, I was confusing something. Okay, so if we run this now — let me actually make a little break at the end of the test so we can see that the images actually have changed. So here at the end, after we have opened our browser, we can just say browser.pause for, let's say, 15 seconds, and that will help us see what's going on. That starts ChromeDriver, runs our test, and starts my browser. So now I see my awesome article and another awesome article, with the cat image right here. So instead of actually doing the requests, it replaces those requests with the custom mocks that we have there, and then the tests are passing. So let's say we would change our tests in a way that it would fail — we expect this spy, just to see how it feels. Nope. We look into the docs again. Let's say we expect this response, which would be false, but for the sake of testing.
No, that's an error. Oops. I'm not used to working in Vim, that's why I'm a bit slow here, but let's see. So, browser — the requests are replaced again, and I forgot to take out the break, the pause. So after some seconds... oh, it still passes. Interesting. Maybe I didn't save. No — that should actually be checked, why this behaves like this. Or maybe the API has changed there. Yeah, this way, by modifying your responses, you can also avoid things like logging your user in all over again, and it helps you with sharding your tests: you have one test that makes sure that your login works, and all the other tests reuse the same login behavior over time.

WebDriver Debugging and Future Plans

Short description:

Cypress has a tool that allows you to step forward and backward in the browser, but WebDriver does not currently have this feature. WebDriver sends commands to the browser and receives responses, so it doesn't have real-time access to the browser's state. However, WebDriver Selenium is working on improving debugging experiences, and the new WebDriver protocol may make this feature possible in the future. It is a critical tool for developing complex tests and debugging failures. Although there are initiatives to implement this in WebDriver, it requires a significant amount of work. Time constraints make it challenging, but the goal is to have a similar tool in WebDriver.

Okay, I have a question about WebDriver itself. Cypress has this tool, you know — you open it and there's a whole UI that tells you the steps, and you can pause it and step forward through it, a whole shenanigan of stuff. Right. Does WebDriver have anything like it, or is something like it planned? Because that's a big feature. Yeah, I absolutely agree. I can show you a slide that I have to explain this a little bit. So this is actually not working right now with WebDriver — WebDriver cannot provide that — because of the way Cypress works: they run in the browser directly. And so, as a little sneak peek into the slide deck for tomorrow: the Cypress test runner runs in the browser. They can offer this stepping forward and backwards because they have access to the whole environment in the browser. With WebDriver, it's more like we send a command to the browser and the browser executes it, and then it comes back with the response. So we don't know at all times what's going on in the browser. When you run the Cypress test runner, it starts the test runner in the browser, it loads it in the browser, and then your application under test is loaded in an iframe. And so the test runner, where you can step through steps and see things, is part of the browser. It has access to all this data and can store history about the DOM structure and all of that over time, at least as long as your session goes. With WebDriver, that is not possible. WebDriver works in a way where you have an assistant that goes to your browser, does something, and then comes back with a response. And that request-response round trip sometimes takes up to 200 milliseconds, so this experience cannot be provided this way. However, the WebDriver tooling — WebDriver, Selenium — they are working on making those debugging experiences better.
And with the new WebDriver protocol, I'm sure something like this will be possible in the future as well. If something would stop me from migrating, it's this tool. It is critical for developing the more complicated tests, or for debugging locally why the tests are failing. It's really, really a big deal. Yeah, absolutely. And I would wish to have something like that in WebDriver too. Maybe someone comes around and likes to implement those kinds of things. We definitely have initiatives in the project to move forward with that, but it's also a lot of work to get this implemented. And, you know, you only have so much time — even though COVID keeps us all at home, a day still only has 24 hours.

Page Objects and Test Abstraction

Short description:

To abstract complex tests, some people suggest using page objects. Instead of typing and modifying URLs directly, you can operate on objects that represent pages. Modifying the design of your application doesn't require modifying tests, only the page objects. Different teams can handle different page objects, making it easier to handle different pages, elements, and interactions. The second test example shows how page objects can dynamically abstract complex implementations, removing repetition and making tests easier to read and share.

I just wanted to give a quick overview of the other examples. So the one where the goal was to move to page objects: here we created three page objects, for the login, feed, and editor page. We have seen the login page at the beginning; this is the feed page, and the new post page was the editor page. In general, to abstract away the complexity of tests, some people suggest using page objects, and I for one agree with that, because it really hides the complexity and allows you to concentrate more on writing tests.

So instead of having browser.url commands where you type in and modify the URL, you have an object that you can operate on, and you can say, okay, login page, open. The login page object contains all the information about which page it has to open, what it needs to do to make sure the page has been loaded, and things like that, and then it provides methods to interact with the page. In this case, we have a login page where you can provide username and password, and then you have an object for the feed page that you can work and operate on. So you also don't need to know selectors anymore in your tests. And one reason for that, as well, is that you can go ahead and completely redesign and restructure your application: if the flow stays the same, you don't need to modify your tests, you only need to modify your page objects. And this is also something that different teams can take care of. If you have a dedicated team working on a login page, you can make this team responsible for writing a page object for it, so that other teams that need to interact with the login page can reference that page object more easily. And as you grow your test suite and test framework, it makes it much easier to handle all those different pages, elements, and the ways you interact with them.
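A minimal sketch of such a login page object — the selectors and the route are hypothetical placeholders, and the WebdriverIO globals `$` and `browser` are assumed to be provided by the test runner:

```javascript
class LoginPage {
  // Selectors live in one place; tests never see them.
  get emailInput() { return $('#email'); }
  get passwordInput() { return $('#password'); }
  get submitButton() { return $('button[type="submit"]'); }

  async open() {
    await browser.url('/login');
  }

  async login(email, password) {
    await this.emailInput.setValue(email);
    await this.passwordInput.setValue(password);
    await this.submitButton.click();
  }
}

// The page itself is static, so one shared instance is enough;
// in a real suite this would be `module.exports = new LoginPage()`.
const loginPage = new LoginPage();
```

A test then reads `await loginPage.open(); await loginPage.login('user@example.com', 'secret');` — if the markup changes, only the getters above need updating.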

The second test shows the example for the editor page. You open it and then you publish by providing an article title, subtitle, your text, and your tags. Same principle here. What's interesting is that this page object returns a set of other page objects: here, articles is not a set of elements, but a set of sub page objects. I can go into more detail, but it shows how dynamically you can abstract things away. While this in itself is something complex too, it helps you remove a lot of implementation that you would otherwise repeat all over the place. So here you see my feed page object, where I have a lot of articles if I go back to the main page. I have a global feed, and every entry in the global feed has elements: articles with username, date, text and so on. So what happens here is, if I fetch all the articles, it first makes sure that at least one article exists — so there's a wait here. Then I fetch all the articles, and for every element that I find, I create a new page object, and that page object represents an article. And if we look into that, we see that this article page object has a root element, so you can never go outside of this article element: it only looks within the root element that you provide, and that makes selecting things easier. So you only need to define the selector and return the text when the test asks for the author, the date, the title, or the about. Everything is separated out a little bit more. It might look like overhead for the test, but in my opinion it's easier to read and definitely easier to share across a lot of organizations and teams.
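The pattern described — a feed page that waits for at least one article, then wraps every article element in its own scoped page object — could look roughly like this. The selectors are hypothetical, and `$` / `$$` are assumed to be the WebdriverIO globals:

```javascript
// Each article page object is scoped to its root element, so its
// selectors can never match anything outside that one article.
class Article {
  constructor(root) { this.root = root; }
  getAuthor() { return this.root.$('.author').getText(); }
  getTitle() { return this.root.$('h1').getText(); }
}

class FeedPage {
  async getArticles() {
    // Make sure at least one article exists before fetching them all.
    await $('.article-preview').waitForExist();
    const elements = await $$('.article-preview');
    // Wrap every element in its own sub page object.
    return elements.map((el) => new Article(el));
  }
}

const feedPage = new FeedPage();
```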

Performance Testing and Future Plans

Short description:

In this chapter, we cover performance testing with examples of running tests locally and on Sauce Labs. We use Lighthouse for performance testing and recommend running performance tests after functional tests. It's important to separate different types of tests and use cloud-based solutions like Sauce Labs for storing performance results over time. We also introduce Speedo, an NPM tool for running Lighthouse performance tests in the cloud. Speedo allows you to test specific page loads and compare performance metrics against baselines. Additionally, we mention the possibility of future chapters on API testing and encourage following the repository for updates.

Any questions on that? If not, I can also go real quick over the last chapter, which is performance testing — something that I spent a lot of time with last year. It's also something that we like so much that we provided some extensions for it on Sauce Labs, so you cannot only run this locally but also on Sauce, and I can show you an example of how that works. So in this case, we run our test locally and we assert directly in that test, and we use Lighthouse: WebdriverIO uses Lighthouse under the hood, as does Sauce Labs.

Hey Christian, sorry to interrupt. There was an audience question from somebody: how do you make all page objects available from each other, to avoid requiring them in each test file, and to be able to use them in each page object? I think they're trying to not have to reference the page object files in every single spec.

Okay. So one solution you can do, if you don't like, for instance, importing all page objects here, is the following. By design, all these page objects, as you can see, are static objects. If I go into one, you will see that this login module exports an instance of it. This is because web pages in general are static: nothing changes here. This page object doesn't need to have a state where you store, say, a username for a longer period of time during your test, because you actually interact with the browser and not with this object. So, since everything can be static, you can register all these page objects in the global scope as part of your WebdriverIO before hook. Before all tests start, you can have a service that registers all these page objects into the global scope so that you can remove the imports. That would be one solution that I would take. On the same subject: do we have the lifecycle hooks that Mocha gives you, like beforeEach? Yes, exactly. WebdriverIO actually uses those to allow you to run things like beforeSuite, beforeTest and beforeHook — we use the Mocha hooks for that. The other ones, like beforeCommand and beforeSession, are implemented by WebdriverIO directly, but the former are leveraged from the framework itself. For registering the page objects in the global scope, I would use the before hook, which gets executed before all tests run in the worker.
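As a sketch, that registration could live in the `before` hook of the WebdriverIO config — the file paths and page object names here are hypothetical:

```javascript
// wdio.conf.js (excerpt)
exports.config = {
  // ...the rest of your configuration...
  before() {
    // Page objects are static instances, so they can safely live in the
    // global scope; specs can then use them without any require/import.
    global.loginPage = require('./pageobjects/login.page');
    global.feedPage = require('./pageobjects/feed.page');
    global.editorPage = require('./pageobjects/editor.page');
  },
};
```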

So you have the performance test. Here's one example of how we run it locally and one on Sauce, because both are a little bit different. For local, you will see that the setup uses the devtools service, which is a service that allows you to interact with Lighthouse and the Chrome DevTools API. You can find the service in here, with some explanation of how you integrate and install it. Essentially you install the service and add it to your services list, and then you have some additional commands available. One is, for instance, to emulate an iPhone X — you don't need to remember the resolutions anymore; WebdriverIO knows what the resolution of an iPhone X is and changes the viewport to that. Then it calls enablePerformanceAudits, where you can define your network throttling and CPU throttling capabilities. I would suggest something like Good 3G and 4x CPU throttling to emulate a mobile device, because if you run a performance test on a supercomputer, your results would not really reflect the user that struggles with the performance of your website. That's why it's always recommended — and why Lighthouse usually runs those performance tests in a mobile emulated environment. Here you can do that with those commands as well. Then you open the page, and then you can use the getMetrics command to assert against certain metrics that are returned. This is essentially the same as going into your DevTools, picking the Lighthouse tab and creating a report. The advantage here is that if you continue navigating to different pages, opening different URLs, WebdriverIO will always capture the performance of the pages that you open. So let's say, once we have opened the feed page, we transition to a different page: you would be able to get the metrics for that consecutive page load again.
And this is really useful if you want to make sure, for instance, that your cache kicks in and your consecutive page load is much faster. You can test how your website performs if you have an offline experience — things like that are possible. And even if you submit a form by clicking a button, that page transition is also captured and you can assert against it. So as opposed to just checking a specific URL, you can integrate this into your flow. That being said, your performance tests should always come after your functional tests. You should not mix them together; otherwise you will get results and have to figure out whether a problem comes from a performance test or from a functional test. It's always better to run both in separate steps: first your unit tests, then your integration tests, then your end-to-end tests, and at the end things like performance testing and whatever else you might like to add — maybe accessibility testing, things like that.
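Put together, the local setup described above might look like this sketch. The device name, throttling values and metric names follow the devtools service as I understand it, the helper takes `browser` as a parameter so it can be exercised standalone, and the URL and threshold are examples:

```javascript
// Run a mobile-emulated performance audit and return the captured metrics.
async function auditPageLoad(browser, url) {
  // Emulate a real device instead of testing on a "supercomputer".
  await browser.emulateDevice('iPhone X');
  await browser.enablePerformanceAudits({
    networkThrottling: 'Good 3G',
    cpuThrottling: 4,
  });
  await browser.url(url);
  return browser.getMetrics();
}

// In a spec you would then assert against the captured metrics, e.g.:
//   const metrics = await auditPageLoad(browser, 'https://example.com');
//   expect(metrics.speedIndex).toBeLessThan(2500);
```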

Hey, Christian, we had a question from the audience. James asks: what reporters do you recommend with performance testing? So yeah, since this is integrated into WebdriverIO, the results will automatically be propagated to the reporters that WebdriverIO provides. There are no special reporters for it — it's treated like any other test. If you'd like dedicated reports on performance tests, I recommend running them on Sauce on a continuous basis, because then Sauce Labs can store your performance results over time and can give you a trend of how your page performance has evolved. I think there's an article about that: Sauce Labs performance testing. That's the advantage of running things like that in the cloud — we can store the results over time and then give you a report that shows this behavior over time. Trying to find the right image, because I don't have a true performance test, but here it is. Yeah, and Christian, if you go to our wiki, I bet we have examples of that there as well. So, as you can see from this YouTube video, we capture performance over time and create some sort of baseline: we understand how your page performance evolves, and based on recent page loads, we don't want to see your page loading slower than a specific threshold. Because even though we run the test in the same environment, given all the network flakiness that you can have on the internet, you will never get exactly the same page load performance every time. Which is why we create a baseline where we say, okay, we expect your page load to be between 2.2 and 2.7 seconds — and not only the page load, but all the metrics that we have. That's what I see as the advantage of running this in the cloud.
And in fact, I created an NPM tool that helps you there if you want to get started really easily. It's called Speedo, and it essentially allows you to run Lighthouse in the cloud to test your application. You pass in a URL, and we run the performance test for that specific page load 10 times in parallel. Then we check what the average performance score is, what the average first paint metric is, and so on, and we allow you to assert against certain of these metrics and compare the baselines with each other. So if you're interested in just having one tool to test the performance of one specific page load, you can do that. You can also use Speedo for multiple page loads that happen during one functional test — there's functionality for that as well. If you have a specific performance test where you have a flow like going to a page, submitting a form, and then deleting the session, you can use Speedo to test against those page transitions that happened during the test. Important here is that you provide the extendedDebugging and capturePerformance Sauce options, but more on that is in the documentation; I can share that in the chat. Thanks. So, yeah, that's about it, and I think this covers all the chapters. We might add future chapters about API testing and other areas, so follow that repository — we will extend it and hopefully come back next year with all these chapters available for the frameworks out there. Any other questions that I can help answer? Thank you everyone for coming along with us and staying for the workshop.
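Getting started with Speedo, as described, boils down to a few shell commands. This is a sketch: the URL is an example and the exact flags may differ, so check the Speedo documentation:

```shell
# Install the CLI once
npm install -g speedo

# Sauce Labs credentials are picked up from the environment
export SAUCE_USERNAME="my-user"
export SAUCE_ACCESS_KEY="my-access-key"

# Run the Lighthouse performance test for this page load in the cloud
speedo run https://example.com
```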
