How Low-Code Enables Continuous Testing in DevOps


As an industry, we understand that effective test automation is a key enabler of - or inhibitor to - realizing the potential of DevOps. While automation is critical to innovating with speed and quality, very few of us are happy with the results. This talk covers how low-code test automation solutions - like mabl - enable teams to embed automated tests directly into the development pipeline, strategies to overcome traditional challenges with test automation, and how to build a foundation for an efficient and effective test strategy.

31 min
18 Nov, 2021

Video Summary and Transcription

Today's talk discusses how low-code enables continuous testing in DevOps, emphasizing the importance of test automation and the drawbacks of siloed approaches. The next era of quality engineering aims to overcome automation challenges by incorporating machine learning and intelligent automation. The development process involves local testing, pull requests, and comprehensive testing to ensure quality before merging. Low-code tools like mabl help democratize testing and achieve higher test coverage. mabl's coverage report includes performance metrics and test results, making testing easy and accessible for any team member.


1. Introduction to Low Code and DevOps

Short description:

Today we'll be talking about how low-code enables continuous testing in DevOps. Test automation is central to succeeding in DevOps. Siloed approaches don't work. Low-code automation gets rid of silos and produces better results in quality engineering efforts.

Hi, everyone. Welcome. Today we'll be talking about how low-code enables continuous testing in DevOps. It is so great to be here with you all today. My name is Juliette McVail, and I'm a Product Manager at mabl, which is an intelligent low-code test automation solution. I've been with mabl for about two and a half years now, and I'm currently the Product Manager for our browser and API testing team, which focuses on test creation and execution across both browser and API tests.

So in this talk, I'm going to be focusing on three key points. The first is that test automation is central to any effort to succeed in DevOps. The second is an assertion that siloed approaches to test automation don't work. And finally, that low-code automation enables us to get rid of these silos and produce better results in our quality engineering efforts. So let's dive in.

2. Quality Engineering and Test Automation

Short description:

Quality engineering is an enabler for key trends in software development. Test automation is crucial for deploying changes with confidence. However, very few teams have achieved the necessary level of automation. Without it, there is a risk of bottleneck and limited capacity to verify changes. The next era aims to overcome these challenges by building intelligence into the automation process.

So first and foremost, it's really such an exciting time to be someone who's focused on quality engineering, because there are so many critical trends in the industry, and we're realizing that quality engineering plays a critical role in enabling innovation. Whether you're looking to broaden your adoption of agile or DevOps, migrate to the cloud, or shift left, quality engineering is ultimately an enabler for all of these key trends.

And effectively, what we're trying to do in software today is accelerate the pace of innovation with quality. We really want high velocity and throughput in these pipelines, and we want to be able to create and deploy changes constantly. So whether that's through code or configuration or upgrades, or even dealing with a change that's happening with your integrated partners, because you likely consume a lot of services via API from third parties, we want to be able to embrace that change with velocity and throughput. And that also needs to be under the watchful eye of a system that can ensure quality. That's really where test automation comes into play. And we know that we're successful in implementing test automation when we have a high level of automation, but also high confidence in our ability to deploy changes with good quality.

So this is really an interesting point, and it's also the problem that we saw in last year's DevOps report. When we have a low level of test automation, we also have relatively little confidence in being able to deploy changes. And as the level of testing and deployment automation increases, we can see that confidence increases as well. This is really key as you move towards a continuous testing model. Because despite the fact that we know we want to get to this high level of automation, so we're confident in dealing with all of that change, very few teams have realized the level of automation necessary to deploy with confidence, and we have work to do to get there. And the risk, if we don't, is that we don't realize this vision of high-velocity, high-quality pipelines, because we have to slow down the throughput in order to manage quality. That means we have limited capacity to verify those changes from a QA perspective, and so the risk is that we end up with a bottleneck: despite the fact that there's been so much innovation on the development side, we're not actually able to have that throughput. And this next era really tries to overcome that. It all starts with the assertion that if you want to automate a process, you have to build intelligence into it. And we know this intuitively in places outside of test automation. For example, if I wanted to build a self-driving car, I wouldn't just build the engine, give it a set of instructions, and go have a drive, which is effectively what we've done historically with test automation. You would recognize intuitively that the car has to have a lot of sensors and a lot of data, that we need to use a GPS, and that you have to actually be able to read what's happening in real time when you're out on the road.

3. Intelligent Automation and Organizational Rollout

Short description:

Building machine learning models and plugging them into the control plane enables potential in automation. Low-code is a key tenet of quality engineering, allowing more people to participate. Separating intent from implementation allows for intelligent automation. Auto healing updates tests automatically based on learned information. mabl enables importing existing Selenium tests and incorporating intelligence into execution. Organizational rollout and fitting the solutions in the right places and roles are crucial for success.

And then we have to put machine learning and other models in place in order to make intelligent decisions. And once you can do all of that, you can plug that brain into the control plane that actually automates the driving, and then we have some potential there.

And effectively with test automation, that's where we're going now, right? We're saying, look, it's not just about the drivers that can move a browser, move a mobile app, or interact with an API. Once you understand the intent of automation, you have to collect the data, analyze the data, and make good decisions in order for that automation to be effective. And so low-code here is really a key tenet of quality engineering, and if we don't focus on low-code, then we're likely to limit the number of people and roles that can actually participate in quality on our teams.

So when I talk about intent, we're really referring to testing the things that you would look for if you were manually testing, and we can separate that out from the automation. Once the intent is actually manifested in the test itself, automation can then drive the execution of it, and we let mabl handle that part. So when we allow teams to focus on the intent and the functionality that they want to test, we present them with a low-code interface, and then we let the system handle the implementation. And what that means is that not only developers but manual testers, product owners, support people, and others can all participate in quality, and we don't end up in silos.
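To make the separation concrete, here's a minimal sketch of what "intent as data" can look like. This is a hypothetical format for illustration only, not mabl's actual representation:

```typescript
// Hypothetical illustration (not mabl's internal format): the *intent* of a
// test is captured as declarative, human-readable steps, while a separate
// engine decides *how* to carry each step out against the browser.
type Step =
  | { action: "visit"; url: string }
  | { action: "fill"; field: string; value: string }
  | { action: "click"; target: string }
  | { action: "assertText"; target: string; expected: string };

// Anyone on the team can read and review this intent, coding background or not.
const submitFormTest: Step[] = [
  { action: "visit", url: "https://example.com/signup" },
  { action: "fill", field: "Email", value: "user@example.com" },
  { action: "click", target: "Submit" },
  { action: "assertText", target: "Confirmation", expected: "Thanks for signing up!" },
];
```

Because the steps are plain data, the execution engine is free to change how it performs them without touching the intent itself.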

Another aspect of this is that once you separate the intent from the implementation, we can build a system that can be very intelligent. As an example, let's say that the intent for a test is to submit a form, and there is a submit button on that form. Perhaps my team is looking to make some changes, and they end up changing the ID of that submit button. With more traditional test automation solutions, the test is going to fail because it relied on the ID in order to locate the button. But in this new era, since we're collecting so much information as we're running the tests, the system knows that even though the ID changed, the button is still there, and we can actually locate it using numerous different techniques and attributes. The system will attempt to locate that button and proceed with the test. And when we can correctly identify that an element has changed, we'll actually update the test automatically based on the information we learned. That's what we call auto healing. So we're able to accomplish this by separating intent from implementation, letting the system actually handle that implementation, and enabling people to express intent with as little code as possible.
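Here's a simplified sketch of the fallback idea behind auto healing. A real system like mabl uses far richer signals and machine-learned models; this fragment only illustrates the principle of recording multiple attributes at creation time and falling back through them at run time:

```typescript
// Simplified sketch of the auto-healing principle: record several attributes
// when the test is created, then fall back through them instead of depending
// on one brittle attribute like an ID. (Illustrative only, not mabl's code.)
interface ElementFingerprint {
  id?: string;
  name?: string;
  text?: string;
}

function locate(doc: Document, fp: ElementFingerprint): Element | null {
  // 1. Try the recorded ID first (the fast path).
  if (fp.id) {
    const byId = doc.getElementById(fp.id);
    if (byId) return byId;
  }
  // 2. The ID changed? Fall back to the recorded name attribute.
  if (fp.name) {
    const byName = doc.querySelector(`[name="${fp.name}"]`);
    if (byName) return byName;
  }
  // 3. Still nothing? Match on the visible text the element had before.
  if (fp.text) {
    for (const el of Array.from(doc.querySelectorAll("button, a, input"))) {
      if (el.textContent?.trim() === fp.text) return el; // found it after all
    }
  }
  return null; // genuinely gone: report a real failure
}
```

When a fallback locator succeeds, the stored fingerprint would then be updated with the element's new attributes, and that automatic update is the healing step.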

The other important thing to note here is that so many teams have spent engineering years building out sophisticated script-based test automation frameworks, and we don't want to lose that work. With mabl, you can import any existing Selenium tests that your team may have and export your tests to Selenium IDE. This really allows you to avoid vendor lock-in, but also to leverage the hard work your team has done and incorporate intelligence and machine learning into the execution of these tests. And so that's really the technology side of low code plus intelligence, and that gives us the capability to solve a lot of the problems that we've seen with test automation in the past. However, the technology by itself is really insufficient, because where we're going to see the benefits is when we can roll out these solutions organizationally and fit them in the right places with the right roles. If we don't, we actually end up seeing a lot of failures with these initiatives, despite the fact that the technology is there. So, as you've seen in DevOps, the vision here is really to build intelligent automation from the very beginning to the very end of the pipeline.
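For reference, Selenium IDE projects are stored as JSON (.side files), which is what makes this kind of import/export interchange possible. The shape below is abbreviated and approximate; check the Selenium IDE documentation for the authoritative schema:

```typescript
// Abbreviated sketch of a Selenium IDE (.side) project, the format involved
// in this kind of import/export. Field names here are approximate.
const sideProject = {
  version: "2.0",
  name: "Checkout smoke",
  url: "https://example.com",
  tests: [
    {
      name: "submit order",
      commands: [
        { command: "open", target: "/checkout", value: "" },
        { command: "click", target: "id=submit", value: "" },
        { command: "assertText", target: "css=.confirmation", value: "Order placed" },
      ],
    },
  ],
};
```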

4. Development Process and Pull Requests

Short description:

Today we're going to talk through each of the different stages of the development process. Let's start with the code base on a developer's machine. We want to have working changes and core end-to-end test coverage. Developers should execute initial tests locally using tools like mabl or Jest. Quality engineers can also test the changes locally. Creating a branch for related tests in mabl ensures readiness. It's important to assess the remaining testing work and consider adding additional tests. The pivotal point is the pull request, where changes are proposed for merging into the main branch. Sufficient testing is crucial before deploying or merging.

And that starts when you're working on a local branch for a feature or a change, and it goes all the way through to when that change actually reaches production. So, today we're going to talk through each of these different stages, and I'll provide you with a couple of examples of what I mean when I say it's really about figuring out who will do what at each stage.

So, let's start out with the code base that's local to a developer's machine. The goal of this stage is that we want to have working changes. Perhaps I've created a new feature that I want to validate before actually merging it to main. In this stage, we also want to make sure we have core end-to-end test coverage in place. So, for this feature, we may not test all of the scenarios, but we're exercising that happy path. And then from a quality perspective, in this stage, we also want to have a plan. We want to know what testing we need to do in the future, what coverage we already have, what the risk is, and so forth, so we can actually enable our team to make those changes.

And so, just to give you a couple of examples, in this stage, what we believe is that the developers who are creating those changes should be executing some set of initial tests end-to-end locally. If you're using mabl, you can actually use the mabl command-line interface, and if your team's already using Jest, you can automate this via the command-line interface to happen on every commit. That means that if you have a set of related tests, you can run them in an unobtrusive way, with a couple of tabs in the background to let you know whether your changes are breaking any existing tests and whether those tests are finding defects. So while you're still working locally, the goal would be to know whether or not you're breaking any of the core tests in your application, and to have those tests help you find out whether you're introducing regressions. This process can really happen seamlessly by running those tests in the command line automatically. And what's really important to note is that this isn't just limited to developers on your team. If you have a quality engineer who's paired with a developer, that developer can be pushing their branch right back to GitHub, and the quality engineer can be pulling that branch down, running it locally, and beginning to get more comfortable with those changes. Next, the other thing we can be doing during this phase is that while you have your code branch, you can also create a branch for that set of tests in mabl, covering the changes related to the code branch. So, you can see in this specific example, I have my branch, and I have some tests that I've created or modified, so that we're all ready to go when we reach the next step. And before we do that, we'll also want to know how much work is remaining from a testing perspective around this feature. For example, if I was working on a Workspace feature, I'd want to know what other tests are already related to this feature. Am I likely going to need to change any of those tests in order to have adequate test coverage? And in the next stage, am I going to want to add any additional tests here? With a tool like mabl, we actually already have a coverage feature where you can search by page and see what tests you have related to that page. Are those good tests? Do they have enough assertions? Are they effectively validating the functionality of that page? You can use that feature during the coding stage to get a sense of what changes you might want to make as we move forward.
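As a sketch of the on-commit idea, the snippet below wires a local test run into a Git pre-commit hook (for example via husky). The `mabl tests run` subcommand and its flags are assumptions for illustration; consult the mabl CLI documentation for the exact invocation:

```typescript
// Hypothetical pre-commit script. The mabl CLI subcommand and flags shown
// here are assumptions -- verify against the mabl CLI docs.
import { execSync } from "node:child_process";

try {
  // Run only the tests labeled for the feature I'm working on,
  // against my local dev server.
  execSync('mabl tests run --labels "checkout" --url http://localhost:3000', {
    stdio: "inherit", // stream test output into the terminal
  });
} catch {
  console.error("Core end-to-end tests failed; fix before committing.");
  process.exit(1); // a non-zero exit aborts the commit
}
```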

So let's move on to what I believe is the most pivotal point in the process for a DevOps-oriented team. And that's the moment of a pull request. Meaning I have a set of changes, both from a testing perspective and a code perspective, that I'm proposing we merge into our main branch. And we have some goals here as well. So the first one is, I don't want to deploy or merge something right away without sufficient testing.

5. Merging and Deployment Stage

Short description:

The first goal is to avoid merging anything that would stop the pipeline. Effective end-to-end test coverage and long-term team success are also important. Using a low-code framework like mabl allows anyone to participate in test logic review. Specialized knowledge is crucial for reusability, set-up, tear-down, and test coverage best practices. Collaboration includes executing regression tests in the pipeline and reviewing all testing before merging into the main branch. The deployment stage ensures that defects are not allowed into production.

Because once we reach that main branch, by default in most teams, it's on its way through the rest of an automated pipeline. So the first goal here really is, let's not merge something that we know is going to stop our pipeline. Coming out of this stage, we also want to ensure that there's effective end-to-end test coverage related to my change. And then finally, and perhaps most subtly, we also want to know that we're setting the team up for long-term success. I'll talk about that more in a moment. But before any of that, let's make sure we're not breaking anything before we merge this into main.

So in this particular example, let's say we have a set of mabl smoke tests that run automatically and continuously as part of our build process. As soon as you put up your PR, we're running a set of headless smoke tests. And if your build fails, you won't be able to merge that PR or get it approved, so you'll know when you're failing those core headless tests. All of that is actually automated.
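A CI step for this gate might look roughly like the following. The endpoint, auth scheme, and payload fields are based on mabl's documented deployment-event pattern, but treat them as assumptions and verify against the current mabl API docs:

```typescript
// Sketch of a CI step that triggers mabl smoke tests for a PR build by
// sending a deployment event. Endpoint, auth, and field names are assumptions.
async function triggerSmokeTests(): Promise<void> {
  const auth = Buffer.from(`key:${process.env.MABL_API_KEY}`).toString("base64");
  const res = await fetch("https://api.mabl.com/events/deployment", {
    method: "POST",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      environment_id: process.env.MABL_ENVIRONMENT_ID, // the PR's preview environment
      application_id: process.env.MABL_APPLICATION_ID,
    }),
  });
  if (!res.ok) {
    console.error(`Could not trigger smoke tests: HTTP ${res.status}`);
    process.exit(1); // failing this step fails the build and blocks the merge
  }
}

triggerSmokeTests();
```

A real pipeline would also wait for the triggered run to finish and fail the build on any test failure; mabl's native CI integrations can handle that polling.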

As another example, when I talked about effective test coverage: using a low-code framework like mabl, anyone can actually participate in reviewing and providing feedback on the logic of tests. Is the test actually structured correctly? Are we fully validating the feature with the right assertions? And you can see here that it's intuitive. Anyone can review this test and understand it; you don't have to understand the nuances of the framework or have a development background. So we really can avoid silos here as well. But there's also an area in this type of automation where having specialized knowledge is definitely important, especially around approaches to reusability, set-up, tear-down, environments, and overall test coverage best practices. That's important for making sure we're setting ourselves up for long-term success. And this is a moment where you can really use some automation expertise on the team. Perhaps you have a central automation lead participating here as well, and they can work with your various point teams to ensure that we're not incurring tech debt.

Another thing that we can do at this stage, from a collaboration perspective, is execute the regression tests that have been created before this release, all right in our pipeline. So if your team uses preview or ephemeral environments, where you put your changes up for a PR, we can run those full regression suites and cross-browser tests at that stage, and we actually get all that information within the context of the PR itself. So before I approve or merge those changes into the main branch, I can review all of the testing that's happened on the code, including whether or not we've validated our core scenarios. And you can actually click right into the detail from the PR as well.

So next, we're going to hit our deployment stage. And let's say that we know everything looks good in terms of the core functionality in the code. We've gone ahead and merged those changes into our main branch. Now this is all really in our automated pipeline. And so the goal here is let's make sure that we're not going to allow defects out into production.

6. Comprehensive Testing and Quality Understanding

Short description:

We want to ensure comprehensive testing and a broader understanding of quality. We can quickly identify and triage issues with failed tests. Creating issues in JIRA from test results provides all necessary information for developers. We can detect quality issues even when tests pass, such as JavaScript errors and broken links. Monitoring page load time helps identify performance issues. Data-driven testing allows for easy expansion of coverage without writing code. Libraries like Faker and Math.js enable data randomization for realistic test scenarios. Testing across devices, including mobile, is crucial for responsive applications.

We want to make sure that we have comprehensive testing related to our changes before any of this gets deployed. And we also want to have a broader understanding of quality beyond pass or fail. And that includes a comprehensive understanding of the change and what its impact is on quality overall.

So there are a couple of examples here as well. The first is, in this stage, let's make sure we can identify issues and triage them as quickly as possible. Let's say here we have a failed test in mabl. We actually have a button where you can create an issue in JIRA directly from your test results. And when you create that issue in JIRA for a test failure or a bug report, you can see that it automatically populates all the information that a developer needs to understand the issue. That includes the screenshot, the HAR file, and the DOM snapshot, to ensure there's an understanding of the state of the product at the time of that test failure. This also allows you to avoid any unnecessary back and forth between teammates during the triage and investigation process.

So next, let's move beyond passing and failing tests to really developing a shared view of quality across the team. mabl will automatically detect other quality issues, even when your tests pass: do we see any new JavaScript errors in our console after our latest deploy? Do we have any new broken links within the pages that were a part of those tests? And for each test, we're also able to see, across all of your steps, what the total page load time was and how that's actually trending. So in this particular test, we can see that it took about 20% more time than it usually takes, which allows us to identify trends and performance issues early on.
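To illustrate the kind of signal being collected, here's a generic Playwright sketch (not mabl's implementation) that records page errors and failing responses during an otherwise passing run:

```typescript
// Generic illustration of collecting quality signals during a passing test:
// uncaught JS errors and 4xx/5xx responses are recorded even though no
// assertion fails. (Plain Playwright, not mabl's implementation.)
import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();

const issues: string[] = [];
page.on("pageerror", (err) => issues.push(`JS error: ${err.message}`));
page.on("response", (res) => {
  if (res.status() >= 400) issues.push(`Broken link: ${res.url()} (${res.status()})`);
});

await page.goto("https://example.com");
// ...the functional steps and assertions of the test would run here...

await browser.close();
if (issues.length > 0) {
  console.warn("Test passed, but quality issues were detected:", issues);
}
```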

So now here's an example of what we can do in this stage to really focus on expanding coverage. You're all likely familiar with the concept of data-driven testing. Once your test exists, with just a few clicks I can add in a variety of different scenarios that I want to test, using data tables in mabl. That doesn't require writing any code, and it really allows me to multiply the coverage that I have. Adding new scenarios is as simple as adding a new row to the table and typing in additional values. So again, this really focuses on making testing more accessible: even as a product person, I can easily contribute to this. Another exciting aspect is that, without writing any code, you can also take advantage of libraries like Faker or Math.js. This allows you to randomize your data to increase test coverage by creating realistic data that you can tailor to your specific use cases or scenarios, which is especially helpful if you're testing various form inputs or looking to generate test data. And we can also expand coverage across devices. Perhaps I'm looking to test a responsive application, so I'm not just testing across major browsers; I'm also testing across different devices, and reviewing those changes in an intelligent testing service to confirm my application is appropriately responsive. And really, testing across mobile is becoming more critical than ever.
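In mabl this happens through data tables in the UI, with no code. For contrast, the equivalent idea expressed in code looks like this hypothetical Jest sketch using @faker-js/faker for randomized but realistic inputs; `submitSignupForm` is a made-up function standing in for the feature under test:

```typescript
// Data-driven testing expressed in code: each table row becomes a scenario.
// `submitSignupForm` is hypothetical; faker supplies realistic random data.
// (In faker versions before v8, use faker.name.fullName() instead.)
import { faker } from "@faker-js/faker";
import { submitSignupForm } from "./signup"; // hypothetical module under test

const scenarios = [
  { name: "typical user", email: faker.internet.email(), age: 34 },
  { name: "underage user", email: faker.internet.email(), age: 17 },
  { name: "oldest allowed", email: faker.internet.email(), age: 99 },
];

test.each(scenarios)("signup handles $name", async ({ email, age }) => {
  const result = await submitSignupForm({
    fullName: faker.person.fullName(),
    email,
    age,
  });
  // Adding a scenario is just adding a row above -- same idea as a data table.
  expect(result.status).toBe(age >= 18 ? "accepted" : "rejected");
});
```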

7. Mobile Testing and Low-Code Tools

Short description:

In 2020, over 60% of U.S. website visits originated on mobile devices. mabl enables validation of the user experience across responsive applications. Low-code tools like mabl help the entire team participate in quality and achieve higher test coverage. By incorporating machine learning and intelligence, tests can develop alongside the application. Democratizing testing allows everyone to build and maintain tests across the development lifecycle.

In 2020, over 60% of U.S. website visits actually originated on mobile devices. And historically, mobile testing has not been an easy task. mabl allows your team to validate the user experience across responsive applications and deliver a seamless experience for your users regardless of the device they may be using. And, with the benefits of low code, this is another good example where you don't have to have highly specialized automation experience. So I hope those are some good examples of where low-code intelligent tools like mabl can help get the entire team to participate in quality, which will help you get the test automation coverage and confidence that you need to innovate quickly. Many teams are currently on this journey, and we're really excited to see so many teams seeing an order-of-magnitude benefit in terms of achieving higher test coverage, reducing the maintenance burden associated with testing, and, in general, reducing the amount of effort they spend on regression testing. So this is really what we're working towards. We're looking to enable the move to DevOps by integrating testing deeply within your workflows, making sure that it's fast and flexible for the entire team, that it works with modern stacks, whether you're running in CI/CD, using single-page application frameworks, or otherwise, and that everyone on the team can actually participate in quality. We also want to ensure that the tests we build are robust and reliable. By incorporating machine learning and intelligence into automation, tests can continue to develop alongside your application. And once we have all of these key pieces, we're democratizing testing to allow everyone to build and maintain tests across the development lifecycle.

Q&A

Q&A on Poll Results and mabl's Coverage

Short description:

Thank you all for your time today. Let's discuss the poll results. Most people have some DevOps with automation. It's a journey to achieve more automation and efficiency in pipelines. We have a question from Jumuru about mabl's coverage report for integration tests. Our coverage includes page and release coverage, providing metrics on performance and test results. Next question is from Elias.

So thank you all so much for your time today. I'm really looking forward to hearing all of your questions. Hi, Juliette. Thanks for the lecture. It was awesome. It's such a pleasure to be here. Let's discuss a little bit about the poll results. So we have the poll results on screen, and it seems that most people, or at least the majority, have some DevOps with some automation, with smaller percentages for the other answers. So what's your takeaway on that?

Yeah, you know, it's really interesting. I do feel like you see a lot of folks here falling more into that middle group. And I feel like journey is truly the right term for this experience, because it does take a lot of time and energy to reconcile these various tool sets and implement tooling and automation. So I think this is really indicative of a lot of folks now moving through that journey, through that process of achieving more automation and more efficiency in their pipelines.

Yeah, exactly. As long as we're progressing in that direction, I think that's a good sign, right?

Yeah, absolutely. I think it's one of those things where you're never fully done. You're always working towards it. So I do hope my talk provided some insight into how you can start moving in that direction as well, especially if you're in that aspiring stage. Exactly, yeah. So we have some questions from the audience. And the first one is from Jumuru. I apologize for not knowing how to pronounce your name. So, can mabl give me any coverage report of integration tests? And if so, how is that calculated?

Yeah, that's a really interesting question. So our current implementation of coverage in mabl is specific to all of your tests within mabl. We do offer both browser and API testing, if you're looking to integrate testing in that way. The current way our coverage works is that we offer both page coverage and release coverage. So: how are you testing pages across your application? Are you validating aspects of each page using assertions and additional types of validation? And then our most recent release is release coverage. That uses your existing mabl tests to determine how many tests ran for the current release, whether that's defined by a timeframe or something along those lines, and how many of those passed and failed. We also provide additional metrics around performance and information along those lines to give you a better understanding of how your releases are progressing over time.

Great. And the next question is from Elias.

Implementing the Process in Small Companies

Short description:

The process can work for small companies, especially if they work closely with their development team. mabl aims to make testing easy and accessible for any team member.

Do you think this process will work in a small company? The process that you lectured to us about today? Yeah, I think that's a great question. We certainly do have smaller companies here at mabl. We have quite a few startups using our product, even with QA teams of two people. I certainly think it is possible, especially if you're working closely with your development team, to really incorporate this throughout your entire pipeline. I certainly think it's feasible. A big part of our goal at mabl is to make testing as easy as possible and available to anyone across your team. So I certainly think it's possible to bring it to your company regardless of the size. I agree with that.

Handling Small UI Changes and Test Subsets

Short description:

At mabl, we have the concept of labeling tests, allowing you to run a subset of tests based on specific features or environments. This is a great solution for handling small UI changes and saves time by only running relevant tests. Starting with a smoke test suite covering the main scenarios ensures efficient testing.

The next question that we have here is from Mikus. Do you run all end-to-end tests even if there is just a small UI change? How do you handle such situations? Is it possible to run a subset of the test suite so it doesn't take too long to run? Yeah, absolutely. At mabl we actually have this concept of labeling. You can label your tests, whether it's for a specific feature, a specific environment, anything along those lines. If I'm making a small change to a form, I can easily say, OK, I only care about the tests related to this page, and then run that subset. You're able to cater and tailor it based on your needs. I think that's a great solution and very useful for many use cases. Usually, you want to run, for instance, a smoke test suite first, which covers all the main scenarios, and then after that passes, you run the rest. Otherwise, if it fails, it doesn't make sense to run a bigger suite. Right? Right.

Running Tests Locally with mabl

Short description:

You can run tests locally using the mabl command line interface, without the need to talk to the cloud or mabl. It gives you the option to execute tests against your local environment or a publicly available site, without running anything in the cloud. This is a great way to test changes locally without accessing the app or the cloud.

Exactly. Another question here: will I be able to run the tests locally without any requirement to talk to the cloud and/or mabl? Yeah. So we do offer, as I mentioned during my talk, the mabl command line interface, as well as our CI runner. Our command line interface gives you the option to execute tests locally, whether against your local environment or against a publicly available site, and it does not require running anything in the cloud. It's a really great workflow. I've actually used it personally when we were making some accessibility changes to our site, to test against my local development branch without needing to go into the app or into the cloud for those purposes.

Reducing Maintenance Cost and Automating Testing

Short description:

Using mabl to reduce maintenance cost is a core challenge in the testing space. Building intelligence into the pipeline, capturing test intentions, and ensuring robustness and resilience are key. mabl uses its own product to test production against development and development against production. Low-code solutions like mabl's command line interface and CI runner help automate testing in earlier stages of development, shortening feedback cycles.

That's great to hear. So the next one is from Kacper. You've mentioned that using mabl, you can reduce the maintenance cost. But actually, public opinion is a bit different on that topic, even judging by the slide from the previous presentation, the one that shows the presentation from QLAB, I think. What's your take on that? How do you make sure maintenance cost stays low while using mabl? Sorry, can you repeat that last sentence? What's your take on that? How do you make sure maintenance cost stays at a low level while using mabl?

I think that's a great question. It's one of those core challenges within the testing space: as you continue to try to move faster and optimize these processes, how does QA keep pace with those changes as well? So here at mabl, we're really focused on building intelligence into the pipeline, when we talk about machine learning, artificial intelligence, and these concepts of auto healing. It's certainly a journey as well, and I don't think anyone's doing it perfectly, but we're really focusing on how we're able to capture your intention as you're creating those tests and make sure that they're robust and resilient as your application changes. So the idea there is, and I talked about this in my talk as well, that if you make a small change to your UI, your test shouldn't break. That's when we start talking about gaining a better understanding of the attributes that are specific to your application, and really getting a better understanding of what it means to be in the correct state and to be interacting with the correct thing. So the idea is that as you continue making those changes, we're able to keep up with the evolution there as well.

One curiosity that I have myself: when I talk to people who develop products that are used for software development, do people inside mabl use mabl to test the products that you build? We do. That is a great question. We call it mabl on mabl. We have our own workspace within mabl that we use to test production against development and development against production, and we're running those tests every day. We're actually one of mabl's biggest customers in that way, because we use it all across our pipeline to test our own product as well, which is really cool to be able to do. Yeah. I think it's nice because then you are dogfooding your own product and you feel the pains of your own users. I find it super interesting. I've worked at companies where I was able to do that, and I find it super, super cool.

Another question that we have here is: how can a low-code solution help automate testing in earlier stages of development? Yeah, sure. I'll try to keep it quick. Here at mabl, we talk often about the importance of shifting testing left and really shortening the feedback cycles early in the development process. As I mentioned earlier, we have a number of different tools to help you do that. The first is our command line interface, which gives you the ability to run any mabl tests locally for rapid feedback during development. We also give you the option, with our CI runner, to run those tests against preview or ephemeral environments during the build process. That's where you can also use labels to make sure you're targeting the subset of your application that's relevant to your PR, really making sure that you can get that feedback early in the development lifecycle before you even reach your main branch. Awesome. Juliette, it was wonderful having you here with us. Thank you very much. Thank you so much.
