5 Habits to Treat Your Test Code Like Production

In this talk, David covers five habits you can follow to hold your test code to the same quality bar as your production code, and to build test suites that are robust, consistent, and help you debug issues quickly.

- Typical journey for teams adopting test automation 

- 5 habits to follow while building your test suite 

- Our learnings from a recent refactoring activity

22 min
03 Nov, 2022

Video Summary and Transcription

This talk focuses on five habits for treating test code like production code. It emphasizes the importance of modular testing and breaking UI tests down into smaller components. Treating SDETs as software engineers is crucial for code and test quality. The challenges of snapshot testing and the benefits of component testing are also discussed, including improved efficiency and better handling of asynchronicity and nested promises.

1. Introduction

Short description:

Today I'm going to be talking about the five habits to treat your test code like you would production code. My name is David Burns, I head up the Open Source Program Office at Browserstack and I'm the chair of the browser testing and tools W3C working group. By the end of today, we're going to be able to take something away and see how you can rejig everything you do at work to make your lives easier.

Hi, everyone. Today I'm going to be talking about the five habits to treat your test code like you would production code. So, first of all, who am I? My name is David Burns or as most people tend to know me, automated tester. I head up the Open Source Program Office at Browserstack. I'm the chair of the browser testing and tools W3C working group. I'm a Selenium and a Nightwatch JS committer. I've been in this industry for a while. And so hopefully by the end of today, we're going to be able to kind of take something away and see how you can rejig everything you do at work so you can make your lives actually easier.

2. Approach to Testing and the Testing Pyramid

Short description:

In this part, we will discuss how people tend to approach testing on projects and how to break down testing to make it manageable, maintainable, and less flaky. We will also explore the importance of treating test code like production code and improving tooling. Additionally, we will examine the testing pyramid, which includes unit tests, service tests, and UI tests, and the trade-off between isolation and integration. Finally, we will address the issue of overloading unit tests and the resulting challenges in maintaining them.

Here's our agenda. I'm going to look at how people tend to approach testing on their projects, how we can break testing down to make it manageable, maintainable and less flaky (that's the most important part), how to treat your test code like production code, and, for the last part, how to improve your tooling and why this actually matters. You'll be surprised. So let's get started and see where we go.

Now, I've been working in testing for many, many years, and I tend to see people look at testing in different ways. If we look at it the textbook way, this is how people should be doing it. At the bottom of the testing pyramid we have the unit tests. That layer is drawn wider because there should always be a lot more unit tests than any other kind of test in our test code. Then we have service tests. These are our integration tests. If unit tests are small, service tests tend to be medium: they start bridging the gaps between all our little components, or atomic areas, of code, and they move us towards the next part, which is our UI tests. Now, one of the things I didn't put in here is manual testing. I generally speak about automated testing, but that is not to take anything away from manual testing. And if you have a look at the arrows on the side, you've got, on the left hand side as I look at the screen, more isolation at the bottom and more integration at the top. The higher you go, the more integration you're going to need and the less isolation you get. The downside of adding more integration is that things will get slower. This is just general computer science, right? The more code that has to be processed, the slower it will be. If you write a loop within a loop, you know that's going to be slower than a single loop trying to find something. Ideally, we want a lot of super fast tests and far fewer slow tests.

Unfortunately, from what I see, and I appreciate there might be a lot of bias having worked on Selenium and NightwatchJS for many, many years, people do unit tests. These are generally done by your developers, with Jest or Karma or things like that, and people put a lot of effort into them. Then you'll start getting some service tests, or integration tests. I took my image here from Martin Fowler's work and rejigged it a bit. Then, and I see this now at BrowserStack speaking to customers, and saw it at Mozilla speaking to Selenium users, people tended to throw everything, and I mean everything, at their UI tests. They would bulk them up, put tons of tests in, and then slowly but surely the tests would become unmaintainable.

3. Challenges with UI Tests and Modular Testing

Short description:

A developer would make a change to the UI. And then suddenly, a whole swathe of tests would start failing. They put tons of effort into the UI tests, but it's not sustainable. We need to make our tests more modular, choosing the right test for the right situation.

A developer would make a change to the UI. A designer would help, and then suddenly, a whole swathe of tests would start failing. And then obviously, who gets the blame? The testing tool. Not the people who've architected the code or architected the tests. None of those people. It is purely down to the test framework. You and I know that's not right, but that's generally how people react, all right?

And they put tons and tons of effort into the UI tests. And then suddenly, they're like, I don't have time to be writing UI tests, I've got to do these manual tests so that I can work out what I need to do. And they build it, and build it, and build it, and then they try to scale it horizontally because their UI tests are taking a day to run. We all know that ideally the CI should be done and dusted within 10 minutes, right? That is the gold standard. Always try to get all your tests done within 10 minutes so you can have a fast feedback loop, because I don't know about you, but whenever I write code, it doesn't always work. I will hold my hands up: I write a lot of code, I write a lot of bugs. Fortunately, I tend to write a lot of tests to go with it. And sometimes they work on my machine and then they go into the CI and stop working because of certain assumptions. But we need that gold standard of really fast tests. And if we're bulking everything up in the UI area, then it's going to be really, really slow.

So how can we get around that? Well, let's make our tests a lot more modular. We know about small tests, or unit tests. We know about integration tests, or medium tests. And we have these end-to-end tests, or large tests. The thing is, end-to-end tests aren't always needed. We know that if there's a form, you can test that form in isolation; it doesn't need to be an entire workflow. You can build these things out, especially if you're working with your front-end team, by building these modularised components to move things forward. And so we need to pick the right test, just like you would pick the right architecture. I know from working with loads of people on writing tests that if you're writing some code, you're not going to have one monolith of a file and then ship that into production.

4. Treating SDETs as Software Engineers

Short description:

Treat your SDETs like you would your software engineers, from pay to recognition, and you'll see huge improvements in how your code and your tests are formatted.

And I'm not talking about your obfuscated, minified code, right? Obviously that is going to be one file. But when we're building up to that point, you know how to structure your code so that you can find certain things. Unfortunately, people don't always do that with their testing; they don't know how to architect it. And this might seem controversial, but this is why it's important that your SDETs, your software development engineers in test, take the time to know what they're doing. Don't throw a junior at it and go, this is your problem. Don't throw an exploratory tester at it either. Treat your SDETs like you would your software engineers, from pay to recognition, and you'll see huge improvements in how your code and your tests are formatted.

5. Splitting Tests and Managing Complexity

Short description:

Split up your tests. Make sure that you can run each and every test individually, just like you would if you were splitting out your production code. And this is why we need to get into the mindset of: whenever you're writing code, you're writing code, be it an automated test or production code. So always make sure that when we're breaking these things down, we break them down properly and then test where our end users actually are. By splitting things out into these slightly more manageable parts, we are going to remove flake. The smaller you make your tests, the less flaky they will be.

So, we know we are not going to create monoliths in our production code when we're writing it all out before it goes into our build system. So don't do it in your test environment either.

Split up your tests. Every good presentation needs a good meme, right? Split up your tests. If you're testing small modular parts, split them out. Make sure that you can run each and every test individually, just like you would if you were splitting out your production code, right? People go, yeah, yeah, I can split out my code, I know how to break this down, right? You say the same for tests and they're like, it's a test, why does it matter? It does. It really, really does.

And this is why we need to get into this mindset of: whenever you're writing code, you're writing code, be it an automated test or production code, right? Or anything in between. Code is code. Your SDETs are engineers. They write code. Your software engineers write code. They're exactly the same; they look at the problem slightly differently, but they still look at the problem. And so it's important that when we're breaking these things down into their individual parts, we do so in a meaningful way.

So we've talked about this: we have our unit tests and our service tests, but that big, bulky UI part we can break down even further. We don't need a full end-to-end test for our UI tests. Yes, we might need a browser, and it's important to make sure that we test in all the browsers that our users use. If you're going to test in Chrome, test in Chrome; Chromium is going to react slightly differently. So if you test in Chromium, you're not always going to get the same end experience as an Edge user would, or a Chrome user, or a Brave user, or an Opera or Vivaldi user, right? It's all the same engine under the hood, but it's not always going to give you the same result when you're moving things about, because of the way each vendor configures and ships it. Same with WebKit. WebKit might be the underlying engine for Safari, but there are times where Safari will act very differently from WebKit, and very differently again from iOS Safari. So when we break these tests down, let's make sure we also test where our end users actually are. That way, we can know that we've done the right job.
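To make that "test where your end users are" point concrete, here is a minimal sketch of what a multi-browser setup might look like in a nightwatch.conf.js; the environment names and capability values are assumptions for illustration, not configuration taken from the talk.

```js
// nightwatch.conf.js (illustrative sketch, not the talk's configuration)
module.exports = {
  src_folders: ['tests'],

  test_settings: {
    // Default environment used when no --env flag is passed.
    default: {
      desiredCapabilities: { browserName: 'chrome' }
    },
    chrome: {
      desiredCapabilities: { browserName: 'chrome' }
    },
    edge: {
      desiredCapabilities: { browserName: 'MicrosoftEdge' }
    },
    safari: {
      desiredCapabilities: { browserName: 'safari' }
    }
  }
};
```

Each environment can then be targeted on its own, for example npx nightwatch --env edge, so the suite exercises the browsers your users actually run rather than only the one on your machine.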

Now, by splitting out things into these slightly more manageable parts, we are going to remove flake. The smaller you make your tests, the less flaky they will be. Now, I'm sure you've all tried to write end-to-end tests, and a lot of the times, you need to align a lot of stars to make it work. Your database needs to be set up, your middleware needs to be set up, your front end needs to be working, and whenever you do something, it needs to be able to pass through all these layers and then back again. And being JavaScript, everything is asynchronous. So you need to align a lot of stars to make things work.

6. Breaking Down UI Tests

Short description:

UI tests take too long to run, so we need to break them down into smaller components. Here's an example of a small test that starts from a known place, does one thing with an assertion, and knows where to go back to. The test loads a React component for a to-do list form and performs actions like setting values. Breaking tests into smaller parts helps improve test efficiency.

Sometimes it'll work, and you have the means to test for these things. Working on NightwatchJS, which is built on top of Selenium, it does all the auto-waiting for you so you know when things are ready. But you still need to know that that thing is going to be there. And so we need to break things down and make them smaller, and we're going to be breaking down the UI tests. Because the UI tests, the end-to-end tests, take too damn long to run. Right? Let's be clear. It's not good enough.

And so how are we going to split it down? I keep saying we need to split it, but how are we actually going to split it out? I've got an example, so let's go and have a look at it. Here is a small test, right? It's really, really small. And I recommend to everyone who writes tests, especially end-to-end tests, that your tests do three things at most: they start from a good known place; they do one thing, but one thing very well, with an assertion (if it's not making an assertion, it's not a useful test); and then they know where to go back to. In this case, I've got some demo code here, and let me just make it really big and zoom in so you can read it. Here we've got a component test. We have a React component that I'm going to load. And if you want to see what it looks like, it just looks like a standard React component. This one's a form for a to-do list. It has its inputs, it has some change handlers for its state, and it has the submit button. Really, really simple, right? Everyone can see what it is and can work with it. Everyone's worked with React, so I'm pretty sure none of this is a shock to you. But the thing here is that we're able to write the test to load a component. We load a component, and in this case we can expect it to be visible, and then we can work with it.
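The demo component itself isn't reproduced in the transcript, but a to-do form along those lines might look something like this minimal sketch; the component name, props and field names are assumptions for illustration rather than the actual demo code.

```jsx
// TodoForm.jsx - a minimal sketch of the kind of component described
// (name, props and fields are illustrative assumptions, not the demo code).
import React, { useState } from 'react';

export default function TodoForm({ onSubmit }) {
  const [text, setText] = useState('');

  const handleSubmit = (event) => {
    event.preventDefault();
    if (onSubmit) {
      onSubmit(text);
    }
    setText('');
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="text"
        name="todo"
        value={text}
        onChange={(event) => setText(event.target.value)}
      />
      <button type="submit">Add</button>
    </form>
  );
}
```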

Now, I've done a really simple thing here, but we could break it out and do more. Because this is a Nightwatch test, it's really simple, and it's got a nice fluent API you can work with. And you can do more things. So you could go: await browser, find the element for the component, break it down and then move into it, and if you want to type stuff, you could set its value to "I love cheese", because I do. And that would work.
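Putting those pieces together, a component test along those lines might look roughly like the sketch below. It assumes the @nightwatch/react plugin's mountComponent command plus Nightwatch's standard expect and setValue APIs; exact command names and assertion styles vary between Nightwatch versions, and the file path and selectors are illustrative assumptions.

```js
// todo-form.spec.js - illustrative sketch of a Nightwatch component test.
// Assumes the @nightwatch/react plugin is configured; exact APIs depend on
// your Nightwatch version, so treat this as a sketch rather than a reference.
describe('TodoForm component', function () {
  let component;

  before(async () => {
    // Start from a known place: mount just this component, not a whole page.
    component = await browser.mountComponent('/src/components/TodoForm.jsx');
  });

  it('renders and accepts input', async () => {
    // Do one thing well, with an assertion: the component is visible.
    // (The exact assertion style may differ between Nightwatch versions.)
    await browser.expect.element(component).to.be.visible;

    // Interact the way a user would, without a full end-to-end workflow.
    await browser.setValue('input[name="todo"]', 'I love cheese');
    await browser.assert.valueContains('input[name="todo"]', 'I love cheese');
  });
});
```

Because the test mounts only the component it cares about, it stays fast and does not depend on a database or middleware being available.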

7. Keeping Test Dependencies Updated

Short description:

We need fewer end-to-end tests and more small modularized component testing. Keep your test dependencies updated, ideally using bots. Regularly update npm packages and take advantage of new features. Keep your test dependencies as updated as your production ones. Don't neglect updating test code.

You'd be able to interact with it, and now, instead of a whole end-to-end test, we're not going to check that we can send everything all the way to the database. Those tests are still useful, but we need fewer of them and a lot more of these small, modularised component tests. And we can just use the same components that we've been using before. So it's really, really simple.

The other thing I think it's important for people to focus on is making sure that, once you've got your tests working, you keep everything up to date, all of your npm packages. In this case, for my example, a new geckodriver and a new Nightwatch were released. npm update needs to be run on all of your test dependencies all the time; keep them regularly updated. Ideally, set up a bot to do it. This is just busy work for a person to do, so you should use bots where possible. And we'll get to that in a second.

But make sure that you at least run npm update once a sprint at work, so you can keep things up to date and start using new features straight away. With Nightwatch, we ship Selenium under the hood, and we allow people to do really cool things like network interception, basic authentication handling, and capturing JavaScript console error messages so that you can fail tests if your JavaScript is misbehaving. Or if you want to look for certain mutations on the page, you can set that all up and do it. Those things regularly come out and get updated, and it's important that you keep your test dependencies as updated as your production ones. Don't go, oh, it's a test, it doesn't need to be updated. There are good reasons why those get updated regularly.
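As a hedged illustration of the kind of feature that only arrives when you update, here is a rough sketch built around the Chrome DevTools-backed console capture that Nightwatch added around v2.2; the command name, callback shape and browser support depend on your Nightwatch version and only apply to Chromium-based browsers, so treat this as an assumption rather than a reference.

```js
// console-errors.spec.js - illustrative sketch only; command availability
// depends on the Nightwatch version and works on Chromium-based browsers.
describe('console error capture', function () {
  it('fails if the page logs console errors', async function (browser) {
    const errors = [];

    // Collect browser console messages as they are emitted.
    await browser.captureBrowserConsoleLogs((event) => {
      if (event.type === 'error') {
        errors.push(event);
      }
    });

    await browser.navigateTo('https://example.com/');

    // Fail the test if anything was logged at error level.
    await browser.assert.equal(errors.length, 0, 'no console errors were logged');
  });
});
```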

And this is a good example: these are the versions of Nightwatch that are out there, and we have people all over the spectrum; I've just taken a very small snippet of our user base. So it's important that you keep things up to date. I appreciate some projects never need to be updated, and if you're not updating the production code, then you're not going to update the test code either, so be it. But if you're making updates to the dependencies in your production code, make sure you do that to your test code too. It's important.

8. Tooling for Flow and Efficiency

Short description:

Find tooling that keeps you in flow, like the Nightwatch VS Code extension. It allows you to run tests, including component tests, without leaving your coding environment. Stay focused and efficient by minimizing the need to learn new commands or switch tools.

And then finally, have tooling that keeps you in flow. I use VS Code all the time, and the minute I need to leave VS Code, I know that, for me at least, I can easily get distracted. So try to find tooling that keeps you in flow, that keeps you working where you need to be. As I've said, I work on Nightwatch, and Nightwatch has a VS Code extension for running tests. So all I need to do is click a button: I move to my tests, I can run them, I can run them against different environments, I can run my component tests, I can run everything from where I am. The minute you need to learn new commands and new things, it takes you out of flow. So try to find the tool that keeps you where you are, helps you test your components from the start, and keeps you in flow, so that when you're working, your head's down, you're working hard and everything's all right. So keep at it.

Q&A

Audience Survey Results and Component Testing

Short description:

I am the automated tester. I work on Nightwatch and Selenium. The audience survey results were as expected, with visual regression testing being popular. It would be great if more people started doing accessibility testing. When testing, focus on breaking down components into the smallest parts and extending from there.

And that's it, folks. I hope you found this really useful. As I said earlier, my name is David Burns. I am the automated tester. I work on Nightwatch and Selenium. And if you have any questions, you can find me on social media or kind of Discord or Slack. And I'm happy to always help out. Thank you.

So you had a question, which we asked the audience before your talk: do you do other kinds of testing? The answers are: visual regression testing is at 62%, accessibility testing at 38%, and performance testing at 37%. So what do you think of these results? It's kind of what I was expecting. It's something I've been looking at recently, and I was very curious about what people would be doing. I think visual regression testing is pretty big at the moment, so yeah, it matched what I was expecting, which is good. At least my gut was right there. Awesome. It would be nice if people started doing more accessibility testing, too. Definitely. Most definitely. Yes.

So we have questions for you. The first question is: which components do you work with first when testing? So, in this case, I think it's about how people break things down, right? When you move to production code, you try to go for the smallest piece, and whatever gets you to that point is the area I think you need to focus on. I know a lot of people look at things like Storybook or whatever, right? It's all about treating your test code as production code. And so if you're going for component testing, you go and do that: you look at your components, you break them down to the smallest part, and then you extend outwards. That's how I tend to do it. I don't have one single area that I focus on; I find the one that's the most important.

Snapshot Testing and Conflicting Feelings

Short description:

Epster asks about good practices for snapshot testing. There are conflicting feelings about this. Snapshots should do what is expected, but sometimes they are rewritten or overridden. They can be useful for identifying mistakes, but also frustrating. Visual testing follows a similar pattern. It's a challenging topic.

Best business logic, things like that. Yeah, awesome. Yeah, I mean, definitely. So Epster asks: what are good practices for snapshot testing? I have multiple feelings around this, and a lot of them actually conflict, which I find quite interesting. Going back to snapshots, you should always try to make the snapshot do the thing you think it's going to do, and keep it at that, right? Something I've been working on recently is extending some of the projects I'm in to work with NX, the monorepo tool, and that uses a lot of snapshot testing. There are times where I go, I've got to focus on this and do it right, and there are times where I just go, I'm going to rewrite everything and override the snapshots. And so I tend to lose the value in it very quickly. But then at the same time, I can also use it to go, oh, actually, I've messed up big time. And so that's how I tend to do it. It's the same with visual testing: I take my snapshots, and then sometimes it's like, actually, I just want to rewrite everything and throw it all away because it's just frustrating. So I find it useful at times and frustrating at other times. I have these conflicting things in my own head. So it's hard.

Snapshot Testing Challenges

Short description:

Snapshot testing can become too heavy to maintain if not done properly. Blindly updating snapshots without understanding why they broke defeats the purpose. Feelings about snapshot testing remain conflicted.

Yeah, I mean, certainly for me, when I hear snapshot testing, I first think of Jest snapshot testing, which we used to do for React. When we first started, we were like, yeah, we should do snapshot testing for everything. And soon it becomes too heavy to maintain, because on any change you just blindly go ahead, run -u and update the snapshot. And I don't think most people really go and see, if the snapshot broke, why it broke. So maybe it kind of defeats the real purpose, one of the reasons for snapshot testing. But yeah, again, conflicting feelings, as you said.
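For context, a Jest snapshot test for a React component is usually as small as the sketch below (the component name and path are assumptions for illustration); the habit being criticised here is re-running it with jest -u and accepting whatever the new snapshot is without reading the diff.

```jsx
// TodoForm.test.jsx - a minimal Jest snapshot test sketch
// (component path and name are illustrative assumptions).
import React from 'react';
import renderer from 'react-test-renderer';
import TodoForm from './TodoForm';

test('TodoForm renders consistently', () => {
  const tree = renderer.create(<TodoForm onSubmit={() => {}} />).toJSON();
  // The first run writes a snapshot file; later runs diff against it.
  // Running `jest -u` rewrites the snapshot without asking why it changed,
  // which is the habit being warned about here.
  expect(tree).toMatchSnapshot();
});
```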

Benefits of Component Testing

Short description:

Component testing allows for breaking down code into smaller, manageable parts, ensuring correctness and speed. It addresses challenges with asynchronicity and nested promises. By organizing tests in specific files and focusing on the right points, tests can be run quickly. Accessibility and visual testing are also faster when performed on smaller components.

So there is another question: why do component testing when you can do end-to-end testing to check things? Yeah, so this is my favourite topic at the moment. At the beginning, I said to break down your code, break down everything, so you're driving it forward in the smallest possible bits. The thing I like about doing that and treating your test code as production code, which has led to this whole belief in component testing that I'm 100 percent behind, is the idea that you can just do the small bits and make sure each small bit is correct, which is going to be super fast. Because, having worked on browsers, the thing I've always noticed is that a lot of people struggle with concepts like asynchronicity. Even if you do JavaScript day in and day out, there are still times where JavaScript will just go, I'm only going to reply now. So you've been waiting, or you've written callback hell, or you've got promises that are super nested, and you're like, okay, no, this is not going to work. So we break it down, like you would in production: making sure your tests are in specific files, your tests are doing the right thing at the right points, and you're not focusing on too many different things at once. You then get the ability to run them quicker, super fast. And if you want to do your accessibility testing or your visual testing, they're going to be even faster than if you rendered an entire page to do your test. So it's all about the speed that way.
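As a hedged illustration of the asynchronicity point, here is the same interaction written once as nested promise chains and once as flat async/await; the URL and selectors are assumptions for illustration, and the commands are standard Nightwatch-style calls.

```js
// Illustrative only: nested promise chains versus flat async/await.
// Selectors and URL are assumptions, not taken from the talk.
function nestedStyle(browser) {
  // Nested .then() chains are where bigger tests get hard to follow.
  return browser.navigateTo('https://example.com/todos').then(() => {
    return browser.setValue('input[name="todo"]', 'buy cheese').then(() => {
      return browser.click('button[type="submit"]').then(() => {
        return browser.assert.textContains('ul.todo-list', 'buy cheese');
      });
    });
  });
}

async function flatStyle(browser) {
  // The same steps read top to bottom, one await per step.
  await browser.navigateTo('https://example.com/todos');
  await browser.setValue('input[name="todo"]', 'buy cheese');
  await browser.click('button[type="submit"]');
  await browser.assert.textContains('ul.todo-list', 'buy cheese');
}
```

Keeping each test this small is what makes a failure point at one step instead of at a whole workflow.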
