Testing CLI Utilities

Ever wondered what the best way is to end-to-end test your custom command-line utilities? In this talk, Florian Rappl gives some insights into what you can do to automatically verify your CLI tools and avoid regressions.

- Introduction: Why test CLI tools

- Challenges: File system pollution, network and database issues, environment variables

- Demo: Showcase issues with a demo CLI tool

- Solutions: Test plan implementation, choosing the right level of containerization

- Demo: Show solution using the previous CLI tool

- Conclusion

34 min
03 Nov, 2022

Video Summary and Transcription

CLI utilities are important to test because they act as an intersection point between different parts of an application. The main challenge in testing CLI utilities is performance, which can be improved by using temporary directories. Managing ports and resources is crucial to avoid conflicts when running multiple test suites. The test context ensures that processes run in the correct context, including the use of the right directories. Running tests on different configurations helps identify compatibility issues and provides comprehensive test coverage.

Available in Español

1. Introduction to Testing CLI Utilities

Short description:

Welcome to the session, Testing CLI Utilities. CLI utilities are important to test because they act as an intersection point between different parts of an application. They can access network resources and interact with the system locally, which requires proper coordination and sandboxing. These challenges make CLI utilities particularly interesting for testing. For example, the Piral CLI is a utility that helps with web development tasks like scaffolding and running debug processes.

Welcome, everyone to the session, Testing CLI Utilities. I hope you are as motivated as I am to get started.

Before we begin, a few words about me. Hi, I'm Florian. I'm a solution architect at a smaller company based in Munich, Germany, called smapiot. We mostly do IoT and embedded computing, and we specialize in digital transformation projects, especially using distributed web applications. I'm also an open-source enthusiast. What does that mean? Well, I've been awarded Microsoft MVP in the area of development tools for the last decade. I spend most of my time doing projects in the .NET, JavaScript, TypeScript, and web development space. I write a lot of articles, I've already written one book about micro frontends, and I'm currently writing another book about developing frontend applications with Node.js, so be sure to get a copy. But enough about me, let's jump right into the topic before we run out of time.

So, CLI utilities: what makes them difficult to test, and what makes them appealing to test? First of all, CLI implies they run on the command line. On the one hand that's good, because spawning a terminal process is always quite easy and, in some sense, much easier to work with than, for instance, a graphical user interface. On the other hand, you need to deal with things like receiving the standard output or placing inputs on the standard input stream, and you need to coordinate that and get your asynchronous processes right. CLI utilities often provide a kind of intersection point between two parts of an application, so they are quite important to get right. And we rely a lot on CLI utilities, so having them work reliably is of course always what we strive for. Like any other application, these CLI utilities might also access resources, for instance network resources. You may want to mock those, you may need to run local services, and you may need to coordinate these resources, so that's something to keep in the back of your head. Also, with each step you take in the mocking direction, you remove a potential source of error from a later run, and you will need to account for that. The most important area, though, is what happens locally on the system. For instance, read and write operations on the file system need to be sandboxed properly. Especially if you run multiple tests in parallel, you can't just go in blindly and trust that whatever the utilities under test are doing, they always work in dedicated directories and that there are no race conditions when you really run them in parallel. And that applies to any kind of system resource they access. So we've already got a set of challenges, but also a set of things that make CLI utilities particularly interesting for testing.

Right, going a little bit further and thinking about some of the challenges that arise here: we've already briefly touched on input/output and serving content. For instance, what we will be testing is the Piral CLI. It's a little utility that helps with a couple of web-development-related tasks. One of these is scaffolding, but another one that you would typically do right after scaffolding is running a debug process with it.

2. Verifying Port and Content, Handling Fragmentation

Short description:

The CLI utility opens a port and serves output on that port. To ensure correctness, we can ping the port and use utilities like Playwright to access and verify the content. Fragmentation is a common challenge where the utility places content in multiple directories. This can be avoided by cleaning up after each test and ensuring test isolation.

And this debug process actually opens a web server on your local machine. Now, of course, we want to verify that it has been opened successfully, so the fact that this part of the CLI opens some port is something we need to consider. How do we ensure that the correct port was opened, and also that the output served on that port is right? You need to have all those questions answered.

In our case, what we do is first ensure, with a simple ping on the port, that the port is alive. Afterwards, we can use utilities such as Playwright to actually access the content, read it out, and verify it against, let's say, an expectation source, so that the actually served resource is the same.
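
As a rough sketch of that idea (the helper names, timeout, and page selector are illustrative assumptions, not the code from the talk), such a check could look like this:

```ts
import * as net from "net";
import { chromium } from "playwright";

// "Ping" the port: wait until something accepts TCP connections on it.
function waitForPort(port: number, host = "127.0.0.1", timeoutMs = 30000): Promise<void> {
  const started = Date.now();
  return new Promise((resolve, reject) => {
    const tryOnce = () => {
      const socket = net.connect(port, host);
      socket.once("connect", () => { socket.end(); resolve(); });
      socket.once("error", () => {
        socket.destroy();
        if (Date.now() - started > timeoutMs) reject(new Error(`Port ${port} never opened`));
        else setTimeout(tryOnce, 250);
      });
    };
    tryOnce();
  });
}

// Verify the served content in a real browser via Playwright.
async function readServedHeading(port: number): Promise<string | null> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(`http://localhost:${port}/`);
  const heading = await page.textContent("h1"); // compare against the expectation source
  await browser.close();
  return heading;
}
```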

Now, fragmentation is something that appears quite often: a CLI utility might place the content it creates in a couple of different directories. You don't want that to happen too much, because first of all, you need to clean up after each test, and second, you need to ensure that the tests stay isolated, right? You want to be able to run multiple test cases or multiple test suites in parallel. And if you don't have control over where the utility under test places its files, you might run into the race conditions I mentioned earlier.

3. Improving Performance with Temporary Directories

Short description:

The main challenge in testing CLI utilities is performance. Running multiple tests can result in unacceptable delays. One strategy to improve performance is to use temporary directories that you create yourself, which lets you define where they are placed. Relying on OS temp directories instead makes it difficult to locate output and debug failures. An even more efficient solution is to create a template directory in which all the necessary steps are done once per suite. When a new test starts, a temporary directory is made as a copy of the template. This minimizes overhead and allows tests to run efficiently.

An additional problem, potentially even the main challenge you will face, is performance. If running one test case takes, like, 30 seconds because you need to scaffold and create a lot of files, that might be acceptable if you just have this one test. But if you have 100 tests or even 1,000, then 30 seconds is certainly not something you can accept. So you need to think about how to improve performance here, how to get to a point where maybe one test case still takes longer, but on average you are, let's say, between one and five seconds at most.

And so you need to think about how to structure that really efficiently. One strategy is to use temporary directories. The important part is that these are directories that you create yourself. The alternative is so-called OS temp directories, which come from the operating system. The big difference is that when you create the directories yourself, you can define where they are placed, which makes a lot of sense. Because if the operating system decides where a temporary directory is hosted, first of all it gets a little bit out of your sight, and these directories may be quite substantial in size. That might be acceptable for a while. But worse, you don't easily see where the output was placed. And therefore, for instance in a CI/CD scenario, you can't just publish them easily and see later on what actually happened in that directory: why the test failed, or, when it succeeded, manually verify that the assertions in the test are really useful and working.

Anyway, what you can do with this is: when a test starts, like test one here, it just creates its temporary directory in a known location, works in that directory, and you're good. You do the same thing with a second directory, maybe even a third one. So far that looks quite fine, right? You still need, let's say, some primitives so that you always refer to this temporary directory and don't just let the utility do whatever it wants. The context, for instance the working directory, to start with the simplest one, must be set correctly. But you've still got a lot of overhead here. If the tests share, for instance, a common template to work against (and that's why all three tests here are placed in the same test suite), they would each need to redo all the steps. A maybe even more efficient way is to add something I would call a file-system cache on top of it. The strategy is the following: you start with your test suite, and when it starts it creates what I would call a template directory. All the steps that each of your tests requires are already done in this template directory. Now the trick is that when a new test runs, it first creates a temporary directory as a copy of the template directory. That could even be as efficient as simply linking it, which means you have essentially no overhead: every file that appears in this temporary directory is just a link, so there isn't even overhead in terms of disk usage. Of course, there might be problems here: if your utility, for instance, writes to these files, symlinks might not be the right choice. You can mix approaches, but in our case we do a lot of modifications, so the simplest was just a copy, and a recursive copy was still much faster than repeating the templating steps. That being said, once the temporary directory is there, you can just refer to it, and your tests can run.
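
A minimal sketch of that file-system cache idea, assuming Node.js with Jest hooks; the directory layout and the templating command are illustrative:

```ts
import { cp, mkdir, mkdtemp } from "fs/promises";
import * as path from "path";
import { execSync } from "child_process";

// One template directory per suite, in a known location (not the OS temp dir),
// so its contents stay visible and can be published as CI artifacts if needed.
const suiteRoot = path.resolve(".temp", "build-suite");
const templateDir = path.join(suiteRoot, "template");

beforeAll(async () => {
  await mkdir(templateDir, { recursive: true });
  // expensive templating steps run exactly once per suite (command is illustrative)
  execSync("npm init -y && npm install", { cwd: templateDir, stdio: "ignore" });
}, 300000);

// Each test gets its own copy of the template; in this scenario a recursive copy
// is faster than re-running the scaffolding and safer than symlinks, because
// the tests modify the files.
export async function createTempDir(): Promise<string> {
  const dir = await mkdtemp(path.join(suiteRoot, "test-"));
  await cp(templateDir, dir, { recursive: true });
  return dir;
}
```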

4. Creating Temporary Directories and Serving Content

Short description:

You create temporary directories for each test, eliminating the need for scaffolding steps. This approach ensures reproducibility and speeds up subsequent tests. To serve content, wrap the functionality in service objects and use a port allocator to dynamically assign available ports.

And you do the same, for instance, with test number two: the directory is created and then referred to. Same, of course, with a third test. Before the test runs, the temporary directory is created, it already gets all the contents from the template, and so no scaffolding steps are required. You just run your tests, and all is good.

Another advantage here is, of course, that it's very reproducible. The first run of, let's say, test one might take a little bit longer, but tests two, three, and any further ones will all just use the template directory, and so they will be really speedy; we will see that in a demo in a minute.

Now, what about the serving-content part? What you can do here, instead of just creating, for instance, a server on the fly, is to wrap all these things in little objects that you could call services, services in your JavaScript code, for instance. And instead of just firing up an HTTP server, you first go through a port allocator. The job of the port allocator is to make sure that you don't just select a fixed port, let's say 8080, but that you get a port that is actually free.
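
A common way to build such a port allocator in Node.js is to let the operating system hand out a free port by listening on port 0; a minimal sketch (the allocator and service names are illustrative):

```ts
import { createServer } from "net";
import * as http from "http";

// Ask the OS for a free port by binding to port 0 and reading back what it assigned.
function getFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const probe = createServer();
    probe.once("error", reject);
    probe.listen(0, () => {
      const address = probe.address();
      const port = typeof address === "object" && address ? address.port : 0;
      // the tiny window between closing the probe and reusing the port is acceptable here
      probe.close(() => resolve(port));
    });
  });
}

// Usage inside a service object: never hard-code 8080, always ask the allocator first.
async function startTestServer(handler: http.RequestListener) {
  const port = await getFreePort();
  const server = http.createServer(handler).listen(port);
  return { port, server };
}
```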

5. Managing Ports and Resources

Short description:

Having multiple test suites running in parallel can cause conflicts if they demand the same service on a fixed port. To dynamically allocate ports, a port allocator is used, which ensures availability and assigns a port for opening an HTTP server. The resulting instances are managed by a resource manager, which automatically cleans up after the test case finishes, even in the presence of failing assertions or code issues.

Because think about having multiple test suites running in parallel, both running a test case that demands the same service. If you fixed that port, let's say to 8080, you would have a potential conflict. And even if you for some reason have some logic in your application where you say, oh, depending on the test case I just increment a counter, that might fail as well, right? Because you don't know where you're running this test; on a CI/CD system, for instance, you don't know what ports are available. It could be that 8080 is already taken there, and maybe 8081 is available, or something else. So the port allocator needs to dynamically get you a port that's really free, and it reports it back. You can then open, for instance, an HTTP server on that port. Quite nice. You put the resulting instance into a resource manager, which is quite handy for later on. Then, of course, your test case might continue and create another service. Same principle here: it will be put into the resource manager. Now, when the test case finishes, the resource manager has the responsibility of cleaning all of that up. And that is implied, so no manual step is needed, which can be quite handy, especially if you, let's say, forget to take care of failing assertions and such. You might even have a problem in the code of your test case, and even in such cases you want to reliably shut down these services; the resource manager just does that. So it's always a good strategy to have such cleanups implied and handled correctly out of the box.
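
A minimal sketch of such a resource manager, assuming Jest's afterEach hook provides the implied cleanup; the interface is illustrative, not the actual code from the talk:

```ts
// Anything that knows how to shut itself down can be registered.
interface ManagedResource {
  dispose(): Promise<void> | void;
}

class ResourceManager {
  private resources: ManagedResource[] = [];

  // Register a started service (HTTP server, child process, browser, ...).
  register<T extends ManagedResource>(resource: T): T {
    this.resources.push(resource);
    return resource;
  }

  // Dispose everything in reverse order, even if the test failed.
  async disposeAll(): Promise<void> {
    for (const resource of this.resources.reverse()) {
      try {
        await resource.dispose();
      } catch {
        // a failing cleanup should not mask the actual test result
      }
    }
    this.resources = [];
  }
}

const resources = new ResourceManager();

// The cleanup is implied: it runs after every test, also after failing assertions.
afterEach(() => resources.disposeAll());
```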

6. Test Setup and Execution

Short description:

In our setup, we used Jest as a test runner, Playwright for browser verification, and TypeScript for writing the tests. We ran everything in Azure Pipelines for continuous verification and monitoring. The tests were written in TypeScript and transformed with ts-jest. Jest generated a JUnit-formatted file for inspection via Azure DevOps. Here's an example of a test using special functions and a context object.

So, in our case, what was the basic setup to run such tests? It was actually fairly straightforward, I would say. We were using Jest as a test runner. That's always a good basis because it provides a lot of convenience; command-line arguments can be supplied to, for instance, select just a certain test, and you get a full ecosystem at your disposal.

The next thing we were using was Playwright, because, as mentioned in the earlier example, we had to verify output in a browser; just verifying that some HTML is served was not enough. We wanted to see that the rendering was working, that this HTML actually loaded some JavaScript, and that it was doing the right things after all. What we used to write the tests was TypeScript. I wanted to be sure that we have no bug or issue in the tests themselves; that would make debugging even more brutal than fixing failing tests already is. And we ran everything in Azure Pipelines to have it continuously verified and monitored.

Now, putting it all into a diagram, it looks like this. Our CI/CD system triggers daily at four o'clock in the morning. It runs the tests; these are matrix tests, more on that later. This just starts a Node.js process running Jest, and Jest interacts with our tests, which have been written in TypeScript; we use a plugin, or rather a transformer, called ts-jest. These tests might use Playwright, they might use other resources, whatever they need; all the interaction now happens from these test cases. At the end, Jest writes out a JUnit-formatted file that can then be published from the CI/CD pipeline. That allows full inspection from the Azure DevOps web portal, but also via an API. So it's quite convenient, quite nice. You can have automatic reports, and if something fails, you get a notification via email or in your favorite channel, for instance on Teams or any other system. That's quite cool.
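
As a rough sketch of how such a setup can be wired together (assuming the ts-jest preset and the jest-junit reporter package; the concrete values are illustrative, and a TypeScript config file requires ts-node to be available):

```ts
// jest.config.ts: a minimal sketch, not the exact configuration from the talk
export default {
  preset: "ts-jest",
  testEnvironment: "node",
  // CLI tests spawn real processes and npm installs, so a generous timeout is needed
  testTimeout: 5 * 60 * 1000,
  // write a JUnit-formatted report that the CI/CD pipeline can publish
  reporters: [
    "default",
    ["jest-junit", { outputDirectory: "test-results", outputName: "junit.xml" }],
  ],
};
```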

So how does a test look? Here's an example. What we did is we wrapped the normal test suite definition, just for convenience. You still start it as you would normally do in Jest, but what you get back are special functions: lifecycle functions like setup, for instance, but also a special testing function. The advantage of that approach is that in each of those functions we have this wrapper running and can access a context. Otherwise, we have certain parameters which are all supplied to the code via environment variables, so they can easily be set on any machine.

7. Testing CLI Utility Versions and Test Context

Short description:

In our CI/CD pipelines, we test different versions of our CLI utility. The test context ensures that processes run in the correct context, including the use of the right directories. The test context sets the current working directory to a temporary directory, managing resources efficiently. The test context is not a publicly available framework, but the source code is accessible on the GitHub repository. The test context in TypeScript allows for asynchronous operations and automatically maps file paths to the temporary directory. Each test suite focuses on a specific subcommand, and a setup function takes care of the necessary steps to prepare the directory and install dependencies for successful testing.

They are set, of course, in our CI/CD pipelines. And here you see, for instance, what we do to test different versions of our CLI utility: by default it always uses the latest version, but you can actually set it to a different version, a preview version for instance, which might be interesting. Or if you want to check whether a regression happened (this already worked before), you can write a new test case but run it against an older version too, of course. This is all covered by such parameters.

Otherwise, this context ensures that whatever you do, like spawning another command-line process, you always do it in the right context, for instance using the right directories. There's a lot of magic implied behind that, but rest assured, the most important thing it does is set the current working directory to the already-prepared temporary directory. And as you can see, that's the advantage of this wrapper: you don't even see the creation of any of that. It just happens, and it happens reliably, and all these resources are managed by that context. This is not, let's say, a publicly available framework. All the source code is publicly available, but it's not a library that you can just install; it's tailored to our solution. You can have a look at the GitHub repository for it, though.

Right, so what is this test context about? This is the TypeScript definition. Most importantly, everything is asynchronous; even the things that are not called something-async are asynchronous. For instance, if you run something, you get back a promise, and the promise resolves when the process you started has completed; the string you get back is actually the output stream. Now, if you use runAsync instead, you get back the running process, and you need to explicitly wait for it, but you can then interact with it much more nicely. And everything is always made relative to a temporary directory. So for a file, you don't need to know where the temporary directory is: you always just supply relative file paths, and they are automatically mapped to the temporary directory. That's great, because it means a developer writing such a test doesn't need any idea of where the files will be placed. It just happens automatically, and you can focus on what should end up inside this structure.
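
To give a feel for it, here is a hypothetical sketch of such a context definition; the member names are illustrative approximations, not the actual API from the repository:

```ts
import type { ChildProcess } from "child_process";

// Hypothetical shape of the per-test context: everything resolves against the
// temporary directory, so tests never deal with absolute paths themselves.
interface TestContext {
  // runs a command in the temp directory; resolves with the output once it exits
  run(command: string): Promise<string>;
  // starts a long-running process (e.g. a debug server) and hands back a handle
  runAsync(command: string): Promise<ChildProcess>;
  // file helpers take paths relative to the temp directory
  readFile(relativePath: string): Promise<string>;
  fileExists(relativePath: string): Promise<boolean>;
  // parameters driven by environment variables, e.g. which CLI version to test
  readonly cliVersion: string;
}
```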

Right, with that being said, let's have a quick look at how that looks in practice. What I have here is one test suite for testing a specific subcommand of our command-line utility, piral build. Each test suite we have covers one specific subcommand. And what you can see in the example is a setup function, which takes care of the templating steps. In this case, it runs quite a few steps: first it starts our CLI to actually prepare the directory, then it even installs a bundler and some other dependencies. So in our case it runs npm install a couple of times. It prepares everything exactly as we want it to be for running the tests successfully.

8. Running Tests and Configurations

Short description:

When running tests, we use a test function to run the command we want to assert. We make assertions on the files, focusing on key points rather than using snapshots. Running a single test takes longer, but subsequent tests take advantage of the folder and template structure. Creating the template takes time due to NPM installation processes. Running tests locally is just one part of the story. Most tests are run periodically, testing the CLI utility in multiple configurations with different versions of Node.js and the command-line utility.

And then later on, when we actually want to run a test, we just use this test function and run the command that we want to assert is working correctly. In this case, it's piral build. We run it via npx, to match how an end user would run it. And once that's done, we can make assertions on the files.

So, for instance, in this case we're only interested in what ends up in dist/index.js. One could argue you could do such things with snapshots as well, and we experimented a lot with snapshots. In the end, what we found out is that snapshots, at least for the utility we have here, are not a good fit, because we would need a lot of exclusion rules. And even with those exclusion rules, a file like this index.js is highly dependent on all the node-module dependencies that have been installed; a slight change in the version of any of these dependencies might have an impact on the file. So what we do instead is identify some key points that definitely need to be in this file, and we have them here in the expectations. Of course, there is no good way to say that's fully reliable, so we always monitor it. But usually, if these points are in there, the thing is working. And otherwise we have other tests, of course, that cover it in a more end-to-end fashion and ensure that the right thing also happens on the screen.
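
Put together, such a test could look roughly like the following sketch, written here with plain Jest primitives instead of our wrapper; the scaffold helper, its import path, and the expected string are illustrative assumptions:

```ts
import { execSync } from "child_process";
import { readFile } from "fs/promises";
import * as path from "path";
// the helper from the file-system cache sketch earlier (the path is hypothetical)
import { createTempDir } from "./filesystem-cache";

describe("piral build", () => {
  let dir: string;

  beforeEach(async () => {
    // every test gets its own copy of the suite's template directory
    dir = await createTempDir();
  });

  test("creates a bundle containing the expected key points", async () => {
    // run the subcommand exactly like an end user would
    execSync("npx piral build", { cwd: dir });
    const content = await readFile(path.join(dir, "dist", "index.js"), "utf8");
    // assert on a few key points instead of a full snapshot (string is illustrative)
    expect(content).toContain("currentScript");
  });
});
```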

Right, so if we run this, and I just did that, we're running just a single test here, and a single test of course runs a little bit longer, because with a single test you don't get the speed-up, right? You only pay the time to scaffold this one directory. The other tests, however, would now take advantage of the structure you can see here on the left-hand side: the folder was created automatically, a folder for the test suite was created as well, and inside there's a temporary folder for the template, which is only created once, of course. From this template you just get a copy, and every modification is done in there. Creating that template is what took most of the time, because npm installation processes take longer, right? Overall, the 40 seconds are therefore really skewing the result. Nevertheless, all the other tests were skipped, and this test ran successfully. Of course, you could also run this against a version where you know it has a problem, and then you would see the test not pass, in the same sense.

Right. And then, of course, running locally is just one part of the story. Most of our tests are run, as mentioned, periodically, every day at 4 in the morning. What we are doing there is matrix testing; that's the other thing that is more efficient than running it locally, because locally I just run in one configuration. But actually, since it's a CLI utility used by many users, I want to run it in multiple configurations: with different versions of Node.js, and maybe with different versions of our command-line utility.

9. Running Tests on Different Configurations

Short description:

Running tests on different operating systems and identifying compatibility issues is crucial for CLI utilities. Continuous integration and continuous deployment (CICD) provides benefits such as notifications for failures and the ability to handle dynamic dependencies. While fixed dependencies are advisable to some extent, it's important to consider real-world scenarios where users may have different dependency versions. Matrix testing in Azure DevOps and other CI/CD pipelines allows running tests in parallel on different configurations, such as different operating systems and node versions. This provides a more comprehensive test coverage and helps identify issues specific to certain configurations.

And I also want to run it on different operating systems, because there might be some part of the CLI that works on, I don't know, Windows, but doesn't work on Mac, and I want to identify that. That's, of course, something that is difficult to do on a local system, and that's where CI/CD shines. You also get other improvements there: you get notifications when something fails, so you have something every day to look at if there is an issue. And you should, of course, think about dynamic dependencies for that.

What I mean by this is: you can, of course, harden everything. You can make all the templating you do run against fixed dependencies on, for instance, npm, so that you always depend on, say, exactly version 1.2.3 of some module. Up to a degree that is advisable, but fixing all the dependencies is certainly not, because if all the tests are always green, they will always stay green, yet your real users will not pin all their dependencies. They will usually say: I just don't care; if there was a patch version change, I want to take it, because it might contain a hotfix for a security vulnerability. And you should follow that, because your end users will too, right? So a change in your utility might be a good trigger for your tests, but even if your utility didn't change, you should still run the tests and just update the dependencies, to emulate what real users would do. So always keep the trade-off in mind between having everything reliable and reproducible in your tests and having it as real-world as possible. Somewhere in the middle is where you want to be; that's what you should consider.

Now, matrix testing in, for instance, Azure DevOps works like this. In other CI/CD pipelines, for instance GitHub Actions, it works very similarly; it's also a YAML file, but of course some names are different. What you do there is usually define a certain key called strategy, and inside it you can have a matrix. Then you enter a list of readable names: for instance, you assign "Linux node 12" where you want to run on Linux with Node version 12. And inside, you can have any kind of variable that you like; these names, image name and node version, are all made up. You then use these variable names when you, for instance, assign a certain image or install a certain version of Node, referring back to the variables you assigned. What Azure DevOps will actually do is run all of these in parallel, at least up to the purchased parallelism level of Azure DevOps. If you just have one parallel job because you're on the free tier, they will still run sequentially, but they will all appear in one run, as you can see here. So here you can see we've got the Linux jobs running the different versions of Node, and the Windows job, all running in parallel. Windows took a lot longer; I think Windows just doesn't have such good performance on the npm install, that was the reason here, but the tests themselves take about the same time. What you can get back are nice badges. Each of your matrix sub-pipelines can have a dedicated badge, so you can have a badge for the whole thing you see on top here, but also a badge for each pipeline. That gives you a good indicator that you can, for instance, use on some internal dashboard or on some page. So when you get "partially succeeded" or even red ones, it's time to investigate.
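
Sketched in the Azure DevOps YAML the talk describes (the job names, image names, Node versions, and test script below are illustrative, not the actual pipeline), such a matrix could look roughly like this:

```yaml
strategy:
  matrix:
    linux_node_12:
      imageName: "ubuntu-latest"   # made-up variable names, as described above
      nodeVersion: "12.x"
    linux_node_14:
      imageName: "ubuntu-latest"
      nodeVersion: "14.x"
    windows_node_14:
      imageName: "windows-latest"
      nodeVersion: "14.x"

pool:
  vmImage: $(imageName)            # refer back to the matrix variables

steps:
  - task: NodeTool@0               # install the Node version for this matrix entry
    inputs:
      versionSpec: $(nodeVersion)
  - script: npm ci && npm test     # run the Jest-based CLI integration tests
```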

10. Testing Results and Repository

Short description:

In our case, we get partially succeeded because it's not possible to just mark the run as failed when the sub-task running the tests fails. However, we had a successful final step where we published the test results, and Azure Pipelines evaluates this as partially succeeded. Since we're using Jest and have JUnit as output, it's easy to investigate failures by looking at the published directory. Check out the repository with the piral-cli integration tests. Connect with me on LinkedIn or Twitter. Have a great day, everyone. Bye.

In our case, we get partially succeeded because, I think it's still not possible, and at least back then when we started this it was not, to just mark the run as failed when the sub-task running the tests fails. We then had a final step that published the test results, and that step always succeeds, so Azure Pipelines evaluates the whole run as partially succeeded. In our case, therefore, yellow usually means red, and you need to investigate. But again, since we're using Jest and have JUnit as output, that's quite easily doable. Plus, of course, we publish, as mentioned, all the generated files, so you can even look back and say: hey, this one test was failing, let's have a look at the published directory and investigate it. With that being said, have a look at the repository with the piral-cli integration tests. I wish you a great conference. Get in touch, connect with me on LinkedIn or Twitter, and let's exchange ideas. Have a great day, everyone. Bye.

11. Poll Results and Interpretations

Short description:

During the poll, someone humorously answered that CLI stands for camel, lion, and iguana. Although the responses didn't initially add up to a hundred, they were eventually adjusted. It's interesting to see the different perspectives and context-dependent interpretations.

So in the beginning, before your talk, you asked the audience to answer a poll question, and we have the results. What do you think about them? I would say, yeah, about as expected, right? Great. I found it funny that someone answered that CLI stands for camel, lion, and iguana. No, I mean, what can you do? It's all context-dependent, so maybe that's a good choice; I don't know the background behind that answer. Yeah, yeah. What I find more interesting is that it didn't add up to a hundred, but rounding, right? Oh, okay, someone just changed it. Now it adds up to a hundred. Yeah.

12. Testing CLI Utilities with Cypress and Playwright

Short description:

The speaker explains that they have not used the Cypress cy.exec command to test CLI utilities. They clarify that the Playwright part in the talk was only used for browser automation, not as a test automation framework. They mention that Cypress could be a viable option for browser-based end-to-end testing.

So we have a few questions from the audience that I would like you to answer. The first one is one I'm pretty curious about as well: have you already used the Cypress cy.exec command to test CLI utilities? Because in your talk you mentioned using Playwright. So have you already used the cy.exec command for that? No, I have not. The Playwright part in my talk was only used for a fraction of it: all the tests were still run from Jest, and we used Playwright not as a test automation framework but rather as a browser automation framework, because we wanted to use the browser and get some information from there. But yeah, Cypress might also be a viable option there. I use Cypress in a lot of projects where we have browser-based end-to-end testing, so that might be a great option as well. That makes a lot of sense.
