And I also want to run it on different operating systems, because there might be some part of the CLI that works on, I don't know, Windows, but doesn't work on Mac, and I want to identify that. That is, of course, something that would be difficult to do on a local system, and that's where CI/CD shines. You also get other improvements there, of course: you get notifications when something fails, so every day you have something you can look at if there is an issue, and you should, of course, also have a look at dynamic dependencies for that.
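A daily run like the one described here can be set up with a scheduled trigger. This is a minimal sketch for Azure DevOps; the cron time and branch name are assumptions for illustration:

```yaml
# Hypothetical nightly schedule for an Azure DevOps pipeline.
schedules:
- cron: "0 3 * * *"          # every day at 03:00 UTC (made-up time)
  displayName: Nightly cross-platform run
  branches:
    include:
    - main                   # assumed branch name
  always: true               # run even if there were no new commits,
                             # so dependency drift is still caught
```

The `always: true` part matters for exactly the reason given above: even without code changes, the run re-installs dependencies and can surface breakage introduced by upstream updates.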
What I mean by this is: you can, of course, harden everything. You can, for instance, make sure that all the templating you do is done against fixed dependencies on, say, NPM, so that you always depend on a pinned list saying that, I don't know, 1.2.3 should be the version. Up to some degree that is advisable, but pinning all the dependencies is certainly not, because if all the tests are green, they will always stay green, while your real users will not pin all their dependencies. They will usually say: I just don't care; if there was a patch version change here, I want to take it, because it might contain a hotfix for a security vulnerability. And you should follow that, because your end users will, too. So while a change in, for instance, your utility is of course a good trigger for your tests, you should still run the tests even if your utility didn't change, and just update the dependencies, to emulate what real users would do. So always keep the trade-off in mind between having everything reliable and reproducible in your tests and having it as close to the real world as possible. Somewhere in the middle is where you want to be, and that's what you should consider.
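The difference between pinning and following patch releases shows up directly in how versions are written in `package.json`. A small sketch, with made-up package names and versions:

```json
{
  "dependencies": {
    "some-template-helper": "1.2.3",
    "some-other-lib": "^4.17.0"
  }
}
```

The first entry is fully pinned: NPM will only ever install exactly 1.2.3, so tests against it stay reproducible but can drift away from what users get. The second uses a caret range, which accepts any compatible newer release (4.17.1, 4.18.0, and so on), which is what most real users end up with; running your tests against such ranges emulates their situation.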
Now, matrix testing in, for instance, Azure DevOps works like this. In other CI/CD systems, for instance GitHub Actions, it works very similarly: it's also a YAML file, but some of the names are different. What you usually do is define a certain key called strategy, and in there you can have a matrix. Then you enter a list of descriptive names; you might, for instance, use a name like linux_node12 for the entry that runs on Linux with Node version 12. Inside each entry you can have any kind of variable that you like, so names like imageName and nodeVersion are all made up. You then use these variables when you, for instance, assign a certain image or install a certain version of Node, referring back to the variables you assigned.

What Azure DevOps will actually do is run all of these jobs in parallel, at least up to the purchased parallelism level of Azure DevOps. So if you just have one parallel job because you're on the free tier, they will still run sequentially, but they will all appear, as you can see here, in one run. Here you can see the Linux jobs running the different versions of Node, and the Windows job, all running in parallel. Windows took a lot longer; as you can see, Windows just doesn't have such good performance on npm install, that was the reason here, but the tests themselves are the same.

What you get back is that you can make nice badges. Each of your matrix sub-pipelines can have a dedicated badge, so you can have a badge for the whole thing that you see on top here, but you can also have a badge for each pipeline. And that already gives you, for instance, a good indicator that you can use on some internal dashboard or on some page.
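The structure described above can be sketched like this. The matrix entry names and the variable names (imageName, nodeVersion) are made up, exactly as the text says; the images and Node versions are assumptions for illustration:

```yaml
# Sketch of an Azure DevOps matrix strategy; all names are illustrative.
strategy:
  matrix:
    linux_node12:
      imageName: 'ubuntu-latest'
      nodeVersion: '12.x'
    linux_node14:
      imageName: 'ubuntu-latest'
      nodeVersion: '14.x'
    windows_node12:
      imageName: 'windows-latest'
      nodeVersion: '12.x'

pool:
  vmImage: $(imageName)     # refer back to the matrix variable

steps:
- task: NodeTool@0          # installs the requested Node version
  inputs:
    versionSpec: $(nodeVersion)
- script: |
    npm ci
    npm test
  displayName: Install and test
```

Each matrix entry becomes one job; the `$(imageName)` and `$(nodeVersion)` references are how the shared steps pick up the per-entry values, and each of these jobs is what gets its own status badge.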
So now, when you get partially succeeded or even red runs, it's time to investigate.