Predictive Testing in JavaScript with Machine Learning

This talk covers how we can apply machine learning to software testing, including in JavaScript, to help reduce the number of tests that need to be run.

We can use a predictive machine learning model to make an informed decision and rule out tests that are extremely unlikely to uncover an issue. This opens up a new, more efficient way of selecting tests.

18 min
19 Nov, 2021

Video Summary and Transcription

This talk explores the benefits of introducing machine learning to software testing, including automating test case generation and achieving close to 100% code coverage. AI is being used to automate test generation, improve regression testing, and make predictions in automation testing. Machine learning enables predictive testing by selecting the tests most likely to uncover issues in a given code change. AI-based tools are being used to generate automated tests, improve code coverage, and intelligently select tests. Many companies still rely on dedicated testers, and historical code changes and test cases can be used to generate test cases specific to a given code change.

1. Introduction to Predictive Testing in JavaScript

Short description:

Hello, everyone! Welcome to TestJS Summit 2021. I'm Shivaay, and I'll be presenting on predictive testing in JavaScript with machine learning. Let's get started!

Hello, everyone, and welcome to TestJS Summit 2021. I'm Shivaay, and I'll be presenting on predictive testing in JavaScript with machine learning. A very quick introduction about myself: I'm currently a TensorFlow.js Working Group member and a Google Summer of Code mentor at TensorFlow, and this is my third GitNation conference talk this year. Earlier this year I also gave talks at the Node.js conference and at React Advanced. So I'm really excited to be presenting yet another talk at GitNation, specifically at TestJS Summit. Without wasting any further time, let's get started.

2. Introducing Machine Learning to Software Testing

Short description:

Machine learning is being used in various industries, including software development. However, software testing has not fully utilized AI and machine learning. This talk discusses the benefits of introducing machine learning to software testing. AI can automate test case generation, determine which tests are most important, and achieve close to 100% code coverage. AI can also improve automation testing frameworks like Selenium by identifying and resolving issues.

Now, first of all, machine learning really is everywhere today; there's no doubt about that. You can see machine learning being used in healthcare and education, but also within software development itself. Machine learning today is being used for a lot of different things. For example, we've seen how GitHub Copilot is used to auto-generate or auto-suggest new code. It's also being used in MLOps, and in different kinds of operations within DevOps to improve the DevOps cycle. That makes it important to consider it in areas where we traditionally might not think it could be applied, and that's what makes machine learning so powerful today. So why not think about introducing machine learning to software testing?

Software testing has traditionally been about writing test cases and understanding, within the software engineering lifecycle, how we can make our software more robust and error-free. In practice, a software tester or QA analyst goes through the code and writes unit test cases. There is an established pipeline for how testing proceeds: first we write some code, then we prepare unit tests, then integration tests. Once the code passes all of these unit and integration tests, we finally put it into production. We also use regression testing to see how new code impacts the code written before it, that is, how the older code base is affected by a change. All of these processes rely either on manual testing or on automation testing tools such as Selenium, and traditionally they haven't used machine learning. The work consists of writing the test cases by hand, or configuring automation tools like Selenium so they can drive your application. In general, software testing has not seen much use of AI, and this talk is specifically about how we can introduce machine learning into software testing and what the benefits are.
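
To ground this, here is the kind of unit test a tester writes by hand in this traditional workflow, shown with Jest. The module and function are made-up examples, not anything from the talk:

```js
// math.js -- a made-up module under test
function add(a, b) {
  return a + b;
}
module.exports = { add };

// math.test.js -- the kind of unit test a tester writes by hand
const { add } = require('./math');

test('add returns the sum of two numbers', () => {
  expect(add(2, 3)).toBe(5);
});
```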

So, let's look at some of the scenarios where AI could be used in testing. The first is automated test case generation. We spend a lot of time creating test cases, whether in behavior-driven development, where we write the test cases before the code itself, or when we write a unit test for a function we've just written. How can we use AI to generate test cases on the fly, just by evaluating the code, so that a model understands whatever code has been written and whatever changes have been made, and automatically generates test cases without any manual intervention? The second is figuring out which tests are the most important to run, which saves time across the whole software testing process because we only run the test cases that matter; I'll come back to this point in more depth. Third, we can use machine learning not only for frontend UI testing, say in JavaScript, but also to test backend APIs; that's one of the areas where a lot of AI-based software testing tools are being built today.

Fourth, how can we achieve close to 100% code coverage with the help of AI? Code coverage is an important metric whenever we evaluate a code base, create a new build, or test new deployments and changes. Reaching even 90% or 95% coverage is already considered very good. But AI can help push toward 100%, because it can evaluate the code, generate automated tests, and run the tests that matter most for a particular change, taking into account all the nuances of the code, including how new changes have impacted the older code. And finally, even within automation testing frameworks such as Selenium, AI can help whenever you're doing automated testing by spotting the issues that arise and helping to resolve them. Those are some of the scenarios where AI is being used in software testing today.
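
On the coverage point: test runners can already enforce a coverage floor mechanically. For example, Jest can fail the build when coverage drops below configured thresholds; AI-assisted generation is about closing the remaining gap toward 100%. The threshold numbers below are illustrative, not a recommendation:

```js
// jest.config.js -- fail the build when coverage drops below these floors
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 90, functions: 90, lines: 95, statements: 95 },
  },
};
```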

3. Benefits of AI in Software Testing

Short description:

AI is being used to automate test generation, improve regression testing, and make predictions in automation testing. These solutions are gradually replacing Selenium-based automation testing. The goal is to achieve 100% code coverage with AI. AI saves time by generating automated test cases and expanding the testing scope. It makes the testing process user-friendly and widens the scope to include test cases that may be overlooked by human testers.

So, we already have a lot of software that uses AI to automate test generation, applies machine learning to improve regression testing, and makes better predictions in automation testing, and these solutions are gradually going to replace even Selenium-based automation testing. The goal now is to achieve 100% code coverage with the help of AI. As for the benefits AI brings to software testing: first of all, it saves time. Generating test cases automatically saves a lot of time for the QAs who previously had to write them by hand. It also expands the testing scope: it's no longer limited to testing just the incoming changes, or regression testing how changes affect the earlier code; relevant AI-based algorithms take care of that too. And it makes the auto-generated test cases user-friendly to work with, which simplifies the overall testing process and widens its scope to include the kinds of test cases that human testers might forget about, giving prominence to those cases as well.

4. Traditional Testing and Predictive Testing

Short description:

Regression testing uses information extracted from a build to determine which tests to run on code changes. Rerunning all tests can be inefficient, especially for low-level library changes. Machine learning enables predictive testing by selecting tests that are more likely to uncover issues in code changes. This is achieved by using a large dataset of historical code changes and applying machine learning algorithms. The test selection model is trained on the dataset to determine the most relevant tests for specific code changes. This approach saves time and can be integrated with JavaScript testing frameworks like Jest, Jasmine, and Mocha. Automated generation of unit test cases and suggestions based on code evaluation are also available.

Now let's take a closer look at what traditional testing looks like. If we talk about regression testing, it generally uses information extracted from a build: whenever a build is created, we have certain metadata with it, and we can extract information from that metadata to understand which tests to run for a given code change.

By analyzing the information extracted from the build metadata, one can determine which tests to run on the changed code. One drawback of this approach is that if you make a change to one of the low-level libraries, it becomes inefficient because you end up rerunning all the tests together. Even for smaller teams, you might be required to rerun all your test cases again and again.
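
Jest ships a coarse, non-ML version of this metadata-driven selection: it uses version-control information and the module dependency graph to run only tests related to changed files. It illustrates both the idea and the drawback, since a change to a low-level shared library still fans out to nearly every test:

```js
// ci-changed.js -- Jest's built-in, dependency-graph-based selection (no ML):
// run only the tests related to files changed since the main branch.
const { execSync } = require('child_process');

execSync('npx jest --changedSince=main', { stdio: 'inherit' });
// Alternative: 'npx jest --onlyChanged' limits the run to uncommitted changes.
```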

That is what the diagram shows: you have your source files and your test sets, and small changes keep happening. Based on a change in a particular library, a particular network call, or a particular function call, you'd be required to rerun those test cases again and again. And that's where we can use machine learning to do predictive testing: essentially, estimating the probability that a given test will find a regression in the code change we're making.

So how can we make informed decisions to rule out the tests that aren't going to be helpful? Essentially, by selecting the tests that really matter for a particular code change, because not every test case will uncover issues in that change. Intelligently selecting those tests makes the testing process better and faster, because we're no longer throwing every test case at every code change.

How can we achieve this? We can use a large dataset containing the results of tests run against many historical code changes, and then, by applying machine learning algorithms, determine which tests are better suited to which types of code changes. That allows us, in the future, once we receive a set of code changes and before the testing process starts, to select only the test cases that are most relevant to those particular changes.
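
As a minimal sketch of what training such a test-selection model could look like in JavaScript, here is a tiny binary classifier built with TensorFlow.js. Everything here is a placeholder: the three features, the handful of rows, and the network shape. A real system would mine thousands of (code change, test) pairs from CI history:

```js
// A minimal sketch, not a production system. Requires: npm install @tensorflow/tfjs
const tf = require('@tensorflow/tfjs');

// Hypothetical training data: one row per (code change, test) pair.
// Features: [file-path overlap, historical failure rate, relative change size]
const features = [
  [0.9, 0.30, 0.2],
  [0.1, 0.01, 0.8],
  [0.7, 0.15, 0.5],
  [0.0, 0.02, 0.1],
];
// Label: 1 if that test failed (caught a regression) for that change, else 0.
const labels = [1, 0, 1, 0];

async function trainSelectionModel() {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [3], units: 8, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));
  model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy' });

  const xs = tf.tensor2d(features);
  const ys = tf.tensor2d(labels, [labels.length, 1]);
  await model.fit(xs, ys, { epochs: 100, verbose: 0 });
  return model;
}
```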

This diagram shows how we aim to get there. During the training process, we have all of our historical code changes, which can be considered the database, and we run our tests against them, recording for every test whether it passed or failed. We then use that to train our test selection model. Once we start predicting, whenever new code changes come in, we use the test selection model to see which tests are likely to be useful for that specific code change, and that enables us to save time.

To talk more specifically about the training process: we train a model on features derived, through feature engineering, from previous code changes and the tests that have run historically. Whenever we apply this system to a new change, the learned model is applied to that particular code change, and the model predicts the likelihood of detecting a regression. Based on that estimate, we select the tests that are most likely to cover that change. That's how you can save the time it would otherwise take to run the full test suite.
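
Continuing the sketch, prediction time looks like this: derive the same features for each candidate test against the incoming change, score them with the learned model, and keep only the tests above a probability threshold. The feature definitions and field names here are hypothetical simplifications:

```js
const tf = require('@tensorflow/tfjs');

// Derive features for a (change, test) pair -- simplified, hypothetical features.
function extractFeatures(change, test) {
  return [
    pathOverlap(change.files, test.coveredFiles),
    test.historicalFailureRate,
    Math.min(change.linesChanged / 1000, 1), // crude change-size signal
  ];
}

// Fraction of changed files that this test is known to exercise.
function pathOverlap(changedFiles, coveredFiles) {
  const covered = new Set(coveredFiles);
  const hits = changedFiles.filter((f) => covered.has(f)).length;
  return changedFiles.length > 0 ? hits / changedFiles.length : 0;
}

// Keep only tests whose predicted regression-detection probability clears the bar.
async function selectTests(model, change, allTests, threshold = 0.5) {
  const rows = allTests.map((t) => extractFeatures(change, t));
  const probs = await model.predict(tf.tensor2d(rows)).data();
  return allTests.filter((_, i) => probs[i] >= threshold);
}
```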

This can be used directly with any JavaScript-based testing framework as well, including Jest, Jasmine, Mocha, and the rest, during the DevOps cycle, because it isn't limited to frontend or backend code; it falls under the umbrella of TestOps. When you're creating a build of your application and testing it before it's deployed to production, that's exactly where you could use predictive testing. And of course, for JavaScript too, we can use this kind of machine learning based system to select the most appropriate tests.
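
Hooking the selection into a CI step can be as simple as handing the chosen files to the runner, since Jest, Mocha, and Jasmine all accept explicit test file paths. A hypothetical glue script, reusing the selectTests helper from the sketch above:

```js
const { execSync } = require('child_process');

// Hypothetical CI step: run only the tests the model selected.
async function runSelectedTests(model, change, allTests) {
  const selected = await selectTests(model, change, allTests); // from the sketch above
  if (selected.length === 0) {
    // Safety net: fall back to Jest's own dependency-graph-based selection.
    execSync('npx jest --onlyChanged', { stdio: 'inherit' });
    return;
  }
  const files = selected.map((t) => t.file).join(' ');
  execSync(`npx jest ${files}`, { stdio: 'inherit' });
}
```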

But even within JavaScript, we can get much better integration with the testing frameworks I just mentioned. And we also have tools, for example Ponicode, that help with automated generation of unit test cases and give suggestions for unit tests based on the code they evaluate.

5. Integrating AI in JavaScript Testing

Short description:

There are readily available VS Code extensions for JavaScript and TypeScript that provide suggestions and generate unit tests. Selenium can also be used for JavaScript testing, introducing machine learning directly. Jest and Jasmine can benefit from regression testing improvements and unit test generation. AI-based tools are being used to generate automated tests, improve code coverage, and intelligently select tests. AI saves time in software testing. Connect with me on Twitter at Heartvalue or on GitHub at theShivaLamba. Thank you for watching!

And there are readily available VS Code extensions for JavaScript and TypeScript, where you can use these capabilities directly from Ponicode, and you'll get suggestions and can even generate unit tests altogether.

We can also use automation testing improvements within Selenium for JavaScript testing. That's another way you could introduce machine learning directly into JavaScript-based testing.

And frameworks such as Jest and Jasmine can easily benefit not only from the regression testing improvements, but also from the unit test generation that tools like Ponicode provide, used together with these testing frameworks.

With that in mind, that brings my talk to an end. Of course, a lot of this is still evolving, and has more to do with MLOps and TestOps, and we really want to see how these solutions can be used with the popular JavaScript-based testing frameworks. We're already seeing some real-world use: a lot of companies are using AI-based tools to generate automated tests, achieve better code coverage, and intelligently work out which tests to run, saving time and making the regression testing process better.

And it's just a matter of time before we can use these tools within our popular JavaScript testing frameworks as well. We already have some VS Code extensions and open-source tooling that can generate JavaScript-based test cases, and you can apply their suggestions to your code simply by letting these extensions or frameworks analyze it. That saves a lot of time when you're doing software testing, and it's what really makes AI so powerful in any field, not just software testing.

So that's pretty much it for this talk. I hope you liked it and picked up some new concepts. If you still have any questions, I'll be here for the Q&A. You can also connect with me later on Twitter at Heartvalue or on GitHub at theShivaLamba. And that's it from my end at TestJS Summit. Thank you so much for watching, and I hope to see everyone at next year's TestJS Summit. Thank you so much.

How are you doing? Yeah, I'm doing well. How are you? It's really great to see so many people coming in for TestJS Summit. I had a somewhat unique topic, but I'd love to share more and look at some of the questions as well. So thank you so much.

Yeah, for sure. Let's first take a look at the poll results. Did you have a chance to look at them, and what do you think about the results? I was a bit surprised that no one responded that they're using AI tools to automate their tasks.

Q&A

AI-based Testing and Auto-generation of Test Cases

Short description:

AI-based testing tools are still in their nascent stages, but we are seeing real-world usage. Companies rely on dedicated testers to ensure test-driven development. One question is about applying auto-generation of test cases to new products and projects. Machine learning requires datasets, and we can use historical code changes and test cases to generate test cases specific to relevant code changes.

What do you think about it? Yeah, especially because the title of the topic was about using AI to help with testing processes. And it's not really surprising, because a lot of these AI-based testing tools are still at a nascent stage; they're not as well established as some of the really great tools we have, for example in JavaScript, something like Cypress, or for that matter Jest. They're not yet tools being used by hundreds of thousands of developers. It's still very much a growing phase, but we are now seeing some real-world usage of AI-based testing, and that's what I tried to cover with a few examples in my talk. And looking at the results, a lot of companies today really depend on having dedicated QAs and software testers. We don't rely solely on the software developers; rather, these QAs and software testers work with the developers to ensure it's not just development, but truly test-driven development. That's what a lot of companies are focusing on today.

Yeah, that was impressive as well: the majority still works with dedicated testers.

So let's take a look at some questions now. The first one is from AIUSA: hey, machine learning needs data to learn. How can we apply auto-generation of test cases for new products/projects? Okay. So I'd answer this in two parts. The first is that, as you mentioned, machine learning definitely requires datasets. When I talked about integration and regression testing, what we're doing is looking at the past historical test cases that were written for a lot of different kinds of code. This is actually how GitHub Copilot was built too: today we're seeing the rise of Copilot, where you write a comment and it gives you an entire code sample, and it was trained by looking at all the open-source repositories on GitHub, across the different languages and scenarios that are out there. Similarly, in software testing we look at past historical code changes and the types of test cases that were applied to those specific changes. Through this, we can work out that for a particular type of code change there are particular test cases focused on it. For example, say we have a JavaScript file in a React application and we push some changes to one of the React components, something dedicated to links, using React Router. If we already know which test cases are focused on that type of change, we save a lot of time by removing the redundancy of running test cases that aren't relevant to that change. So if we can learn from historical code changes, we can generate unit test cases that are very specifically dedicated: if the machine is aware that the only change made is to a particular part of the code, and we've trained it to understand that, then the test cases it generates will apply only to that code change, and we'll be able to create very specific test cases and save a lot of time.
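
As a toy illustration of the idea in that answer: once history tells you which tests tend to catch issues for which kinds of changes, even a crude lookup narrows the suite. The keywords and test file names here are entirely made up:

```js
// Hypothetical associations mined from history: change keyword -> tests that
// have caught regressions for that kind of change before.
const historicalAssociations = {
  router: ['navigation.test.js', 'links.test.js'],
  api: ['fetch.test.js', 'error-handling.test.js'],
};

// Naive classification of an incoming change by file-path keyword.
function suggestTests(changedFiles) {
  const suggestions = new Set();
  for (const file of changedFiles) {
    for (const [keyword, tests] of Object.entries(historicalAssociations)) {
      if (file.includes(keyword)) tests.forEach((t) => suggestions.add(t));
    }
  }
  return [...suggestions];
}

console.log(suggestTests(['src/router/links.js']));
// -> ['navigation.test.js', 'links.test.js']
```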

Machine Learning and Test Case Suggestions

Short description:

I'm wondering if the machine could suggest removing test cases that are no longer valid. When it comes to expected failures due to intentional behavior changes, there is a degree of self-learning involved. We provide the right error reporting scenario to help the AI train on those test cases. AI can help us reach higher code coverage and save time by suggesting unit test cases and considering scenarios that humans may overlook. AI can make test case writing and maintenance easier and faster.

I'm wondering if we could reach a point where the machine would not just look at the code base and the change to see what needs to be added in terms of testing, but maybe also what should be removed. Did you think about that at some point? Test cases that are not valid anymore, you know?

Exactly. That makes a lot of sense. Let's say we've reached the point in our code cycle where some test cases have become redundant and we probably haven't used them in quite a long time; it could suggest removing them. It's something I haven't personally looked at, but it's definitely a good thought process to consider. Definitely. Okay. Awesome.

So we have another question here from Mark Michaelis about responding to change: how would AI deal with expected failures, where the behavior was changed by intention? Okay. Yeah, I guess this is a really good question. What I take from it is this: suppose we make a code change and we want the AI to work out what an expected test case for that change would be, but the change was intentional and the AI hasn't really seen that kind of code before; it hasn't been trained that way. In that case, there will be a degree of self-learning involved as we give it more and newer test cases to look at. Even with AI, there are always scenarios with edge cases that might make the model fail. The expected response there is to provide the right kind of error-reporting scenario and help the AI train on those kinds of test cases as well. It makes total sense. And as you said at the beginning, this isn't as mature as other techniques yet, so we have to teach the machine so that it can absorb more for us professionals and help us more in the future, right? Yeah.

So we have another one here from Carlos R: do you have any metric besides code coverage? From my experience, even almost 100% coverage doesn't guarantee that a product is really well tested. And I agree 100% with Carlos R. Yeah, absolutely, again a really great question. Beyond helping you get close to 100% code coverage, what we're really trying to do with AI, as I mentioned, is save the testers' time as well. Instead of writing manual test cases every time, you get a list of suggested unit test cases, or at least suggestions that for this particular code change, this might be a good test case to add. So the AI is not only generating test cases for you; at the same time it's looking for the kinds of cases a human might forget. A QA or software tester is thinking about which test cases to write, and AI can suggest potential cases that you might skip over or that slip your mind. That could be another metric to look at. Yeah, I think that makes a lot of sense. Yesterday in Vlad's talk with Ramon, I think it was, or I don't remember exactly his name, they mentioned that it's not just about execution time, but about how much easier and faster it makes writing and maintaining the tests. So I think AI can probably help a lot with that.

Ponicode: Generating Test Cases

Short description:

Ponicode is a platform that helps generate test cases for JavaScript and TypeScript functions. It analyzes the context of a function and suggests potential unit tests. It works like a pair-programming tool, providing suggestions that you can accept or reject.

This is where I'd like to mention Ponicode again. As I said, I mentioned it briefly in one of my slides. Ponicode is one such platform; it's a tool built for JavaScript and TypeScript. What it does is look at one of the functions you've written. For example, in Node.js you've created a simple function and exposed it with module.exports. It will look at the function, understand its context, and help you generate test cases, unit tests, automatically. It makes suggestions, and you can go ahead and select whichever is relevant to your case. Nothing is forced on you directly; it gives you suggestions to look at: these are potential test cases you could have. Yeah, I like to look at it like pair programming with Copilot or any other tool: it's like a pair that gives you suggestions, and you can accept them or not.
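
To make that concrete, here is the shape of the interaction: a plain exported function, and the kind of Jest suggestions such a tool might surface, including edge cases like zero or negatives that a human might skip. The generated test shown is illustrative, not actual Ponicode output:

```js
// isEven.js -- a simple exported function
function isEven(n) {
  return n % 2 === 0;
}
module.exports = { isEven };

// isEven.test.js -- the kind of suggestions such a tool might surface
const { isEven } = require('./isEven');

describe('isEven', () => {
  test('returns true for an even number', () => expect(isEven(4)).toBe(true));
  test('returns false for an odd number', () => expect(isEven(7)).toBe(false));
  test('treats zero as even', () => expect(isEven(0)).toBe(true));
  test('handles negative numbers', () => expect(isEven(-2)).toBe(true));
});
```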

AI Writing and Fixing Scripts

Short description:

If the AI can automatically write the scripts to find the error, shouldn't we program the AI to fix the error right away? GitHub Copilot helps autocomplete code and provide suggestions. It can be programmed to help fix bugs.

So we have another question here from Martijn, oh my gosh, it's live, playing a little bit of devil's advocate: if the AI can automatically write the scripts to find the error, shouldn't we program the AI to fix the error right away? That's a nice question. That's a really good question. Yeah, absolutely. I guess this is where GitHub Copilot is already doing that, right? You write a comment and Copilot gives you a suggestion for the code. So in a way you could say that yes, it has been programmed to help you autocomplete part of your code or write the remaining part of it. For example, you might come across a particular error in your code and then look at the solution Copilot provides. So it definitely makes sense that we could also create scripts that help you fix bugs. Definitely. Awesome. I agree 100%.

Predictive Testing for End-to-End and UI Testing

Short description:

You can use predictive testing for end-to-end testing, including browser-based UI testing and testing React components. It can also make automation testing with tools like Selenium more efficient. While currently more focused on unit test cases, tools are being developed for browser-based testing, UI testing, and regression testing. Ponicode and GitHub Copilot are recommended tools, with Copilot providing suggestions for test cases. The scripts generated by AI can vary, but they generally provide test case suggestions or blocks of code. Thank you, Shivaay, for your talk at TestJS Summit!

We have another one here from Jay Munro. Is this primarily for low-level UI tests, or possibly the integration layer? I'm not clear on whether this would be effective for end-to-end or front-end testing. I have to say that I've used GitHub Copilot for writing Cypress tests, and it sometimes knows exactly what I want, so that's my opinion. But I'd love to hear yours, Shivaay.

Definitely, again a really great question. This is where I'd say you can definitely use it for end-to-end testing. Whether you're doing browser-based UI testing, where tools are coming up that really understand how the browser interacts, or testing different React components and how they behave on the front end, you could definitely use it there as well. You could also use it to make automation testing, say with Selenium or similar tools, more efficient. So it really can help with end-to-end testing. Most of what we're seeing right now is on the lower-level side, helping with unit test cases, but tools are being developed that will help not only with unit tests but also with things such as browser-based testing, UI testing, and even regression testing. Tools are being created to cover every aspect of the testing cycle.

Yeah, I think it's all about training the models so they can learn how to write any kind of test, right? So, there's another one from J. Munro: is this a specific product? What VS Code plugin would you be looking for? Sure, as I just mentioned, Ponicode is a really great tool. They have a CLI and a VS Code extension. You could also, as Valmir mentioned, try GitHub Copilot. Although Copilot is primarily meant for suggesting code, if you're writing your own test cases it can help by giving suggestions for those too, as Valmir mentioned. So you can definitely use Copilot, which embeds directly into VS Code, and you can also use the VS Code extension for Ponicode. And there are a lot of other tools you can use as well, yeah.

Yeah, and for Copilot you might need to join the waiting list, but at some point it will come. There's another one: as I understand it, the AI creates some scripts. How easy are these scripts to read or check by humans? That's a great one. Yeah, I'd again break this down into two parts. One is that, yes, certain scripts might be prepared; but the other part is that you might just get suggestions about which test cases to use, or blocks of code that are the test cases themselves. With regard to the script part, I'm not 100% sure, because I haven't really worked on that side myself. What I've worked with is the generation of the code itself, focused on a particular function, so that you can directly map the test case to that function. It makes sense.

Shivaay, it was great having you here. Thank you very much for your talk. Absolutely, thank you so much for having me, Valmir, and thank you to the entire TestJS team.

You probably know the story. You’ve created a couple of tests, and since you are using Cypress, you’ve done this pretty quickly. Seems like nothing is stopping you, but then – failed test. It wasn’t the app, wasn’t an error, the test was… flaky? Well yes. Test design is important no matter what tool you will use, Cypress included. The good news is that Cypress has a couple of tools behind its belt that can help you out. Join me on my workshop, where I’ll guide you away from the valley of anti-patterns into the fields of evergreen, stable tests. We’ll talk about common mistakes when writing your test as well as debug and unveil underlying problems. All with the goal of avoiding flakiness, and designing stable test.