Revolutionizing JS Testing with AI: Unmasking the Future of Quality Assurance

"Revolutionizing JS Testing with AI: Unmasking the Future of Quality Assurance" is a forward-thinking talk that delves into the transformative power of AI in JavaScript testing. The presentation offers an enlightening exploration of AI Testing principles, practical applications, and future potential. By featuring AI-driven tools like Testim, ReTest, Datadog, and Applitools, this talk brings theory to life, demonstrating how AI can automate test case generation, optimize anomaly detection, and streamline visual regression testing. Attendees will also gain insights into the anticipated advancements in AI Testing for JavaScript. The talk concludes with a lively Q&A, inviting everyone to delve deeper into the world of AI and JavaScript testing. Be prepared to reimagine your QA process with AI!

20 min
11 Dec, 2023

Video Summary and Transcription

AI testing with generative AI is revolutionizing JS testing by automating test creation and improving software test processes. Key technologies like natural language processing and neural networks, as well as quality data, play a crucial role in AI testing. The benefits of AI testing include speed, efficiency, adaptability, bug detection, and limitless potential. Generated JavaScript tests can be tailored to different tools like Selenium, and there are popular ready-made tools available for automating test creation. AI tools like Datadog, recheck-web, and Applitools Eyes offer powerful capabilities for anomaly detection, visual regression testing, and codeless testing. The horizon for AI in testing continues to expand with evolving capabilities, and understanding AI's role in the testing revolution and machine learning is crucial for practical application and continuous learning.

1. Revolutionizing JS Testing with AI

Short description:

Hello everyone! My name is Rinaldi, and today I will be delivering a talk on revolutionizing JS testing with AI. AI, particularly generative AI, has been making big strides in changing the landscape of programming and testing. It has opened up opportunities for improvement within quality assurance. We will explore the growing trend of generative AI within testing, its potential to automate test creation, and how it revolutionizes the landscape. The role of machine learning in AI testing is to improve software test processes, prevent human error, and automate error detection based on history.

Hello everyone! My name is Rinaldi, and today I will be delivering a talk on revolutionizing JS testing with AI: unmasking the future of quality assurance. As you probably already know by now, AI, particularly generative AI, has been making big strides in changing the landscape of many kinds of programming, and JavaScript is just one of them. And it's not just the programming aspect, but the testing element as well. That has opened up a lot of opportunities for improvement within quality assurance, and that's what we're going to delve into today.

So without further ado, let's get straight into it. I'd just like to briefly introduce myself. I'm a software engineer at Seek. I hold all 13 AWS certifications, and I'm a subject matter expert for the AWS Solutions Architect Professional and AWS Data Analytics Specialty certifications. I'm an international speaker at over 30 events and conferences, and I enjoy all things AWS, open source, testing, and virtual reality.

So, diving into the topic directly, what is the main meat of what we want to get into today? Really, it's all about understanding the growing trend of generative AI within testing, because we've seen it being conducted more and more within the realm of testing. Nowadays you can not only generate new text or stories with generative AI, you can actually generate code, and generate tests for your code. It has so much potential. As mentioned before, this leads to new areas such as codeless creation of test cases, which in turn opens up test creation to anyone. It's no longer only those who are well versed in test creation who can do this; regular devs or even non-technical people can start looking into it and help out with test development too. In general, it's revolutionizing the landscape in a really big way.

What is the role of machine learning within AI testing? Firstly, we're using AI to improve software test processes. It's becoming an assistant we can work with, one that creates a template for us to build on. Aside from that, it helps us ensure that what we are doing is right. One of the most common problems in test case creation is human error. Introducing AI to the mix can help prevent that, and redirect our focus to making better, more error-proof tests. That is the power of generative AI. We also want to automate error detection based on history. That's another thing it can do for us: we can create an automated process where error handling and error checking is the norm, so that the AI can check against the history, flag potential errors, and provide better suggestions based on that.

2. Key Technologies and Data in AI Testing

Short description:

Aside from redefining quality assurance, AI testing involves key technologies like natural language processing, predictive analytics, and neural networks. The role of data is crucial as feeding quality data determines AI's performance. Fine-tuning solutions requires sufficient data.

Aside from that, it is also redefining how we perform quality assurance. As mentioned before, we can integrate it as part of our pipeline, so that the quality at each stage is assured by the checks the AI performs.

So what are the key technologies involved? To name a few: natural language processing, predictive analytics, and neural networks. Natural language processing, for example, is very important in this particular scenario because it determines how the text we put through is processed. That's why prompt engineering is such a big thing within AI: we want to make sure that we are actually giving it the right instructions instead of being vague. We're going to cover that a bit later as well.
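
To make that concrete, here is a minimal sketch of the difference between a vague prompt and a well-engineered one. The validateEmail function and the listed edge cases are hypothetical, purely for illustration:

```
Vague:    Write some tests for this code.

Specific: Write Jest unit tests in JavaScript for the validateEmail
          function below. Cover a valid address, an address missing
          the @ symbol, and empty input. Return only the test file.
```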

Aside from that, data plays a very big role in this, because feeding the AI quality data really determines how well it's going to perform. We have seen different providers, such as ChatGPT or Amazon's Bedrock models, perform well or badly based on the number of parameters they have and the data that was used to train them. So it really affects the outcome, and it's important to understand that. If you, for example, decide to look into fine-tuning your solutions, that's definitely a big consideration, because you want to make sure that you are fine-tuning on enough data and not just partial data.

3. Benefits of AI Testing and Using Generative AI

Short description:

AI testing brings speed, efficiency, adaptability, bug detection, and limitless potential. It can generate tests faster and refine them based on previous tests. AI helps developers account for edge cases and uncover subtle bugs. When using generative AI, specify the language, functionality, and include what you want to be tested. Be cautious when inserting code into generative AI and use APIs for safety.

So what are the benefits of AI testing? Firstly, speed, because tests can be run and validated rapidly. There is really limitless potential in the number of tests you can run, and you can generate them much faster than you could manually. Although the generated tests may not be correct or match your vision on the spot, the AI can then help refine them: you can ask it to refine what it created before, or you can develop it further manually yourself. That makes development much faster compared to coding tests one by one.

Aside from that, it also brings efficiency: doing more tests in less time reduces human effort. You're also able to get more coverage, which, as we know, is one of the most important aspects of JavaScript testing. As devs, one of the most important questions is: can we account for the edge cases, and how can we keep the code as free of errors as possible? With AI, we can plan for that better and cover the potential test cases we might not have thought of. That's why it's good to have it as an assistant.

Fourth, we also have adaptability: we can respond quickly to changes in code and functionality. We're going to see some examples of that later on, too. Finally, we have bug detection, because these tools are really great at uncovering subtle or non-obvious bugs. One of the best examples is visual regression testing, where AI can compare subtle differences between images, or within code. Really, the potential is limitless.

This is an example of how to phrase things specifically for a generative AI, and it really depends on which LLM you're using. It could be ChatGPT, it could be Amazon Bedrock. This particular one I put into Amazon Bedrock's Claude foundational model because I wanted to see how it performs, and it turned out it was able to do this well. So I'm providing this as a template to better understand how to structure a prompt to generate new test cases. We're essentially giving it instructions: we specify what language we're using, we specify what functionality we want, and we put through test cases. You can include existing test cases or not, it really depends. The key point is that instead of generically saying "generate me a suite of test cases based on this code", you want to make sure you also include what you want tested. For example, say you want to test whether a button on a webpage works: you'd say "webpage for button X", and in the description you'd say "I want to test that button X actually works properly". One caution I'd like to add: be mindful of just inserting code into generative AI. You should use the API instead of a public-facing interface. With something like ChatGPT's public interface, it's very risky and very ill advised to paste your code or any PII directly. If you have an in-house solution, or you use it through an API, your data will more likely be safe.
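
As a sketch of that advice, this is roughly what calling a Claude model through the Bedrock API with the AWS SDK for JavaScript v3 can look like, rather than pasting code into a public chat UI. The region, model ID, prompt wording, and sample function are my own illustrative assumptions, not from the talk:

```js
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Hypothetical code under test, inlined for the demo
const sourceCode = `export function isDiscounted(price, threshold) {
  return price < threshold;
}`;

// Claude's text-completion format expects a Human/Assistant dialogue
const prompt = `\n\nHuman: Generate Jest test cases in JavaScript for the
function below. Include edge cases around the threshold boundary.

${sourceCode}

Assistant:`;

const response = await client.send(
  new InvokeModelCommand({
    modelId: "anthropic.claude-v2",
    contentType: "application/json",
    body: JSON.stringify({ prompt, max_tokens_to_sample: 1024 }),
  })
);

// The response body is a byte stream containing a JSON payload
const { completion } = JSON.parse(new TextDecoder().decode(response.body));
console.log(completion); // the generated test suite
```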

4. Generating JavaScript Tests and Popular Tools

Short description:

When generating JavaScript tests, you can specify extra tokens like input, expected output, and special considerations. The tests can be tailored to different tools, such as Selenium. Popular tools like Amazon Bedrock and ChatGPT allow you to generate cases by providing code and instructions. We will also discuss other ready-made tools that automate your testing.

So that's just a quick caveat to consider. Aside from that, you're just adding extra tokens like input, expected output, and special considerations, as well as whatever constraints or conditions you want. From this, you're able to generate JavaScript tests in an easy way, and they can also be tailored to different tools.

For example, if you want to test with Selenium, it can do that as well; you just have to specify it in this particular scenario. This is just a template to show you how to provide the LLM with the proper considerations it needs.
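
A minimal sketch of what such a template could look like; the page, field values, and constraint below are hypothetical examples, not the speaker's actual template:

```
Language: JavaScript
Tool: Selenium WebDriver, run with Jest
Functionality: Login form on /login
Input: A valid username and password submitted through the form
Expected output: The user is redirected to /dashboard
Special considerations: The submit button stays disabled until both
fields are filled in; include a test for that constraint.

Generate a suite of test cases for the functionality above.
```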

So, a couple of popular tools include Amazon Bedrock and ChatGPT, which I was mentioning before. These are the ones I've been using a lot; I've been experimenting with their APIs and with how they perform. In general this is one way of generating cases: you put in your code, you put in the instructions, and you generate the cases based on that. But of course, we'll also talk about the other ready-made tools that help you automate your tests in an easier fashion. And here's one of them.

5. AI Tools for Testing and Monitoring

Short description:

So, Datadog is a powerful tool with AI capabilities for anomaly detection and continuous monitoring. Their Bits AI feature enables querying anomalies with plain text. recheck-web checks for small errors in code and performs visual regression testing. Applitools Eyes is a popular tool for visual regression testing. AI-powered codeless testing allows recording and comparing user interactions for testing purposes.

So, Datadog is one of the tools that has been around for a long time and has also developed AI capabilities. One of these is performing anomaly detection based on history. With it, you can directly obtain recommendations based on the graph you provide, and it will use previous anomalies to recommend indicators and patterns to watch out for based on that history. It's a really powerful tool to look into, and a great one to adopt as part of a continuous monitoring solution.

One thing I feel is worth mentioning as well is their Bits AI feature, a new component that enables you to query these kinds of anomalies and data with plain text. You don't need to navigate to a specific time period or a specific part of the graph; you can just ask, "has an anomaly happened in this particular segment?"
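
As a hedged sketch of adopting this in a continuous monitoring setup, this is roughly how you could create an anomaly-detection monitor with Datadog's official @datadog/datadog-api-client package. The metric, service tag, thresholds, and message are illustrative assumptions:

```js
import { client, v1 } from "@datadog/datadog-api-client";

// Reads DD_API_KEY and DD_APP_KEY from the environment
const configuration = client.createConfiguration();
const monitorsApi = new v1.MonitorsApi(configuration);

const monitor = await monitorsApi.createMonitor({
  body: {
    name: "Anomalous error rate on checkout service",
    type: "query alert",
    // anomalies() compares the live series against its learned history
    query:
      "avg(last_4h):anomalies(avg:trace.express.request.errors{service:checkout}, 'agile', 2) >= 1",
    message:
      "Error rate is deviating from its historical pattern. @slack-qa-alerts",
  },
});

console.log(`Created monitor ${monitor.id}`);
```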

Aside from that, recheck-web is another big tool that's currently being used a lot. One of the things it can do is check for small errors you might have made in your code that affect how the page is rendered. Normally a broken selector would simply break your test, but here it points at the broken part of your code while still rendering the page, on the assumption that the rest of the code is intact and not actually broken by the change. For example, in a selector like button.recommend-cert I accidentally typed cet instead of cert, but it was able to detect that particular fault and still visualize the page appropriately. There's also visual regression testing, and one of the biggest use cases right now is Applitools. They have Applitools Eyes, which performs visual regression to ensure that what you're currently testing is comparable with what it should be, and this really helps a lot. You can integrate the Selenium WebDriver with it and perform both functional and visual testing through the tool. This is just an example test case I wrote to work with Applitools Eyes: I'm essentially just calling eyes.open, eyes.check, and finally eyes.close. So this is a great way to integrate with Applitools Eyes and use it appropriately for this scenario.
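
A minimal sketch of that eyes.open / eyes.check / eyes.close flow with the @applitools/eyes-selenium SDK; the app name, test name, and URL are placeholders, and an APPLITOOLS_API_KEY is assumed to be configured in the environment:

```js
import { Builder } from "selenium-webdriver";
import { Eyes, Target } from "@applitools/eyes-selenium";

const driver = await new Builder().forBrowser("chrome").build();
const eyes = new Eyes();

try {
  // Start a visual test session for this app/test combination
  await eyes.open(driver, "My App", "Home page visual regression");

  await driver.get("https://example.com");

  // Capture the full window and compare it against the stored baseline
  await eyes.check("Home page", Target.window());

  // Close the session; this throws if visual differences were found
  await eyes.close();
} finally {
  await driver.quit();
  await eyes.abortIfNotClosed(); // clean up if the test ended early
}
```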

Aside from that, we also have AI-powered codeless testing. There's a really great tool I encountered recently, and this is just an example of how I used it. With this, I'm testing my website: I'm basically just trying to click a button, navigating from the home page by clicking "Track my CPEs". The best thing is that you can just record this on screen, and it follows you around and records every step you take. You can see it tracks that I click "Track my CPEs", click the activities, click "Add activity". I wanted to illustrate that "Add activity" functionality, and what the tool can do later is use this recording as a test and compare it against the behavior you currently have in your code, to ensure it's actually working properly. You can essentially just take this test and run it, for example on your staging environment, and what it'll do is spin up a Chrome driver, test whether everything works there, and close it appropriately; if it passes, it reports that it passes. And here's an example of how it works: I can play this recording of how it sets up the page, and after that it runs by itself. This is not me, this is the tool doing it on its own.
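
Conceptually, a recorded codeless test replays steps much like this hypothetical selenium-webdriver sketch; the staging URL, selectors, and step labels are placeholders modeled on the demo, not the tool's actual output:

```js
import { Builder, By, until } from "selenium-webdriver";

const driver = await new Builder().forBrowser("chrome").build();

try {
  await driver.get("https://staging.example.com");

  // Replay the recorded steps: home -> Track my CPEs -> Activities -> Add activity
  for (const label of ["Track my CPEs", "Activities", "Add activity"]) {
    const el = await driver.wait(
      until.elementLocated(By.xpath(`//*[text()='${label}']`)),
      5000
    );
    await el.click();
  }

  // Assert that the Add Activity form actually appeared
  await driver.wait(until.elementLocated(By.css("form#add-activity")), 5000);
  console.log("PASS: Add activity flow works");
} finally {
  await driver.quit();
}
```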

6. Automating Processes and Expanding Horizon for AI

Short description:

The codeless tool shown is by Testim. This is a brief case study where I was looking to automate processes, including anomaly detection and test case generation. Implementing AI tools like Datadog resulted in enhanced quality and reduced test maintenance. The horizon for AI continues to expand with evolving capabilities, addressing future needs in automation, test case generation, and security. Best practices include prioritizing quality data, balancing AI and human insight, implementing version control, fostering AI understanding, establishing instrumental integration, and staying updated with AI evolution. Understanding AI's role in the testing revolution and machine learning is crucial, along with practical application and continuous learning.

And that tool is by Testim. They're another tool that is really great to look into, and one I have been using as well for this scenario. So, here's a brief case study, more or less. I had a scenario where I was looking to automate processes, and I had the problem of automating anomaly detection and generating test cases better, because one of the things I was looking into was how to do what I'm currently doing more efficiently, and automate these kinds of processes so that we don't have errors popping up here and there.

One thing I'm sure a lot of testers here are quite familiar with: if you leave your backlog of errors or security risks alone for a while, they will quickly pile up if you're not keeping on top of your testing. So being able to automate is very important. As part of the implementation, I integrated AI tools such as Datadog for anomaly detection, and also Testim, as part of the workflow. What I got out of it was enhanced quality and reduced time needed for test maintenance.

So there's definitely going to be a continually expanding horizon for AI. Tomorrow there will be more potential growth areas with evolving capabilities; it's going to keep developing as we see it right now. We're going to keep anticipating the needs of future JavaScript applications, because there will be many, and as AI develops it will keep proving its potential to help address future needs, including automation, test case generation, and security. There's just a lot it's able to do. And of course, automate, automate, automate, because if you can, why not? It makes your life easier as a tester and as a developer.

So, some best practices and patterns: prioritize quality data, balance AI and human insight, implement version control, foster AI understanding, establish instrumental integration, keep abreast of AI evolution, manage test data, and apply continuous monitoring and analysis. Some key takeaways: it's important to understand that AI is very much a testing revolution, and that machine learning plays a pivotal role. Understand how to integrate tools appropriately, as I mentioned before with the four examples I provided, not excluding ChatGPT and Amazon Bedrock, which you can definitely use as tools as well. Practical application is very important, as is understanding the future outlook, and seeing this as empowerment through knowledge, because it's a continuous learning process. And that is all from me. Thank you again so much, everyone, for listening to this session. I'm happy to take any questions after the session as well.

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up, or help you build a stellar career

TestJS Summit 2021
33 min
Network Requests with Cypress
Top Content
Whether you're testing your UI or API, Cypress gives you all the tools needed to work with and manage network requests. This intermediate-level talk demonstrates how to use the cy.request and cy.intercept commands to execute, spy on, and stub network requests while testing your application in the browser. Learn how the commands work as well as use cases for each, including best practices for testing and mocking your network requests.
TestJS Summit 2021
38 min
Testing Pyramid Makes Little Sense, What We Can Use Instead
Top Content
Featured Video
The testing pyramid - the canonical shape of tests that defined what types of tests we need to write to make sure the app works - is ... obsolete. In this presentation, Roman Sandler and Gleb Bahmutov argue which testing shape works better for today's web applications.
TestJS Summit 2022
27 min
Full-Circle Testing With Cypress
Top Content
Cypress has taken the world by storm by bringing an easy-to-use tool for end-to-end testing. Its capabilities have proven to be useful for creating stable tests for frontend applications. But end-to-end testing is just a small part of testing efforts. What about your API? What about your components? Well, in my talk I would like to show you how we can start with end-to-end tests, go deeper with component testing, and then move up to testing our API, coming full circle.
TestJS Summit 2021
31 min
Test Effective Development
Top Content
Developers want to sleep tight knowing they didn't break production. Companies want to be efficient in order to meet their customer needs faster and to gain competitive advantage sooner. We ALL want to be cost effective... or shall I say... TEST EFFECTIVE! But how do we do that? Does the "unit" and "integration" terminology serve us right? Or is it time for a change? When should we use either strategy to maximize our "test effectiveness"? In this talk I'll show you a brand new way to think about cost-effective testing with new strategies and new testing terms! It's time to go DEEPER!

Workshops on related topic

React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Top Content
Featured Workshop
React Testing Library is a great framework for React component tests because there are a lot of questions it answers for you, so you don’t need to worry about those questions. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents:
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn't be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites:
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
DevOps.js Conf 2024
163 min
AI on Demand: Serverless AI
Top Content
Featured Workshop
Free
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
TestJS Summit 2022
146 min
How to Start With Cypress
Featured Workshop
Free
The web has evolved. Finally, testing has also. Cypress is a modern testing tool that answers the testing needs of modern web applications. It has been gaining a lot of traction in the last couple of years, gaining worldwide popularity. If you have been waiting to learn Cypress, wait no more! Filip Hric will guide you through the first steps on how to start using Cypress and set up a project on your own. The good news is, learning Cypress is incredibly easy. You'll write your first test in no time, and then you'll discover how to write a full end-to-end test for a modern web application. You'll learn the core concepts like retry-ability. Discover how to work and interact with your application and learn how to combine API and UI tests. Throughout this whole workshop, we will write code and do practical exercises. You will leave with a hands-on experience that you can translate to your own project.
React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
Top Content
Workshop
Free
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop
TestJS Summit 2023
48 min
API Testing with Postman Workshop
Top Content
Workshop
Free
In the ever-evolving landscape of software development, ensuring the reliability and functionality of APIs has become paramount. "API Testing with Postman" is a comprehensive workshop designed to equip participants with the knowledge and skills needed to excel in API testing using Postman, a powerful tool widely adopted by professionals in the field. This workshop delves into the fundamentals of API testing, progresses to advanced testing techniques, and explores automation, performance testing, and multi-protocol support, providing attendees with a holistic understanding of API testing with Postman.
1. Welcome to Postman
- Explaining the Postman User Interface (UI)
2. Workspace and Collections Collaboration
- Understanding Workspaces and their role in collaboration
- Exploring the concept of Collections for organizing and executing API requests
3. Introduction to API Testing
- Covering the basics of API testing and its significance
4. Variable Management
- Managing environment, global, and collection variables
- Utilizing scripting snippets for dynamic data
5. Building Testing Workflows
- Creating effective testing workflows for comprehensive testing
- Utilizing the Collection Runner for test execution
- Introduction to Postbot for automated testing
6. Advanced Testing
- Contract Testing for ensuring API contracts
- Using Mock Servers for effective testing
- Maximizing productivity with Collection/Workspace templates
- Integration Testing and Regression Testing strategies
7. Automation with Postman
- Leveraging the Postman CLI for automation
- Scheduled Runs for regular testing
- Integrating Postman into CI/CD pipelines
8. Performance Testing
- Demonstrating performance testing capabilities (showing the desktop client)
- Synchronizing tests with VS Code for streamlined development
9. Exploring Advanced Features
- Working with Multiple Protocols: GraphQL, gRPC, and more
Join us for this workshop to unlock the full potential of Postman for API testing, streamline your testing processes, and enhance the quality and reliability of your software. Whether you're a beginner or an experienced tester, this workshop will equip you with the skills needed to excel in API testing with Postman.
React Advanced Conference 2023
98 min
Working With OpenAI and Prompt Engineering for React Developers
Top Content
Workshop
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP, which became PowerModeAI (a customer-facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises covering topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered customer-facing application
- Fine tuning and embeddings
- Emerging best practice on LLMOps