Rise of the Robots


Discover the future of automated mobile application testing with a JavaScript-powered mechanical arm. During this talk we will explore the design, prototyping, and implementation of this cutting-edge solution, optimizing testing efficiency and precision on real mobile devices. We will also discuss the challenges of building a hardware solution for the real world, and how to overcome them.

27 min
07 Dec, 2023

AI Generated Video Summary

This Talk discusses the possibility of robots taking over based on Asimov's three laws of robotics. It explores the use of automation robots for testing, including building and controlling them. The Talk also covers implementing interfaces, conducting math game challenges, and the capabilities of automation testing. It addresses questions about responsive design, camera attachment, and the future roadmap. The affordability of the setup and the potential for future automation are also discussed, along with a rapid fire Q&A session.

1. Introduction to the Question of Robots Taking Over

Short description:

Hi, everyone. I'm Theodor, a software engineer and founder of Proxima Analytics. Let's discuss if robots will take over based on Isaac Asimov's three laws of robotics.

Hi, everyone. I hope you're enjoying the conference so far. Okay. So I'm Theodor. I'm a software engineer based in Athens, Greece. I'm also the founder of Proxima Analytics, an open source, ethical-first analytics platform that you should definitely check out. You can also find me online under the alias wordless. So if there's one question we can ask for 2023, it's this: are robots actually going to take over? Is your fridge going to take you hostage? Are we going to lose our jobs as software engineers? And truth be told, this question is rather old. This is Isaac Asimov, one of the most famous science fiction authors. In one of his books, I, Robot, in 1953 I think, he came up with the three laws of robotics. Basically, this is a manual for when robots actually take over. And today, we're going to challenge this question and try to find out if it's true or not.

2. Exploring Automation Robots for Testing

Short description:

In 2018, while working for a company, I experimented with different aspects of software engineering. Mobile end-to-end testing is challenging due to sandboxed applications and external interruptions. I had the idea to build a robot for automating tests on real devices. There are three categories of automation robots based on movement: Cartesian robots, robotic arms, and delta robots.

So fast forward, in 2018, I was working for a company. And besides that, I was also trying to experiment with different aspects of software engineering, trying to mix up things like 3D printing, electronics, and stuff.

And I was working for a company where we had three major products. Like, the first one was a web application and two mobile applications, one for Android and another one for iOS. And I can really tell you this. So mobile end-to-end testing is pretty damn hard. It really is.

So if you have ever tried to run end-to-end tests on mobile applications, you're basically stuck with the emulator. On the other hand, we also need to test mobile applications on real devices, right? But it's quite tricky, actually. For security reasons, most applications are sandboxed. That means we cannot actually test how our applications interact with the operating system. We cannot test sharing links between applications. We cannot test a workflow where we want to authenticate users using the email client on the mobile phone.

Moreover, we have external interruptions like phone calls. Mobile phones are like living organisms. We have phone calls, notifications, push notifications, and so on. And then I had that weird idea: what if we could actually try to build a robot for automating tests on real devices? And you know what? I know most people would think of something scary or big, like the Terminator. But truth be told, we can categorize automation robots into three big categories based on their movement.

The first category is called Cartesian robots. We have three axes, and the actuator moves through three-dimensional space using belts and wheels. 3D printers and CNC machines work in that way. But in our case it does not work, because you have a limited area to work with, and the movement does not feel that natural.

Next up, we have robotic arms, which are basically made of separate motors and three or four attached parts. They are an industry standard for car manufacturing and medical operations as well. But they're pretty hard to operate. Moreover, they're pretty expensive and mostly used for repetitive tasks. So our pick for today is called the delta robot. We have a base on the top, with three or four motors attached.

3. Building the Robot with Tapster and Johnny-Five

Short description:

We have pairs of arms connected to the actuator, which are easy to operate. The movement is fast, efficient, and natural. Tapster, founded by Jason Huggins, offers open-source devices like the Tapster Bot. Margaret is a real device built based on Tapster's design. JavaScript is a great option for programming microcontrollers with frameworks like DeviceScript, Johnny-Five, NodeBots, and Espruino. Johnny-Five acts as a proxy for the Arduino Nano microcontroller, allowing commands to be executed by the robot.

And as you can see, we have pairs of arms that are connected to the actuator. They're pretty easy to operate. Basically, we can use basic trigonometry in order to calculate the position and move the robot along. And the movement is fast, efficient, and really natural.
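The "basic trigonometry" mentioned above can be illustrated with a much simpler case than a full delta robot: a planar two-link arm solved with the law of cosines. This is only a sketch of the idea; the link lengths and target point are hypothetical, and a real delta robot solves three such arms at once.

```javascript
// Planar two-link inverse kinematics via the law of cosines --
// an illustration of the trigonometry involved, not the full
// delta-robot solution.
function inverseKinematics(x, y, l1, l2) {
  const d = Math.hypot(x, y); // distance from shoulder to target
  if (d > l1 + l2) throw new Error("target out of reach");
  // Elbow angle from the law of cosines
  const elbow = Math.acos((d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2));
  // Shoulder angle: direction to target minus the offset of link 2
  const shoulder =
    Math.atan2(y, x) -
    Math.atan2(l2 * Math.sin(elbow), l1 + l2 * Math.cos(elbow));
  return { shoulder, elbow }; // angles in radians
}
```

Feeding a target straight ahead at full extension, for example, yields both angles at zero, which is a quick sanity check for a formulation like this.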

But how can we actually build that kind of device, right? Luckily for us, there is a company called Tapster, founded by Jason Huggins, who is also one of the creators of Appium and Selenium. Two of their devices are actually open source. This is called the Tapster Bot; this is version one. And based on his design, I forked the entire setup and the code, and today I'm going to present you a real device. So this is Margaret, as you can see here. Thanks so much. By the way, this whole setup runs inside my browser, so this is a live feed here. As you can see, we have arms attached here. Most of this part uses bolts, nuts, and screws, and 3D printed parts as well.

The brain and heart of this device is a tiny microcontroller. But you might be asking: where the hell is JavaScript? This is a JavaScript conference, right? Truth be told, JavaScript is a really good option if you want to program microcontrollers. There are a bunch of frameworks out there, like DeviceScript, Johnny-Five, NodeBots, Espruino, you name it. And JavaScript is a really good fit because it handles events and callbacks naturally, which is exactly what you do with a device like this one. In our case, we're going to use Johnny-Five.

The brain and heart of the robot is actually an Arduino Nano microcontroller, and Johnny-Five works as a proxy. The device is connected to my workstation, so we can run a Node.js server in the backend and send commands that the robot executes. This is a basic class that allows us to run a really basic robot. We first have to instantiate the Johnny-Five board. The microcontroller has labeled pins, so we can say that we're running three servo motors on pins two, three, and four. Then, when the board gets instantiated, we can run the actual class.
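A minimal sketch of that setup with Johnny-Five could look like the following. The pin numbers come from the talk; the helper function and overall shape are assumptions, not the speaker's actual code.

```javascript
// Sketch of the Johnny-Five setup described above.
const SERVO_PINS = [2, 3, 4]; // the three servo pins mentioned in the talk

// Clamp a requested angle to the servo's safe range -- a pure helper
// we can exercise without any hardware attached.
function clampAngle(deg, min = 0, max = 180) {
  return Math.min(max, Math.max(min, deg));
}

function startRobot() {
  // Required lazily so the sketch can be read without a board present.
  const { Board, Servo } = require("johnny-five");
  const board = new Board(); // connects to the Arduino Nano over serial
  board.on("ready", () => {
    const servos = SERVO_PINS.map((pin) => new Servo({ pin, range: [0, 180] }));
    // Center all three arms on startup
    servos.forEach((s) => s.to(clampAngle(90)));
  });
}
```

With a board plugged in, calling `startRobot()` would home the arms; everything after `ready` is where the robot class from the talk would live.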

4. Interface Implementation and Robot Control

Short description:

And here is the basic interface implementation for tapping, moving, swiping, and resetting the device. With a Node.js server in the backend, we have the flexibility to expose REST APIs or web sockets. The robot can be controlled through the x, y, and z-axis, and even perform dance moves. Calibration is necessary to determine the mobile phone's position, achieved through a simple web interface. Swiping involves touching the screen, dragging to a point, and releasing. The implementation includes a Tinder-like UI for selecting options. With the Node.js server, we have various options for controlling the robot.

And here is the very basic interface implementation of doing so. You have the initialization function, and then some helper functions like tapping, moving, swiping, and resetting the device. And I told you that this is live, right? So since we have a Node.js server in the backend, we can do whatever we want to.

So here I have... We can expose REST APIs or even web sockets. So here, as you can see, I can directly move the robot through the x, y, and z-axis as well. Or I can actually make it dance for a while. The basic movement is more or less sending commands and coordinates, like go to point 0, 0, 0, and so on. Let me just unfocus this one.

So if you think about it, we can move the robot just by sending coordinates within an interval. If we're talking about taps, tapping the actual device is just about lowering the z-axis to 0, touching the actual device at the bottom of the robot. And here's the basic implementation of doing so. But there is one catch: because the device does not know where the mobile phone is, we need to calibrate it and make the robot aware of the phone's position. In order to do so, I have created a really simple web interface. We can start the calibration process: the robot lowers down to the device, and when the touchscreen gets touched, it sends a command back, so we can capture pinpoints and find out where everything is.
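Those calibration pinpoints can then be turned into a screen-to-robot mapping. A minimal sketch, under the simplifying assumption that the axes are aligned so a per-axis scale and offset suffice (a real calibration would handle rotation too):

```javascript
// Turn two calibration pinpoints (where the robot was vs. the screen
// coordinate the touchscreen reported) into a screen->robot mapping.
function makeScreenToRobot(p1, p2) {
  // Each point: { screen: {x, y}, robot: {x, y} }
  const sx = (p2.robot.x - p1.robot.x) / (p2.screen.x - p1.screen.x);
  const sy = (p2.robot.y - p1.robot.y) / (p2.screen.y - p1.screen.y);
  // Returned function converts a screen pixel to robot coordinates
  return ({ x, y }) => ({
    x: p1.robot.x + (x - p1.screen.x) * sx,
    y: p1.robot.y + (y - p1.screen.y) * sy,
  });
}
```

Once calibrated, "tap the login button" reduces to converting the button's pixel position through this mapping and lowering the z-axis there.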

Now let's talk about swipes. Now that the robot is calibrated, swiping is just touching the screen at point A, dragging to point B, and releasing. To show this, I have recreated the Tinder UI, you know, the one with the cards, so we can start picking our mate for today. Cool. Let me unfocus this one. Okay. So that's everything we actually need: we have taps, and we have swipes. Since we have the Node.js server in the backend, we can do whatever we want. We can drive it with the REST API, we can use a WebSocket, and we can also use Appium.
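A swipe along these lines can be sketched as interpolating between the two points while the stylus stays down, then lifting to release. The `robot.moveTo`/`robot.reset` methods, step count, and delay are assumptions based on the interface described above, not the talk's exact code.

```javascript
// Points along a straight line from A to B; z = 0 keeps the stylus
// pressed against the glass while dragging.
function swipePath(a, b, steps = 10) {
  return Array.from({ length: steps + 1 }, (_, i) => ({
    x: a.x + ((b.x - a.x) * i) / steps,
    y: a.y + ((b.y - a.y) * i) / steps,
    z: 0,
  }));
}

// Issue the path as timed commands, then lift the stylus to release.
async function swipe(robot, a, b, delayMs = 20) {
  for (const point of swipePath(a, b)) {
    robot.moveTo(point);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  robot.reset(); // raise the z-axis to end the touch
}
```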

5. Interactive Math Game Challenge

Short description:

In this example, we're just instantiating the Appium JS SDK and sending commands back to the web server. We're going to challenge the real robot and have a simple math game where you can compete against the robot for a chance to win prizes. The game involves solving equations as fast as possible. Get ready to play!

In this example, we're just instantiating the Appium JS SDK, and then sending commands directly back to the web server. So everything is in place, and we can start testing our application.
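One hedged sketch of that bridge: ask the UI-automation client for an element's bounding rectangle, compute its centre, and forward the tap to the robot's Node.js server. The `/tap` endpoint and URL are invented for illustration; they are not the talk's actual API.

```javascript
// Centre of an element's bounding rect, as reported by a UI-automation
// client (e.g. the Appium/WebdriverIO client mentioned above).
function elementCenter(rect) {
  return { x: rect.x + rect.width / 2, y: rect.y + rect.height / 2 };
}

// Forward the tap to the robot's backend. Endpoint is hypothetical.
async function tapElement(rect, robotUrl = "http://localhost:3000") {
  const { x, y } = elementCenter(rect);
  await fetch(`${robotUrl}/tap`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ x, y }),
  });
}
```

The robot server would run the pixel coordinates through its calibration mapping before moving the arms.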

But I know that you want the full-fledged demo, right? But as I told you, today we're gonna challenge the real robot. So I want everyone in this room or remotely to pick up your phones. That's a new one for a conference, right? So you're gonna compete against the robot.

We have a really simple game here. We also have two big prizes, right? If the robot wins, we're doomed; try to find a bunker, try to run away. But if one of you wins, there's a full-year sponsorship by Egghead, one full year, as well as a copy of Node.js Design Patterns by Luciano Mammino. I provided this one for one lucky winner. And here's the game.

You're gonna compete in math against the robot, live in this room. You can go to dab.sh slash play. Just pick a username to get in. Please, please, people, be kind. And once everyone gets logged in, we can start playing. The game is really simple: you have an equation, a mathematical addition, and you have to pick the right answer as fast as you can. Time matters here. Okay, I'm just gonna give you ten seconds or so. The whole game will last about one minute. Everyone set? Everyone set? Are we good? Okay. So, are you ready? Just one minute. And you can actually start playing. Okay. It's hard, right? Okay. Almost there.

6. Automation Testing Capabilities and Conclusion

Short description:

Five. Four. And we're done. Congratulations to Rahul, Mikhail, and the robot. We can test deep linking, sharing, authentication flows, run the device 24/7 on different devices, test push notifications and interactions with the operating system. We can attach a camera to detect what the robot touches or swipes, get meaningful metrics, replicate bug scenarios, stress test the application, and train our own AI models. Thank you for your time and patience.

Five. Four. And we're done. Hands off the phones. Everyone. Okay. Let's see how that went. Are you ready for the results? Yeah. Let's do that. Chatterbox Coder, is that someone else? No? Okay.

In the application, we can find your ID that you can send me over through a DM. Congratulations to Rahul, Mikhail, and the robot, actually. But really good job. Actually, Chatterbox nailed it.

Okay. So, this is a wrap. Now let's see what else we can do with a setup like this one. Since we have a full-fledged automation system, we can test deep linking, sharing, authentication flows, as I told you before. We can run the device 24/7, and on different devices, even iPads, depending on the actual base, so we can scale the robot up and down as we want. We can test push notifications and interactions with the operating system.

But there is also more. For example, we can attach a camera and have a live feed in order to detect what the robot touches or swipes. We can get really meaningful metrics, like how much time our application takes to load. This is a really interesting one, because we can take metrics about user behavior or analytics. And because the coordinates and the movement are just matrices of numbers over time, we can effectively replicate bug scenarios or run through workflows. We can also stress test the application, fiercely tapping the screen and so on. And finally, since the movements can be described really well, we can train our own AI models and auto-generate user paths and workflows to provide automation testing as well.
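Since the movement really is just coordinates over time, replaying a recorded bug scenario can be sketched as re-issuing timestamped samples with their original rhythm. The sample format and the `robot.moveTo` method here are assumptions, not the talk's actual recording format.

```javascript
// Milliseconds to wait before each command, preserving the recorded
// rhythm. Samples are assumed to look like { t, x, y, z } with t in ms.
function replayDelays(samples) {
  return samples.map((s, i) => (i === 0 ? 0 : s.t - samples[i - 1].t));
}

// Re-issue a recorded movement against the robot with original timing.
async function replay(robot, samples) {
  const delays = replayDelays(samples);
  for (let i = 0; i < samples.length; i++) {
    await new Promise((resolve) => setTimeout(resolve, delays[i]));
    robot.moveTo(samples[i]);
  }
}
```

The same recordings would double as training data for the auto-generated user paths mentioned above.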

So, that's all from me now. I would like to thank you so much for your time and patience.

Q&A

Questions and Testing

Short description:

And I hope you enjoyed the conference so far. Let's talk about some of these questions. The next question asks about axes and responsive design. If it's a larger device, updating the test is not necessary, because we can interact with the web interface and easily accommodate different designs. Tapster Robotics has created a base that can rotate and tilt the device for testing the gyroscope. Testing push notifications can be done by interacting with a web interface and using locators.

And I hope you enjoyed the conference so far. Let's give him a round of applause. Let's see the questions.

Before we do the questions, I have a confession to make. I feel really bad about this as well. I am Chatterbox Coder, and I realized halfway through that if you just quickly go through questions and get them wrong, it still speeds up the next question. So I was just doing maths on the first two digits, carrying the one, and seeing if it worked. So I think Rahul should get the prize. Wherever Rahul is, wave your hands. Give them a round of applause wherever they are, and if they're at home, give Theodor a DM.

All right. That's pretty fair for you, right? Yeah. Okay, so let's talk about some of these questions. We'll save the first question until last, but the next question asks about axes: if it's a larger device with responsive design, will you need to update the test? Probably not, because in my example I'm interacting with a web interface, so you can place locators and identify the coordinates of everything. Also, when the device boots up, we get information about the viewport and the actual device screen, so it's pretty easy to accommodate different designs as well. Tapster Robotics has also created a base that can rotate the device and tilt it, in order to test the gyroscope and things like that. So no.

Ah, well, thank you. I think that's so interesting how you start with a really simple thing, you're like, oh, can it do this? And you add on the features. It's just like a real product you're developing. That's really cool. All right. I'm going to scroll down to the next question. We will go through some of the cheeky ones later, don't worry, I'll save them for later. But how do you test push notifications? Can we make the robot know that a push notification has been received, etc? Sure. So, in this example, I'm just interacting with a web interface, like placing locators and so on.

Camera Attachment and Robot Speed

Short description:

Tapster offers a unique way to attach a camera for image comparison and detecting notifications. The robot's speed depends on movement and precise calibration. JavaScript is a good option for IoT with microcontrollers and sensors. Special software is not needed, making experimentation straightforward.

Tapster has actually provided a way to attach a camera in order to identify what's on the screen, and also use a dongle to screencast the results. This allows for image comparison and the ability to detect notifications or receive callbacks when push notifications appear. It's a unique and impressive use of the camera.

Another question is about the speed of the robot and how it affects the duration of CI pipelines compared to a software-only solution. The speed of the robot depends on the movement and precise calibration. It can move quite fast for consecutive taps in different places, as demonstrated by Tapster. They even have a demo of playing Flappy Bird, where the robot taps the screen quickly. It's impressive that the robot can capture images and move faster than most humans.

For JavaScript developers interested in IoT, it depends on the requirements. C++ is suitable for full-fledged IoT applications, but JavaScript is also a good option when attaching sensors or buttons. Johnny-Five works as a proxy and makes it easy to use JavaScript with microcontrollers. Espruino and DeviceScript are options for standalone applications. Ultimately, it depends on the individual's preferences and goals.

One advantage of using JavaScript for IoT is that special software is not required. There are different IDEs that can run inside a browser, providing live updates instead of a compilation step. This makes experimenting with Arduino clones and LEDs straightforward and accessible.

JavaScript for IoT

Short description:

JavaScript is a good option for IoT devices with sensors or buttons. Johnny-Five works as a proxy, making it easy to use. Espruino and DeviceScript are suitable for standalone applications. You can experiment with a simple Arduino clone and LEDs without needing special software. You can run different IDEs inside your browser for live updates.

Someone asked: do you recommend using JavaScript for IoT, given that an IoT device has less memory and most devs prefer C++? First, let's talk about what you prefer, and then let's give some advice for anyone who wants to get into it with JavaScript. Okay, so it really depends. If you want to build a full-fledged IoT application, sure, C++ makes total sense. On the other hand, JavaScript is actually pretty good, because a bare microcontroller is just a tiny computational unit, but once you start attaching sensors, buttons, and so on, JavaScript is a really good option. Johnny-Five works as a proxy, so it's pretty easy to use from a workstation. If you want to build standalone applications, Espruino and DeviceScript are really made for that. So whatever works for you, that's the proper answer. But if you want to experiment, you can spend a couple of bucks, get a simple Arduino clone and some LEDs, and it's pretty straightforward. The other interesting part is that you don't need special software to do so. For example, there are different IDEs that you can run inside your browser, which is pretty cool, because you get live updates instead of a compilation step and so on. That makes sense. Thank you, thank you.

Multi-touch, Future Roadmap, and Camera Sensors

Short description:

We've received questions about multi-touch, the future product roadmap, and implementing camera sensors. The current method of touch input involves using a stylus that can be attached to the microcontroller. Additional features, such as swiping, can be achieved by attaching multiple styluses. When implementing camera sensors, the approach depends on the desired outcome. For mobile applications, events alone cannot be used for assertions. Image comparison and locators are alternative methods. The game played during the demonstration utilized a web interface and WebSocket communication. The flexibility of using familiar technologies and APIs makes this setup accessible. The approximate cost of the setup is around $40 to $50, depending on the availability of a 3D printer.

We've got a bunch of different questions that all really are the same thing, which is about multi-touch. So, zoom, pinching, and different things. What's the future product roadmap, I guess? That's the actual question.

Right now, in order to touch the screen, we have a stylus. Basically, it's a phone pen, a cheap one, like a dollar or so, that's grounded to the microcontroller. You can attach whatever you want: if you attach a pen, that's a plotter; if you attach a hotend, that's a 3D printer. So you could attach, say, two styluses, like two fingers, in order to create swipes and so on. It's based on your imagination and what you are capable of doing.

Nice, nice. We've got another question, and a lot of these are people seeing that this is great, wanting to use it, and asking about extra features. I think the next one is in that vein, especially since you talked about the camera sensor. If you build a camera sensor, this person is asking about making sure that the next screen that appears after an action is what is expected. And I know you spoke about the camera and the fact that you can use AI. How would you go about implementing that? So, basically, it depends on what you're trying to achieve. If you're running native mobile applications, you can't rely on events alone to make your assertions: here is a screen, I have swiped, here is another screen, the application raised an event, so I can assert that everything works as expected. You can also use image comparison, or you can use locators; it's up to you how you want to handle it. For example, for the game you just played, this is a web interface, so every time I had to make an action, I was directly sending commands back to the workstation using a WebSocket. Nice, nice, nice. And I love that because all of this is JavaScript and technologies we're familiar with, all the other APIs and knowledge we have is available to use with it. By the way, I just want to say that the entire cost for this setup is more or less 40 to 50 bucks, probably less if you have a 3D printer.

Affordability, Future Automation, and Rapid Fire

Short description:

All the components are widely available and affordable. The price is around 40 to 50 bucks. Tapster offers a setup using a customized dongle to control the phone using accessibility settings. In the future, automation of test case updates may depend on the setup and the ability to identify dynamic interactions. It's an 'it depends' situation, so feel free to have a chat with the speaker. Now, let's move on to a rapid fire of yes or no answers.

But all the components are widely available. Like the servo motors are pretty plain, old-school motors for remote-controlled cars. So yeah, it's pretty affordable if you want to do so, if you want to build something like that.

Oh, cool. You kind of preempted my next question, which was about the price. Yeah, it's more or less 40 or 50 bucks. The cool thing is that right now we have more ways of doing this. For example, Tapster were working on a setup that uses a customized dongle to control the phone through the accessibility settings. They're tricking the phone into thinking it's a keyboard and a mouse, so you can even run a similar setup with an Arduino or a Raspberry Pi for way less, I think more or less five bucks or so. So yeah, definitely check them out.

Now that is really, really awesome. And the last of the serious questions before we get to the fun ones that I know everybody is waiting for. This one is about the application UI changing: every time it does, you're going to need to update your test cases, and quite manually at this point in time, I'm guessing. Is there a future you envision where it could be easier to automate changes to the tests? I think that relies on the setup you actually have. As I said before about responsive design, if you're running tests on the web interface, using Cypress or Playwright, you can say: hey, go and click this position on the screen. That's pretty static. But if you somehow have a dynamic way of identifying the interactions you want to make, that makes the whole setup much more flexible. Awesome. Awesome. And one thing I like about this answer is that everything is an "it depends". Honestly, what I would say is: afterwards, come find him and have a chat. I'm guessing you'd love to talk about it, considering you've come on stage and chatted about it with all of us. So thank you. Thank you so much. All right. Now what we're going to do is jump into a rapid fire of yes-or-no answers for the next questions. Is that okay? Yeah, that's fair.

Q&A - Skynet, Robot Captcha, Tinder Source Code

Short description:

Will Skynet kill us in 10 years? Yes. Can the robot solve the "I am not a robot" captcha? Yes, it's pretty amazing. Have you used the robot for Tinder? No. Can you share the source code for Tinder? Yes, it's open source on the Tapster GitHub repo.

That's fair. Okay. That's pretty exciting. All right. So we'll start with the first one. Yes or no. Will Skynet kill us? In 10 years or so? Yes or no? Yes. In 10 years. Cool. Nice.

Next one. Can it solve the "I am not a robot" captcha? Actually, yes. We have tried this one, and it's pretty amazing, because they do have a bug there: for mobile devices it's pretty easy to do so. Fair enough. It's ironic, right?

All right. Next one. Yes or no. No explanations. Have you used the robot for Tinder as well? No. No. Okay. Well, I guess the answer is obvious on the next one. Can you share the source code to use it on Tinder? Asking for a friend. Wink. Emphasis on the wink. The source code is actually open source. It's on the Tapster GitHub repo, for Tapster one and two. It includes everything you need to build it: a bill of materials, the code, and so on, plus the 3D designs, so you can go to a makerspace and 3D print the entire setup.

Robot Programming and Connectivity

Short description:

Don't worry, you've got it covered. If the robot had won, would you program it to read a Node.js book? The machine is connected over Wi-Fi, allowing the local setup to be exposed to the robot. This presentation was amazing. Thank you so much!

So whoever you are, don't worry, you've got it covered. You just need to do a little bit of work yourself.

All right. And the next one. If the robot had won, would you have programmed it to read a Node.js book? Once again? If the robot had won, would you have programmed it to read the book? I cannot answer this one. Like, yeah. Hopefully it just never wins and you don't have to face the problem. Right.

Okay. So this is about the actual board, the machine. Is it connected over Wi-Fi? How is it connected? Yeah. Basically, both my workstation and the phone are connected to the same Wi-Fi network, so I can expose my local setup to the robot and to the mobile phone. That's awesome.

And the last question. It's not a question, but it's one thing that I think we all agree with. This presentation was amazing. You should keep it up. We want to see you again. Give him a massive round of applause. Thank you so much. Thank you.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Remix Conf Europe 2022Remix Conf Europe 2022
23 min
Scaling Up with Remix and Micro Frontends
Do you have a large product built by many teams? Are you struggling to release often? Did your frontend turn into a massive unmaintainable monolith? If, like me, you’ve answered yes to any of those questions, this talk is for you! I’ll show you exactly how you can build a micro frontend architecture with Remix to solve those challenges.
TestJS Summit 2021TestJS Summit 2021
33 min
Network Requests with Cypress
Whether you're testing your UI or API, Cypress gives you all the tools needed to work with and manage network requests. This intermediate-level task demonstrates how to use the cy.request and cy.intercept commands to execute, spy on, and stub network requests while testing your application in the browser. Learn how the commands work as well as use cases for each, including best practices for testing and mocking your network requests.
TestJS Summit 2021TestJS Summit 2021
38 min
Testing Pyramid Makes Little Sense, What We Can Use Instead
Featured Video
The testing pyramid - the canonical shape of tests that defined what types of tests we need to write to make sure the app works - is ... obsolete. In this presentation, Roman Sandler and Gleb Bahmutov argue what the testing shape works better for today's web applications.
Remix Conf Europe 2022Remix Conf Europe 2022
37 min
Full Stack Components
Remix is a web framework that gives you the simple mental model of a Multi-Page App (MPA) but the power and capabilities of a Single-Page App (SPA). One of the big challenges of SPAs is network management resulting in a great deal of indirection and buggy code. This is especially noticeable in application state which Remix completely eliminates, but it's also an issue in individual components that communicate with a single-purpose backend endpoint (like a combobox search for example).
In this talk, Kent will demonstrate how Remix enables you to build complex UI components that are connected to a backend in the simplest and most powerful way you've ever seen. Leaving you time to chill with your family or whatever else you do for fun.
JSNation Live 2021JSNation Live 2021
29 min
Making JavaScript on WebAssembly Fast
JavaScript in the browser runs many times faster than it did two decades ago. And that happened because the browser vendors spent that time working on intensive performance optimizations in their JavaScript engines.Because of this optimization work, JavaScript is now running in many places besides the browser. But there are still some environments where the JS engines can’t apply those optimizations in the right way to make things fast.We’re working to solve this, beginning a whole new wave of JavaScript optimization work. We’re improving JavaScript performance for entirely different environments, where different rules apply. And this is possible because of WebAssembly. In this talk, I'll explain how this all works and what's coming next.
React Summit 2023
24 min
Debugging JS
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience. The good news is you _can_ learn how to debug effectively, and there are several key techniques and tools you can use for debugging JS and React apps.

Workshops on related topics

React Summit 2023
151 min
Designing Effective Tests With React Testing Library
Featured Workshop
React Testing Library is a great framework for React component tests because it answers a lot of questions for you, so you don’t need to worry about them. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents:
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites:
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
TestJS Summit 2022
146 min
How to Start With Cypress
Featured WorkshopFree
The web has evolved. Finally, testing has too. Cypress is a modern testing tool that answers the testing needs of modern web applications. It has been gaining a lot of traction in the last couple of years, achieving worldwide popularity. If you have been waiting to learn Cypress, wait no more! Filip Hric will guide you through the first steps of using Cypress and setting up a project on your own. The good news is, learning Cypress is incredibly easy. You'll write your first test in no time, and then you'll discover how to write a full end-to-end test for a modern web application. You'll learn core concepts like retry-ability, discover how to work and interact with your application, and learn how to combine API and UI tests. Throughout this whole workshop, we will write code and do practical exercises. You will leave with hands-on experience that you can translate to your own project.
React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
WorkshopFree
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know, it can be pretty challenging, especially when we talk about mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand, end-to-end tests can give the greatest confidence that your app is working. And if done right, they can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop
React Day Berlin 2022
86 min
Using CodeMirror to Build a JavaScript Editor with Linting and AutoComplete
WorkshopFree
Using a library might seem easy at first glance, but how do you choose the right library? How do you upgrade an existing one? And how do you wade through the documentation to find what you want?
In this workshop, we’ll discuss all these finer points while going through a general example of building a code editor using CodeMirror in React. All while sharing some of the nuances our team learned about using this library and some problems we encountered.
TestJS Summit 2023
48 min
API Testing with Postman Workshop
WorkshopFree
In the ever-evolving landscape of software development, ensuring the reliability and functionality of APIs has become paramount. "API Testing with Postman" is a comprehensive workshop designed to equip participants with the knowledge and skills needed to excel in API testing using Postman, a powerful tool widely adopted by professionals in the field. This workshop delves into the fundamentals of API testing, progresses to advanced testing techniques, and explores automation, performance testing, and multi-protocol support, providing attendees with a holistic understanding of API testing with Postman.
1. Welcome to Postman
- Explaining the Postman User Interface (UI)
2. Workspace and Collections Collaboration
- Understanding Workspaces and their role in collaboration
- Exploring the concept of Collections for organizing and executing API requests
3. Introduction to API Testing
- Covering the basics of API testing and its significance
4. Variable Management
- Managing environment, global, and collection variables
- Utilizing scripting snippets for dynamic data
5. Building Testing Workflows
- Creating effective testing workflows for comprehensive testing
- Utilizing the Collection Runner for test execution
- Introduction to Postbot for automated testing
6. Advanced Testing
- Contract Testing for ensuring API contracts
- Using Mock Servers for effective testing
- Maximizing productivity with Collection/Workspace templates
- Integration Testing and Regression Testing strategies
7. Automation with Postman
- Leveraging the Postman CLI for automation
- Scheduled Runs for regular testing
- Integrating Postman into CI/CD pipelines
8. Performance Testing
- Demonstrating performance testing capabilities (showing the desktop client)
- Synchronizing tests with VS Code for streamlined development
9. Exploring Advanced Features
- Working with Multiple Protocols: GraphQL, gRPC, and more
Join us for this workshop to unlock the full potential of Postman for API testing, streamline your testing processes, and enhance the quality and reliability of your software. Whether you're a beginner or an experienced tester, this workshop will equip you with the skills needed to excel in API testing with Postman.
TestJS Summit - January, 2021
173 min
Testing Web Applications Using Cypress
WorkshopFree
This workshop will teach you the basics of writing useful end-to-end tests using Cypress Test Runner.
We will cover writing tests, covering every application feature, structuring tests, intercepting network requests, and setting up the backend data.
Anyone who knows the JavaScript programming language and has NPM installed will be able to follow along.