
Panel Discussion: Future of Web Testing
Hey, hey, hey, I'm really happy to be here. I'm thrilled, super excited to kick off this panel about the future of web testing. And I'm excited for two reasons. One, a lot of great things are happening in this field right now. Just to give you an example, I checked today on GitHub, and there are already 17,000 repos dealing only with testing in JavaScript; 4,000 of them were written in the last year. So there is so much innovation happening in testing right now. The second reason is that I probably have the ideal panel to discuss all of these trends: some of the most influential people, the people who are building these innovative tools, are here with me. So how about we start with a quick intro? Let's start with you, Nancy. Would you like to introduce yourself?

Sure, I'm really excited to be here. My name is Nancy and I am a solutions architect at a company called Rango.io. We're a tech consulting agency, so I've worked with a lot of clients in the past, enterprise and startups, oftentimes on their testing strategy.

Cool, great to have you, Nancy. Kent C. Dodds — I'm not sure we even need to introduce Kent, but we will. Kent, please tell us about yourself.

Yeah, sure, thanks. I'm super happy to be here. I love the conferences put on by GitNation; they do a great job. So yeah, I created testingjavascript.com, where I teach you everything that I know about testing. It's awesome, so take a look at that. And actually there's a discount coupon just for TestJS Summit, so I'll share that in the Discord. I also created Testing Library, which is kind of a funny name for a library. It helps you test DOM-related stuff — anything that has to do with the DOM, anywhere you can find a DOM. And actually there are other implementations for React Native and things like that too. But it just gives you the utilities you need to test UI. That's the main idea behind it. So that's what I do.

Cool, and we're about to discuss Testing Library very soon, of course. Oren, good morning, San Francisco.

Hi everyone. Pleasure to be here, super excited to be here with everyone. My background is mostly building products for developers over the last 22 years. The one you probably know is Applitools, the visual validations company; I was one of the founding members and an early developer there. And for the last six years I've been the founder of Testim, where I focus on AI helping test automation.

That's great, and we're also going to discuss Testim. Jason, hi.

Hi everyone, I'm happy to be here. I'm Jason Palmer. I've worked at Spotify for the last almost ten years, as an engineer and now a PM. I'm also a core contributor to Jest, and I created jest-junit. So if you're running Jest in a CI environment, there's a pretty good chance you're using that.

Amazing, great. So we're about to discuss a bunch of topics, a bunch of hot trends in testing, and each of the panelists is going to share their thoughts and experience about each of them. How about we start with the CI world? There is a new giant player, GitHub, which has been playing in the CI world for the past two years. And funnily enough, I checked today: each of the big vendors who have operated there for years has approximately 100 extensions, or actions — some call them orbs. GitHub, in two years, already has 7,000 GitHub Actions. So it's growing crazily. And my question to you — let's start with you, Kent — is this just another giant eating the market, or are they doing things differently?
Do they provide new capabilities, new opportunities for us developers?

You know, years ago I did a survey — not a formal survey, but I took a survey of all the CI services that I thought would be good for the company I was working with at the time. I tried a ton of them, and I ended up with SnapCI, which is no longer a thing, which is kind of sad. There's GoCD now as their thing, and that's what I used for my company. But for open source, I was all in on TravisCI, and it served me really, really well for years. I don't want to get into this too much, but Travis changed the way they do things in a way that made it very difficult for open source. So I've been moving everything over to GitHub Actions — I'd been thinking about it anyway — and my goodness, it's such a nice experience. So there's a reason I think so many people are going over to GitHub for managing their CI. There can be some concern with the monopolistic approach, I guess. But I don't know — I'm just trying to ship stuff, and GitHub Actions makes it really easy to do that. So I like it.

Can you share one example of a delightful experience, something that helped you specifically to ship?

Yeah, well, you mentioned the 7,000 different custom GitHub Actions that you can use. I didn't have to build any custom action of any kind. It was just putting together all these different ones to make things come together in a really nice way. I wouldn't say it was super straightforward — there are definitely some nuances and intricacies — but once you have it set up, it's actually pretty understandable what you have. And the sheer number of actions you have at your disposal is super, super helpful.

Yeah. And Nancy, I'm curious to hear your thoughts on this matter.

Yeah, I think it's actually super cool. I really like the approach GitHub went with. Instead of just focusing on being another CI tool and on their own infrastructure, they took that open source approach, really encouraging and fostering the community to contribute, and focusing on the marketplace aspect rather than it being an afterthought. And I think that's where it's really powerful, and that's probably why we've seen so much growth there so quickly. GitHub is no longer limited by how many developers they can put on building out their CI, or by how well they understand users' needs, because it comes organically, right? They're allowing the community to dictate where this goes. And I think that's super exciting to see. It's hard to predict where it will go in the future, though — for a lot of the clients I've worked with in the past, switching over to a whole new infrastructure is just a huge undertaking, oftentimes not the top priority and just not worth the effort right off the bat.

Yeah, interesting. Both of you mentioned that the marketplace is the thing: up front, they built it for the marketplace. This is the core, and not some extension like it was with previous CI vendors. Yeah, sensible. And Oren and Jason, feel free to jump in whenever — we are spontaneous here.

So one thing I really like about GitHub Actions is that it's one of the CI providers taking a Docker-based approach, which is really nice.
And it's kind of a new wave that I've seen in the last few years. It definitely makes it easier for folks to reproduce CI steps locally, to craft a decent CI environment locally and make sure that it works. And then it's a lot less toil, basically, than you might have with other CI environments.

Yeah, yeah. So bottom line, I think everyone should at least evaluate GitHub as their next CI. And now moving to the next topic, which is Testing Library. Well, probably everyone here has heard about React Testing Library. I learned recently that it has sisters — there is an X Testing Library for Cypress, for Puppeteer, for almost any UI framework. And if you're not sure how popular it is: the big State of JS survey was published recently, and people chose Testing Library above Jest and Mocha as their most popular testing framework. Of course, they're not quite the same kind of thing, but it was named the most trendy testing library of the past year. And surprisingly, we have the creator here, Kent. Question for you. So I know Cypress. I can do things with Cypress, I know how to build stuff, and I'm about to start a new project. Why should I use, for example, Cypress Testing Library and not Cypress as is?

Yeah, well, I think you should use Cypress for sure. Testing Library is a collection of utilities that you use within testing frameworks. So it's not a replacement for Jest or Mocha or Cypress; it's more of a utility that you tack on to those things. That said, with Cypress in particular, there are built-in commands that you could use instead of what you get from Cypress Testing Library. And the reason I suggest adding in Cypress Testing Library instead of using, say, cy.get, is because it gives you more confidence. And that's all we're really after. I know that some testers really like to use testing to enhance their design process, so they can build out their code, and that's great if that's what you want to do — TDD is awesome, whatever. But for me — and when we talk about running tests in CI — the types of tests that I want to run in CI are the types that can tell me: can I ship this or not? Everybody says don't ship on Fridays. I want to be able to ship on Fridays. I want to ship whenever. Whenever I commit code, it should get shipped. And for me to be able to do that, I need to have confidence. And this is what Testing Library offers: a really nice way to query — in Cypress it's pretty much just for querying, but elsewhere it gives you not only query ability but also interactivity with the output DOM. And the reason I say you don't need that for Cypress is because Cypress already has great interactivity functionality. So it just adds the ability to query your document really, really well. And not only does it make those tests easier to read and write, you also end up with more confidence because of the kinds of things you're writing in your tests. So yeah, that's the reason. It just helps you increase the level of confidence. That's the most important thing. Mm-hmm.
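To make that difference concrete, here is a minimal sketch — the login page, labels, and routes are made up, and it assumes @testing-library/cypress is installed with its commands registered — of a query coupled to implementation details versus the Testing Library style of querying the way a user would:

```js
// cypress/support/commands.js — register the Cypress Testing Library commands once
import '@testing-library/cypress/add-commands';

// cypress/integration/login.spec.js — a hypothetical login flow
describe('login', () => {
  it('lets the user sign in', () => {
    cy.visit('/login');

    // Coupled to implementation details: breaks when a class or test id changes
    // cy.get('.btn.btn-primary[data-testid="submit"]').click();

    // Queries the page the way a user finds things: by label, role, and visible text
    cy.findByLabelText(/email/i).type('user@example.com');
    cy.findByLabelText(/password/i).type('hunter2');
    cy.findByRole('button', { name: /sign in/i }).click();
    cy.findByText(/welcome back/i).should('exist');
  });
});
```

The same findByRole and findByLabelText queries exist in React Testing Library and the other implementations, which is part of what makes tests at different levels read alike.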
So just to make it even clearer: is the main motivation for me as a developer to use Testing Library across different UI frameworks mostly to reuse my knowledge, the syntax I'm already familiar with — or is it that it pushes me towards best practices, things like, hey, test like the user, don't focus on the internals? What exactly?

Yeah, exactly that. So the fact that you can use Testing Library wherever you can find a DOM is a really awesome thing, but that's not why you use it. The reason you use it is because it helps separate your tests from implementation details, which is why you get so much confidence out of it. Originally, when I wrote Testing Library, it was actually React Testing Library, specific to React. And then I realized that there was just a very thin integration layer between React and the rest of the DOM, so I extracted all of that. And then we made Angular Testing Library and Vue Testing Library and Cypress Testing Library; there's a TestCafe Testing Library. Anywhere you find a DOM, you can use Testing Library for it. And that actually is a huge benefit. So you can have your unit and integration tests that are maybe running in Jest or wherever, and they're all hitting the DOM, and then you can also have your end-to-end tests — and they look very similar in the way that you're querying and interacting with the DOM, which I think is a huge benefit too.

Sounds like a great opportunity. And can you recommend resources, places to learn more about Testing Library?

Yeah, testingjavascript.com is a pretty good place to learn it. And I did share a short URL in the Discord — maybe I'll share it again, because people are pretty active in the Discord — for a 25% discount to testingjavascript.com. But of course, it's not the only place to learn. Like you said, it's very, very popular. There are tons and tons of courses that you can pay for, and also videos you can watch on YouTube. I've given many talks, I've got lots of free content on my blog, and the docs are really good. So you can go to testing-library.com and learn more.

I personally really like this approach of testing from the user perspective, almost as in production. And I think this mindset is also inspiring many organization-level testing strategies that are now being disrupted. So let's play a small role game. Let's say that you are the chief testing officer or the CTO, and I'm your developer, and I'm working traditionally: I'm writing unit tests, a lot of unit tests. Then after three weeks, when I have the version working with 100% test coverage, the QA person and I start writing some integration tests, then some end-to-end tests, then we manually test my version and put it in production. Now we have a meeting and we do our planning for 2021. What exactly — and Oren, let's start with you — what exactly should I change in this approach for next year?

So when you start planning, I think, first of all, you need to understand what you have and what you want. We all know the pyramid, right? And we all say, this is the amount of unit tests we should have. But unit tests are not enough. And even there, everyone tries to reach 100% test coverage — and code coverage is hard. It's really hard to get to 100% code coverage, and sometimes it's not worth it. So you have those different layers and you need to decide where to start. And sometimes, by the way, if you don't have anything, it's actually nicer to go against the tradition and start from the top. End-to-end tests and integration tests give you high coverage very, very fast.
And they give you a lot of confidence, as you said, close to production — or you can even test in production — to understand what you've got. The other thing you need to make sure of is that you don't neglect the other layers; you need to keep investing in them, because they give you feedback faster. So the end-to-end tests are super critical — they give you high confidence and high coverage — but the challenge there is what Kent said, and I totally agree: trust. You need to trust your tests. If you have tests that are running and you don't trust them, I don't want them. I prefer 10 tests that give you 20% coverage that I trust over 100% coverage that's flaky, where every time the CI runs people go, oh, it failed again, I don't know why. And then what happens is people take tests out — they say, I don't trust it, let's take it out, because I want to release. Or they start this thing called rerun: you know what, let's run it again and again and again, let's rerun until everything passes. And that, of course, masks a lot of bugs that you don't catch because of that flakiness. You might have a real bug. People think that bugs are only deterministic bugs, where you can say, hey, I caught this and it will always reproduce. The most challenging bugs are the ones that don't reproduce all the time. When something happens once out of 10 times, it's going to happen, and you can't ignore that. I keep telling people: if something happened — say, a robbery — you can't just say, oh, a robbery, it won't happen in the same place again. You need to ask: how do I find the person that did this? How do I find it? And that means having all the information. And that brings us to root cause analysis: you need to invest so that when something does break, you know what happened. We started with the screenshot, then the DOM — I want to see the DOM, the network requests, the console logs. I want all the information I need so that when something does happen, I can pinpoint it and say, OK, this is it. And of course, the further up the pyramid you go, the harder it is to pinpoint; the closer you get to the unit tests, the easier it is to say, here's the function, the method that went wrong, this is the state. So at the top we get a lot of coverage but slow feedback, which means it's harder to run on every commit; and going down, it's about making sure we have the right amount of tests at each layer so we know, as fast as possible, exactly what went wrong.

Yeah, yeah, that makes sense. The wider your tests are, the more you need to invest in setups that prevent flakiness. Makes tons of sense. Jason, much of your work is related to flakiness. I'm curious about your overall thoughts, but also specifically about flakiness in integration tests.

Yeah, flakiness is a big problem, but it's also kind of a fascinating field — something I'm weirdly interested in. At Spotify, we had a pretty large flakiness problem; that's how I got introduced to working on testing to begin with. The library that I wrote, jest-junit — the entire reason I wrote it was because I was working on the Spotify desktop client at the time, and we were trying to tackle test flakiness, trying to reduce it significantly. And the first part of doing that is just knowing what tests have run, which ones failed, how fast they are, et cetera, putting them in a database and trending these things over time.
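As a small illustration of that reporting piece — a minimal sketch with arbitrary example paths, assuming the jest-junit package is installed, and not any particular team's actual setup — this is roughly how jest-junit is wired in as a Jest reporter so each CI run emits a JUnit XML file that can be collected and trended:

```js
// jest.config.js — emit a JUnit XML report alongside Jest's normal console output
module.exports = {
  reporters: [
    'default',
    [
      'jest-junit',
      {
        // example locations where CI would pick the report up from
        outputDirectory: 'reports/junit',
        outputName: 'results.xml',
      },
    ],
  ],
};
```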
That is kind of the fundamental starting point: just understanding what flakiness is — what is actually flaky? — and then how you're going to address it. So it's a fascinating field; there's probably way more I could talk about than would fit in this conference. But getting back to the original question, something I feel is important to say is that far too many organizations I've seen — even now, but particularly in the past — don't really recognize that testing, and quality in general, actually speeds teams up. There's this concept in Agile called sustainable pace, and I just don't see a lot of teams actually following it or really understanding it. A lot of teams are moving as fast as they possibly can, and that often means writing no tests or fewer tests. But if you can move just a tad bit slower, focus on quality, and keep that quality high, it actually pays dividends over years and years. So this is something I think is important not just to developers, but to companies as a whole.

I think one of the big things we actually see is that developers do own quality. Yoni, you mentioned, hey, there's a QA person — I think it's becoming more and more the case that developers own quality, and they need to be able to say, hey, I feel okay with releasing on a Friday, because that means you feel secure enough. You're saying, hey, I trust these tests. It's not someone else's job to make sure everything works. That's my job. That's my responsibility. Sure — sometimes it's a good idea to release on Fridays.

I would like to add that I feel like moving quality to having developers be responsible for it is not enough. I feel like it's an organizational thing that everyone needs to be aligned on. And I think that's where your quality strategy really becomes impactful: when it actually helps your whole organization understand what quality means. That's what enables you to move forward and have that confidence in those tests, which is why I think test coverage isn't always a reliable way to do that — it's often just expressed as a number of tests or a percentage, and it doesn't really speak to what the tests actually do, especially for people who are non-technical or removed from the day-to-day code. What does this test case actually mean in terms of how it safeguards my business? It doesn't say any of that. So a lot of what I think is important to consider, besides test coverage or what type of tests you're writing, is identifying the features and flows that are really high-impact or high-touch areas for your users — whether that's a dashboard or a homepage or a checkout flow — and really concentrating your testing efforts there, focusing on a few high-impact end-to-end tests. That's what's actually going to give you confidence. Understanding your users — are they on Chrome, are they on desktop versus mobile — is going to help narrow down which browsers you're actually going to test on. Does it make sense to run all your automation tests on every single browser?
So a lot of those things, I think, are more important to think about than just the number of tests. And then, again, that part about communicating your quality strategy and having everyone within the team and the company understand it — I think it's often overlooked, but it's super critical.

Yeah. And there is a repeating concept here: start from the user, focus on the user. Okay, we have five minutes left. So I think it's absolutely a significant trend now to move from the Pyramid to something that is more of a Diamond or Trophy or Honeycomb — to start with integration or end-to-end tests. By the way, some of the earlier TDD books actually recommend this too: start with wider tests and then do your unit and TDD stuff. But now, as developers, we all hear the praise for integration tests. When should I write unit tests now? And I also keep hearing about this new thing, component tests. Kent, I'll direct the question to you first. What exactly are component tests, and what is the new role of unit tests?

You know, it all comes down to confidence and how much confidence you can get out of things. If you unit test — let's say you get a hundred percent unit test coverage — you can still have so many integration problems. So you get a lot of problems there. So let's say, all right, never mind, we won't do a hundred percent, let's do a mix here. And as you start doing more and more integration tests, you realize that you don't need as many unit tests, because you're already covering that code from your integration tests. And as you keep going down this path, you end up finding that most of your unit tests are for those pure functions that are doing complicated logic. Those typically run really fast and are pretty straightforward to write and maintain, and that's where I find myself writing most of my unit tests. I actually don't find myself doing a whole lot of module mocking in those types of tests either. So I'm pretty much doing component or integration tests — a component test is, like, a React component or an Angular component, whatever; an integration test would be on the Node level, where you're integrating a bunch of your modules. So yeah, I find myself doing mostly that sort of testing, and then I fill in the gaps for these hardcore computational or algorithmic things that are difficult to cover with the broad brush of an integration test.
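A minimal sketch of that split, under some assumptions — applyDiscount and the Checkout component are made up, and it presumes a React project with Jest, @testing-library/react, and @testing-library/user-event (v14-style setup) already configured: a plain unit test for the pure function that holds the tricky logic, and a component test that covers the surrounding UI through the DOM.

```js
// price.test.js — unit test for a hypothetical pure function with the complicated logic
import { applyDiscount } from './price';

test('applies a percentage discount and never goes below zero', () => {
  expect(applyDiscount(100, 0.2)).toBe(80);
  expect(applyDiscount(10, 1.5)).toBe(0);
});
```

```jsx
// Checkout.test.jsx — component test for a hypothetical <Checkout /> component:
// render it and interact through the DOM like a user, no mocking of internal modules
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import Checkout from './Checkout';

test('shows the discounted total after a coupon is applied', async () => {
  const user = userEvent.setup();
  render(<Checkout cartTotal={100} />);

  await user.type(screen.getByLabelText(/coupon/i), 'SAVE20');
  await user.click(screen.getByRole('button', { name: /apply/i }));

  // getByText throws if the element is missing, so this doubles as the assertion
  expect(screen.getByText(/total: \$80/i)).toBeDefined();
});
```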
Yeah, so there's still room for unit tests, and that should be stated clearly. A question to backstage: we're almost out of time, but we still have a lot of fascinating topics uncovered, so if we can get five more minutes, please let me know. I'm moving to the next topic, which has been promising for years to affect the testing world — and maybe this is the year it's happening. I'm referring to AI-based testing. Oren, you are the chief, the guru at Testim, and Testim, I know, is a lot about AI. Question: can you explain, really in layman's terms, in simple words, what it means for me — what features, how can it augment or help testing?

I think everyone is trying to get to the same thing: to trust your tests. So you can look at it as authoring the tests to have as much coverage as possible. And I think Nancy said it in an amazing way — wait a second, how do you define that? How do you see that? I hope that one day we'll get to talk about not just code coverage but also user coverage; not just shift left but shift right. What do we know about production? What do people interact with, and what do they not? So first of all, AI can help with the authoring. Can we look at production and automatically generate a thousand end-to-end tests in two days, just by looking at what's going on in production? Can we filter out of that the tests we actually want to keep as end-to-end tests? And of course, generate those integration, component, or even unit tests. Can we remove the flakiness? Right now the basics in every platform are, hey, one CSS selector — there's one way to find an element and click on it. A machine can look at thousands of them. So can we do that automatically and reduce the flakiness, if we understand that a machine can find an element better than we can?

Yeah — like identifying which selectors from previous runs lead to more flakiness and — Exactly, and improving that over time. Yeah, makes tons of sense. I would love to discuss this for at least 10 minutes more, but we're almost out of time, and I want to cover the last very important topic. Sorry, Jason and Nancy, only one minute to cover this. There is a big bloom of tools related to network interception and network mocking, a lot of new capabilities. You specifically, Nancy, wrote one of these tools. Can you first explain to us why you would record the network in the first place?

I wish I had more time to talk about this, because I think it's super exciting. But I think the biggest thing that network mocking really solves is that it takes the most difficult part out of writing integration tests. I wish I could elaborate on what I mean by integration test, because I think it's one of those kinds of testing that's super misunderstood and no one really aligns on what it means. But it really solves that flakiness issue Jason was talking about. You can focus on things that you can actually visually see, things that are as close as possible to what the users interact with, and network mocking really allows you to do that without making everything super slow and flaky and causing all those issues. The plugin I worked on actually builds on what you get out of the box from Cypress — those of you who use Cypress know it comes with a lot of mocking capabilities — and it tacks on top of that to solve what I think is the biggest issue with all of this: how do we actually manage our mocks? Mocking is obviously issue number one, but number two, once you're able to mock, how do you manage hundreds of mock files that need to change often? And that's where the Cypress auto-record plugin that I wrote really helps. But again, I'd love to continue this discussion — I know we're out of time — happy to talk with anyone on Discord.

Yeah, I took a look at it and it looked amazing; it makes integration tests very fast in practice. We've got one last minute of grace. Jason, I'll really be glad to hear your take on network interception and mocking.

Yeah, sure, just really quickly. I think it's an important part of testing. You have end-to-end tests, and the most typical reason they're flaky or slow is going to be network requests. So if you can take those out of the equation — especially if you have something like VCR, where you can automatically generate stable mocks — you turn those end-to-end tests into something that is far less flaky and far more performant. And basically you're testing the same thing, using hopefully the same system. So there are a lot of benefits to it. I would encourage you to look into it; Nancy's plugin is a pretty great example.
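For a rough picture of the built-in Cypress capability this builds on — a hand-written cy.intercept stub, not the automatic recording the plugin adds; the route, fixture, and page are made up — a minimal sketch:

```js
// A hypothetical spec that answers a network call from a fixture instead of a real backend
describe('dashboard', () => {
  it('renders projects from a stubbed API response', () => {
    // Stub the request: the slowest, flakiest dependency is now out of the test
    cy.intercept('GET', '/api/projects', { fixture: 'projects.json' }).as('getProjects');

    cy.visit('/dashboard');
    cy.wait('@getProjects');
    cy.contains('My first project').should('be.visible');
  });
});
```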
Yeah. Thank you, Jason. And unfortunately we're out of time. So really, many thanks to all of you for these great insights. Should I try to summarize it in one sentence? It's production, user, production, user, production, user — leave the internals to the end. Start from the chaos of production, from the network, from the user, from the UX. This is where most of the tooling is innovating now, and this is where testing should start. And then, if you have something complex inside, well, we have the traditional unit tests and other tools. Many thanks to all of you for being with me today. Thank you. Thank you. Thank you. Bye.