Jason Palmer
Jest core contributor and creator & maintainer of jest-junit. Jason is an engineer and technical product manager at Spotify where he focuses on web infrastructure, CI/CD, and test automation.
Panel Discussion: Future of Web Testing
TestJS Summit - January 2021
33 min
Hey, hey, hey, I'm really happy to be here. I'm thrilled. I'm super excited to kick off this panel about the future of web testing. And I'm excited for two reasons. One, a lot of great things are happening in this field right now. Just to give you an example, I checked today on GitHub, and there are already 17,000 repos dealing only with testing in JavaScript; 4,000 of them were created in the last year. So much innovation is happening in testing right now. The second reason is that I probably have the ideal panel to discuss all of these trends: some of the most influential people, the ones building these innovative tools, are here with me. So how about we start with a quick intro? Let's start with you, Nancy. Would you like to introduce yourself? Sure, I'm really excited to be here. My name is Nancy and I am a solutions architect at a company called Rangle.io. We're a tech consulting agency, so I've worked with a lot of clients in the past, enterprise and startups, oftentimes on their testing strategy. Cool, great to have you, Nancy. Kent C. Dodds — I'm not sure we even need to introduce Kent, but we will. Kent, please tell us about yourself. Yeah, sure, thanks. I'm super happy to be here. I love the conferences put on by GitNation. They do a great job. So yeah, I created testingjavascript.com, where I teach you everything that I know about testing. It's awesome, so take a look at that. And actually there's a discount coupon just for TestJS Summit, so I'll share that in the Discord. And I also created Testing Library, which is kind of a funny name for a library. But it helps you test DOM-related stuff. So anything that has to do with the DOM, anywhere you can find a DOM — and there are other implementations for React Native and things like that too. It gives you the utilities that you need to test UI. That's the main idea behind it. So that's what I do. Cool, and of course we're about to discuss Testing Library very soon. Oren, good morning, San Francisco. Hi everyone. Pleasure to be here, super excited to be here with everyone. My background is mostly building products for developers over the last 22 years. You probably know Applitools and its visual validations; I was one of the early developers there. And for the last six years I've been the founder of Testim, where we focus on AI helping test automation. That's great, and we're also going to discuss Testim. Jason, hi. Hi everyone, I'm happy to be here. I'm Jason Palmer. I've worked at Spotify for almost ten years now, as an engineer and now a PM. I'm also a core contributor to Jest, and I created jest-junit. So if you're running Jest in a CI environment, there's a pretty good chance you're using that. Amazing, great. So we're about to discuss a bunch of topics, a bunch of hot trends in testing, and each of the panelists is going to share their thoughts and experience on each of them. How about we start with the CI world? For the past two years there's been a new giant player, GitHub, in the CI world. And funnily enough, I checked today: each of the big vendors who have operated there for years has approximately 100 extensions, or actions — some call these orbs. GitHub, in two years, already has 7,000 GitHub Actions. So it's growing crazily. And my question — let's start with you, Kent — is this just another big giant eating the market, or are they doing things differently?
Do they provide new capabilities, opportunities for us developers? You know, years ago I took a survey of all the CI services that I thought would be good for the company I was working with at the time. I tried a ton of them, and I ended up with Snap CI, which is no longer a thing, which is kind of sad — there's GoCD now as their thing — and that's what I used for my company. But for open source, I was all in on Travis CI, and it served me really, really well for years. I don't want to get into this too much, but Travis kind of changed the way they do things in a way that made it very difficult for open source. And so I've been moving everything over to GitHub Actions — I'd been kind of thinking about it anyway. And my goodness, it's such a nice experience. So there's a reason so many people are going over to GitHub Actions for their CI. There can be some concern with the monopolistic approach, I guess. But I don't know — I'm just trying to ship stuff, and GitHub Actions makes it really easy to do that. So I like it. Can you share one example of a delightful experience, something that helped you specifically to ship? Yeah, well, you mentioned the 7,000 different custom GitHub Actions that you can use. I didn't have to build a custom action of any kind. It was just putting together all these different ones to make things come together in a really nice way. I wouldn't say it was super straightforward — there are definitely some nuances and intricacies — but once you have it set up, it's actually pretty understandable what you have. And the sheer number of actions you have at your disposal is super, super helpful. Yeah. And Nancy, I'm curious to hear your thoughts on this matter. Yeah, I think it's actually super cool. I really like the approach GitHub went with. Instead of just focusing on being another CI tool and on their own infrastructure, they really took that open source approach, encouraging and fostering the community to contribute, and focusing on the marketplace aspect rather than it being an afterthought. And I think that's where it's really powerful, and that's probably why we've seen so much growth there so quickly. GitHub isn't limited by how many developers they can put on building out their CI, or by how well they understand their users' needs, because it comes organically, right? They're letting the community dictate where this goes, and I think that's super exciting to see. It's hard to predict where it will go in the future — a lot of the clients I've worked with in the past found that switching over to a whole new infrastructure is a huge undertaking, often not the top priority and not worth the effort right off the bat. Yeah, interesting. Both of you mentioned that the marketplace is the thing: they built it for the marketplace from day one, it's the core and not some extension like it was with previous CI vendors. Yeah, sensible. And Oren and Jason, feel free to jump in whenever — we're spontaneous here. So one thing I really like about GitHub Actions is that it's one of the CI providers taking a Docker-based approach, which is really nice.
And it's kind of a new wave that I've seen in the last few years. It definitely makes it easier for folks to reproduce CI steps locally, to craft a decent CI environment locally and make sure it works. It's a lot less toil, basically, than you might have with other CI environments. Yeah. So bottom line, I think everyone should at least evaluate GitHub Actions as their next CI. Now, moving to the next topic, which is Testing Library. Probably everyone here has heard about React Testing Library. I learned recently that it has sisters — an X Testing Library for Cypress, for Puppeteer, for almost any UI framework. And if you're not sure how popular it is: the big State of JS survey was published recently, and people ranked Testing Library above Jest and Mocha as their most popular testing tool. Of course, it's not exactly the same kind of thing, but it was named the trendiest testing library of the past year. And surprisingly, we have the creator here — Kent. Question for you. I know Cypress; I know how to build things with it. And I'm about to start a new project. Why should I use, for example, Cypress Testing Library and not Cypress as is? Yeah, well, I think you should use Cypress for sure. Testing Library is a collection of utilities that you use within testing frameworks. It's not a replacement for Jest or Mocha or Cypress; it's a utility that you tack onto those things. That said, with Cypress in particular, there are built-in commands that you could use instead of what you get from Cypress Testing Library. And the reason I suggest adding Cypress Testing Library instead of using, say, cy.get, is because it gives you more confidence. And that's all we're really after. I know some testers really like to use testing to enhance their design process so they can build out their code — and that's great if that's what you want to do. TDD is awesome, whatever. But for me — and when we talk about running tests in CI — the types of tests I want to run in CI are the types that can tell me: can I ship this or not? Everybody says, don't ship on Fridays. I want to be able to ship on Fridays. I want to ship whenever. Whenever I commit code, it should get shipped. And for me to be able to do that, I need to have confidence. And this is what Testing Library offers: a really nice way to query — in Cypress, it's pretty much just for querying, but elsewhere it gives you not only querying but also interactivity with that output DOM. The reason you don't need the interactivity part for Cypress is that Cypress already has great interactivity functionality. So it just adds the ability to query your document really, really well. And not only does it make those tests easier to read and write, you also end up with more confidence because of the kinds of things you're writing in your tests. So yeah, that's the reason: it helps you increase the level of confidence. That's the most important thing. Mm-hmm. So just to make it even clearer: the main motivation for me as a developer to use Testing Library across different UI frameworks is mostly to reuse my knowledge — the syntax I'm already familiar with — or that it kind of pushes me towards best practices, things like, hey, test like the user, don't focus on the internals. What exactly?
Yeah, exactly that. So the fact that you can use Testing Library wherever you can find a DOM is a really awesome thing, but that's not why you use it. The reason you use it is because it helps separate your tests from implementation details, which is why you get so much confidence out of it. Originally, when I wrote Testing Library, it was actually React Testing Library, specific to React. And then I realized there was just a very thin integration layer between React and the rest of the DOM, so I extracted all of that. And then we made Angular Testing Library and Vue Testing Library and Cypress Testing Library; there's a TestCafe Testing Library. Anywhere you find a DOM, you can use Testing Library for it. And that is actually a huge benefit: you can have your unit and integration tests, maybe running in Jest or wherever, all hitting the DOM, and then you can also have your end-to-end tests, and they look very similar in the way you're querying and interacting with the DOM, which I think is a huge benefit too. Sounds like a great opportunity. And can you recommend resources, places to learn more about Testing Library? Yeah, testingjavascript.com is a pretty good place to learn it. And I shared a short URL in the Discord — maybe I'll share it again, because people are pretty active there — for a 25% discount to testingjavascript.com. But of course, it's not the only place to learn. Like you said, it's very, very popular. There are tons of courses you can pay for, and videos you can watch on YouTube. I've given many talks, and I've got lots of free content on my blog. And the docs are really good, so you can go to testing-library.com and learn more.
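To make the querying difference Kent describes concrete, here is a minimal sketch, assuming a hypothetical login page (the selectors, labels, and route are illustrative, and the Testing Library commands come from @testing-library/cypress):

```js
// Assumes Cypress with @testing-library/cypress commands registered.
it('signs in the way a user would', () => {
  cy.visit('/login');

  // Coupled to implementation details — breaks when markup or CSS changes:
  // cy.get('#login > button.btn-primary').click();

  // Queries that read the way a user does:
  cy.findByLabelText(/email/i).type('user@example.com');
  cy.findByRole('button', { name: /sign in/i }).click();
});
```

The second style survives refactors of the markup, which is exactly the confidence argument made above.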
I personally really like this approach of testing from the user perspective, almost as in production. And I think this mindset is also disrupting many organization-level testing strategies. So let's play a small role-playing game. Let's say that you are the chief testing officer or the CTO, and I'm your developer. And I'm working traditionally: I'm writing unit tests, a lot of unit tests. Then after three weeks, when I have the version working with 100% test coverage, the QA person and I start writing some integration tests, then some end-to-end tests, then we manually test the version and put it in production. Now we have a meeting and we're doing our planning for 2021. What exactly — and Oren, let's start with you — should I change in this approach for the next year? So when you start planning, first of all, you need to understand what you have. We all know the pyramid, right, and the amount of tests it says we should have at each layer. But unit tests are not enough. And even there, everyone tries to reach 100% test coverage, and code coverage is really hard — it's really hard to get to 100%, and sometimes it's not worth it. So you have those different layers, and you need to decide where to start. And sometimes, by the way, if you don't have anything, it's actually nicer to go against the tradition and start from the top: end-to-end tests and integration tests give you high coverage very, very fast. And they give you a lot of confidence, as Kent said — close to production, or you can even test in production — to understand what you've got. The other thing you need to make sure of is that you don't neglect the other layers; you need to keep investing in them, because they give you the feedback faster. So the end-to-end tests are super critical — they give you high confidence and high coverage — but the challenge there is what Kent said, and I totally agree: trust. You need to trust your tests. If you have tests that are running and you don't trust them, I don't want them. I'd rather have 10 tests that give 20% coverage that I trust than 100% coverage that's flaky, where the CI goes red and it's, oh, it failed again, I don't know why. And then what happens is people take tests out — I don't trust it, let's take it out, because I want to release. Or they start this thing called rerun: you know what, let's run it again and again and again, let's rerun until everything passes. And that, of course, masks a lot of bugs that you don't catch because of the flakiness. You might have a real bug. People think that bugs are only deterministic bugs — hey, I caught this, and it will always reproduce. The most challenging bugs are the ones that don't reproduce every time. When it happens once out of ten, it's going to happen, and you can't ignore it. I keep telling people: if something happened, say a robbery, you can't just say, well, we won't go to that place again. You need to ask, how do I find the person who did this? And that means having all the information. And that brings us to root cause analysis: you need to invest so that when something does break, you know what happened. We started with screenshots, then the DOM — I want to see the DOM, the network requests, the console logs. I want all the information I need, so that when something does happen, I can pinpoint it and say, okay, this is it. And of course, the higher you go up the pyramid, the harder it is to pinpoint; the closer you get to the unit tests, the easier it is to say, here's the method that went wrong, here's the state. So at the top we have a lot of coverage but slow feedback, which makes it harder to run on every commit; and going down, we make sure we have the right amount of tests at each layer, so we know as fast as possible what went wrong. Yeah, that makes sense. The wider your tests are, the more you need to invest in setups that prevent flakiness — makes tons of sense. Jason, much of your work is related to flakiness. I'm curious about your overall thoughts, but also specifically about flakiness in integration tests. Yeah, flakiness is a big problem, but it's also kind of a fascinating field — something I'm weirdly interested in. At Spotify, we had kind of a large flakiness problem; that's how I got introduced to working on testing to begin with. The library that I wrote, jest-junit — the entire reason I wrote it was because I was working on the Spotify desktop client at the time, and we were trying to tackle test flakiness, trying to reduce it significantly.
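For reference, jest-junit plugs into Jest as a reporter; a minimal sketch of wiring it up (the output paths here are illustrative):

```js
// jest.config.js — emit JUnit-style XML alongside Jest's normal output,
// so a CI system can store every test result and trend it over time.
module.exports = {
  reporters: [
    'default',
    ['jest-junit', {
      outputDirectory: 'reports/junit', // illustrative path
      outputName: 'test-results.xml',
    }],
  ],
};
```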
So the first part in tackling flakiness is just knowing what tests have run, which ones failed, how fast they are, et cetera, putting them in a database, and trending these things over time. That's the fundamental starting point for understanding what flakiness even is — what is flaky? — and then how you're going to address it. So it's a fascinating field; there's probably way more I could talk about than this conference has time for. But to get back to the original question, something I feel is important to say is that far too many organizations I've seen — even now, but particularly in the past — don't really recognize that testing, and quality in general, actually speeds up teams. There's this concept in Agile called sustainable pace, and I just don't see a lot of teams actually following it or really understanding it. A lot of teams are moving as fast as they possibly can, and this often means writing no tests or fewer tests. But if you can move just a tad slower and keep quality high, that pays dividends over years and years. So this is something that I think is important not just to developers, but to companies as a whole. I think one of the big things we actually see is that developers do own quality. Yoni, you mentioned, hey, there's a QA person. I think it's becoming more and more the case that developers own the quality, and they need to say, hey, I feel okay with releasing on a Friday — because that means you feel secure enough. You're saying, hey, I trust these tests. It's not someone else's job to make sure everything works; that's my job, that's my responsibility. Sure. Sometimes it's a good idea to release on Fridays. I would like to add that moving quality to having developers be responsible for it is not enough. I feel like it's an organizational thing that everyone needs to be aligned on. And I think that's where your quality strategy really becomes impactful: when it actually helps your whole organization understand what quality means. That's what enables you to move forward and have that confidence in those tests — which is why I think test coverage isn't always a reliable way to get there, because it's often just expressed as a number of tests or a percentage. It doesn't really speak to what those tests actually do, especially for people who are non-technical or removed from the day-to-day code. What does this test case actually mean in terms of how it safeguards my business? It doesn't say any of that. So beyond test coverage, or what type of tests you're writing, what I think is actually important is to identify the features and flows that are really high-impact or high-touch areas for your users — whether that's a dashboard or a homepage or a checkout flow — and concentrate your testing efforts there, focusing on a few high-impact E2E tests. That's what's actually going to give you confidence. Understanding your users — are they on Chrome? Are they on desktop versus mobile? — is going to help narrow down which browsers you're actually going to test on. Does it make sense to run all your automation tests through every single browser?
So a lot of those things, I think, are more important to think about than just the number of tests. And then, again, that part about communicating your quality strategy and having everyone within the team and the company understand it — I think it's often overlooked, but it's super critical. Yeah. And there is a repeating concept here: start from the user, focus on the user. Okay, we have five minutes left. So I think it's now clearly a significant trend to move from the Pyramid to something that is more of a Diamond, or a Trophy, or a Honeycomb — start with integration tests or end-to-end. By the way, some of the earlier TDD books actually also recommend this: start with wider tests, and then do your unit and TDD stuff. So now, as developers, we all praise integration tests. But when should I still write unit tests? And I also keep hearing about this new thing, component tests. Kent, I'll direct the question to you first. What exactly are component tests, and what is the new role of unit tests? You know, it all comes down to confidence and how much confidence you can get out of things. If you unit test — let's say you get a hundred percent unit test coverage — you can still have so many integration problems. So you get a lot of problems there. So let's say, all right, never mind, we won't do a hundred percent, let's do a mix. And as you start doing more and more integration tests, you realize you don't need as many unit tests, because you're already covering that code with your integration tests. And as you keep on down this path, you end up finding that most of your unit tests are for those pure functions that are doing complicated logic. Those typically run really fast, and they're pretty straightforward to write and maintain. That's where I find myself writing most of my unit tests. I actually don't find myself doing a whole lot of module mocking in those types of tests either. So I'm mostly doing component or integration tests — a component being a React component or an Angular component, whatever; integration being at the Node level, where you're integrating a bunch of your modules. And then I fill in the gaps for those hardcore computational, algorithmic things that are difficult to cover with the broad brush of an integration test. Yeah — so there's still room for unit tests, and that should be stated clearly.
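As a minimal illustration of the niche Kent describes — a pure function with real logic, covered by a fast unit test — here's a sketch in Jest, with a hypothetical formatDuration helper:

```js
// A pure function doing non-trivial logic: the remaining sweet spot for unit tests.
function formatDuration(totalSeconds) {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}

test('formats seconds as m:ss', () => {
  expect(formatDuration(33)).toBe('0:33');
  expect(formatDuration(61)).toBe('1:01');
  expect(formatDuration(600)).toBe('10:00');
});
```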
A question to backstage: we're almost out of time, but we have a lot of fascinating topics still uncovered, so if we can get five more minutes, please let me know. I'm moving to the next topic, something that has promised for years to affect the testing world — and maybe this is the year it happens. I'm referring to AI-based testing. Oren, you are the chief, the guru, at Testim, and Testim, as I know it, is a lot about AI. Question: can you explain in layman's terms what it means for me — in simple words, what features, how it can augment or help testing? I think everyone is trying to get to the same thing: trusting your tests. So you can look at it as authoring — authoring tests to have as much coverage as possible. And I think Nancy said it in an amazing way — wait a second, how do you define that? How do you see that? I hope that one day we'll get to talk not just about code coverage but also user coverage; not just shift left, but shift right — what do we know about production? What do people interact with, and what do they not? So first of all, AI can help with the authoring. Can we look at production and automatically generate a thousand end-to-end tests in two days, just by looking at what's going on in production? Can we filter out of that the tests we do want to keep as end-to-end tests? And of course, generating those integration, component, or even unit tests. Then, can we remove the flakiness? Right now, the basics on every platform are: one CSS selector, one way to find an element and click on it. A machine can look at thousands of signals. So can we do that automatically, and reduce the flakiness, if a machine can find an element better than we can? Yeah — like identifying selectors from previous runs that lead to more flakiness and... Exactly, and improving that over time. Yeah, makes tons of sense. I would love to discuss this for at least ten more minutes, but we're almost out of time, and I want to cover one last, very important topic. Sorry, Jason and Nancy — only one minute to cover this. There is a big bloom of tools related to network interception and network mocking, a lot of new capabilities. You, Nancy, wrote one of these tools. Can you first explain to us: why record the network at all? I wish I had more time to talk about this, because I think it's super exciting. But I think the biggest thing network mocking solves is that it takes the most difficult part out of writing integration tests. I wish I could elaborate on what I mean by integration test, because it's one of those testing terms that's super misunderstood — no one really aligns on what it means. But it really solves that flaky issue Jason was talking about. You can focus on things you can actually visually see, things that are as close as possible to what the users interact with. And network mocking allows you to do that without making everything super slow and flaky and causing all those issues. The plugin I worked on specifically tacks onto what comes out of the box with Cypress. Those of you who use Cypress know it comes with a lot of mocking capabilities, and the plugin builds on top of that to solve what I think is the biggest issue with all of this: how do we actually manage our mocks? Mocking is the number one issue, but number two, once you're able to mock, how do you manage hundreds of files of mocks that need to change often? And that's where the cypress-autorecord plugin that I wrote really helps. But again, I'd love to continue this discussion — I know we're out of time. Happy to talk with anyone on Discord. Yeah, I took a look at it and it looked amazing; it makes your integration tests very fast in practice. We got one last minute's grace — Jason, I'll be really glad to hear your take on network interception and mocking. Yeah, sure, just really quickly. I think it's an important part of testing. You have end-to-end tests, and the most typical reason they're flaky, or slow, is going to be network requests. So if you can take those out of the equation — especially if you have something like VCR, where you can automatically generate stable mocks — you turn those end-to-end tests into something far less flaky and far more performant.
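A minimal sketch of the idea, using Cypress's built-in cy.intercept (the endpoint, fixture, and page are illustrative, and the last query assumes @testing-library/cypress from earlier): the test serves a recorded response instead of hitting a live backend.

```js
it('shows playlists without a live backend', () => {
  // Stub the network at the boundary: deterministic data, no slow round trips.
  cy.intercept('GET', '/api/playlists', { fixture: 'playlists.json' }).as('getPlaylists');

  cy.visit('/home');
  cy.wait('@getPlaylists'); // the app "loaded" data, but nothing hit a real server
  cy.findByRole('heading', { name: /your playlists/i }).should('be.visible');
});
```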
Basically, you're testing the same thing, using, hopefully, the same system. So there are a lot of benefits to it. I would encourage you to look into it — Nancy's plugin is a pretty great example. Yeah, thank you, Jason. And unfortunately we're out of time. So really, many thanks to all of you for these great insights. Should I try to summarize it in one sentence? It's production, user; production, user; production, user. Leave the internals to the end. Start from the chaos of production, from the network, from the user, from the UX — this is where most of the tooling is innovating now, and this is where testing should start. And then, if you have something complex inside, well, we have the traditional unit tests and other tools. Many thanks to all of you for being with me today. Thank you. Thank you. Thank you. Bye.
Panel Discussion: Code Quality
JSNation Live 2021
39 min
Thanks for having me. I'm Anna, and joining me here are three awesome people to discuss quality. We have John Papa, who is a professional web and mobile developer and currently works as a developer advocate for Microsoft. He's also very active in the open source community and on the podcast Real Talk JavaScript. We also have Angie Jones with us, who is a principal developer advocate at Applitools and specializes in test automation strategies and techniques. She is also a very creative inventor and volunteers with Black Girls Code. Last but not least, we have Jason Palmer here with us, who is a Jest core contributor and the creator and maintainer of jest-junit. He is also an engineer and technical product manager at Spotify, where he focuses on infrastructure, CI/CD, and test automation. As Matti already said, we're going to talk about quality. Quality is indeed a topic that is discussed a lot, and one where many people have many different opinions. But I would like to start by talking about what quality actually is — what are the aspects of quality that we should consider? I'd like to make a round, and maybe start with Jason. Jason, what do you think quality actually is when it comes to JavaScript projects? I think there are a lot of different ways to look at it. But at the end of the day, if people outside of your team can easily contribute to your software and you're confident that it's not going to break, and if you're able to continuously deploy to production — you make a commit to the main branch, it goes directly to production, and you're confident about that — you know you've done a good job. That's when you know you've developed something of high quality. Yes — if you can push to prod on a Friday afternoon shortly before leaving for the weekend, that's definitely a good feeling. Angie, what do you think? Yeah, I definitely agree with that. I think the key here is, one, that it works. That's very obvious. But, you know, that it works for your users, right? Not just on your own local machine, and not only the very happiest of paths — it works for your users. And a key part of what Jason said is that team members are able to contribute to it. That means we're able to read it, and we're able to refactor if needed. No one is scared to touch certain modules. That's when you know you're in bad shape — when it's like, oh, I don't want to touch that. So those are the characteristics, I feel, of quality code. Yes, definitely — when everybody, even new team members, can just participate in it. John, what's your take on quality? First, I think "characteristics" is the key word Angie just mentioned. This is one of those topics where it's hard to describe code quality, but you know it when you see it, and you absolutely know it when it's not there. So it's easy to spot those kinds of things. But it all comes down to what Jason and Angie said: does it work? Does it meet the business needs? And I look at the long-term viability of a project as a big sign of code quality — whether the project lives for the term it's supposed to live. Once it goes live, we all know there are issues and maintenance and things to work on, which is where we get to the contributions we all talked about here. That's got to be something that keeps the project moving.
The sign for me of really bad code quality is when people start talking about: wow, we just deployed this thing, we have an update, and it's too hard to update it — let's just redo it from scratch. And that happens far too often in the business world out there. And I want to say that it's really evident, without even looking at the code, when it's of poor quality. I think we can all attest to situations where we've used some product as an end user and you just know that what's underneath is not quality — you can see it even from the outside. Yeah, definitely. You can easily spot bad code, but is there also a way to spot or measure good quality code? You're nodding — feel free to elaborate on that. Yeah, there are lots of clean coding principles and things like that out there that give us all some direction on how to go about developing good quality code — things like functions not being too big or too long or too abstract. There are certain principles that I've followed throughout my career, and that's typically led to good quality code. Another way, I think, is how testable your code is, right? I feel like that's one that's glossed over quite a bit, but it gives so much insight when you think about your code from that angle. Whether you practice TDD — test-driven development — or you test your code after you've written it, that's fine too. That is legal. But how testable your code is is a really good indicator, I've found, of how well it's written. Yes, I've had that experience as well. I agree 100% on testability there. One sign of code quality I really like is when things become testable. I hear a lot of folks say, I can't write code that's testable. I do a lot of coaching sessions with them, and a lot of times we'll ask why. Why isn't it testable? What it really comes down to, quite often, is that the code is, as you mentioned, Angie, too long. What's too long? I think that's subjective. But when your code is doing multiple things — when a function, for example, is doing seven things and you've got tons of comments explaining what all the ifs and thens and switches and for loops are doing — that's a great sign that maybe these things should be separate, because then I could test each individual thing on its own, instead of wondering where in this long journey something went wrong. The other thing I look for in code quality, and recommend to folks, is — because I like Disney — a thing I came up with called the seven D's. And none of them stand for Disney. But at Disney we do these weird things, and I used to work there. So the seven D's: I think of code quality in these seven areas. You've got to design your application — you must handle that well. You must develop it well, with code quality and styling and your guidelines. You must also have documentation of some kind in your company. Another D in there is destroy: can you destroy it? Can you test it? Make sure your code works under the bad conditions, not just the happy path. There's also going to be a demo: how many of us write some kind of test harness to make sure the thing works, in a separate app somewhere? Why not include that as part of your repo, so people can do an end-to-end test on that feature or integration as well? And then you also get down to things like deployment: can you deploy your code out to production?
The final one — I couldn't come up with a good D for review, so I called it "da review," because I'm Italian. Da review. You've got to do the review, because you've got to make sure you get a peer review from other people. If I write code, Angie and Jason have to review it — not me, not the person who wrote it. That's a great point. I think the way that I look at quality is: how are you constructing the automation that goes into every code change? Giving some really good thought to how much I want to test before code is merged into the main branch, and how many things I'm okay testing while it's in production — what form of monitoring do I have to understand how my users are interacting with the system, how it's performing out there in production? So basically, having conscious thought around this, and planning ahead for what makes the most sense for you, your team, and the kind of velocity you want to have. That's a sign of quality in my view. Okay. And say I'm a person who just started coding. What do you think is a good way to learn about quality and get into this whole topic? Where should I start? I think it's probably pretty easy to learn how to write unit tests right away. Soon after you're getting the hang of writing code and getting something working, it's not too much of a stretch to take it a little further and write that first unit test. And I think that's a great first thing you can do, just to get a feel for what testing is like and how much of an impact it can have on your code as you develop it. Yeah. Unfortunately, testing is one of those things that isn't typically taught when we're learning to code — whether that's the traditional university path, or bootcamp, or self-taught, whatever. It's one of those side notes, if you even get it at all. So it's very much still an on-the-job thing that you learn. I strongly recommend people review other folks' code, as John mentioned. And in code reviews, don't just look at the feature — look at the unit tests that were checked in as well. This gives you some insight into how other people are approaching their testing. And challenge yourself as well: did they miss a case? Can you think of anything else — not so far out there that we wouldn't bother including it, but something of value that was missed in the unit tests? This gets you into the thought process. You step away from construction mode and think more about, for lack of a better word, what John said: destroying. You think about how to exercise this code, and what other legitimate scenarios your code should definitely cover. In that, you start thinking in this quality mindset, and then you can leap into what Jason said and start contributing unit tests, maybe for that other feature, or some of your own as well. I agree testing is really, really important, and I don't want to diminish it by what I'm about to say, but I want to play a little devil's advocate on testing. I think one of the reasons testing is difficult today is not the tooling — we have great tooling now; for a while, years ago, that was the problem.
It's that when people do teach testing, they teach how to test, but we don't always teach what to test. And that becomes a problem, because yes, I can write a test that does something, but is there value in that test? And that's the real thing — this panel has not nearly enough time to get into it — but it is far more important than the how. Because you can have a hundred percent test coverage, and any time somebody tells me that, I'm usually looking at them going: what are you actually testing? How far did you go? Because the most important things are probably the business features. So I'll add one different aspect as well, which is a style guide. And obviously I'm biased, because I like creating these things, but it doesn't matter who writes it or what it says. Very bluntly, it doesn't matter what it says, as long as your team follows it. A big piece of code quality, when you have three people or 3,000 people on your dev teams, is having a style guide that you can enforce through some kind of linting rules. That's critically important, for so many reasons. And consistent code is far easier to maintain, as we said in the beginning. Say Jason makes a pull request and he changed one line of code, but he used three spaces instead of two in his code files — which would be crazy, but if he does something like that, all of a sudden I've got 8,000 files in a pull request that I have to review, and I'm not doing it. I'm exaggerating, but it really is important to have a style guide that your team can follow. And again, the rules are not as important as the consistency. No, that's really good. The style guide part, yes — the linters and all of that — but more specifically the part about what you are testing. I find people really miss this. Again, I don't really fault them much, because it's not something that we teach very well. Shameless plug: I have a free university that's dedicated to testing. It's called Test Automation University; it's online, and all the courses are free. So you can take courses there to learn this stuff better. But I can't tell you how many times — and this is why I stress not just reviewing the features in code reviews, but also reviewing the tests — I've seen unit tests where, to give an example, it's testing some API request and the assertion is literally: make sure the response is not null. That's not really testing what we should be testing. Having just something come back is not the same as having the right thing come back. So really give some thought to the coverage you're adding, because these tests are there to save you. There are multiple benefits to having tests, but the biggest one, in my opinion, is the fast feedback — and saving you from really ugly, nasty, costly bugs. We can catch these before they go to prod if we test appropriately.
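A minimal sketch of the difference Angie describes, in Jest (the client, endpoint, and response shape are hypothetical):

```js
test('GET /products returns items that are actually sellable', async () => {
  const response = await api.get('/products'); // hypothetical API client

  // Weak: passes as long as *anything* comes back.
  expect(response).not.toBeNull();

  // Stronger: asserts what the business actually cares about.
  expect(response.status).toBe(200);
  expect(response.body.items).toHaveLength(2);
  expect(response.body.items[0]).toMatchObject({ sku: 'ABC-123', inStock: true });
});
```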
These are all really important points. One thing I just wanted to add is that a test should always represent a user, in my opinion, and I think it's important to understand and think about which user I am testing for. For a much lower-level test — a unit test or an integration test — the user you probably have in mind is another development team. So you may be writing a test against an API, or against a function call, or a class, or something like this, keeping in mind that the user in this situation is another development team that might be interacting with this function or this class and what have you. And then for an end-to-end test, it could be an actual user out there interacting with your web app, with your backend service — whatever it is you're developing. I think it's important to keep in mind who your user is for each test you're writing. And in doing that, you'll do a much better job of avoiding testing the implementation — what we call a white-box test or a gray-box test — which, generally speaking, is something you should try to avoid. Yes, definitely. We've already talked a little bit about reviews, and we actually have a question on that. Paul asks: how do you feel about pull request reviews? What are your steps? How much time do you put in? What's the workflow? I think we all probably have opinions on how to do reviews, and I don't think there are a lot of wrong ways — there are a lot of right ways to do it, for sure. One of the wrong ways is to look at the code in GitHub, not run it, not pull it locally, not make sure it works, and just go: oh yeah, I trust Jason, Jason does good work — and merge. A big thing for reviews, the way I like to do it — and I have multiple processes — on some teams, I go to GitHub and open up the review process right there, and I make comments on the code. For small changes I'll use the suggestion button, which I love, right in GitHub: you press that little plus sign, say, oh, they made a typo, and instead of making the person change it, you fix it for them. It creates a suggestion, and you can start a review — I'm known for creating 30 or 40 suggestions on a file for little things. That way the person can accept or decline each one without it becoming a big process. The second step I like is to run the tests. If it's got CI/CD built in, I go look at the results there. Or you can just open up a container or something locally, pull the code down, run the whole thing, and exercise the changes they made. So there are lots of things to do. It should never just be: oh yeah, Angie's great, let me press the button. Yeah, that is so important. Because a lot of people get caught there, right? You're basically becoming a human linter, checking for things the linter has already checked for — "looks good to me," a few very minor suggestions. But no, just like you said, actually run the stuff. I said this to a colleague one time and their mind was blown; they literally never thought, oh, maybe I should actually run it. Another thing I love to do is pull the issue, right? Pull the issue, see what the requirement actually was, and determine whether this implementation meets that requirement. Because it could be implemented great — the code is beautiful, code quality, yes — but it doesn't meet the business need that was requested. So that's where I usually start. I run it. I look at the tests, and make sure we have some good coverage there. I also take a step back and think: okay, if this person won the lottery tonight and I had to inherit this, what does that mean for me? Is it readable? Would I be able to take this over if I had to?
And if not, let me call out the issues that would prevent me from doing that. That's a very good point — I think that's really, really good. I agree with everything that's been said so far, especially what you said, Angie, about whether this meets the business goals. I'm imagining a slightly hypothetical situation: let's assume your code already has really good test automation and linting, and you're very confident with anybody being able to make contributions. At that point, what is left for you to review, right? So I think everything that's been said here is perfect — I don't really have much to add. Something I'd like to pull onto this — Angie, you made me think of it while you were talking — is pull requests and commits. I think we're all making a few assumptions here, because we live in the worlds we live in. Sometimes I see pull requests where people do — I'll make an analogy to the political system of many countries. A bill in the United States, for example, goes out: we're going to paint all the streets purple — just picking something weird. That's the bill, and that's the feature Angie's talking about in this case; that's what the code's supposed to do. But then I sneak in: well, let's add this button. And Jason sneaks in: well, let's add this thing. And all these little things get snuck into the bill — or the code, in this case. It makes a pull request really difficult to review, because you can't really separate all these features. And think about the person looking at the code: they're looking at the feature, like Angie said, going, all right, that's what it's supposed to do — but why is all this other stuff happening? It just causes a lot of friction and swirl. It's much better to create separate pull requests and separate issues. I know it's more process, but it's a lot easier for somebody else to look at, and you get a lot fewer errors. Yes, definitely. We have another question. Kevin asks: you talked about deploying — do you think end-to-end testing is primordial to being confident in your deployment? No. It's helpful. Is it what? Is it required? Primordial is the word that was used. I think it's helpful. End-to-end testing — I love it, I think it's very helpful. But I wouldn't be confident, just because my end-to-end tests passed, that it would still deploy to production properly. I'd still want full CI/CD. I'd want to see it on a staging server, with the staging environment as much like production as you can get — including SSL and certificates and environment variables and everything else. Even then, I think all of us have shipped something to production where, with all of that, something didn't go right. So always, always, always have a backup plan. Always have one. Speaking of backup plans — and we already talked a little bit about spotting good code and bad code — Jean asks: do you recommend tools like Sonar to measure quality, or to help with spotting good and bad code? To echo what some of the other panelists said earlier: as long as you're consistent about what your definition of good code is, and what your linting setup is, and things like that, that's what matters. In my view, there isn't any one tool that's going to be absolutely perfect for defining code quality, but consistency is really crucial here. Yeah.
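As a minimal sketch of what "consistent and enforced" can look like (the specific rules are illustrative, not a recommendation), an ESLint config encodes the team's decisions so no human has to police them:

```js
// .eslintrc.js — codify the agreed style; CI fails fast instead of reviewers nitpicking.
module.exports = {
  extends: ['eslint:recommended'],
  rules: {
    'brace-style': ['error', '1tbs'], // one agreed answer to "where does the curly bracket go?"
    indent: ['error', 2],             // two spaces, so diffs stay small
    'no-unused-vars': 'error',
  },
};
```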
And I think, more than anything, a tool like that gives the team a framework to have a discussion: what does good quality code mean for us? What are the rules we all agree to? That way, you remove a lot of the back-and-forth and opinionated comments from code reviews. You get all of that out of the way so you can focus on what truly matters. Agreed, 100%. And you're right, the tools don't really matter — the consistency does. One thing I like about SonarQube in general, from a company I've worked at: it creates metrics, and the metrics don't really mean anything on their own — it's just a number. But the rule we had at that company, as part of the consistency, was: never do any harm. Every time you put new code in, whatever that metric is, that number should never go lower. As your code goes in, it should either stay where it is or go higher. So you can't just go in there and destroy the application because, oh yeah, I feel like that today. That sounds like a very good rule to have. We've talked a lot about how to get into quality. But say I switch companies — I take a new job, and the company is kind of anti-quality; it doesn't see the value of quality. Or, as Johan asks, the other team members don't see the added value of having quality code. How would you handle this situation? How would you convince your team and your boss that quality is something you should invest time in? If I could start: I would try to lead by example, of course. That's always going to be an impactful personal thing you can do, and hopefully others see the results in what you do. A bigger, longer-term answer — something I've been working on recently — is that teams need to do their best to tell leadership what kind of impact they're going to have by spending time tackling tech debt, writing tests, and writing more automation. If you can tell a greater story around that — for instance, being able to say, I think we can speed up our development process by 20% if we invest in a CI/CD pipeline — these are conversations with a very clear impact, and the business responds to impact. So those are the two things I would recommend. That's it right there, that's the key. Companies are just not in the business of doing things because, oh, this is the right thing to do. As much as we want them to — listen, it just doesn't work. So you have to provide the benefits: we'll be able to move faster; this will increase or decrease X by Y. You have to give these metrics and these incentives to get any change done, I've found. And unfortunately, this is very common: people say, we don't have time for this, we don't know how — a million different excuses. I've been doing this long enough that I can advise folks — like, I see the train coming. Okay, you continue down this path, this is what's going to happen next. I try to give a lot of warnings, and as they start seeing those things happen, people go, okay, maybe she knows what she's talking about, and try to adjust. But unfortunately, a lot of the time it takes some kind of catastrophe — some money lost or something like that — to get people to buy in. So for everybody out there thinking about this: these are fantastic points, and we can tell people these stories. We're storytellers.
That's what we do with presentations. We've had these experiences. But somewhere in your life, somebody told you something and you completely ignored it — we've all done this. When I was a kid, my grandmother told me not to put my hand on the hot stove, and I got mad and said, yeah, whatever, it's not that hot. And what happened? I put my hand on the hot stove and had a nice big burn on my hand as a little kid. You don't do it again after that. My point is, you can tell somebody something 20 times, but until you actually experience it, you don't really get it. Those are the real experiences that drive us. So what I do with companies a lot of the time — the last company I worked with on this had hundreds of apps in production. Instead of trying to work quality into everything right away, we picked a small but still mission-critical project with a smaller scope, and we did it with full code quality — all the things we're talking about here. And when it went live, it was the first app that company had put live in many years that had no bugs after launch. They had no 24-hour maintenance calls for weeks and weeks afterwards. Showing that scorecard to the executives — okay, maybe you implemented a few fewer features because you did more quality, but nothing broke, and the customer experience was great. That experience of them touching the hot stove, measured against all the past experiences, was far more valuable than me standing there saying, I told you so. Yes, experiencing the pain is often the best teacher. Do you think there are situations where quality doesn't matter? At least in my experience, often — especially in the startup scene — quality gets completely thrown out the window in the beginning, to just push the product and make money. Do you think that's a viable way to go? Are there situations where it's okay to just say: tests not needed, quality doesn't matter? I think it always matters, but it's a matter of degree. There's always a sliding scale of how much you need to do. And the ultimate thing — for all of us to have jobs — is the business: the business goals and values, and that companies have to make money. That said, you aim for the most quality you can get, and then you figure out the reality of the time, the budget, the resources, and everything else you have. We can do things to mitigate this, like I mentioned before. One thing we can do is become better communicators with our business stakeholders: if we can really talk about the value of quality, and why we need scope creep not to happen — keep the scope where it is, so it's not a moving target — these things help. But in the end, yes, there are things that always slip, because this is a business world. You just have to very carefully weigh: if this slips, these are the things you might be risking. I agree with that. The only other situation I can think of, where quality or testing isn't the most important thing, would be if I'm developing a prototype I intend to throw away. Although be careful with that, because I've personally been in so many situations where that prototype never got thrown away. So be aware that that's a pretty common pattern. You got some prototypes that are live out there, Jason? Hundreds. We have one more question from the audience. Niels says: I love the distinction between functional quality and structural quality.
Of course it must do what the user needs, but is it sustainable under the hood? Any thoughts on that? That's a tough one for me to answer, but I'll try to break the ice and talk a little. I think everyone has a different thought on how you should structure your software. In my view — and I'm happy to be wrong and for folks to disagree with me — I haven't seen clear advantages of one pattern over another, for the most part. The thing that's important is that you're consistent, like we've talked about so far. As long as you've identified what pattern you and your team prefer and how you're constructing the software, take the extra time to build in some automation, so that people outside your team, or new joiners, can understand how to contribute to that pattern, and you don't need a manual review each time. In my view, that's the most important thing; I don't subscribe to any one pattern in particular. I think a big key — a friend of mine, Brian Holt, had a good saying, and I'll mess up his exact wording, but it's basically: if you can't automate it, it's not worth it. So if you come up with a rule — I'll make up a silly one, like curly bracket cuddling; a good friend of mine years ago asked, should the curly bracket be on the same line as the if, or on the next line? And I'm like, I don't care. Just pick one, make a rule, and automate it, and we'll all do it. This was about 10 years ago, when linters really weren't that great. But that's the kind of thing where, if you can automate your rules, it's much better and much easier to keep these guidelines, that consistency, and your structural integrity. Yes, definitely. Pedro asks: should we incorporate refactors into our current stories, or create technical chores for the more extensive refactors? We already talked a bit about pull requests becoming a muddle of all sorts of stuff. What are your thoughts on this question? In my experience, you don't really get cycles dedicated to refactoring; you have to squeeze it in where you can. That doesn't mean you muddy up features with a whole bunch of refactoring, but I like to tackle it as it's needed. For example, if I'm modifying an existing feature and I find that, in order to implement it, a refactoring of some things would be nice, I'll try to get that in. If it's really big, I'll split it up: I'll split out the refactoring, push that first, and do the actual feature after that, to make sure everybody's cool with the refactoring before I dump all of it into one. But yeah, trying to find dedicated time for refactoring, and getting that buy-in from management, is really difficult, so you kind of need to do it as needed. Yes, I totally agree. To build on that: even as the business asks you to implement new features or launch things, do your best to incorporate these refactors into your changes as you go along. Ideally, an iterative refactor is the way to go. Well, we only have a few minutes left. Are there any last pieces of advice, any last words of wisdom that you want to share with the audience? I want to say that folks really need to get out of the habit of thinking of code quality as a separate task. This should be something that you take with you as you develop your features.
It breaks my heart when people say things like: oh, we don't have time, or, the client doesn't want to pay for testing, and stuff like that. When you go and buy a car, you don't ask, okay, do I need to pay extra for them to test whether it works? You expect that to be included. The same goes for our software. People are not looking for a separate line item that says: make sure the thing I'm paying you for actually works, right? This needs to be in our minds and exercised in everything that we do in development. Yeah. I'd say my final thought is: not only is it good to create a style guide, but create your process up front. Before you start a project with a team — this doesn't take hours; it's usually a very short meeting — decide what your process is going to be when you create features. How do we decide on the features? Are we going to have testing and CI/CD? Setting those things up takes a while, yes, but making those decisions at the beginning of a project is really important, so everybody's on the same page. It really helps you avoid somebody like me saying later: Jason, why didn't you do X? Well, we never discussed that, blah, blah, blah. That's the kind of thing where you communicate with your teammates right up front, and it's just a lot easier, and it really helps quality. My view will be controversial, but what I would say is: double your estimates, and look up agile sustainable pace before you talk to leadership. This is the practice of moving at a sustainable pace, instead of moving in sprints and fits like we usually do in the software development field. Take the time to write software that's of quality, that you're proud of, and that you're confident works. That's important, and these are some ways you can maybe sneak that in. Then you can deploy with confidence on a Friday afternoon and leave for the weekend. Thank you so much, Jason, Angie, John. It was delightful to talk to you about quality, and that's the end of this panel. All right. Thank you. Bye bye. Bye bye.