
Panel Discussion: Code Quality
Thanks for having me. I'm Anna, and joining me here are three awesome people to discuss quality. We have John Papa, who is a professional web and mobile developer and currently works as a developer advocate for Microsoft. He's also very active in the open source community and hosts the podcast Real Talk JavaScript. We also have Angie Jones with us, who is a principal developer advocate at Applitools and specializes in test automation strategies and techniques. She is also a very creative inventor and volunteers with Black Girls Code. Last but not least, we have Jason Palmer here with us, who is a Jest core contributor and the creator and maintainer of jest-junit. He is also an engineer and technical product manager at Spotify, where he focuses on infrastructure, CI/CD, and test automation.

As Matti already said, we're going to talk about quality. Quality is indeed a topic that is discussed a lot, and one where many people have many different opinions. But I would like to start by talking about what quality actually is: what are the aspects of quality that we should consider? I'd like to go around the panel, and let's maybe start with Jason. Jason, what do you think quality actually is when it comes to JavaScript projects?

I think there are a lot of different ways to look at it. But at the end of the day, if people outside of your team can easily contribute to your software and you're confident that it's not going to break, and if you're able to continuously deploy to production, so you make a commit to the main branch and it's able to go directly to production and you're confident about that, you know you've done a good job. That's at least when you know you've developed something of high quality.

Yes, if you can leave the office Friday afternoon, push to prod shortly before, and then head into the weekend, that's definitely a good feeling. Angie, what do you think?

Yeah, I definitely agree with that. I think the key here is, one, that it works. That's very obvious. But, you know, that it works for your users, right? Not just on your own local machine and only on the very happiest of paths, but for your users. And a key part of what Jason said is that the team members are able to contribute to it. So that means we're able to read it, we're able to refactor if needed. No one is scared to touch certain modules. That's when you know you're in bad shape, when it's like: oh, I don't want to touch that. So those are the characteristics, I feel, of quality code.

Yes, definitely. If everybody, even new team members, can just participate in it. John, what's your take on quality?
First, I think "characteristics" is the key word Angie just mentioned. This is one of those topics where it's hard to describe code quality, but you know it when you see it, and you absolutely know it when it's not there. So it's easy to spot those kinds of things. But it all comes down to what Jason and Angie said: does it work? Does it meet the business needs? And I look at the long-term viability of a project as a big sign of code quality: does the project live for the term it's supposed to live? Once it goes live, we all know there are issues and maintenance and things to work on, which is where we get to the contributions we all talked about here. That's got to be something that keeps the project moving. The sign for me of really bad code quality is when people start talking about: wow, we just deployed this thing, we have an update, it's too hard to update it, let's just redo it from scratch. That happens far too often in the business world out there.

And I want to say that it's really evident, without even looking at the code, when it's of poor quality. I think we can all attest to situations where we've used some product as an end user and you just know that what's underneath is not of quality. You can see it even from the outside.

Yeah, definitely. You can easily spot bad code, but is there also a way to spot or measure good quality code? You're nodding; feel free to elaborate on that.

Yeah, there are lots of clean coding principles and things like that out there that give us all some direction on how to go about developing good quality code: things are not too big, or too long, or too abstract. There are certain principles that I've followed throughout my career, and that's typically led to good quality code. Another way, I think, is how testable your code is, right? I feel like that's one that's glossed over quite a bit, but it gives so much insight when you think about your coding from that angle. Whether you practice TDD, which is test-driven development, or you test your code after you've written it, that's fine too. That is legal. But how testable your code is is a really good indicator, I've found, of how well it's written.

Yes, I've had that experience as well. I agree 100% on testability there.

One sign of code quality I really like is when things become testable. I hear a lot of folks say: I can't write code that's testable. I do a lot of coaching sessions with them, and a lot of times we'll ask why. Why isn't it testable? What it really comes down to, quite often, is that the code is, as you mentioned, Angie, too long. What's too long? I think that's subjective. But when your code is doing multiple things, when a function, for example, is doing seven things and you've got tons of comments explaining what all the ifs and thens and switches and for loops are doing, that's a great sign that maybe these things should be separate, because then I could test each individual thing on its own instead of wondering where in this long journey something went wrong.

The other thing I look for in code quality, and recommend to folks, is a thing I came up with because I like Disney (I used to work there, and we do these weird things): the seven D's. None of them stand for Disney. I think of code quality in these seven areas. You've got to design your application; you must handle that well. You must develop it well, with code quality and styling and your guidelines. You must also have documentation of some kind in your company for it. Another D in there is destroy: can you destroy it? Can you test it? Make sure that your code works under bad conditions, not just the happy path. There's also going to be a demo. How many of us write some kind of test harness to make sure the thing works, in a separate app somewhere? Why not include that as part of your repo, so people can do an end-to-end test on that feature or integration as well? Then you also get down to things like deployment: can you deploy your code out to production? The final one is that I couldn't come up with a good D for review, so I called it "da review," because I'm Italian. You've got to do da review, because you've got to make sure you get a peer review from other people. If I write code, Angie and Jason have to review it. Not me, not the person who wrote it.
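To make John's point about functions doing seven things concrete, here is a minimal sketch (hypothetical order-processing names, not an example from the panel) of splitting one overloaded function into small units that can each be tested on their own:

```js
// Before: one processOrder() that validated, priced, and discounted in a
// single body full of ifs and comments. After: each step is its own
// small, independently testable function.

function validateOrder(order) {
  if (!order.items || order.items.length === 0) {
    throw new Error('Order must contain at least one item');
  }
}

function calculateTotal(order) {
  return order.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

function applyDiscount(total, discountRate) {
  return total * (1 - discountRate);
}

// The top-level function becomes plain composition, and each piece
// can now be exercised alone in a unit test.
function processOrder(order, discountRate) {
  validateOrder(order);
  return applyDiscount(calculateTotal(order), discountRate);
}

module.exports = { validateOrder, calculateTotal, applyDiscount, processOrder };
```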
That's a great point. I think the way I look at quality is: how are you constructing the automation that goes into every code change? So give some really good thought to how much you want to test before code is merged into the main branch, and how many things you're okay testing while it's in production, meaning some form of monitoring to understand how your users are interacting with the system and how it's performing out there in production. Basically, having a conscious thought around this, and planning ahead for what makes the most sense for you and your team and what kind of velocity you want to have, makes a whole lot of sense. That's a sign of quality in my view.

Okay. And say I'm a person who just started coding. What do you think is a good way to learn about quality and to get into this whole topic? Where should I start?

I think it's probably pretty easy to learn how to write unit tests right away. Soon after you're getting the hang of writing code and getting something working, it's not too much of a stretch to take it a little bit further and write that first unit test. I think that's a great first thing you can do, just to get a feel for what testing is like and how much of an impact it can have on your code as you develop it.

Yeah. Unfortunately, testing is one of those things that isn't typically taught when we're learning to code, whether that's a traditional university path, a bootcamp, self-teaching, whatever. It's one of those side notes, if you even get it at all. So it's very much still something you learn on the job. I strongly recommend people review other folks' code, as John mentioned. In code reviews, don't just look at the feature; look at the unit tests that were checked in as well. That gives you some insight into how other people approach their testing. And then challenge yourself: did they miss a case? Can you think of anything else that's not so far out there that we wouldn't bother including it, something of value that was missed in the unit tests? This gets you into the thought process. You step away from construction mode and start thinking more about, for lack of a better word, what John said: destroying. You think about how to exercise this code and what other legitimate scenarios your code should definitely cover. And in that, you start thinking more in this quality mindset, and then you can leap into what Jason said and start contributing unit tests, maybe for that other feature or for some of your own as well.
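For someone writing that very first unit test Jason describes, a minimal Jest sketch (hypothetical function and file names) might look like the following; the second test is the kind of missed case Angie suggests watching for in review:

```js
// sum.js
function sum(a, b) {
  return a + b;
}
module.exports = sum;

// sum.test.js
const sum = require('./sum');

test('adds two positive numbers', () => {
  expect(sum(1, 2)).toBe(3);
});

// The kind of case a reviewer might notice was missed:
test('adds negative numbers', () => {
  expect(sum(-1, -2)).toBe(-3);
});
```

Running `npx jest` picks up any `*.test.js` files and reports the results.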
I agree testing is really, really important, and I don't want to diminish it by what I'm about to say, but I want to play a little devil's advocate on testing. Tooling is not the problem anymore; for a while, years ago, it was, but now we've got great tooling. I think one of the reasons testing is still difficult today is that when people do teach testing, they teach how to test, but we don't always teach what to test. And that becomes a problem because, yes, I can write a test that does something, but is there value in that test? That's the real thing this panel doesn't have nearly enough time to get into, but it's far more important than the how, because you can have one hundred percent test coverage, and any time somebody tells me that, usually I'm looking at them going: what are you actually testing? How far did you go? Because the most important things are probably the business features and whatnot.

I'll add one different aspect as well, which is a style guide. Obviously I'm biased, because I like creating these things, but it doesn't matter who writes it or what it says. Very bluntly, it doesn't matter what it says, as long as your team follows it. A big piece of code quality, whether you have three people or 3,000 people on your dev teams, is having a style guide that you can enforce through some kind of linting rules. That's critically important for so many reasons, and consistent code is far easier to maintain, as we talked about in the beginning. Say Jason makes a pull request and he changed one line of code, but he uses three spaces instead of two in his code files, which would be crazy. If he does something like that, all of a sudden I've got 8,000 files in a pull request that I have to review, and I'm not doing it. I'm exaggerating with things like this, but it really is important to have a style guide that your team can follow. And again, the rules are not as important as the consistency.

No, that's really good. The style guide part, yes, the linters and all of that, but more specifically the part about what you are testing. I find people really miss this. Again, I don't really fault them much, because it's not something we teach very well. Shameless plug: I have a free university that's dedicated to testing. It's called Test Automation University. It's online, and all the courses are free, so you can take courses there to learn this stuff better. That's also why I stress not just reviewing the features in code reviews, but reviewing the tests too. I can't tell you how many times I've seen a unit test, say it's testing some API request, where the assertion is literally: make sure the response is not null. That's not really testing what we should be testing. Having just something come back is not the same as having the right thing come back, for example. So really give some thought to the coverage you're adding, because these tests are really there to save you. There are multiple benefits to having tests, but the biggest one, in my opinion, is the fast feedback, and that they save you from really ugly, nasty, costly bugs. We can catch those before they go to prod if we test appropriately.
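A sketch of the difference Angie calls out, using a made-up getUser wrapper with an injected HTTP client so the tests can run without a server: the first assertion only proves that something came back, the second proves the right thing came back:

```js
// Hypothetical API wrapper; the HTTP client is injected so tests can fake it.
async function getUser(id, client) {
  const response = await client.get(`/api/users/${id}`);
  return response.data;
}

const fakeClient = {
  get: async () => ({ data: { id: 42, name: 'Ada' } }),
};

// Weak: passes as long as *anything* came back.
test('returns a response', async () => {
  const user = await getUser(42, fakeClient);
  expect(user).not.toBeNull();
});

// Meaningful: fails unless the *right* user came back.
test('returns the requested user', async () => {
  const user = await getUser(42, fakeClient);
  expect(user).toEqual({ id: 42, name: 'Ada' });
});
```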
These are all really important points. One thing I just wanted to add: a test should always represent a user, in my opinion, and I think it's important to understand and think about which user you are testing for. For a much lower-level test, a unit test or an integration test, the user you probably have in mind is another development team. You may be writing a test against an API, or against a function call or a class or something like that, and the user in this situation is another development team that might be interacting with that function or that class. An end-to-end test, on the other hand, could represent an actual user out there interacting with your web app, with your backend service, whatever it is you're developing. I think it's important to keep in mind who your user is for each test that you're writing. And doing that, you'll do a much better job of avoiding testing the implementation, which is what we call a white box test, or a gray box test. Generally speaking, that's something you should try to avoid.
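A small sketch of the white-box trap Jason mentions, with a hypothetical Cart class: the first test pokes at internal state and breaks on any refactor; the second exercises only the behavior another team would actually depend on:

```js
class Cart {
  constructor() {
    this._items = []; // internal detail, free to change
  }
  add(item) {
    this._items.push(item);
  }
  total() {
    return this._items.reduce((sum, item) => sum + item.price, 0);
  }
}

// White box: coupled to the internal storage, breaks if it changes.
test('stores items in the _items array', () => {
  const cart = new Cart();
  cart.add({ price: 5 });
  expect(cart._items).toHaveLength(1);
});

// Black box: tests the contract a consuming team relies on.
test('totals the prices of added items', () => {
  const cart = new Cart();
  cart.add({ price: 5 });
  cart.add({ price: 7 });
  expect(cart.total()).toBe(12);
});
```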
Yes, definitely. We've already talked a little bit about reviews, and we actually have a question on that. Paul asks: how do you feel about pull request reviews? What are your steps? How much time do you put in? What's the workflow?

I think we all probably have opinions on how to do reviews, and I don't think there are a lot of wrong ways; there are a lot of right ways to do it, for sure. One of the wrong ways is to look at the code in GitHub, not run it, not pull it locally, not make sure it works, and just go: oh yeah, I trust Jason, Jason does good work, and merge. A big thing for reviews, the way I like to do it (and I have multiple processes on different teams): I go to GitHub and open up the review process right there, and I make comments on the code. For small changes, I use the suggestion button, which I love, right in GitHub. You press that little plus sign and, say they made a typo, instead of making the person change it, you fix it for them, and it creates a suggestion. You can actually start a review that way, and I'm known for creating 30 or 40 suggestions on a file for little things. That way the person can either accept or decline each one without it becoming a big process. The second step I like is to run the tests. If it's got CI/CD built in, I go look at the results there. Or you can open up a container or something locally, pull the code down, run the whole thing, and exercise the changes they made. So there are lots of things to do. It should never just be: oh yeah, Angie's great, let me press the button.

Yeah, that is so important. A lot of people get caught there, right? You basically become the human linter, checking for things the linter has already checked for, and you're like: oh, looks good to me, very minor suggestions. But no, just like you said, run the stuff. I said this to a colleague one time and their mind was blown; they literally never thought: oh, maybe I should actually run it. Another thing I love to do is pull up the issue, see what the requirement actually was, and determine whether this implementation meets that requirement. Because it could be implemented beautifully, code quality, yes, but not meet the business need that was requested. So that's where I usually start. I run it. I look at the tests and make sure we have some good coverage there. I also take a step back and think: okay, if this person won the lottery tonight and I had to inherit this, what does that mean for me? Is it readable? Would I be able to take this over if I had to? And if not, let me call out the issues that would prevent me from doing that.

That's a very good point. I think that's really, really good. I agree with everything that's been said so far, especially what you said, Angie, about whether this meets the business goals. I'm imagining a slightly hypothetical situation: let's assume your code already has really good test automation and linting, and you're very confident about anybody being able to make contributions. At that point, what is left for you to review, right? So everything that's been said here is perfect; I don't really have much to add.

Something I'd like to pull onto this, because, Angie, you made me think of it while you were talking: pull requests and commits. I think we're all making a few assumptions here, because we live in the worlds we live in. Sometimes I see pull requests where people do what happens with bills in the political systems of many countries. A bill in the United States, for example, might go out as: we're going to paint all the streets purple, just to pick something weird. That's the bill, and that's the feature Angie's talking about in this case; that's what the code is supposed to do. But then I sneak in: well, let's add this button. And Jason sneaks in: well, let's add this thing. All these little things get snuck into the bill, or the code in this case. It makes a pull request really difficult to review, because you can't separate all these features. And think about the person looking at the code. They're looking at the feature, like Angie said, going: all right, that's what it's supposed to do, but why is all this other stuff happening? It just causes a lot of friction and swirl. So it's much better to create separate pull requests and separate issues. I know it's more process, but it's a lot easier for somebody else to look at, and you end up with a lot fewer errors.

Yes, definitely. We have another question. Kevin asks: you talked about deploying; do you think end-to-end testing is primordial to be confident in your deployment?

No. It's helpful. Is it what? Is it required?

Primordial is the word that was used.

I think it's helpful. End-to-end testing, I love it. I think it's very helpful. But even if I ran an end-to-end test and it worked, I wouldn't be confident that it would still deploy to production properly. I'd still want full CI/CD. I'd want to see it on a staging server that's as much like production as possible, including SSL and certificates and environment variables and everything else, as close as you can get. Even then, I think all of us have shipped something to production where, with all of that, something didn't go right. So always, always, always have a backup plan. Always have one.

Speaking of backup plans, and we already talked a little bit about spotting good code and bad code: Jean asks, do you recommend tools like Sonar to measure quality, or to help with spotting good and bad code?

To echo what some of the other panelists said earlier: as long as you're consistent about what your definition of good code is, and what your linting setup is, and things like that, that's what matters. In my view there isn't any one tool that's going to be absolutely perfect for defining code quality, but consistency is really crucial here. Yeah.
And I think, more than anything, it gives the team a framework for a discussion: what does good quality code mean for us? What are the rules here that we all agree to? That way, you remove a lot of the back and forth and the opinionated comments from code reviews. You get all of that out of the way so that we can focus on what truly matters.

Agree, 100%. And you're right, the tools don't really matter, and the consistency does. One thing I like about SonarQube in general, from a company I've worked at: it creates metrics, and the metrics don't really mean anything on their own; it's just a number. But the rule we had at that company, as part of the consistency, was: never do any harm. Every time you put new code in, whatever that metric is, that number should never go lower. As your code goes in, it should always either stay where it is or go higher. So you can't just go in there and degrade the application because, oh yeah, I feel like it today.

That sounds like a very good rule to have.
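One lightweight way to approximate John's never-do-harm rule in a JavaScript project, assuming a Jest setup (this is a sketch of one possible gate, not what John's company used), is a coverage threshold that fails the build whenever coverage drops below the agreed baseline:

```js
// jest.config.js
// The numbers are the team's current baseline; ratchet them up over time,
// never down, so new code can't quietly lower the bar.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 75,
      functions: 80,
      lines: 85,
      statements: 85,
    },
  },
};
```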
We've talked a lot about how to get into quality now. But say I switch companies, I take a new job, and the company is kind of anti-quality; it doesn't see the value of quality. Or, as Johan asks, the other team members don't see the added value of having quality code. How would you handle this situation? How would you convince your team and your boss that quality is something you should invest time in?

If I could start: I would try to lead by example, of course. That's always going to be an impactful personal thing you can do, and hopefully others see the results in what you do. A bigger, longer-term answer is something I've been working on recently: I think it's important for teams to do their best to tell leadership what kind of impact they're going to have by focusing time on tackling tech debt, on writing tests, on writing more automation. If you can tell a greater story around that, for instance being able to say: I think we can speed up our development process by 20% if we invest in a CI/CD pipeline, these are conversations with a very clear impact, and the business responds to impact. So those are the two things I would recommend trying to do.

That's it right there. That's the key. Companies are just not in the business of doing things because, oh, this is the right thing to do. As much as we want them to, it just doesn't work that way. So you have to lay out the benefits: we'll be able to move faster, this will increase or decrease X by Y. You have to give these metrics and these incentives to get any change done, I've found. And unfortunately, this is very common: people say we don't have time for this, we don't know how, a million different excuses. I've been doing this long enough that I can advise folks; I see the train coming. I'm like: okay, if you continue down this path, this is what's going to happen next. I try to give a lot of warnings, and as they start seeing those things happen, people go: okay, maybe she knows what she's talking about, and try to adjust. But unfortunately, a lot of times it takes some kind of catastrophe, some money lost or something like that, to get people to buy in.

So, for everybody out there who's thinking about this: these are fantastic points, and we can tell people these stories. We're storytellers. That's what we do with presentations. We've had these experiences. But somewhere in your life, somebody told you something and you completely ignored it. We've all done this. When I was a kid, my grandmother told me not to put my hand on the hot stove, and I got mad and said: yeah, whatever, it's not that hot. And what happens? I put my hand on the hot stove and had a nice big burn on my hand as a little kid. You don't do it again after that. My point is, you can tell somebody something twenty times, but until you actually experience it, you don't really get it. Those real experiences are what drive us. So here's what I do with companies a lot of the time. The last company I worked with on this had hundreds of apps in production. Instead of trying to work quality into everything right away, we picked a small but still mission-critical project with a smaller scope, and we did it with full code quality, all the things we're talking about here. Then it went live, and it was the first app that company had put live in many years that had no bugs afterwards. They had no 24-hour maintenance for weeks and weeks and weeks afterwards. Showing that scorecard to the executives is like: okay, maybe you implemented a few fewer features because you did more quality, but nothing broke, and the customer experience was great. That experience of them touching the hot stove, on top of all the past experiences, was far more valuable than me standing there saying I told you so.

Yes, experiencing the pain is often the best teacher. Do you think there are situations where quality doesn't matter? At least in my experience, often, especially in the startup scene, quality gets completely thrown out the window in the beginning, just to push the product and make money. Do you think that's a viable way to go? Are there other situations where it's okay to just say: tests not needed, quality doesn't matter?

I think it always matters, but it's a matter of degree. There's always a sliding scale of how much you need to do. And the ultimate thing, for all of us to have jobs, is the business: the business goals and values, and the fact that companies have to make money. That said, you aim for the most quality you can get, and then you start figuring out the reality of the time, the budget, the resources, and everything else you have. We can do things to mitigate this, like I mentioned before. One thing we can do is become better communicators with our business stakeholders. If we can really talk about the value of quality, and about why we need scope creep not to happen and need to keep the scope where it is, so it's not a moving target, these things help. But in the end, yes, there are things that always slip, because this is a business world. You just have to weigh very carefully: if this slips, these are the things you might be risking.

I agree with that. The only other situation I can think of where quality or testing isn't the most important thing would be if I'm developing a prototype I intend to throw away. Although be careful with that, because I've personally been in so many situations where that prototype never got thrown away. So be aware that that's a pretty common pattern.

You got some prototypes that are live out there, Jason?

Hundreds.

We have one more question from the audience. Niels says: I love the distinction between functional quality and structural quality.
Of course, it must do what the user needs, but is it sustainable under the hood? Any thoughts on that?

That's a tough one for me to answer, but I'll try to break the ice and talk a little. I think everyone has a different thought on how you should structure your software, and in my view (and I'm happy to be wrong and for folks to disagree with me), I haven't seen any clear advantages of one pattern over another, for the most part. The thing that's important is that you're consistent, like we've talked about so far. So as long as you've identified which pattern you and your team prefer and how you're constructing the software, take the extra time to build in some automation, so that people outside of your team, or new joiners to the team, can understand how to contribute to that pattern without needing a manual review each time. In my view, that's the most important thing, apart from any particular pattern. I don't subscribe to any one in particular.

I think a big key is something a friend of mine, Brian Holt, had a good saying about, and I'll mess up his exact wording, but it's basically: if you can't automate it, it's not worth it. So if you come up with a rule, and I'll make up a silly one, like curly bracket cuddling. A good friend of mine years ago asked: should the curly bracket be on the same line as the if, or on the next line? And I'm like: I don't care. Just pick one, make a rule, and automate it, and we'll all do it. This was about ten years ago, when linters really weren't that great. But that's the kind of thing where, if you can automate your rules, it's much better and much easier to keep these kinds of guidelines and consistency, and to keep your structural integrity.
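John's curly-bracket example maps directly onto lint rules. As a sketch, a team might encode that decision (and John's earlier two-versus-three-spaces example) in an ESLint config like this; brace-style and indent are real ESLint rules, and the specific choices here are just illustrative:

```js
// .eslintrc.js
// Encode the team's style decisions once, then let tooling enforce them.
module.exports = {
  rules: {
    // Cuddled braces: the { stays on the same line as the if/for/function.
    'brace-style': ['error', '1tbs'],
    // Two-space indentation, so nobody ships a three-space pull request.
    indent: ['error', 2],
  },
};
```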
Yes, definitely. Pedro asks: should we incorporate refactors into our current stories, or create technical chores for the more extensive refactors? We already talked a bit about pull requests becoming a puddle of all sorts of changes. What are your thoughts on this question?

In my experience, you don't really get cycles dedicated to refactoring; you have to squeeze it in where you can. That doesn't mean you get to muddy up features with a whole bunch of refactoring, but I like to tackle it as it's needed. For example, if I'm modifying an existing feature and I find that, in order to implement it, a refactoring of some things would be nice, then I'll try to get that in. If it's really big, I'll split it up: I'll split out the refactoring, push that first, and then do the actual feature after that, to make sure everybody's cool with the refactoring before I dump all of it into one change. But yeah, I think that finding dedicated time to do refactoring, and getting that buy-in from management, is really difficult. So you kind of need to do it as needed.

Yes, I totally agree. To build on that: even as you're asked to implement new features or launch things for the business, do your best to incorporate these refactors into your changes as you go along. Ideally, an iterative refactor is the way to go.

Well, we only have a few minutes left. Are there any last pieces of advice, any last words of wisdom that you want to share with the audience?

I want to say that folks really need to get out of the habit of thinking of code quality as a separate task. This should be something you take with you as you develop your features. It breaks my heart when people say things like: oh, we don't have time, or the client doesn't want to pay for testing, and stuff like that. When you go and buy a car, you don't ask: okay, do I need to pay extra for them to test whether it works? You expect that to be included, right? The same goes for our software. People are not looking for a separate line item that says: make sure the thing I'm paying you for actually works. This needs to be in our minds and exercised in everything we do in development.

Yeah. I'd say my final thought is: not only is it good to create a style guide, but create your process up front. Before you start a project with a team, and this doesn't take hours, it's usually a very short meeting: what is our process going to be when we create features? How do we decide on the features? Are we going to have testing and CI/CD? Setting those things up takes a while, yes, but deciding to do them at the beginning of a project is really important, so everybody's on the same page, and you avoid somebody like me later saying: Jason, why didn't you do X? Well, we never discussed that, blah, blah, blah. That's the kind of thing where you communicate with your teammates right up front, and it's just a lot easier, and it really helps quality.

My view will be controversial, but what I would say is: double your estimates, and look up agile sustainable pace for when you're going to talk to leadership. This is basically the practice of moving at a sustainable pace, instead of moving in sprints and fits like we usually do in the software development field. Take the time to write software that's of a quality you're proud of, and that you're confident works. That's important, and these are some ways you can maybe sneak that in.

Then you can deploy with confidence on a Friday afternoon and leave for the weekend. Thank you so much, Jason, Angie, John. It was delightful to talk to you about quality, and that's the end of this panel.

All right. Thank you. Bye bye. Bye bye.