Get Testing out of your Tech Debt


Technical debt and testing have a long and entangled history. Across many organizations, teams struggle to define “technical debt” and what should fall into the “tech debt” bucket. Testing commonly suffers the fate of being categorized as tech debt, and consequently isn’t prioritized. Defining tech debt, and even rebranding it, can help your team to prioritize testing and reduce the negative stigma around tech debt.

8 min
15 Jun, 2021

AI Generated Video Summary

Tech debt is a common issue that teams struggle with, but addressing it effectively requires a structured approach and prioritizing issues that primarily impact developers. Feature preservation, such as automated testing and monitoring, should be considered part of building features, not tech debt. Fixing user-related issues takes priority over developer concerns, and scaling to serve more users benefits both users and developers. Climbing out of tech debt effectively is possible with a focus on developer productivity and user benefit.

1. Understanding Tech Debt and Prioritization

Short description:

Hi, everyone. Today, I'll talk about getting testing out of your tech debt. Tech debt is a common issue that teams struggle with. It often becomes a dumping ground for tasks that don't make it to the roadmap. However, trying to fit all of that into just 15% of your team's capacity is not realistic. It slows down your team and has negative effects on morale and retention. To address tech debt effectively, it's important to have a structured approach and prioritize issues that primarily impact developers. Feature preservation should be considered part of building features, not tech debt.

Hi, everyone. My name is Carly, and today I'm going to talk to you about getting testing out of your tech debt. Okay. First, just a quick introduction. I'm a software engineer. I work at Galileo, which is a small health tech startup in New York City. Previously, I worked at Haven and ZocDoc, which are also health tech companies. I do a lot of JavaScript, TypeScript, and React. I love automation, which is why I'm here at this testing conference today, really excited. And in my spare time, I like to get outdoors.

Okay. So, let's start off with talking about sort of the state of tech debt, what I've witnessed on teams, how people are thinking about tech debt so far. I often see a roadmap, something like this, that product managers and engineering managers will work really hard to put together, figuring out exactly what the very top priorities are for the team. And then there's usually some 15-ish percent capacity that's dedicated to tech debt. And that's what I'm going to focus on today, that 15% capacity. Let's talk about what falls into there. I oftentimes see scaling fall into this 15% capacity, testing might fall into there, upgrades and maintenance will be categorized as tech debt, bugs will be categorized as tech debt. And it sort of starts to feel like it's the junk drawer of software engineering, where anything that isn't explicitly put on the team's roadmap is kind of dropped into this tech debt 15% capacity drawer, and the team tries to get it all done within those constraints.

The reality is, though, I just don't think you're going to fit all of that stuff into just 15% of your team's capacity. There's actually a lot of work to do here. I've been on teams that have tried to fit all that in, and I've tried it myself. And the result I see is that it really slows down your team. Your team starts to suffer because the tech debt is just growing and is never really being fully addressed. So it's, of course, bad for your business when your team can't ship features quickly. It also has these second-order effects: developer happiness starts to suffer, because people don't usually want to work on teams that are saddled with a ton of tech debt. So you can have problems with morale and retention, and you can even have problems with hiring, because people often don't want to join teams with a ton of tech debt. So it's obviously not great to be in that place where tech debt just keeps mounting, but how do you get out of there? I think a solution is to apply some more structure and rigor around how your team decides what qualifies as tech debt. And what I've seen work well is this one philosophy that I've come up with, or that I've learned from others, which is that tech debt is issues that primarily impact the developer rather than the user. That means things that are going to make the developer more productive and more efficient, rather than things that directly improve the user experience. And the less obvious follow-on to that is that I think feature preservation is in the scope of building features and should not be categorized as tech debt.

2. Feature Preservation and Tech Debt

Short description:

Feature preservation, such as automated testing and monitoring, prevents features from breaking and impacting users. It's crucial to have product manager buy-in for including these in the feature scope. Fixing user-related issues takes priority over developer concerns. Refactoring and writing tests are examples of tech debt and feature preservation, respectively. Scaling to serve more users benefits the users, not just developers. Applying this philosophy requires judgment and nuance, focusing on developer productivity versus user benefit. Climbing out of tech debt effectively is possible with this approach.

And when I say feature preservation, I'm talking about things like automated testing and monitoring: things that make the whole system aware that a feature exists and prevent a feature from disappearing or breaking without the developer team knowing about it. The reason I categorize this as not tech debt is because if these features break, the real impact falls on the user. It really is the user who suffers if a feature breaks. And so I think it's very much in the user's interest to have feature preservation built in from the start.

So let me just double click on this philosophy. When you get a ticket or a project from a product manager, you usually have some very explicit scope built in, which is to make that new thing work. And then there's kind of some implicit scope hidden underneath, which is to not break the old things. If you were to break the old things, the person that's gonna suffer, again, is the user, rather than the developer. And so for that reason, I think it's really important to have product manager buy-in that things like automated testing and monitoring need to be in scope of the feature, because the users will suffer if you don't have that.

Okay, cool. So now that we have that philosophy, let's try to pressure test it against some more real-life situations. And as we're going through these, I want to again focus on who is benefiting from solving this problem. First one: let's say you have a ticket to fix a bug, like a broken tooltip. This one I would categorize as for the user, and so it's not tech debt. It's probably not the developer who's suffering from this broken tooltip. It's probably the user, and so it should be prioritized against all the other user concerns. How about a code cleanup ticket, where we're going to refactor some huge authentication file into multiple modules? This one, I'd say, is a great example of tech debt. This is for the developer. This is probably in order to help developers move faster and understand things more quickly, and so this is a great use of that 15% capacity. Testing, like writing tests for a new model? I'd say this is feature preservation, and so it's not tech debt, and it really needs to be baked into the estimation of the original creation of the feature. And scaling, like making a user service work for 10,000 people? Again, this is actually for the user, and so I'd say it's not tech debt. Despite the fact that this feels like a very technical project, doing this work isn't going to make the developer more productive or more efficient. It's really to serve your users better, and so I'd say this isn't tech debt. This, again, should be prioritized against other user concerns.
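The walkthrough above can be sketched as a tiny categorization rule. This is only an illustration of the talk's rule of thumb, not a real triage tool; the ticket titles mirror the talk's examples, and the `beneficiary` field is an assumption standing in for the team's judgment call about who primarily benefits.

```typescript
type Beneficiary = "user" | "developer";

interface Ticket {
  title: string;
  beneficiary: Beneficiary; // who primarily benefits from doing this work
}

// The talk's rule of thumb: only work that primarily helps the developer
// belongs in the 15% tech-debt capacity; user-facing work is prioritized
// against the rest of the roadmap instead.
function isTechDebt(ticket: Ticket): boolean {
  return ticket.beneficiary === "developer";
}

const tickets: Ticket[] = [
  { title: "Fix broken tooltip", beneficiary: "user" },                   // bug fix
  { title: "Refactor auth file into modules", beneficiary: "developer" }, // cleanup
  { title: "Write tests for new model", beneficiary: "user" },            // preservation
  { title: "Scale user service to 10,000 people", beneficiary: "user" },  // scaling
];

const techDebt = tickets.filter(isTechDebt).map((t) => t.title);
// Only the refactor lands in the tech-debt bucket.
```

The hard part in practice is assigning `beneficiary`, which is exactly the judgment call the next section acknowledges.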

Okay, so I recognize that I've sort of portrayed this scenario as very black and white, but it's not going to be, right? There are going to be lots of things that are maybe good for the user and good for developer productivity. And so it'll take judgment and nuance to apply this philosophy, for sure. But as you're thinking about it, just concentrate on the two ends of this spectrum: tech debt being primarily for developer productivity, and not tech debt being anything that's really for the user's benefit. And hopefully, with this structure, you can start to climb your way out of tech debt more effectively with that 15% capacity.

Okay. Thank you. That's my talk. Just a quick mention that Galileo is hiring, so definitely reach out to me if you're interested in that. And you can find me on Twitter at Carly Jo.

You probably know the story. You’ve created a couple of tests, and since you are using Cypress, you’ve done this pretty quickly. Seems like nothing is stopping you, but then – failed test. It wasn’t the app, wasn’t an error, the test was… flaky? Well yes. Test design is important no matter what tool you will use, Cypress included. The good news is that Cypress has a couple of tools behind its belt that can help you out. Join me on my workshop, where I’ll guide you away from the valley of anti-patterns into the fields of evergreen, stable tests. We’ll talk about common mistakes when writing your test as well as debug and unveil underlying problems. All with the goal of avoiding flakiness, and designing stable test.