Transcription
Hi everyone, thank you for joining this session. My name is Alex and I'm joining from Romania, from a city called Cluj, in the beautiful heart of Transylvania, where I work as a front-end engineer at CodeSandbox. On the side, I'm also the founder of JS Heroes, our local community that organizes the local meetups here in Cluj, as well as the JS Heroes conference that happens every spring. You can also find me on Twitter at alexnmoldovan. Today's topic is a framework for managing technical debt. I've wanted to talk about technical debt for a while, and this talk is not only about technical debt, which is inevitable for all projects and all engineering teams, but also about refactoring. I'm not going to talk about how you refactor code or about specific patterns for refactoring. I'm here to talk about the more meta layer, if you want: refactoring culture. I've been working for the past five to six years with different product teams, different engineering teams, vertical teams, horizontal teams, platform teams, and it always felt weird to me that we never really got refactoring right. As an industry, we still have this problem, and relatively few people talk about it. So I started exploring it, and the more I thought about it, the more I converged on this framework, this structure for how you build the right setup around the refactoring process. This is what I refer to as refactoring culture. The trigger for this particular talk came some months ago, when we were working at CodeSandbox on a small refactoring experiment where we introduced a new way of communicating with our back end. The back end of our code editor, which we call Pitcher, handles all the low-level primitives of what you can consider a code editor, like file system access, tasks, processes, and so on. We came up with a new way of talking to it, and now we have two ways of doing the same thing in the application: a legacy Pitcher provider at the top of everything, and inside it something called a Pitcher provider. Just looking at this code at first, just looking at the PR that introduced the change, made me scream: oh no, this is not OK, we're messing up the code, we're going to live with this forever, this is bad practice. But then I slowly started to understand that this is actually OK. There was, back then, no possible way for us to transition the entire codebase overnight and just use the new way of consuming data, so we had to support these two alternative ways. And this is what made me realize that, as long as we have a method for this, as long as there is a clear understanding across the entire team that this is how we handle the process, this is fine. This is about managing the whole problem: we have this technical debt, we are aware of it, and we are managing it. We're not stopping all development to focus on it; we are moving forward with all our feature development.
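As a rough illustration of the kind of setup being described, here is a minimal sketch, assuming React context providers. The names and the implementation (LegacyPitcherProvider, PitcherProvider, usePitcher, the client interfaces) are hypothetical and do not reflect CodeSandbox's actual code; the point is only the pattern of nesting the new data layer inside the legacy one during an incremental migration.

```tsx
// Hypothetical sketch: two data layers coexisting while consumers migrate.
import React, { createContext, useContext } from "react";

// Old and new clients expose the same capability through different APIs.
interface LegacyPitcherClient {
  readFile(path: string, cb: (err: Error | null, contents?: string) => void): void;
}
interface PitcherClient {
  readFile(path: string): Promise<string>;
}

const LegacyPitcherContext = createContext<LegacyPitcherClient | null>(null);
const PitcherContext = createContext<PitcherClient | null>(null);

// The legacy provider stays at the top of the tree; the new provider is nested
// inside it, so components can opt into the new client one by one.
export const Providers: React.FC<{
  legacyClient: LegacyPitcherClient;
  client: PitcherClient;
  children: React.ReactNode;
}> = ({ legacyClient, client, children }) => (
  <LegacyPitcherContext.Provider value={legacyClient}>
    <PitcherContext.Provider value={client}>{children}</PitcherContext.Provider>
  </LegacyPitcherContext.Provider>
);

// Migrated code uses the new hook; untouched code keeps the legacy one until
// the last consumer is gone and the legacy provider can be deleted.
export const usePitcher = (): PitcherClient => {
  const client = useContext(PitcherContext);
  if (!client) throw new Error("PitcherProvider is missing");
  return client;
};
export const useLegacyPitcher = (): LegacyPitcherClient => {
  const client = useContext(LegacyPitcherContext);
  if (!client) throw new Error("LegacyPitcherProvider is missing");
  return client;
};
```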
And at the same time, we have a plan to mitigate this problem. That is pretty much the gist of what I want to say with this talk. Instead of fighting technical debt, instead of being in a mode where whenever something pops up, whenever something is off, whenever the codebase does not reflect the quality we want it to be at, we stop all product development and tell our product managers we need to solve this because it's higher priority. That clearly becomes a problem and a struggle for product development, because you no longer have a reliable way of shipping on a cycle basis. So instead of saying, OK, I want to eradicate all technical debt as soon as it pops up, I'm thinking of a framework to manage it. That way you're not crippled by it, and you don't fall into the other extreme, where you say, oh, we're not doing any refactoring anymore, we're just focusing on features, and things just pile up. But at the same time, you're not purely reactive either, immediately trying to fix whatever pops up. This framework tries to put all of these things into place, and this is what I call the three pillars of refactoring, because obviously there are three pillars, and we're going to walk through them in a bit. The first pillar is the practices. The next one is the inventory. And the third one is the process. They might sound a bit abstract, but once I get through all of them, they will hopefully make sense for everyone. So let's take them one at a time. What are practices about? Practices are pretty much about setting that goal with the team. Sitting together and saying: in this engineering team, this is how we want to do things. For this particular project, this is how we want to approach it. This is the architecture, these are the patterns, these are the guidelines. Ideally you have them documented as well; a good practice for keeping track of these practices is to actually document them. You can have ways in which you want to structure your files and folders: do you want to structure by function or by feature? How do you want to compose components, how do you want to create components? At CodeSandbox, we have a document for what we call the general coding guidelines. These are things you can't really automate. They are decisions that just pop up. You can imagine a PR at some point where someone figures out: hey, this is how we approach this kind of problem; let's document it as a practice so that others can read it and come back to it. Later on we can cross-reference it in other PRs and say: hey, we decided three months ago that we're going to name things like this, or that we're going to use this particular abstraction like this, or that if you have this, you should extract it into a separate component. All the things that cannot really be automated and are not always black and white. There are subtle differences, and there are things your team needs to decide on. Practices is about setting the North Star of where you want your architecture to be, where you want your patterns to be, what the desired state of the codebase is, what your aim is as an engineering team.
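To make the idea of a non-automatable, documented guideline concrete, here is a hypothetical before/after example of the kind of rule such a document might contain, for instance "extract conditional rendering into a named component". The rule, the components, and the props are invented for illustration and are not taken from CodeSandbox's actual guidelines.

```tsx
// Hypothetical guideline example: a linter can't decide when a branch deserves
// its own component, but a team can agree on it and cross-reference it in PRs.
import React from "react";

type File = { name: string; isBinary: boolean };

// Before: the branch lives inline, and the intent is easy to miss in review.
export const FileRowBefore = ({ file }: { file: File }) => (
  <li>
    {file.isBinary ? (
      <span title="Binary file">{file.name}</span>
    ) : (
      <a href={`/files/${file.name}`}>{file.name}</a>
    )}
  </li>
);

// After: the branch gets a name, as the guideline asks for.
const FileLink = ({ file }: { file: File }) =>
  file.isBinary ? (
    <span title="Binary file">{file.name}</span>
  ) : (
    <a href={`/files/${file.name}`}>{file.name}</a>
  );

export const FileRowAfter = ({ file }: { file: File }) => (
  <li>
    <FileLink file={file} />
  </li>
);
```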
And then the next step is, in my mind, super important, and it's the one that most teams are missing: doing a bit of introspection into the codebase, some research, and figuring out what we are missing. What's the gap to this ideal, to these practices we have in mind? This is where we want to get, but where are we now? This is the inventory: trying to figure out what's missing. This is where you document your technical debt. What are the parts of the application that need to be upgraded? All the steps you need to take to get to that ideal scenario. You could have things in your backlog, for example. You start documenting: hey, we have specific tasks to upgrade something, to swap out an old way of doing things for a new way, to create a new component, extract a component, deprecate a system. Whatever it is, you have it documented somewhere. Some teams prefer the backlog. We did notice that backlogs are not ideal for this, because items tend to rot in the backlog, especially if you put them on low priority. We ended up with a separate document, for example, and I encourage everyone to do this. Try to figure out which key aspects of your codebase have growing technical debt. These can be very small things or bigger things. In our case, we started documenting even the smaller things in a document called technical debt accounting. Whenever you introduce some technical debt, or you suddenly realize, hey, this is technical debt that we haven't dealt with, you document it somewhere. It's not just living in someone's head that there is some technical debt; it is actually documented in a place. Once you get to that, you start building priorities. This is actually a fun story: this is code from production that we still have. We had this Friday where we implemented something called the Markdown preview, basically taking Markdown files and rendering them nicely, formatting them for the preview when you navigate the codebase. We took this as a hackathon experience: we said, okay, we have this idea, let's build it. A couple of engineers shipped it in a couple of hours. Of course, the code is awful; it's just a long list of callbacks, plugins on top of plugins, and all sorts of weird quick configurations, but it works. This particular part of the application is one component, but it's on the very edge of the architecture. It's not something people open on a daily basis and have to navigate through, it's not a central experience, and you can even disable it if you need to: it's very easy to pull that component out, and all of a sudden Markdown files just don't have the preview option anymore. When you start thinking about priorities, you have to understand that there's no way you'll be able to address all the problems in the codebase, so let's prioritize. Is this important? Is this a system or a module that people access and have to deal with on a daily basis? Then yes, let's put it at high priority and deal with it as fast as we can. If not, it can very well be a low-priority thing, or it can even be ignored; it doesn't even have to be on our radar, and we shouldn't focus on it in the process that we will talk about in a few minutes.
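As an illustration of what a technical debt accounting entry could look like, here is a small hypothetical sketch expressed as a typed record. The fields, priorities, and the two example items are invented for the example and are not the actual CodeSandbox document; the point is only that each piece of debt gets written down with a priority and a rationale.

```ts
// Hypothetical shape of a technical debt inventory entry.
type DebtEntry = {
  area: string;                          // which part of the codebase
  description: string;                   // what the debt is and why it exists
  priority: "high" | "low" | "ignore";   // outcome of the prioritization
  rationale: string;                     // why it got that priority
  plan?: string;                         // optional link to an RFC or planned fix
};

const debtInventory: DebtEntry[] = [
  {
    area: "Markdown preview",
    description: "Hackathon-quality plugin chain; hard to read, but isolated.",
    priority: "low",
    rationale: "Edge of the architecture, rarely touched, easy to disable.",
  },
  {
    area: "Pitcher data layer",
    description: "Two coexisting providers (legacy and new) for the same data.",
    priority: "high",
    rationale: "Central to the editor; every feature team runs into it daily.",
    plan: "RFC: migrate consumers screen by screen, then delete the legacy provider.",
  },
];

console.log(debtInventory.filter((entry) => entry.priority === "high"));
```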
And finally, still on the inventory, you can also add plans: planning what to do next. Not only have we discovered technical debt, documented it, and prioritized it, but we can also say: okay, for this particular piece of technical debt, the solution is to replace the legacy component with the new component, to deprecate this use case, to extract some new subsystems or subcomponents from it. Planning like this can be done via RFCs or any sort of document that explains the steps we need to take to get from A to B. Coming back to the first two pillars: your practices are the goal. I think I've already mentioned this; this is the aim you have, this is where we need to go, and it sets the expectation for the entire team that this is where we'll get with the codebase. The inventory gives you the harsh reality, if you want: this is where we are now, this far from our ideal practices. With planning and all the steps laid out, with documents on both sides, with clear expectations and a shared understanding inside the team of how we proceed, the process, the third pillar which we'll talk about in a second, is the step of taking everything from the current state to the next state, from A to B. That's why the plan is also part of the inventory: it gives you the starting steps. In order to address this particular technical debt, we need to do this; in order to move from the legacy system to the new system, we need to address this and that. Very important: if you have these two prior steps, if you have done your research, if you have documented things, and if the team agrees on the way forward, then the process resembles a lot the process of any web development feature, or any product feature, for that matter. You have execution, you have ownership: someone gets assigned to a particular task, they need to change something, they need to fix a particular area of the code. Time is also taken into account, because we have prioritized things and planned for solutions, so we know that mitigating a particular piece of technical debt might take a week, might take one hour, might take longer; it depends, of course, on its complexity. And you also have progress: every week, since you've planned things out and work with a clear goal in mind, you can pretty much tell, this is the progress, I'm halfway there, we still need time, or I'm almost done. So the process is very similar to any other feature. Whether it's a product feature or an engineering task like this, you can see them through the same lens. Just to recap: we have the practices on one end, we have the inventory, which gives us the gap, how far we are from the ideal scenario we want to get to, and then we have the process, which gets us from A to B. And it's very hard to do a stable, constant refactoring effort that doesn't disrupt product development if you don't follow most of these things.
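Since the process treats refactoring progress like any feature's progress, one way such progress can be made measurable, purely as a sketch and not something the talk prescribes, is to count how many call sites still use the legacy API versus the new one. The hook names reuse the hypothetical ones from the earlier provider sketch; the script itself is an assumption, not an existing tool.

```ts
// Hypothetical progress check for an incremental migration (run with ts-node).
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Recursively count regex matches in .ts/.tsx files under a directory.
const countMatches = (dir: string, pattern: RegExp): number => {
  let count = 0;
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      if (entry !== "node_modules") count += countMatches(path, pattern);
    } else if (/\.(ts|tsx)$/.test(entry)) {
      count += (readFileSync(path, "utf8").match(pattern) ?? []).length;
    }
  }
  return count;
};

const legacy = countMatches("src", /useLegacyPitcher\(/g);
const migrated = countMatches("src", /usePitcher\(/g);
console.log(
  `Migration progress: ${migrated}/${legacy + migrated} call sites on the new API`
);
```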
Just think about it: if you don't have the practices in place, you don't really know where you're going. You're refactoring in vain; I'm refactoring for the sake of it, I just know that this is not okay, but I don't really know where to stop, when it is enough, when the quality of that code or of that architecture is good enough for me. If you don't have the inventory, then you don't really know what to pick up. You start refactoring parts of your application, and again, the inventory is super important because it's so often missed by teams: you end up with someone picking up a task, wanting to refactor something, going through the entire codebase, but missing chunks of code that are never taken into account, because no one actually documented that, hey, there are some problems there too, that's legacy as well, not just the main path that was refactored. And then, of course, without the process, you can't really have anything, because you don't take into account the time, and the fact that the team needs to own this, and so on. So, considering these steps, in the last part of this talk I'm going to try to give you a couple of examples of what I call the rules to make this work. After thinking about how this kind of process could work, I started looking at what the teams I worked in were doing right, the things that helped this process move along, that helped maintain it long term, that made it more of an engineering culture thing rather than just another item on someone's plate. Rule number one is to make these things visible. You cannot really refactor in the dark. There is this common misconception that people should just refactor things on their own: whenever they find something, quickly refactor it without the PM knowing about it, because the focus is strictly on product development. You need to be very transparent about this, and I feel like everything should be in the open. There should be clear tasks for refactoring that are taken into account when you're doing cycle planning or iteration planning, when you account for the amount of time people will spend on the project, and so on. There's nothing wrong with having that visible in your project management tool; this is how we do it at CodeSandbox. We also have PRs that are very clearly marked as refactoring PRs. We don't just slip refactoring work into a feature PR, because then you risk messing things up and delaying the feature itself, so we try to separate them. Even when we find technical debt that is high priority and needs to be refactored immediately, we don't do it in the same PR. We open a follow-up PR and address it there, clearly marked as a refactoring PR, because then it gets reviewed differently, with different eyes. You look at refactored code with a different approach than code that adds or fixes something that is of more value to the product than to the codebase. The second rule is to make it rewarding. It's very important that people get a sense of success from refactoring and from dealing with technical debt the same way they do from shipping something.
Whenever you work on a product team and you ship something, you get a new release, you get that cool vibe: hey, we shipped this, we made it, it's a success story. So at CodeSandbox we also celebrate any kind of refactoring success, which can even be: okay, we got rid of a ton of code on a Friday. This is something I recently embarked on. In our Slack channel there are always, across different codebases, people saying: hey, I just refactored this whole thing, it felt very nice, we got rid of a bunch of things, we addressed some common concerns we had in the past, and now it's all gone. And we try to celebrate together as a team and see it as a success. Another way to celebrate, and actually a fun thing we've been doing since the end of last year, is that we try to get rid of code that doesn't spark joy. We call this the Marie Kondo of the codebase. We gather in our virtual office every Friday or so and say: hey, we have some code that has existed for a while and doesn't really bring a lot of value; we can remove it. Even if it's a small feature that is maybe forgotten somewhere, if we know it's not used anymore, we just get rid of it. If there are components that are unused, or components that are not on the same level as other components in the design system, we try to level them up. Whatever gives us that itch of, yeah, this needs to go away, we need a better way to do this or we just need to drop it, we handle in these sessions. We ended up documenting all of this again and keeping track of it, and every month we try, as much as possible, to spend time on this cleanup, to get these things out of the codebase and keep technical debt and all these legacy things to a minimum. The last rule is a bit of a tough one: make it resilient. The biggest threat to a successful refactoring culture is a culture in which feature development and product development always take priority. And don't get me wrong, product development will always take higher priority, and that's okay, because that's what you need to do at the end of the day; that's the main task. It's just that when things get very intense, when you don't have the time to actually do all these things, you still need to make sure the whole process is resilient. At least make sure that, okay, this month we won't have time to do any refactoring, but we do keep track of technical debt, we continue documenting things, we continue discussing the priorities and the future refactoring. We don't just let it slip, because it's very easy for the process to degrade into: don't deal with this now, we'll pick it up whenever. And that whenever can be any time in the future. What's important here is that to make it resilient, you need the buy-in of the whole team, and not just the buy-in but also the ownership of the whole team. What we do to keep it resilient is to keep it in check: whenever we go through a period of more feature development, we still have these huddle meetings with the entire horizontal team, everyone working on frontend, even if we work in different product teams. And we get together and we say: hey, on this codebase, we still have this problem.
We haven't had time to do anything with it, but this is the plan, we are pushing it forward, we have documented new things. Try to keep it in check, and try to get everyone in the team to be aware that this process is still there, even if it's on low priority for the time being. And of course, when you do have the capacity for it, make these meetings more actionable and assign ownership: X can deal with updating some older tests, Y can add new components to the design system and standardize them because we use them in multiple places. There are tons of actionable things like this that you can do on a weekly basis. So these were the rules, and this is the process and the framework that I have in mind for building this refactoring culture. I do want to leave you with one quote that I adapted: every team has a person that drives this refactoring culture, and if you don't know who that is, then it is probably you. I want to encourage everyone to fill this gap if you feel like there's no one driving the effort. It takes a bit of effort at first to be the driver, but I'm pretty sure that using these things, using this shared ownership model, you can convince everyone in the team that this is the right approach. And then the team can start really embracing this as a culture thing, not just as something that one single person pushes. Thank you very much, and happy refactoring. And I'm super glad to see Marie Kondo in action again, not just tidying stages or desks; I see she's working in the office as well. How are you doing today? I'm good, thank you. And thank you so much for having me. Let's go into the Slido question that you prepared for us and see the results. How do you feel about it? 46% of people said they have more than 20% of their time assigned, and 37% between 10% and 20%. Of course, it seems like things are fluctuating a bit, so not stable yet, but the trend is there. What were you expecting? It's quite interesting. Not really. I think overall it's a positive result. I would say the fact that people do allocate time for this is a good indicator that the industry is growing towards more sustainable software. Everything feels a bit more organized. It feels like we've grown past that period of managers saying that you don't have to write tests, you just do a good enough job and then the code tests itself, and stuff like that. I think it's a positive indicator that people acknowledge they need at least some dedicated time to keep things in check and to ensure that a codebase, a project, any piece of technology grows in a steady way. There is hope that shipping code is not the only thing that matters; taking care of the rest and how it all integrates plays a big, important role. We also have some questions from the audience, and I'm going to go right to them. The first one is: which tools are you using to go through this process? So yeah, it's a good question, I guess, if it's about specific tools that help in these processes. Unfortunately, it is quite a human-centric process, because in my mind, if you have tools for the things that I talked about, like maintaining a certain level of code quality or coding guidelines, if there are things that can be automated, then they don't even need to be part of this process. You just have the formatters, you have linters, you have CI pipelines that validate everything and keep things in check.
For everything that cannot be automated, it means there's not really a tool that can highlight things. You can start looking at things like code complexity, but I would always take that with a pinch of salt. You can have tools that help you, but you can never rely 100% on tools if there are things that cannot be fully automated. For example, we did rely on things like node-prune, one package that you can run on bigger codebases to point out packages and modules that are unused, because sometimes those things are not easily spotted. So there are some solutions out there, but unfortunately I would say the vast majority of the processes I talked about are manual and require some sort of human ownership, so to say. Indeed. Indeed. And now that you mention human insight, Marie Kondo got some appreciation: people said they loved that you called it the Marie Kondo method. But it seems this person doesn't necessarily have issues convincing the business side that tech debt work needs to get done. In general, though, what would be your approach to convincing the product manager or product owner that tech debt should be considered more and not just brushed away? I think the classic answer is that if you don't have ownership over that, if you need to convince someone, you will most likely convince them with some sort of numbers or graphs: the idea that, hey, six months ago or one year ago we were shipping features at this rate, and now we're shipping at that rate. That is a difference; we clearly have a problem, delivery has been slowing down, things like that. I'm personally fortunate to have worked in the past years in an environment where everyone is pretty much on board with this, so it's more about us having to take ownership of the process rather than having to ask permission for it. But there are always metrics you can pull, like how fast people are delivering. You can also just get a feeling of how the team feels about things: if you're in a position to do that, have a one-on-one with each engineer in the team and have the common concerns voiced, so it's clear that the team has a common front against this growing problem. Yeah, it's true. While you were saying this, I wanted to mention that sometimes it also helps to mention the risks and to put yourself in a situation where the product owner or product person will see that something is going wrong because of this. And we literally got a question right now, as we speak, related to that: isn't this irrelevant to product? Certain refactoring tasks may take place alongside feature development, meaning that we can decide to allocate this time, but product could literally take it off the table. Why should it be distinct and visible to product teams? So when I say product teams, I mean it should be visible to everyone that's concerned with the work. It's not like some engineer does the refactoring in the shadows and others just approve it at the end and say, yeah, yeah, good, ship it, let's just tuck it away somewhere.
It's more about us being transparent that this is just part of the job, the same way writing tests is part of the job, or code review, or any other non-coding task, for that matter. It doesn't necessarily have to be fully visible or transparent to the product manager or product owner, but they should be aware that this is how the team operates. That's the thing. And also be transparent about your own availability: these few hours will be invested there. It's not that people are skipping product work for no reason; it's that my availability will be split between the two. Really, really insightful talk, Alex, and I'm really glad that you shared all the insights and the processes you have established with this framework. Thanks so much for joining us today. Thank you for having me. Bye.