Developing and Driving Adoption of Component Libraries


What makes a component library good? To create a component library that people want to use, you need to navigate tradeoffs between extensibility, ease of use, and design consistency. This talk will cover how to balance these factors when building a component library in React, how to measure its success, and how to improve adoption rates.



Hi there, I'm Lachlan, and today with my colleague Logan, we'll be talking to you about developing and driving adoption of component libraries. This will be broken up into two parts. First, I'll be speaking about what makes a good component API. And second, Logan will be talking about how we measure the success of our own component library and how we use data to inform how we can better improve it for our users. Quick introduction: I'm a tech lead at TikTok working on design systems and a maintainer of our internal component library called Tux. So I spend a lot of time thinking about component libraries and how our users interact with them.

So first I'll ask: what makes a good component library? From my point of view, the API is easily the most important thing, because it's the way that developers interact with the library. So what makes a good component API? I've found that APIs sit along a spectrum ranging from rigid to flexible. Imagine a UI component, for example a date picker. If we gave it a rigid API, there are some advantages. One is that if it's rigid and there are very few ways in which you can use it, it's generally very easy to use, because there are also very few ways in which it can be misused. Secondly, it has consistent output, meaning if multiple teams are using this component, it's very likely they're using it in the same way and you'll get the same look and feel across the different products. Thirdly, and somewhat selfishly, it's a lot easier to implement a library that has a rigid API compared to one that is truly generic. On the other extreme, you could build a date picker in a very flexible way, and that would cover more use cases. This is really important because if a component doesn't fit a user's needs, they might have to build their own or get one from open source. And as soon as they do that, you're potentially sacrificing the consistent look and feel that you're trying to achieve through having a rigid API.
Our approach recognizes that you don't need to, and can't really, cover 100% of use cases. Instead we aim for the common 90% and try to make the remaining 10% easy for teams to do themselves. Therefore, we start near the left of the spectrum with a really rigid API that focuses on solving the hard problems, for example accessibility features and animation. I call these problems hard because most frontend developers don't have experience solving them and would rather spend their time working on things like application logic. We move further right along this spectrum as necessary. If we're presented with a really good use case that we can't support with our current API, we'll consider opening up the API somewhat. But this can involve breaking changes, which Logan will talk about later. We also need to be careful, because moving right means we are potentially reducing consistency and increasing the complexity of the library itself.

I'd now like to give a demo showing how we can take a very simple component with a very rigid API and gradually make it more configurable, without making too many sacrifices to its ease of use and to providing a consistent look and feel. Here on the right hand side, we have a demo component, which is a dropdown list box, which I'm sure you've all seen before. Let's have a look at the API. On the left hand side are the props that our list box V1 takes. Importantly, it takes something called items, which is an array of what we call list items. They can take a label and a value, and optionally be disabled. As you can see, this is pretty easy to use, but you can't really configure anything about it, especially how it looks. Now imagine we have a product team who have this flavor picker, and maybe they want to explain to the user why a certain item is disabled.
Maybe this item is sold out and they want to convey that to the user somehow, instead of just disabling it or hiding it. So let's move on to a V2 of our component. This is one way of addressing the problem. As you can see here, the props are pretty much exactly the same as V1, but the items additionally take an optional tag prop, which can convey pretty much whatever you need. So on the right, we can now mark an item as sold out, maybe mark an item as new, and this is really nice. But what if the team needs one item to have more than one tag? An item could be new and also sold out, or an item could be popular and new. Or maybe we'll want to change the color of these tags so they're more identifiable. For example, green could signify some kind of positive connotation, maybe used for new items, while red could be negative, used for sold out items. Of course, we could make this type more complicated, so you could configure everything about the tags and provide more than one. But this is really just to service the needs of one product team. And if there are 10 or more product teams, all with different requirements, this is going to get really complicated and maybe kind of hard to use. So maybe there's a better way to do this.

Let's move on to version 3 of our component. This uses an API that we use a lot internally at TikTok in our libraries. Version 3 of our list box still takes the same items array, but it now importantly takes two additional properties, renderButton and renderItem. You might recognize this pattern as render props, and we find it really useful. So here, let's configure how the button gets rendered. This is basically the simplest way of using the component. But the cool thing is that the items are a generic type, so you can add pretty much any data to these items and then use it inside the render method to get really configurable list items.
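The generic-items-plus-render-props idea described here can be sketched roughly as follows. This is a simplified, framework-free illustration (the names and shapes are hypothetical, not Tux's actual API), with renderItem returning plain strings instead of JSX so the generic type inference is easy to see:

```typescript
// Simplified sketch of a generic list box API with render props.
// Hypothetical names; the real component would render JSX, not strings.

interface BaseItem {
  label: string;
  value: string;
  disabled?: boolean;
}

interface ListBoxProps<T extends BaseItem> {
  items: T[];
  // In React this would be a render prop returning JSX; strings keep the sketch testable.
  renderItem: (item: T) => string;
}

// The library only requires BaseItem; callers can extend items with any extra data.
function renderListBox<T extends BaseItem>({ items, renderItem }: ListBoxProps<T>): string[] {
  return items.map(renderItem);
}

// A product team adds an isSoldOut flag the library knows nothing about:
const rows = renderListBox({
  items: [
    { label: "Vanilla", value: "vanilla", isSoldOut: false },
    { label: "Mango", value: "mango", isSoldOut: true },
  ],
  renderItem: (item) => (item.isSoldOut ? `${item.label} [sold out]` : item.label),
});
// rows → ["Vanilla", "Mango [sold out]"]
```

Because T is inferred from the items array, renderItem sees the extra fields fully typed, without the library needing to know anything about them.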
So I'm going to copy an example I made earlier to show you what I mean. You can see here that, as well as the label and value, we've now added the additional properties isNew and isSoldOut. And our render method not only shows the label, but also checks if the item is new or if the item is sold out, and conditionally shows a tag. This tag is actually another component in our component library. So it's kind of cool that we are composing already existing components with this list box, and maybe we can get a more consistent look and feel across each of our components. Let's have a look at what it looks like. Great. So this is pretty neat. And again, because this is just a generic type, a product team can add more properties as they need. So maybe we have isPopular on one of these. Maybe the banana is popular. And maybe if it's popular, we could add a popular tag. And you can see the type inference is great because it's a generic type, so it's really handy. Okay, yeah, we have a popular item. Very cool.

I'll show another example. Maybe a product team needs to have a list box of employees, and they don't really need tags, but maybe they need something like an avatar picture to show what the employee looks like, or maybe even an online/offline indicator. So let me bring up an example that I made earlier. You can see that our items now have an avatar and an isOnline status. And if we look at renderItem, the render method is also optionally adding an avatar. Again, this avatar is a component from our library, so it's really nice that we're able to compose things in this way. If you have a look on the right, we can mark employees as online, and we can even display them inside the picker button, as you can see up here. So this is an example of how we design our APIs to try to strike a balance between ease of use and consistency, while still allowing for some configurability. I'll now hand it over to Logan. Hey, everyone.
I'm Logan Ralston, a software engineer at TikTok, and I'll be giving the second part of this talk today. Lachlan already talked to you about what factors and balances we need to consider in order to design a good component library. And I'm going to continue by tackling the salient follow-up question: how does one go about quantifying metrics to measure the effectiveness of their component library? We'll go over how to collect these metrics on how your component library is being used in practice, and then talk about how we use this data to drive our decision making and evolve our component APIs at TikTok. A brief intro about me: I'm Logan. I work on the Tux component library, which is TikTok's internal UI component library, and some of my work is on the infrastructure surrounding design systems. So static code analysis tools, like we'll talk about in a second, linting, codemods, that kind of stuff.

So let's start with how to measure a component library's effectiveness. When Lachlan talked about what makes a component library good, he referenced a couple of kind of abstract qualities, like flexibility and rigidity, that we can describe a component library in terms of. There's also brand unity, developer productivity, and increased code quality. These are all things that we want to optimize for. We want to have high developer productivity and high code quality, obviously. But they're hard to measure in practice because they can be quite abstract. So when it comes to finding a heuristic we can use to represent these, one of the primary ones that we use is adoption rate. Adoption rate is really important because it measures how much an engineer actually wants to use your components. Because let's say your components aren't flexible enough. Let's say you have a text input, and it doesn't have an invalid text state.
For example, let's say they entered a password with too few characters, or some form data is incorrect, or an email is not in the right format, something like that. You want a red border around it and an error message below it. Let's say you don't have that. Then the developer is going to have to go build their own component, which is going to be a huge waste of developer time. And it's also going to lower your adoption rate. Similarly, if your component library is not rigid enough, it's way too complicated, there are way too many options, and developers are just getting lost in some maze of various configurations, then they're not actually going to use your component, and your adoption rate is also going to go down. So adoption rate is a primary heuristic because we know if that goes up, we're doing something right.

There's actually a variety of ways to measure adoption rate too, but one of the primary ones we use is component coverage. Component coverage is just the ratio of the number of times somebody is using one of your components to the number of times they should be using your components. We call the times that they're actually using your components good cases. Let's take TuxButton, our button component, as an example. Every time they use the JSX element TuxButton in a file, that's a good case: they're doing something right, they're using the right component. And every time they're using something like MyOwnButton, or an anchor tag with the class name btn-primary, that's probably going to be a bad case: something they should be using TuxButton for instead. Now let's say that, in a given source code file, they use TuxButton three times, and they use a combination of MyOwnButton and an anchor tag with the class name btn-primary nine times. Then we're going to say that in this file TuxButton has a component coverage of 25%, because it's three good cases over 12 total cases.
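The coverage arithmetic just described can be sketched as a small function. Names like MyOwnButton and btn-primary are the talk's illustrative bad cases, not a real detection list:

```typescript
// Sketch of the component-coverage ratio: good cases / (good + bad cases).
const GOOD_TAGS = new Set(["TuxButton"]);
const BAD_TAGS = new Set(["MyOwnButton", "a.btn-primary"]); // hand-rolled equivalents

function componentCoverage(tagsInFile: string[]): number {
  const good = tagsInFile.filter((t) => GOOD_TAGS.has(t)).length;
  const bad = tagsInFile.filter((t) => BAD_TAGS.has(t)).length;
  const total = good + bad;
  return total === 0 ? 1 : good / total; // a file with no button usages counts as covered
}

// The talk's example: 3 TuxButton usages against 9 hand-rolled buttons.
const usages = [
  ...Array(3).fill("TuxButton"),
  ...Array(5).fill("MyOwnButton"),
  ...Array(4).fill("a.btn-primary"),
];
// componentCoverage(usages) → 0.25
```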
So three over 12, 25%. One thing to note here is that while it's possible to accurately determine the number of good cases, identifying the bad cases is actually an inference problem, so the logic there is a little fuzzy. You can never guarantee that the bad cases are calculated perfectly. But that said, it can serve as a very useful heuristic. Okay, but those aren't the only metrics we're limited to. We can collect a bunch more. For example, another one for adoption rate is adoption rate at the repo level. We look at all the code bases that we have internally and ask which ones are using Tux and which ones should be using Tux, and we can calculate a coverage ratio for that too. We can look at the version distribution: which repos are stuck behind on old versions of Tux and not actually using the most updated components. That's something of interest. We can look at the styling standards adoption rate. For example, if we're using Tailwind atomic CSS classes, how often are people actually using those atomic CSS classes versus styles that are exactly equivalent and should be replaced with that atomic CSS class? So we can enforce some semblance of code style or brand unity through that. We can look at the rate of bug fixes or feature requests over time. We can look at linting rule violations that are related to the component library. And there are many more metrics that are useful to collect.

So how do we actually collect this data? The primary way is a tool called Tux Scanner. Tux Scanner is a static code analysis tool that looks at our individual files of source code without executing them, measures how Tux is being used in practice, and sends the data off to an API so we can go and analyze how Tux is being used. So we start with a source code file. We parse it into an AST, which is just a tree representation of the source code.
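As a very rough sketch of what a scanner metric might look like (Tux Scanner presumably works on real parser ASTs, such as Babel's; this toy node shape and metric are illustrative only):

```typescript
// Toy AST node: just enough structure to walk JSX-like elements.
interface JsxNode {
  tagName: string;
  className?: string;
  children: JsxNode[];
}

// A score records good/bad counts plus the call sites of the bad cases.
interface MetricScore {
  good: number;
  bad: number;
  badCallSites: string[];
}

// A metric is a function that traverses an AST and produces a score.
function buttonCoverageMetric(root: JsxNode): MetricScore {
  const score: MetricScore = { good: 0, bad: 0, badCallSites: [] };
  const visit = (node: JsxNode): void => {
    if (node.tagName === "TuxButton") {
      score.good += 1;
    } else if (
      node.tagName === "MyOwnButton" ||
      (node.tagName === "a" && node.className === "btn-primary")
    ) {
      score.bad += 1;
      score.badCallSites.push(node.tagName);
    }
    node.children.forEach(visit);
  };
  visit(root);
  return score;
}

// One good case and two bad cases in this file.
const fileAst: JsxNode = {
  tagName: "div",
  children: [
    { tagName: "TuxButton", children: [] },
    { tagName: "MyOwnButton", children: [] },
    { tagName: "a", className: "btn-primary", children: [] },
  ],
};
// buttonCoverageMetric(fileAst) → { good: 1, bad: 2, badCallSites: ["MyOwnButton", "a"] }
```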
Then we evaluate a set of metrics on that AST, and those all give us scores. It could be 80% component coverage, like before, or maybe there are 12 deprecated components being used, something along those lines. It just collects a bunch of scores. We aggregate them all together and send them off to the API so that we can look at how all our code files are using Tux, and filter over time, by platform, by metric, and whatnot. To do a little bit of a deeper dive into how the scanner works: we start by parsing the source code files into ASTs, abstract syntax trees, which are just tree representations of the source code. An AST shows how all the constructs of the language are related, so for example a JSX element is related to its class name attribute, and we get a tree based on that. This allows us to go through and evaluate a set of metrics. A metric is just a function that takes in a set of ASTs, traverses those ASTs, and produces a set of scores. A score is just the number of good cases (the things we're doing right), the number of bad cases (the things we're doing wrong and need to improve to improve our score), the call sites of the good and bad cases, which are just links to the nodes, usually JSX elements, but they could also be, say, an input declaration, just a place in the code where something's going wrong, and then the coverage scores. We aggregate that all together and send it off to the API so we can filter by time, by package, and whatnot.

Okay. So now that Tux Scanner has collected all this data, how do we utilize it? At TikTok, things grow quickly, and our component library is no exception. We need to be able to evolve fast in order to keep up with that dramatic rate of change and stay nimble. So we need to be able to keep what's working and throw out and redo what isn't.
Now, this is a good truism for how code should best be maintained, but it's not often practiced. And that's because introducing breaking changes all the time isn't fun for developers to keep up with: it makes your life hard and introduces a lot of maintenance. But we can't look at these things through the lens of a maintenance burden. We need to always be striving to introduce breaking changes in order to reach for that asymptotic, platonic form of what a component library should be, and we can't abandon that for the sake of stability. We've got to stay nimble. So we cannot be afraid of introducing breaking changes to our components. In fact, we want to be making them regularly. We've got to embrace the breaking changes.

This is easy to say in theory, but hard to do in practice, and at TikTok we've made a concentrated effort to streamline our architecture to facilitate it. One such example is that we use a monorepo, and in a monorepo almost all of our consumers are on the latest version of Tux. This means consumers aren't left behind. Because let's say we introduce a bunch of breaking changes and release version 5 or whatever of Tux, but a lot of consumers stay on version 3 because they don't want to go through and adopt all the new changes. Then we're not actually practicing what we're preaching, because if people aren't using your latest cutting-edge components, in practice they're just being left behind and it's kind of pointless. So whenever we open a PR that introduces a breaking change, as part of that PR we've got to go and update the majority of our consumers to use the latest version of Tux, because the majority of our consumers are always on the latest version of Tux. And an important philosophy we use to enforce this is co-ownership.
So at Tux, if we make a breaking change to our package, it's our responsibility to update our consumers, not the consumers' responsibility. We have to go through and update all the call sites, with codemods or whatnot, and make sure consumers are on the cutting edge and aren't left behind. Okay, so a quick summary. We talked about what makes a component library good, we talked about how to quantify a component library's effectiveness, and we talked about how to use that data to improve your library. Thank you very much. I appreciate it.
22 min
24 Oct, 2022
