Building a design system is not enough. Your dev team has to prefer it over one-off components and third-party libraries; otherwise, the whole effort is a waste of time. Learn how to use static code analysis to measure whether your design system wins over the internal competition, and discover data-driven ways to improve your position.
Find Out If Your Design System Is Better Than Nothing
AI Generated Video Summary
Building a design system without adoption is a waste of time. Grafana UI's adoption is growing consistently over time. The factors affecting design system adoption include the source mix changing, displacement of Homebrew components by Grafana UI, and the limitations of Grafana UI's current state. Measuring adoption is important to determine the success of a design system. The analysis of code through static code analysis tools is valuable in detecting and tracking component usage.
1. Measuring Design System Adoption with Grafana UI
A design system is worthless if not used, and building one without sufficient adoption is a waste of time and effort. In this talk, I will tell you how to measure the adoption of a design system. Grafana is a DevOps monitoring platform with a design system called Grafana UI. Let's take a look at how well Grafana UI is doing. The line goes up: the number of usages grows consistently over time. But whether that means the design system is being adopted, you don't know, and the reason you don't know is competition.
A design system is worthless if not used. A design system is a tool: it's not enough for it to exist. It has to be actively used to make the product better or to help deliver it faster. That's the only way for a design system to be valuable. Building a design system without getting sufficient adoption is a waste of time and effort.
My name is Arseniy, I'm a Solution Architect at Rangle Amsterdam. In this talk, I will tell you how to measure the adoption of a design system, based on a metric I set up recently for one of our clients. A conversation about metrics requires data. Showing you data from a corporate client is unfortunately not possible, so I found an open-source substitute: I will use Grafana as an example. If you don't know, Grafana is a DevOps monitoring platform. It's an old and big project; it was rewritten from Angular to React starting around 2018. It has an ecosystem of plugins and, most importantly for this talk, Grafana has a design system. It's called Grafana UI. In their Storybook intro, they say that they built it to get a shorter development cycle and a consistent user experience. These goals are in line with what you'd expect to find in a corporate design system: I want my products to look like my products, and I want to build them faster.
Let's take a look at how well Grafana UI is doing. On this chart, we're tracking how a project and its ecosystem evolved and how they adopted a design system. The horizontal timescale is five years, with measurements taken at weekly intervals. The vertical axis is Grafana UI component usages. A usage is when a component is referenced in code. We'll discuss later what exactly that means, but simplified, it's when you mention a component in a JSX tag. For a sense of scale: last week, at the right edge of the chart, Grafana UI components were used more than 7,000 times across Grafana itself and 290 of its plugin code bases.
What picture does this chart show you? The line goes up; the number of usages grows consistently over time. Is this good for the design system? Does this mean the design system is getting continuously adopted? The answer is: you don't know. The reason you don't know is competition.
2. The Role of Design System and Homebrew Components
Developers have a choice of how they build things, whether using third-party libraries, building their own components, or utilizing a design system like Grafana UI. Homebrew components, which are low-level components implemented directly in the product code base, are also important to consider. By analyzing the chart, we can see that both the design system and Homebrew components are growing, indicating a healthy project and ecosystem.
I'm an engineer. I can use a design system. I can use third-party libraries or I can build my own components. Developers have a choice of how they build things. This is particularly true for the open source plugins in this analysis. If I build a plugin and host it on my own GitHub account, who can force me to use Grafana UI? The only option to affect my choice is to make a good design system and make it easy to use.
Even if your product doesn't use any third-party libraries, in any project there are going to be Homebrew components. Homebrew components are the low-level components implemented directly in the product code base. If you build your own button, that's Homebrew. It's very important to focus on the low-level components. We're not looking to find every possible component usage; we're looking for the competition. The component on line 2 is not Homebrew because it's compositional: it only uses other components. Compositional components are expected in any code base and don't compete with the design system. The component on line 1, on the other hand, is Homebrew: it uses a lowercase JSX tag as opposed to a capitalized one. That way, we know it deals with raw markup, and because it deals with raw markup, we count it as Homebrew.
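To make the lowercase-versus-capitalized distinction concrete, here is a minimal sketch. The component names are hypothetical, and a tiny stand-in replaces React.createElement so the snippet runs on its own: JSX compiles lowercase tags to string arguments and capitalized tags to component references.

```typescript
// Minimal element type and a stand-in for React.createElement, so this
// sketch runs without React installed. Names below are illustrative.
type El = { tag: unknown; children: El[] };
const h = (tag: unknown, ...children: El[]): El => ({ tag, children });

// Homebrew: <button> in JSX compiles to a string tag ("button"),
// i.e. the component renders raw markup.
const HomebrewButton = (): El => h("button");

// Compositional: <HomebrewButton /> compiles to a component reference,
// so this component only composes other components and is not Homebrew.
const SubmitRow = (): El => h(HomebrewButton);
```

A detector therefore only needs to tell string tags apart from component references.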
Now that we know what we're looking at, let's add Homebrew to the chart. This is the same chart as before, same axes, same data, except with Homebrew usages added on top. I want to point out the scale once more: we're looking at a combined 11,000 component usages at the right edge of the chart, across 291 repos. The total shaded area is almost a million usages, though they're not unique, since we're tracking code over time. What can you see on this chart? The gray area at the top, representing Homebrew, starts before the red area, representing Grafana UI: at first, there was no Grafana UI. Both lines are growing. While the use of the design system is growing, so is Homebrew. The fact that the two lines are growing means that the project and the ecosystem are growing; they look healthy. The shapes look pretty similar, in particular over the last year.
3. Design System Adoption Factors
The bumps on the lines indicate changes in component usage. The shape of the lines may not always mirror each other, suggesting a change in the component source mix. By analyzing the source mix changing over time, we can understand the factors affecting design system growth. The chart shows the proportion of Homebrew to Grafana UI usages, indicating the displacement of Homebrew components by Grafana UI. Developers are increasingly choosing to use Grafana UI over Homebrew. The chart also reveals the initial absence of Grafana UI, a significant jump when existing components were pulled into it, and a recent slowdown in displacement due to the limitations of Grafana UI's current state.
The bumps on these lines are features: every significant changeset adds a bump or a dip to the line, because it means components are used more or are removed from the code base. But notice that the shapes are not always mirroring each other, noticeably around the 2021 mark. Lines not being similar means the component source mix is changing.
Imagine you have two buckets: Homebrew and Grafana UI. You take components from both buckets and compose them to make your feature. The buckets are your sources; how much you take from each bucket is your source mix. The source mix changing means you're taking more or less from one of the buckets than you did before. We can infer from this chart two sets of factors affecting design system growth: how the project grows and how the design system itself is doing.
To isolate design system adoption from project growth and other factors, we have to look at how the source mix changes over time. And this, in my opinion, is the single most important chart for a design system. This is the source mix changing over time: the proportion of Homebrew to Grafana UI usages. The size of a code base is not fixed; it grows and sometimes shrinks over time, but each feature has limited opportunities to use components. Out of a set of competing components doing roughly the same thing, you're only going to use one in each case. This means the choice of component source is a zero-sum game: if you use the design system, you're not using Homebrew, and vice versa. So the chart you're looking at shows displacement. It represents Grafana UI taking over the choices developers make when building things. More and more, developers are choosing Grafana UI over Homebrew components.
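The zero-sum arithmetic can be sketched in a few lines. The function name is mine, and the numbers echo the rough scale mentioned for the right edge of the chart:

```typescript
// Source-mix share: because each component choice is zero-sum, the design
// system share and the Homebrew share always sum to 100%.
function designSystemShare(dsUsages: number, homebrewUsages: number): number {
  const total = dsUsages + homebrewUsages;
  return total === 0 ? 0 : (100 * dsUsages) / total;
}

// Roughly the right edge of the chart: ~7,000 Grafana UI usages out of
// ~11,000 combined usages.
const share = designSystemShare(7000, 4000); // ≈ 63.6
```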
Some things you can notice about this chart: in the beginning, there was only Homebrew; Grafana UI did not yet exist. Then you see a drastic jump when existing components are pulled into Grafana UI for the first time. The project was still small then, so this creates a big change in proportion. Then you see a period of growth for a couple of years, and over the last year, displacement almost stopped. Grafana UI reached the limit of what people can do with it in its current state. The mirroring of the two lines that you've seen on the previous chart is almost perfect for the last year because the source mix did not change.
4. Measuring Design System Adoption
Grafana UI reached a state where each new feature uses the same mix of design system and Homebrew components. The chart shows team preference for the design system over the competition, and it also accounts for availability. If your design system fails to displace the competition, the share it takes of component usages will diminish over time; if it succeeds at displacing the competition, that's how you know it's doing well. Up is good, down is bad. What's an appropriate sideways level? For a general-purpose design system, ideally you should reach a level where there are no Homebrew components except for snowflakes. If you only have snowflakes among your Homebrew components, you're doing a fantastic job.
Grafana UI reached a state where each new feature uses the same mix of design system and Homebrew components, even if the components themselves are different. Why is this chart so much better than the previous ones? The source mix is independent of project growth, team size, level of investment, and so on. If you double the team size tomorrow, the chart remains valid. It removes any volatility due to features: the only thing that changes is the source mix, nothing else.
Basically, it shows team preference for the design system over the competition. Interestingly, it also accounts for availability: if I'd like to use a design system component but I can't find it, I am more likely to use Homebrew. For a business or a product, you might look at market share. This chart is market share for the design system within the project.
Is your design system better than nothing? If you do nothing, if you don't build a design system at all, the product will still exist; the features will still be built using other components. If your design system fails to displace the competition, the share it takes of component usages will diminish over time from its starting position. In that case, the project grows faster than the design system gets adopted, and features don't use it much. If the design system is successful at displacing the competition, that's how you know it's doing well. Which brings me to the basic property of this metric: up is good, down is bad. If the chart goes down for a while, something is wrong. If the chart goes up, keep doing what you're doing. Though it cannot go up forever; no matter what the coach says, you can't give more than 100%. Even if you do everything absolutely right, the line will go sideways at some point.
What's an appropriate sideways level? For a general-purpose design system, ideally you should reach a level where there are no Homebrew components except for snowflakes. Snowflakes are those truly unique, one-of-a-kind Homebrew components. Imagine you're building a landing page for a million-dollar ad campaign: it's okay to have a bunch of unique components specifically for that situation. Having snowflakes in the project is okay and probably unavoidable. If you only have snowflakes among your Homebrew components, you're doing a fantastic job. You might not reach that level; if you only did the very basics, don't expect the line to go too high.
5. Factors Affecting Design System Adoption
If your code base has too much legacy, the line won't go high fast. The level of adoption depends on goals and specific use cases. The design system requires governance and collaboration across the organization. Lack of adoption can occur at any point in the decision-making process. This analysis is valuable and should be part of the governance process. Computers can read code through static code analysis. Tools that read code programmatically are commonly used.
If your code base has too much legacy, don't expect the line to go too high too fast either, because refactoring legacy is slow, so displacing legacy is slow. And the level you reach depends on your goals. If you're focusing on a specific use case, for example getting product images just right for e-commerce, then it's okay not to get much adoption relative to all the Homebrew in the project.
If you have a specific focus, you need to adjust your definition of the competition to get a meaningful metric. Remember that the Homebrew definition so far was any component rendering raw markup. Usage share is a code-based metric. If it doesn't go as high as you want, you might be tempted to call out those pesky developers for meddling with the adoption success. That's not correct. A design system requires governance deciding what goes in and what doesn't. It requires a lot of collaboration between different people across the organization: designers, developers, and product owners. Lack of adoption might be caused at any point in the decision-making process. If a designer doesn't like the system, they will not incorporate it into the designs, so the developer won't use it either. This measurement sits at the tail end of the development process, so it's affected by all the choices involved in it.
The analysis is still valid, though, because in the end a software product is code, and the design system is shipped as part of it. For this analysis to be useful, it has to become part of your governance process, and for that it has to be cheap and repeatable. You should be able to run it every sprint to see how things change, to react, and to plan further work. Fortunately, we can automate it. Here's a famous example of gibberish: Chomsky's "Colorless green ideas sleep furiously." If I ask you to tell me what the statement is about, you should be able to say that it's about ideas doing something. You should be able to find the subject. Notice that the phrase is designed to have no meaning, yet we're able to make some judgments about it based purely on syntax.
Surprise! Computers can read code. In practical terms it means code is formal. It has strict syntax. It has to be formal for the machines to run it. We can teach a computer to read code and find patterns in the structured statements. This is called static code analysis. Just like with Chomsky's nonsense we don't have to run the code to figure out what it's doing. It's enough to look at code. You're probably already using tools that read the code programmatically.
6. Detecting Homebrew Components and Usages
Your linter does static analysis to find issues without having to understand what your code is doing. Let's take a look at the syntax of a component. We've seen this code before. How exactly do we know that Homebrew is indeed a Homebrew component? A React component is a function returning JSX, or a class with a render function if you're old school. Both Homebrew and NotHomebrew are components: they're both functions returning JSX.
On top of that, Homebrew mentions a lowercase tag. Let's reiterate: to detect a Homebrew component, we look for functions, and in those functions we look for lowercase JSX tags. If a function contains a lowercase tag, it's a Homebrew component. The same goes for classes. When doing this analysis, we assume the code is valid, because it has to be correct to get shipped to production. If you're shipping invalid code to production, you've got bigger problems and I can't help you.
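The rule itself can be sketched in a few lines. This is a deliberately naive, regex-based illustration; real tools, including Radius Tracker, walk a parsed AST instead, and the function and variable names here are mine:

```typescript
// Naive sketch of the detection rule: a component is Homebrew if its
// source contains a lowercase JSX tag (raw markup). Real analysis parses
// the code properly; a regex only serves to show the rule.
const LOWERCASE_JSX_TAG = /<[a-z][a-zA-Z0-9]*[\s/>]/;

function looksLikeHomebrew(componentSource: string): boolean {
  return LOWERCASE_JSX_TAG.test(componentSource);
}

// Example inputs: one raw-markup component, one compositional component.
const homebrew = `const Homebrew = () => <button className="btn">Go</button>;`;
const notHomebrew = `const NotHomebrew = () => <Button label="Go" />;`;
```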
7. Importing and Tracking Component Usage
Importing and tracking usage of components is essential. Usage can be determined from package exports and their subsequent use. Higher-order components can also affect the usage count. Usage in code is not well-defined and depends on syntax. The tool used for tracking is called Radius Tracker. To calculate the metric: find Homebrew components, find design system imports, and collect usages of both. Adjust weights by component complexity for a more meaningful chart. Build what is valuable for your team and measure the usage share percentage. The open-source Radius Tracker tool by Rangle can help with the analysis.
Importing is extremely common, and we have to keep track of it. The usage of the button on line 6 should count. Notice it's possible to track the usage of a button starting from the import on line 5. We don't need to know what the button is, only that the package exports something called Button and that this thing later gets used. This way we can track usages of a design system imported as a package.
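Import-based counting can be sketched as follows. This is simplified and regex-based, where the real analysis resolves bindings through the AST, and the function name is mine:

```typescript
// Count JSX usages of components imported from a design system package.
// We only need two facts: the package exports a name, and that name is
// later used as a JSX tag.
function countDesignSystemUsages(source: string, pkg: string): number {
  // Find named imports, e.g. import { Button, Input } from '@grafana/ui';
  const importRe = new RegExp(
    `import\\s*\\{([^}]+)\\}\\s*from\\s*['"]${pkg}['"]`,
    "g"
  );
  const names: string[] = [];
  for (const m of source.matchAll(importRe)) {
    for (const n of m[1].split(",")) names.push(n.trim());
  }
  // Count each JSX tag usage of an imported name.
  let usages = 0;
  for (const name of names) {
    usages += (source.match(new RegExp(`<${name}[\\s/>]`, "g")) ?? []).length;
  }
  return usages;
}
```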
Say you have a button, you put it into an object, and you spread it into props. You have now passed the button as a render prop. It's still a usage of the button: we don't know what happens to the button inside the component, but we know that the button is likely to be used. Another common case is higher-order components. Notice that the button is used not when the higher-order component is created on line 5, but when it is used on lines 6 and 7. Every time the HOC is used, the original button component is also used, so this code counts as two usages of the button.
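The HOC counting rule can be illustrated at runtime with hypothetical names. The real detection is static, but this shows why creating the HOC is not yet a usage while each render of it is:

```typescript
// Count how many times the wrapped component actually renders.
let buttonRenders = 0;
const Button = (): string => {
  buttonRenders += 1;
  return "<button/>";
};

// A higher-order component wrapping Button. Creating the HOC does not
// render Button, so it is not a usage yet.
const withBorder = (Component: () => string) => (): string =>
  `<div>${Component()}</div>`;
const BorderedButton = withBorder(Button);

// Every render of the HOC also renders, and therefore uses, Button:
// these two calls count as two usages of Button.
BorderedButton();
BorderedButton();
```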
Now that you know how to detect Homebrew and find usages, here's how to calculate the metric: find Homebrew components, find design system imports, collect usages of both, calculate the source mix proportion, and repeat for historical commits.

I've used a button as an example throughout this talk so far, but not all components are created equal. If you compare an abstract button and a date picker, you'll probably find that each use of a date picker is worth much more than a use of a button, because a date picker is a bigger, more complicated component. For a more meaningful chart, you have to discount simpler components. This means adjusting the weights by component complexity. The charts you've seen before treat all components the same. For the client I mentioned in the beginning, we ignored all usages of components like Box or Typography, because those aren't valuable even if significantly overrepresented in the code base.

If you take one thing from this talk: build what is valuable for your team, and then measure the usage share percentage to see how much people prefer what you're doing over the internal competition. To run this analysis yourself, grab Radius Tracker. We wrote it at Rangle, it's open source, and I suggest you check it out. Ask me for advice and reach out to Rangle for practical help.
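The complexity discounting could look like this sketch. The weights and component names are made-up examples, not the values used for the client:

```typescript
// Weighted usage counting: discount simple components so a DatePicker
// usage counts for more than a Box. Weights are illustrative only.
const weights: Record<string, number> = {
  Box: 0, // layout and typography primitives ignored entirely
  Typography: 0,
  Button: 1,
  DatePicker: 5, // bigger, more complicated component
};

function weightedUsages(usagesByComponent: Record<string, number>): number {
  let total = 0;
  for (const [name, count] of Object.entries(usagesByComponent)) {
    total += (weights[name] ?? 1) * count; // unknown components get weight 1
  }
  return total;
}
```

The same weighted sum is applied to both the Homebrew and the design system buckets before taking the proportion, so over-represented primitives don't dominate the share.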