The Future of Performance Tooling


Our understanding of performance and user experience has evolved heavily over the years. Web developer tooling needs to evolve similarly to make sure it is user-centric, actionable, and contextual where modern experiences are concerned. In this talk, Addy will walk you through how Chrome and others have been thinking about this problem and what updates they've been making to performance tools to lower the friction for building great experiences on the web.

Transcription


Hey, folks. My name is Addy Osmani. I'm an engineering manager working on the Chrome team at Google. And today we're going to talk about the future of performance tooling. You may know me from the internet for posting about DevTools features and talking about performance. But I've gone through the pandemic the same way that everybody else has. It's been a long pandemic. There have been a lot of folks recently announcing they have new jobs. I too am excited to announce my new position. It's the fetal position. I've been in it for some time. I'm probably going to remain in it right after this. The last two years were a good dry run. Now, as I mentioned, I tend to talk a lot about things like JavaScript and JavaScript bundles. But today I wanted to take a step back and focus on user experience. Now, users often experience web pages as a journey. And there are a few key moments to it: is it happening, is it useful, is it usable, and is it delightful? Delight can mean making the experience more pleasant through adding things like animations. Now, as I mentioned, I love sharing DevTools tips. And one thing a lot of folks don't know is that DevTools actually has an animation inspector built in. Here it is in action. You can use it to modify the timing of animations, delays, duration, and so much more. Here it is working against a page transitions app by Sarah Drasner. I love the animation inspector. It's one of my favorite features. Now, sometimes folks will ask, well, are there efforts to bring such APIs to the platform? Jake Archibald had a great talk about a new page transitions API. And here's a cool demo of it built by the community. So we have a shared element transition here. We're clicking on a URL. It's taking us to a page. We're going back, and you'll see that the URL bar is changing as well as giving us these beautiful animations. Now, there's a lot of work left to do to support things like page transitions in the animation inspector.
But we can already see things like being able to play back these animations and see all of the different kinds of motion we are adding to our pages. Really powerful stuff, and I love it. So onto our main question. What does the future of performance tooling look like? Now, I think it's three things. I think it's user-centric, actionable, and contextual. Let's start off with user-centric. Now, in the last few years, Chrome has talked about the importance of focusing on user-centric performance metrics, such as the Core Web Vitals. Now, the Core Web Vitals are three metrics: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. They cover loading, interactivity, and visual stability. Now, these are great. We recommend checking them out using real-world data in the field or in the lab. But there's a lot more that we can do to bring how we think about performance closer to the user experience. Now, there are probably a set of core user journeys that you care about on your site these days. This isn't something that lab performance tooling, the tooling that we use on our laptops, has really fully acknowledged just yet. Instead, we've focused on things like initial page load performance. This continues to be really important, by the way, but it's a story we've told in a few places, like in Lighthouse and in DevTools. But we're evolving our understanding of user experience all the time. Now, if you look at the Core Web Vitals metrics, some of them are already thinking about that post-load performance, including things like layout shifts that might happen after the initial page has settled. Your Largest Contentful Paint can change. And there are so many factors that come into play here: whether you're an MPA or an SPA, whether you have soft navigations, whether you're prefetching or prerendering, what your caching strategy is. A lot of nuance here. And so I would like us to think about user flows.
Now, a user flow is a series of steps a user takes to achieve a meaningful goal. This is one of the best definitions out there, and it's by Alexander Handley. Now, a user flow usually begins at a user's entry point into an experience, and it ends usually in them completing a particular task, like completing a signup or placing an order. Here are a bunch of different examples of flows. You could be doing things like trying to purchase a product, going through the phases of finding what you want, customizing it and so on, until you add it to your cart and check out. You could be onboarding. You could be trying to create things or cancel a plan. Now, the way we've thought about measuring the performance of these types of experiences is usually by slicing them into URLs. So we'll say, well, what's the page load performance of step one, of step two, of step three, rather than reasoning about them in the aggregate or in the whole, the way that a user experiences them. And I think that there's an opportunity to do something to bring this much closer to what the user feels. I'm happy to share that we've added support for user flows to DevTools and Lighthouse. This is a big deal. We've been working on it for some time, and I'm really excited to walk you through it. It all starts with the DevTools recorder. So here's the recorder that allows us to measure performance across an entire journey. I'm going to start off with an add to cart flow. I've got our commerce experience. I'm browsing categories. I'm adding some things to my cart. Maybe I want to use the search feature to find a T-shirt. So we're going to do that. We're going to customize some colors. Then we're going to proceed to our checkout. Now here I can hit stop, and you'll see that all those interactions were captured by DevTools, including the selectors that were involved. I can now replay that flow. I don't have to go and tell each member of my team to do it themselves. I can hit replay. 
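Under the hood, a recorded flow is just structured data. A minimal flow file might look something like this (an illustrative sketch only: the real Recorder schema has more fields per step, and the URL and selectors here are made up):

```json
{
  "title": "Add to cart",
  "steps": [
    { "type": "navigate", "url": "https://shop.example.com/" },
    { "type": "click", "selectors": [["#add-to-cart"]] },
    { "type": "click", "selectors": [["#checkout"]] }
  ]
}
```

Because it's plain data, the same flow can be replayed, diffed in code review, or handed to other tools.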
It's going to go end to end. And there we go. We've checked out. I can measure the performance of this flow. So we've got a measure performance button here. It's going to replay the entire flow, generate a trace of the end to end experience. And here we have it. We're in the DevTools performance panel, and I can zoom into any part of that experience I want to optimize. We can also export flows to Puppeteer in case we're writing tests. And we want those tests to also reflect what we're doing. So here I am. It's generated me a Puppeteer script. We can also use these Puppeteer scripts with Lighthouse for some bonuses. So in this case, I'm using a Lighthouse feature. It allows us to annotate things like time spans and snapshots. And via Puppeteer, I'm able to get this new report, this flow report. This allows us to visualize all of those different steps and see audits for parts of the experience that might need a little bit of work. Now in Chrome Canary, we've continued iterating on this support. The recorder panel now also allows you to import and export user flows, including using JSON. This addition makes it really trivial for you to be able to commit your user flows to your GitHub repo, share it with members of your team, share it with QA. And it's going to be really powerful for your sharing story. You can also do things like edit user flows for their selectors and timeouts, just in case things take a little bit longer than you might expect. Also in Canary, we have Lighthouse panel support for flows. So you can select a time span, you can hit start time span, and begin interacting with that experience. This is the first time we've allowed you to interact while Lighthouse has been running. So you can interact with this experience, you can customize your product. And then when you're done with the part of the flow you want to measure, you can hit end time span. This will go and it will generate you a part of that flow report. 
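The Lighthouse flow reports shown here can also be driven from a script. Here's a rough sketch of what that looks like with Puppeteer (assuming the lighthouse and puppeteer npm packages are installed; the URL and selector are placeholders, and the exact API surface may vary across Lighthouse versions):

```javascript
// Sketch: measure a user flow with Lighthouse's user-flow API via Puppeteer.
// Assumes `npm i lighthouse puppeteer`; URL and selector are placeholders.
import puppeteer from 'puppeteer';
import {startFlow} from 'lighthouse';
import fs from 'node:fs';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const flow = await startFlow(page, {name: 'Add to cart'});

await flow.navigate('https://shop.example.com/'); // cold navigation step
await flow.startTimespan({name: 'Add to cart'});  // measure interactions
await page.click('#add-to-cart');                 // placeholder selector
await flow.endTimespan();
await flow.snapshot({name: 'Checkout state'});    // point-in-time audit

fs.writeFileSync('flow-report.html', await flow.generateReport());
await browser.close();
```

Each navigation, timespan, and snapshot becomes a step in the generated flow report.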
And we can use it to discover things such as, do I need to do a little bit more work on my cumulative layout shift? Because maybe there was some layout shift that happened during my interactions. This is really powerful. You can find it in Canary, and we hope you'll find it useful. Now, another question we had after working on flows was what if you could use flows with other tools? I'm happy to share something very experimental, very exciting. I love using WebPageTest, and the WebPageTest team put together a handy DevTools-recorder-to-WebPageTest-scripts repo that you can check out. So let's say that I've exported my flow to a cart.json file using one of the previous steps. I can use this with this script. This is what the output kind of looks like. It's not that interesting, but we're just going to skip ahead. And let me show you what this enables. We can paste this into our WebPageTest run, and now I can reason about my entire user flow. I get performance insights for my navigations, for my clicks. I get film strips. I can see the visual page load process for any of my steps. I'm able to scroll down. I'm able to use any of the great WebPageTest waterfall features for understanding bottlenecks. All of this is really, really great and gives us a step towards tooling interop, which I'm really excited about. Here, for example, we can see we've also got features like the WebPageTest videos, which I love using. Now, another question that we ask ourselves is what about cross-browser testing of flows? Maybe you know where we're going with this. So Cypress.io is a user-friendly test automation tool for end-to-end testing. It's great for things like UI testing, regression suites, integration, and unit testing. Now, the Cypress team has a special new package that can export Cypress tests from Google Chrome DevTools recordings. Let's say we take our cart.json once again, and I want to run it with Cypress. I'm going to use the recorder conversion package on my cart.json.
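For reference, the WebPageTest script that conversion produces is just a plain list of tab-separated commands, so a converted flow might look roughly like this (illustrative URLs and targets; the real converter's output will differ):

```
navigate	https://shop.example.com/
clickAndWait	innerText=Add to cart
clickAndWait	innerText=Checkout
```

That simplicity is what makes the interop story work: any tool that can emit or consume this format can participate.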
Let's go ahead and say we've got that done. We're going to now run Cypress. Let's see what happens. So npx cypress open. It's going to open up a Cypress window. I can select a browser here. So I'm going to select Firefox. Our users.spec.js is the converted Cypress script. And here I am with the flow I recorded using Chrome DevTools, and it's playing back my whole flow in Firefox. This is so cool. I'm able to go to all of the different steps and sequences that were involved in this flow. I can debug, you know, what cross-browser might be working well or might not be working well. And it's just really powerful that you get this kind of cross-browser interop almost via such tooling. It's really, really cool. Now, if you happen to have a Cypress account and you have an API key, you can do things like use record to persist your flow recording to your Cypress dashboard. So here's an example of that. I am going to just go and show you something I did earlier. Here we've got our add to cart experience, and we can see what we just, you know, we're demoing. Here is a video of that experience. I can share it with others. I can play it back. I get the same kind of Cypress experience I would get if I was custom, you know, creating these flows myself. So really, really powerful stuff. I hope you'll check it out. Next up, we have actionable. Now, I mentioned earlier that I was excited to announce my new position. This is also kind of how I look when I'm looking at performance traces in DevTools a lot of the time. I love DevTools, by the way. Now, when people often look at the timeline, the performance panel, they find it daunting. They almost want to cry. I sometimes cry when I'm looking at the performance panel. Did you know that there are something like 17 muscles that get activated when you're crying? 17 muscles. It's great for fitness. Yeah, I've been doing some of that during the pandemic, too. I'm not going to lie. 
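To give a feel for the conversion, a generated spec might look roughly like this (an illustrative sketch, not the converter's exact output; the selectors and URL are placeholders, and it runs inside the Cypress runner, which provides the describe/it/cy globals):

```javascript
// Sketch of a Cypress spec converted from a DevTools Recorder flow.
// Placeholder URL and selectors; run via `npx cypress open` or `run`.
describe('Add to cart flow', () => {
  it('completes checkout', () => {
    cy.visit('https://shop.example.com/');
    cy.get('#search').type('t-shirt{enter}'); // search for a product
    cy.get('#add-to-cart').click();           // add it to the cart
    cy.contains('Checkout').click();          // proceed to checkout
  });
});
```

Because it's an ordinary spec file, you can commit it next to your other tests and run it against any browser Cypress supports.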
Now, when you're looking at the performance panel, you've often got a lot of data here. Here we've got a long task. And if you've ever wondered why time to interactive isn't great, it's often because long JavaScript tasks are keeping the main thread busy. Now, a good piece of life advice is if JavaScript doesn't bring users joy, thank it and throw it away. I'm pretty sure this is from a Marie Kondo special. She didn't, in fact, say JavaScript. But I think this is good kind of evergreen advice. Now, we've been doing work to try and make sure that tools like Lighthouse are as actionable as possible for the Web Vitals. And if you haven't checked out Lighthouse recently, we've got all sorts of audits that can help you do things like identify your long main thread tasks or what your large layout shifts are. And we try to make this experience quite visual and try to pinpoint exactly what DOM nodes you might want to pay attention to. Now, we talk about interaction readiness quite a lot. And I wanted to talk to you about input responsiveness for a moment. So what is responsiveness? Responsiveness means understanding how quickly web pages respond to user input. Now, this can mean things like when I try to open the menu on a website, how long does it take before anything actually happens? Or if I'm trying to add something to my cart, how long does it take before I can actually see that my item has been added? These are all important steps in a user flow. Now, we've been working on a new interaction readiness metric. And we call it Interaction to Next Paint. This is a new experimental responsiveness metric. It measures runtime responsiveness, full interaction latency, and includes things like taps, drags, and keyboard input events as well. The way this differs from First Input Delay is that it measures the whole input latency from when a user interacts until they actually see a visual response, not just the initial delay on the main thread.
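The selection rule behind INP can be sketched in a few lines. Real INP is measured in the field via the Event Timing API; this hypothetical helper just illustrates the documented idea: take the worst interaction latency on the page, but ignore one of the longest for every 50 interactions so a single outlier on a long-lived page doesn't dominate.

```javascript
// Hedged sketch: estimate INP from a list of interaction latencies in ms.
// Not a real library API -- purely illustrative of the selection rule.
function estimateINP(latencies) {
  if (latencies.length === 0) return null; // no interactions, no INP
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  // Skip one of the worst interactions per 50 interactions observed.
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

In practice you'd use the web-vitals library or the Chrome tools shown next rather than computing this yourself.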
In the lab, Total Blocking Time is a pretty good proxy metric to be looking at. We find it correlates pretty well with Interaction to Next Paint. One of the nice things about this is that it's automatic and out of the box. Interaction to Next Paint is a field metric primarily and can vary depending on what interactions a user is making. This is one of those reasons it's great to use things like user flows, because it lets you know what the key interactions are. Now, what we find is that 70% of users have an experience that's terrible where responsiveness is concerned at least once a week. This is really, really a big opportunity for us to do better, especially given that UX research says you should ideally be aiming for around 100 milliseconds of input latency. And we find that on desktop, users load twice as many pages if they experience good responsiveness there. So lots of reasons to be optimizing for these metrics. As these metrics have been rolling out, we've also been able to share more concrete guidance. So for example, if you're using React 18 and you wrap your UI in Suspense boundaries, you can make hydration non-blocking, because it happens in slices instead. This can improve things like your Total Blocking Time. And I'm looking forward to even more concrete guidance for optimizing metrics like Total Blocking Time and INP coming out before long. Now, we've been working hard on introducing INP to many of Chrome's tools, starting off with PageSpeed Insights. So the first thing I'm going to show you is a new feature, fast CrUX. This gives you instant access to field data. You'll see that I just hit Analyze. This isn't sped up. And you've got instant access to field data. And we'll then progressively load in your Lighthouse diagnostic data so you understand where you can improve. So INP is already in PageSpeed Insights. Next up, we have Lighthouse, where we've got that time span mode.
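The Total Blocking Time proxy mentioned above has a simple definition that's worth seeing concretely: for every main-thread task longer than 50 ms, the portion beyond 50 ms counts as blocking time, and TBT is the sum of those portions (in lab tools, over the window between First Contentful Paint and Time to Interactive). A minimal sketch:

```javascript
// Hedged sketch of the TBT computation used by lab tools like Lighthouse.
// Input: main-thread task durations in ms (within the FCP-to-TTI window).
function totalBlockingTime(taskDurations) {
  return taskDurations
    .filter((d) => d > 50)                  // only "long tasks" block input
    .reduce((sum, d) => sum + (d - 50), 0); // count the excess beyond 50 ms
}
```

For example, tasks of 30 ms, 70 ms, and 250 ms yield a TBT of 220 ms: the 30 ms task contributes nothing, while the others contribute 20 ms and 200 ms.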
I'm going to interact with one of Google's more interactive experiences, Google Flights. And here, I'm going to show you that I'm performing a number of interactions. I've sped this up a little bit. And we're going to end our time span here. And what this gives us is a Lighthouse report that's inclusive of Interaction to Next Paint. We can scroll down, and we can see that we have a brand new audit called Minimize Work During Key Interaction. This includes our input delay, processing delay, presentation delay, as well as all the other audits Lighthouse typically gives you for reasoning about long tasks and JavaScript performance. Finally, we've worked on tools like the Web Vitals extension, which I know a number of folks like. This also now has Interaction to Next Paint. Now, on the debugging performance side of things, people have been using the Performance panel for years and years. And it actually started out in a place where it was a little bit simpler. Over time, we added flame graphs and the ability to reason about call stack depth. But it trended a little bit towards tools that browser engineers used for performance debugging. Could this be better? We've got a preview of something we've been thinking about. So I'd like to introduce you to the Performance Insights experiment. This is available as a preview. We hope you'll check it out. And feel free to share feedback. And finally, we have contextual. Now, we introduced a feature called Stack Packs, which allows Lighthouse to display stack-specific suggestions. We know that a lot of people are often using frameworks or CMSs when they're building things out. And so we've been hard at work making sure that Lighthouse can start to give you slightly better context if you're using a modern stack. Here's an example.
So in an application that's using Next.js, rather than just saying you should be using a modern image format, we'll tell you that you should be using the Next.js Image Optimization API to serve modern formats like AVIF. Rather than telling you to just lazy load your images, we'll tell you to use the next/image component, which defaults to loading="lazy". Really powerful stuff. Really great contextual stuff. We'll also do things like tell you when you should maybe be considering splitting your JavaScript bundles using React.lazy, or using a third-party library like Loadable Components. Now, a parting thought is that improving performance is a journey. There are lots of small changes that can lead to really big gains. Step one might be an outline. Step two is definitely not this. You're not going to suddenly solve all the world's problems. But you might end up similar to where I usually do with performance, which is step 1.5. It's fine. It's a result of iteratively investing in improving your user experience over time. And you'll eventually get to something closer to this. But this is actually OK. It's still pretty. Users are going to thank you for an improved performance experience at the end of the day. So I hope you got some value out of this talk. I think that the future of performance tooling is user-centric, actionable, and contextual. I've been Addy Osmani. Thank you so much.
21 min
16 Jun, 2022
