What Is the Accessibility Tree, Really?


Have you ever wondered how screen readers interact with browsers to provide accessible experiences? You may have heard terms like "accessibility APIs", "accessibility tree" or "accessible name computation". But what do they refer to, really?

In this talk, we will demystify the process by which browsers generate and update the accessibility tree. We will look into its key elements, and how HTML elements and ARIA attributes map into it. Lastly, we will explore how web developers can leverage it for effective debugging. Let's dive into the inner workings of screen reader-browser interactions!

Mathilde Buenerd
19 min
13 Jun, 2024


Video Summary and Transcription

This is a presentation on accessibility and screen readers. The speaker discusses the evolution of screen readers and how they adapted to graphical user interfaces. Accessibility APIs and the accessibility tree are introduced, allowing programs to construct a text database used by assistive technologies. The accessibility tree may vary across browsers and platforms, excluding elements that are not relevant to assistive technologies. The ARIA hidden state and element properties play a role in determining the accessibility of elements, and the accessible name can be derived from text content or specified using ARIA attributes.

1. Introduction to Accessibility and Screen Readers

Short description:

The speaker, Mathilde, introduces the talk, shares some background on the importance of accessibility, and traces the evolution of screen readers from text-based operating systems to graphical user interfaces. She explains how screen readers adapted to the complexity of graphical user interfaces by constructing a text database called an off-screen model.

Nice to see you all. I know it's 5 o'clock, I know it's late, we had a big day today, so I'm happy to see this room is full and that you all look awake. So thanks for being here.

So let's start. I made a mistake on this first slide; I hope it's not a bad sign for the rest of the presentation. The name of this presentation is What Is the Accessibility Tree, Really? Hi, my name is Mathilde, I'm a front-end developer and an accessibility professional. I left my job at Shopify a couple of weeks ago, so I don't work there anymore. You can hear from my accent that I'm originally from France, but I live in Madrid, in Spain.

So today's topic is the accessibility tree, and we'll cover that, but before, I'd like to go a bit back in time and give some context on why the accessibility tree is an interesting topic. So on this slide, we can see a picture of an 18-key square keyboard with nine digits, the letters A, B, C, and D, and Help and Stop keys. And out of curiosity, can you raise your hand if you know what this device is? I see no hands. Actually, I'm not surprised. If I saw any hand up, I would have been pretty amazed. But this device is from 1988, and it's a screen reader keypad that was developed by IBM. This keypad was coupled with screen reader software as part of one of the first screen reading systems, developed to allow people who are visually impaired or blind to access computers.

But did you ever wonder how screen readers worked back then? In a very simple way, in a text-based operating system like MS-DOS, it was fairly easy for screen readers to access the characters presented on the screen, and all they had to do was convert this text into speech. But as we know, computers evolved fast, and already by the end of the 80s, text-based operating systems were replaced by graphical user interfaces. That made things much more complicated for screen readers, because a graphical user interface doesn't just render characters. It renders pixels, and the information presented on the screen is much more complex than it was. For example, text can belong to different windows, but the screen reader should only read the text in the currently selected window. So you have different types of text. You have menus. You have items. You have buttons. And on top of that, you have elements that are purely visual, like icons.

So how did screen readers adapt to that? Well, this is a picture of an article called Making the GUI Talk, so the Graphical User Interface Talk, written in 1991. It explains a new approach they had been developing at IBM to make screen readers work with graphical user interfaces. The idea was to construct some kind of text database that models what's displayed on the screen. This database was called an off-screen model, and in that article, the author explains that with the off-screen model, assistive technologies had to make assumptions about the role of things based on how they are drawn on the screen. For example, if a text has a border or background, then it's probably selected.

2. Accessibility APIs and the Accessibility Tree

Short description:

Accessibility APIs were introduced in the late 90s, allowing programs and applications to construct the text database used by assistive technologies. Accessibility APIs provide a tree representation of the interface, with objects that describe UI elements, their properties, roles, states, and events. The accessibility tree is a separate representation of the DOM that focuses on accessibility-related information. Browsers generate the accessibility tree using the render tree, which is based on the DOM minus hidden elements. DevTools can be used to view the accessibility tree.

Or if there's a blinking insertion bar next to it, the user can probably enter text. I'm pretty sure you can imagine how complex these systems were and how difficult this was to maintain, and there was a sort of ambiguity. On top of that, every time a new version of the user interface came out, screen readers had to ensure the off-screen model was still accurate. Drinking time, sorry. So what's a better solution? Well, already in the late 90s, accessibility APIs were introduced, and the role of accessibility APIs was to allow programs and applications to construct the text database that assistive technologies were previously building themselves. Concretely, they allow operating systems to describe UI elements as objects with names and properties, so that assistive technologies can access those objects. And I put accessibility APIs in the plural form because there are many; they are platform-specific. You have some for Mac, you have some for Windows, you have some for Android, and they're pretty much standards.

So you might wonder, what does an accessibility API look like? In practice, it's a tree representation of the interface. For example, on the Mac, the root would be an application that contains two children: a menu bar, the thing that's at the top, and a window that contains the application itself. The menu bar would in turn contain, as children, all the menu items, and the window would contain a left bar, a close button, a search input, et cetera. For each UI element, developers can typically set roles (is it a window? is it a menu?). They can set properties: what's the position of this window? What's its size? They can set states: is this menu item selected or not? And other things that are useful for developers, like events. Okay, so now we know all about screen readers and accessibility APIs, but what about the accessibility tree then? Well, the accessibility tree is a separate representation of the DOM that focuses on accessibility-related information. It is built by the browser, which then passes this information to the platform accessibility API. So in summary, the accessibility tree makes the link between your HTML, the platform accessibility API, and assistive technology.
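As a rough sketch of that idea, the macOS example above could be modeled in plain JavaScript like this. The field names (role, name, states, properties) are hypothetical simplifications; each platform accessibility API defines its own object model.

```javascript
// A minimal sketch of the example above: an application object whose
// children are a menu bar and a window, each carrying a role, a name,
// and some states or properties. Field names are hypothetical.
const app = {
  role: "application",
  children: [
    {
      role: "menubar",
      children: [
        { role: "menuitem", name: "File", states: { selected: false } },
        { role: "menuitem", name: "Edit", states: { selected: false } },
      ],
    },
    {
      role: "window",
      properties: { x: 0, y: 0, width: 1280, height: 800 },
      children: [
        { role: "button", name: "Close" },
        { role: "searchbox", name: "Search" },
      ],
    },
  ],
};

// An assistive technology can then walk this tree and query each object.
function collectRoles(node, out = []) {
  out.push(node.role);
  (node.children || []).forEach((child) => collectRoles(child, out));
  return out;
}

console.log(collectRoles(app).join(" > "));
// → application > menubar > menuitem > menuitem > window > button > searchbox
```

The point of this shape is that an assistive technology never needs to guess from pixels, as the old off-screen model did: every role, name, and state is declared explicitly by the application.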

So now let's look at how browsers generate this tree. If you think of a typical page that's meant to be interacted with using a screen, a mouse, and a keyboard, a typical critical rendering path would look like this. The browser creates the DOM and the CSS object model, then it creates the render tree, which is basically the DOM minus the elements that are hidden by CSS. When the render tree is ready, the browser can determine the exact size and position of each element, which is the layout step, and then it paints the nodes on the screen one by one. After that, the user can interact with the page with their mouse, keyboard, et cetera. Now looking at the experience from the point of view of a user who uses assistive technology, it's different. The layout and paint steps aren't relevant here, because they're not something assistive technologies will leverage. However, the browser will use the render tree, the tree without the hidden elements, to construct the accessibility tree. Then the accessibility tree passes the information to the platform accessibility API, which can be queried by assistive technologies like screen readers. Okay, so let's look at what the tree looks like, and how to show the accessibility tree. The best way to get a grasp of it is to look at it using DevTools. I'm going to use Chrome DevTools in this presentation.
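To make that pipeline concrete, here is a deliberately simplified sketch in plain JavaScript. It is not the browser's actual algorithm, and the node shape is made up, but it shows the key step: hidden elements are pruned, and each remaining node gets a role and an accessible name.

```javascript
// Simplified sketch: derive an accessibility tree from a DOM-like structure.
// Nodes hidden with display:none (or aria-hidden="true") are pruned along
// with their subtrees; the accessible name falls back from aria-label to
// text content. Real browsers implement far more elaborate rules.
function buildAccessibilityTree(node) {
  const style = node.style || {};
  const attrs = node.attributes || {};

  // Hidden nodes (and everything below them) never reach the tree.
  if (style.display === "none" || attrs["aria-hidden"] === "true") return null;

  const children = (node.children || [])
    .map(buildAccessibilityTree)
    .filter((child) => child !== null);

  return {
    role: attrs.role || node.tag,
    name: attrs["aria-label"] || node.text || "",
    children,
  };
}

// Hypothetical DOM-like input:
const dom = {
  tag: "main",
  children: [
    { tag: "button", text: "Save" },
    { tag: "div", style: { display: "none" }, children: [{ tag: "span", text: "hidden" }] },
    { tag: "nav", attributes: { "aria-label": "Main navigation" } },
  ],
};

const a11yTree = buildAccessibilityTree(dom);
console.log(a11yTree.children.map((c) => `${c.role}: ${c.name}`));
// → [ 'button: Save', 'nav: Main navigation' ]
```

Note how the hidden div and its entire subtree simply don't exist in the output, which mirrors why content removed by CSS is also absent from what a screen reader announces.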

Check out more articles and videos

We constantly publish articles and videos that might spark your interest, skill you up, or help you build a stellar career

Modern Web Debugging
JSNation 2023
29 min
Top Content
Few developers enjoy debugging, and debugging can be complex for modern web apps because of the multiple frameworks, languages, and libraries used. But developer tools have come a long way in making the process easier. In this talk, Jecelyn will dig into the modern state of debugging, improvements in DevTools, and how you can use them to reliably debug your apps.
Debugging JS
React Summit 2023
24 min
Top Content
As developers, we spend much of our time debugging apps - often code we didn't even write. Sadly, few developers have ever been taught how to approach debugging - it's something most of us learn through painful experience. The good news is you _can_ learn how to debug effectively, and there are several key techniques and tools you can use for debugging JS and React apps.
Accessibility at Discord
React Advanced Conference 2021
22 min
From Friction to Flow: Debugging With Chrome DevTools
JSNation 2024
32 min
Coding and debugging should flow, not fizzle! Let's see what's new and improved in Chrome DevTools to make your web development & debugging journey smooth sailing.
Configuring Axe Accessibility Tests
TestJS Summit 2021
30 min
Top Content
Axe-core is a popular accessibility testing engine that is used by Google, Microsoft, and hundreds of other companies to ensure that their websites are accessible. Axe-core can even integrate into many popular testing frameworks, tools, and IDEs. In this advanced session, we'll be learning how to configure axe and its integrations to fine-tune how it runs and checks your pages and code for accessibility violations.
Debugging with Chrome DevTools
JSNation Live 2021
11 min
Jecelyn will share some tips and tricks to help you debug your web app effectively with Chrome DevTools.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop, Free
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Performance Debugging
React Advanced Conference 2023
148 min
Workshop
Ivan Akulov
Web Accessibility for Ninjas: A Practical Approach for Creating Accessible Web Applications
React Summit 2023
109 min
Workshop
Asaf Shochet Avida, Eitan Noy
In this hands-on workshop, we’ll equip you with the tools and techniques you need to create accessible web applications. We’ll explore the principles of inclusive design and learn how to test our websites using assistive technology to ensure that they work for everyone.
We’ll cover topics such as semantic markup, ARIA roles, accessible forms, and navigation, and then dive into coding exercises where you’ll get to apply what you’ve learned. We’ll use automated testing tools to validate our work and ensure that we meet accessibility standards.
By the end of this workshop, you’ll be equipped with the knowledge and skills to create accessible websites that work for everyone, and you’ll have hands-on experience using the latest techniques and tools for inclusive design and testing. Join us for this awesome coding workshop and become a ninja in web accessibility and inclusive design!
Automated accessibility testing with jest-axe and Lighthouse CI
TestJS Summit 2021
85 min
Workshop
Bonnie Schulkin
Do your automated tests include a11y checks? This workshop will cover how to get started with jest-axe to detect code-based accessibility violations, and Lighthouse CI to validate the accessibility of fully rendered pages. No amount of automated tests can replace manual accessibility testing, but these checks will make sure that your manual testers aren't doing more work than they need to.
The Clinic.js Workshop
JSNation 2022
71 min
Workshop
Rafael Gonzaga
Learn the ways of the clinic suite of tools, which help you detect performance issues in your Node.js applications. This workshop walks you through a number of examples, and the knowledge required to do benchmarking and debug I/O and Event Loop issues.
Solve 100% Of Your Errors: How to Root Cause Issues Faster With Session Replay
JSNation 2023
44 min
Workshop, Free
Ryan Albrecht
You know that annoying bug? The one that doesn’t show up locally? And no matter how many times you try to recreate the environment you can’t reproduce it? You’ve gone through the breadcrumbs, read through the stack trace, and are now playing detective to piece together support tickets to make sure it’s real.
Join Sentry developer Ryan Albrecht in this talk to learn how developers can use Session Replay - a tool that provides video-like reproductions of user interactions - to identify, reproduce, and resolve errors and performance issues faster (without rolling your head on your keyboard).