Can AI make the web more accessible?
A blog post adapted from the talk Accessibility through AI? delivered at Programmable 2025. Find out more about our speaking engagements.
AI hype is everywhere. Whether it’s warranted or not is up for debate, but it does push us to ask how we might use this new technology to tackle one of the biggest and most longstanding problems on the web: digital platforms that continually get built without accessibility in mind. How can we leverage AI to increase access? Hint: it's not what you think!
Your users are disabled
The disabled community makes up a significant portion of users. Globally, the figure is estimated at 16% - that’s 1.3 billion people. In Australia, the prevalence of disability is 1 in 5, and in New Zealand it’s 1 in 4. Put another way, that’s equal to the entire population of Melbourne in Australia, or of Auckland in New Zealand!
One in 5 Australians has a disability - the equivalent of the entire population of Melbourne.
And that is a minimum. The figure comes from census questions called the Washington Group Short Set of Questions on Disability, which ask respondents how much a disability limits their activities. Only those who indicate “a lot of difficulty” are counted as disabled. That means anyone who experiences “some difficulty” due to a disability, long-term illness or chronic pain isn’t counted as disabled at all.
“Disabled people in Australia number the same as the population of Melbourne.”
But we know there’s more to the picture. For one, many people who live with permanent disabilities, chronic pain or undiagnosed neurodiversity such as ADHD may have developed coping mechanisms that reduce the impact of those disabilities, but that doesn’t mean they don’t struggle. And someone who has lived with a disability all their life may rate their challenges as less difficult than someone with a newly acquired disability would, simply because they have adapted their day-to-day living.
Disability has a wide reach
Disabilities can be permanent, temporary or situational. For example, if a right-handed person breaks their right hand or arm, they will - hopefully - recover from that injury but in the meantime it will impact the way they interact with technology.
Disability can have to do with our physical body, from our ability to see, to use our hands and body, to our ability to hear. These abilities are also a spectrum, so difficulty with vision can range from being near-sighted or colour blind, to having an impairment that affects one’s vision, to being legally blind.
We’re learning more and more every day about neurodiversity. Many people are diagnosed late in life, or not at all. Still, the way things are built in this world can make them challenging to navigate for people with ADHD, autism, or auDHD. Dyslexia and other reading disabilities also impact people on a daily basis, even though those individuals might not count as disabled by the census definition.
Finally, the older population is hugely impacted by disability: 49% of people aged 60+ have a disability. The 60+ group are the baby boomers, who are not only numerous but also have the largest disposable income of any age group and are technologically active.
All of these people use technology on a daily basis to do the things they want and need in life. Just because someone broke their arm doesn’t mean they don’t need to eat! In fact, they may be more likely to do their grocery shopping online specifically because they’re injured. Someone who is blind in one eye can still be an amazing player of their favourite video game. Neurodiverse people make for colourful and engaging conference attendees, so it’s important they’re able to use technology to buy their tickets. And retirees need to do their online banking to stay on top of their finances during this stage of their life.
The accessibility gap
WebAIM, a digital accessibility organisation, performs an annual study where they scan 1,000,000 homepages across the web and test them for accessibility. In February 2024, WebAIM found that 96% of homepages are not accessible. They detected 56,791,260 accessibility issues, an average of 57 bugs per homepage!
“When we build technology that isn’t accessible, we’re creating barriers between people and the things they want and need to do.”
Can AI bridge this gap and make the web more accessible?
How AI is improving web accessibility
AI has actually been around for a while, making improvements to the accessibility of digital platforms and tech:
Predictive text: from sending a social message to writing a professional CV, predictive text is valuable for those with reading disabilities, dyslexia, and even those who are writing in their non-native language.
Voice recognition: this was originally developed for people with disabilities who have difficulty typing or can’t type into a computer at all. From talking to our personal devices to navigating a phone-in menu, voice recognition benefits everyone, and in particular those with mobility and dexterity difficulties, and learning disabilities.
Automated captions and transcripts: when COVID hit and everything moved online, there was an urgent need to make the web more accessible. This global event, paired with the sophistication of AI, likely ushered in the rapid development of live auto-captions and transcripts.
There are more recent uses for AI, too:
Generate image alt text: missing alt text is among the 5 most common issues on the web, according to the WebAIM Million project. Now, a content creator can easily put an image into ChatGPT and ask it to generate alt text for it. The alt text provided is often verbose and may not be entirely relevant, but it gives creators a starting point (there’s a markup sketch of this below). This capability is so powerful that the company Scribely leverages AI to “operationalize image descriptions”.
Read images of text: ever go to a restaurant and check out the menu online ahead of time? Unfortunately, the menu that’s uploaded is often an image, which is not accessible to assistive technology (unless it has a very long alt text). Fortunately, AI can make sense of images of text. Now, a blind or low-vision person who once would have needed someone else to read a menu out to them can upload the image of the menu into ChatGPT and have the AI output text that is digestible by assistive technology. Be My Eyes, an app that connects blind and low-vision users who want sighted assistance to volunteers, recognizes this powerful use case and is using AI to provide visual assistance.
Importantly, all these technologies let users review the AI-generated content, and they also offer alternative ways to interact. For example, if voice recognition doesn’t work because of an accent, speech impediment or stutter, the user can type out their command instead.
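To make the alt-text example concrete, here’s a minimal sketch of what the reviewed result looks like in markup. The file name and descriptions are hypothetical; the point is that the AI draft is only a first pass, and a human trims it to what’s relevant on the page.

<!-- AI draft (verbose): "A close-up photograph of a white ceramic plate on a
     wooden table holding a tall stack of golden-brown pancakes topped with..." -->
<!-- Human-reviewed version: short and relevant to the page's purpose -->
<img src="pancake-special.jpg" alt="Stack of pancakes topped with berries and maple syrup">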
“AI improves efficiency, not accuracy.”
Context matters in UI, code and accessibility
AI is limited in what it can do because it doesn’t understand important context. Moreover, it doesn’t realize that it lacks the context it requires and will be over-confident in its inaccurate response, a common AI issue.
Take for example this code snippet:
<div onClick="doAction()">
What element is this? A developer might observe the onClick and assume it’s a button. But it could also be a link. Or a radio button, where selecting “Other” reveals a textbox. The element could even be an accordion or a tab, which are foundationally both buttons but carry context around them that makes them more than simple buttons.
There are even more questions that are important to ask to make sense of this code snippet:
What relationships does it have with other elements on the page?
What state(s) does it have?
Is it in the right tabbing/reading order?
What keyboard behaviour should it have?
Does it have a good focus state?
Does it have good colour contrast?
If humans can’t answer these questions, how can AI? AI is admittedly not good at ambiguous or context-dependent code, which is why we can’t rely on it to always be accurate, especially in a complex domain like user interaction.
The image reads:
Summary: AI struggles with ambiguous or context-dependent accessibility issues such as:
Visually hidden content
Custom interactive elements
Redundant or incorrect ARIA
Dynamic updates without live regions
Vague or misleading labels
Using semantic HTML and established accessibility patterns reduces these ambiguities.
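As a rough illustration of what that looks like in practice, here is one way the ambiguous div from earlier could be rewritten, assuming it was really meant to be a button that expands a panel. The doAction() handler is carried over from the snippet above, and the id and label are made up for this sketch.

<!-- Ambiguous: no role, no keyboard support, no state -->
<div onClick="doAction()">More details</div>

<!-- Semantic: a native button gets keyboard and screen reader support for free,
     while aria-expanded and aria-controls make the state and relationship explicit -->
<button type="button" aria-expanded="false" aria-controls="details-panel" onclick="doAction()">
  More details
</button>
<div id="details-panel" hidden>…</div>

The doAction() script would still need to toggle aria-expanded and the hidden attribute when activated, but the browser now answers most of the questions above - role, keyboard behaviour, focus - for free.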
Technology can help - it’s called automation
While AI can’t understand the context of the code you write, technology can still help make your code more accessible. We wrote an article full of accessibility tools that you can easily add to your product development lifecycle: browser extensions, linters, page scanners and Figma plug-ins, all free and relatively easy to use.
Remember, automation only catches 20-30% of accessibility issues! So it’s important to pair these tools with manual and user testing.
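As one hedged example of what that automation can look like, the snippet below loads the open-source axe-core engine and scans the current page, logging any violations it detects. The CDN path and version are illustrative, and a clean run only means the automated checks passed, not that the page is accessible.

<!-- Load the axe-core accessibility engine (CDN path and version are illustrative) -->
<script src="https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js"></script>
<script>
  // Scan the whole document and log each detected violation and the elements it affects
  axe.run(document).then(function (results) {
    results.violations.forEach(function (violation) {
      console.log(violation.impact, violation.id, violation.help);
      violation.nodes.forEach(function (node) {
        console.log('  affected element:', node.target.join(', '));
      });
    });
  });
</script>

Engines like this also power many of the free browser extensions mentioned above, so in practice you often get the same checks without wiring anything up by hand.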
AI is biased
AI poses another, possibly greater risk when we rely on it for accessibility. How does AI learn? By training on data sets. Remember that WebAIM Million statistic - how can AI learn to write accessible code when it is trained on code that is riddled with accessibility issues?
Content creator Jeremy Andrew Davis created a TikTok video where he asked MidJourney to generate images of an autistic person. Of the 148 images that were generated:
Only 2 were female-presenting
Only 5 were over 30
They were all white
None were smiling
This is an example of how AI leans on the stereotypes and prejudices that are “patterns” in our society. Moreover, AI is designed to discount outliers, which amplifies those biases.
“AI trains on biased, ableist data.”
We’re asking the wrong question
When we consider AI and accessibility, asking whether the new tech can “solve” accessibility is the wrong question, because disability is not a technology problem; it’s a social one. Disability existed long before technology and exists apart from it:
It’s creating buildings that don’t consider the different mobility needs of their patrons
It’s discriminatory hiring practices that make it less likely for disabled people to be employed or fully employed
It’s believing that disabled people are a “niche” when in fact they represent significant buying power across the globe, and evidence shows that accessibility practices improve companies’ bottom line
“Disability isn’t a technology problem; it’s a social one.”
Accessibility isn’t a problem to “solve”. It’s something to build. We do this by:
Embedding accessibility into our entire product development lifecycle: don’t leave accessibility to the end as an afterthought. Shifting left is the most effective way to impact a product and influence culture.
Establishing executive-level policies: make accessibility a directive from all the way at the top, and have senior leaders own the success of the product’s accessibility.
Bringing in accessibility expertise: whether that means engaging external experts or hiring dedicated accessibility specialists, having that expertise on hand will help you understand where you are and where you need to go.
Testing with disabled users: the truest way to ensure the accessibility of your digital products is to test with real users. When doing user testing, be sure to actively seek out disabled users to be part of the testing group.
Upskilling: make sure you and your team understand the accessibility and user requirements to set yourselves up for success.
It’s by focusing on improving our processes, culture and training that we actually make digital products accessible.
Find out more about Aleph Accessibility's auditing, training and consulting services. Or get in touch to start or accelerate your accessibility journey.