AI & Accessibility with Deafblind Writer
Key Points
- The episode explores how AI intersects with disability and accessibility, featuring a conversation with Elsa Honison, a deaf‑blind speculative‑fiction writer and long‑time disability advocate.
- Elsa recounts early experiments with Microsoft’s Copilot AI, which produced distorted or apologetic images when asked to depict a mother with hearing aids and blindness, highlighting the technology’s initial inability to accurately represent disabled identities.
- Over the past two years, both hosts note a rapid evolution in AI tools—moving from crude, inaccurate outputs to more nuanced capabilities—yet persistent challenges remain in ensuring these tools serve as inclusive aids rather than reinforcing biases.
- The discussion delves into practical hurdles of building accessible products, including the need for diverse data, thoughtful engineering, and respectful design that avoids tokenism or over‑compensation.
- By blending technical, artistic, and advocacy perspectives, the interview underscores that AI can be both a powerful accessibility tool and a potential hindrance if development doesn’t actively center disabled user experiences.
Sections
- AI, Accessibility, and Deafblind Perspectives - The host introduces a conversation with deaf‑blind writer and disability advocate Elsa Honison to explore how AI is applied to accessibility, its dual role as tool and obstacle, and the technical and creative challenges of building inclusive products.
- AI Progress on Disability Representation - The speaker discusses how recent AI models like ChatGPT and Claude have improved by stopping apologetic language, recognizing disability‑specific terminology, and accurately depicting disabled bodies without erasing them.
- AI as Adaptive Assistive Tool - The speaker explains how they use AI to read text, locate visual information, and organize daily tasks—offering a privacy‑preserving alternative to human‑based apps—and references other accessibility professionals who champion similar technologies.
- Beyond Checklist: Inclusive AI Design - The speakers argue that treating accessibility as a simple compliance box—like ticking off the WCAG guidelines—fails to serve the more than one billion users with diverse disabilities, urging AI builders to actively understand and design for this large, often overlooked audience.
- Avoiding Accessibility Tech Debt - The speaker cautions against postponing accessibility fixes—labeling them as tech debt—and argues that AI overlays aren't genuine solutions, urging built‑in accessibility support for emerging low‑code “vibe coder” platforms.
- AI Browsers for Accessibility Audits - The speakers discuss leveraging Comet and Claude Haiku 4.5 as AI‑powered browser agents to generate SEO punch lists and automate accessibility QA on live websites.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=T_yHQbZ4hF0](https://www.youtube.com/watch?v=T_yHQbZ4hF0)
**Duration:** 00:20:18

Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=0s) AI, Accessibility, and Deafblind Perspectives
- [00:03:12](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=192s) AI Progress on Disability Representation
- [00:06:58](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=418s) AI as Adaptive Assistive Tool
- [00:10:54](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=654s) Beyond Checklist: Inclusive AI Design
- [00:14:37](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=877s) Avoiding Accessibility Tech Debt
- [00:17:54](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=1074s) AI Browsers for Accessibility Audits
Finally, we are going to dive into the
vexed question of AI and disability and
accessibility. And I'm going to be
interviewing a surprise guest to dive
into this with me. Help me understand
how is AI being used for accessibility?
How is accessibility both a tool and a
hindrance? What are some of the
challenges that come with building
product in the world of accessibility?
We're going to get into the vibe coding
side, the engineering side a little bit.
We're going to talk a little bit about
even the artwork and drawing side. Very
fun stuff. So, have fun. I hope you guys
enjoy this. I love doing interviews.
Okay, we have a fun guest post today.
Uh, I want to interview my spouse and
there's a special reason for that. Uh,
Elsa, do you want to sort of introduce
yourself and tell us a little bit about
uh why we both thought it would be a
great idea for you to do a video?
>> Sure. So, my name is Elsa Honison. I'm a
deafblind speculative fiction writer
and non-fiction writer. I've been
spending the last 16 years of my life
doing disability advocacy that has
rolled into tech and we've been talking
a lot over the last two years about how
disability and AI have been intersecting
which is why we thought it might be
interesting to have a conversation.
>> Yes. And so for for context, I've been
the one that has come back to you and
said AI this, AI that, what about this
and that. And I feel like that
conversation has really shifted over the
last 24 months. And maybe it would be
helpful just to give folks a sense of
where we've come from and how far we've
come in the last two years with AI and
disability.
>> Yeah. So two years ago, I think we were
using, you know, it's been so long since
I used it because it was kind of crap
that I don't remember the name anymore.
>> No, it wasn't even ChatGPT. It was the
Windows Microsoft Edge one. Oh,
Copilot. Early Copilot.
>> It was early Copilot. So, it was early
Copilot. We were using it to generate
images and I made coloring book pages
that were custom for our kids. And one
of them was a unicorn corgi and one of
them was a skeleton corgi. And I was
like, "Oh, you know what might be really
fun is to do an image of me with kids
because it was close to Mother's Day or
something." Mhm.
>> Um, and I asked it to do a picture of a
mom with one eye with hearing aids and
glasses with kiddos, and it couldn't do
it. The first version was this weird
surrealist almost Picasso thing that was
vaguely terrifying. The second one gave
the child the blindness, but not the
adults, which I thought was an
interesting choice. And it kept
apologizing to me. It kept saying, "Oh,
I'm so sorry that you're blind. I'm so
sorry that you're deaf. This is
terrible." I was like, "Excuse you. I
don't want you to apologize to me." And
because I am a disability activist, I
started playing with AI to understand
whether or not this was just co-pilot.
And what I discovered is that nothing
could replicate my blind eye. It kept
kicking it out and changing it into two
eyes that matched. And so, somebody
would be doing the Renaissance painting
selfie game and I couldn't do it because
it would turn me into a two-eyed person.
So we started there and I just
kept asking questions of the AI. I kept
saying things like, "Hey, what do you
think about disability?" And again, it
used to apologize. Over the last 24
months, ChatGPT and Claude have both
stopped apologizing for my disability,
which is frankly better than most
average people.
>> It's progress.
>> It's progress. Claude even knows
disability policy. Like I can go into
Claude and ask it questions about
disability language choices. And Claude
will say,
>> "Well, I know person first language. I
know identity first language. Which one
do you like?" Like, so it's starting to
get it. And today there was an article I
think it was on BBC.
>> Yeah. Are you talking about the one
where someone was talking about how
her body was able to be drawn now?
>> And that was a BBC article. Yeah.
>> So she's a prosthetic
wearer. Um, and she was able to get AI
to draw her prosthetics. And I hadn't
run a test in a while. I thought, I'm
bored with this. I'm tired of just
constantly seeing my disability erased.
So, I didn't want to keep trying. And
then today, I tried it cuz I saw the
article and it's able to mostly do it.
Sometimes all the way. It depends on the
prompt. But now, it actually will draw
my cataracted eye. It will draw my
hearing aids. It doesn't try to make me
a non-disabled person through an AI
lens.
>> Yeah. I remember you and I had some
conversations about why that was an
interesting problem for the modelmaker
community. And I'm really curious. I I
don't have an inside story on how they
solved that one, but there's some
presumably some kind of reinforcement
learning or perhaps some new images
they've ingested that are helping the AI
to figure out what to draw when asked
for a prosthetic or asked for hearing
aid. I mean, I think it's interesting
because from my perspective, I've been
pushing a lot publicly saying we need to
have AIs that are trained to respect
disability and to see it because that's
been an issue in previous AI
experimentation before. If anybody is
familiar with the Moral Machine
project from MIT, this is back in 2019,
but they were testing the trolley
problem using AI and they fed a whole
bunch of different kinds of bodies that
might be crossing the street. It's a
really dark thought process, but what
was darker is that there were no
disabled people in the training data.
So, the only way you could think about
whether or not the autonomous vehicle
was going to hit a disabled person was
if it were an old person or a child or a
dog.
>> There were no wheelchairs. There were no
white canes. And so, it just it showed a
lack of information. And I talked with
people at the Allen Institute about
this. I've talked with people just
generally about the issue of not seeing
or talking about disability within AI
spaces and how dangerous that can be.
So, we're talking about a really sort of
fun example of just being able to see
yourself, but I will say that being able
to see yourself even in silly selfie
games matters for people's inclusion
within community. And so, it's all in
some ways kind of serious. If we don't
envision disabled people in a future
thinking world, we're not envisioning
disabled people at all. I I think that's
a really interesting sort of segue or a
point because one of the things that
I've been thinking a lot about is this
concept of AI as a universal uh enabler
a technology that helps all of us do a
lot of things well which makes it one
very difficult to talk about because
everyone's experiences sort of their own
experience of AI but it also means that
it applies in a lot of surprising ways
and so as much as you've talked here
about struggling with getting AI to see
you I think there's also a side of it
where I see AI as at least potentially
extremely powerful as an accessibility
tool and I'd be curious sort of for your
thoughts there as well.
>> It absolutely is. So I'll talk about my
experience first and then I have a
couple people I can shout out who people
can go research. Um I use AI to do
things like if I can't see something far
away, I can take a picture of it and
tell the AI to read it to me and then it
becomes large print. If I'm trying to
find something on a wall like
handwriting or something in a whole
packet of things and I can say this is
what I'm looking for, it can zero in on
that. I've used it to read prescription
bottles. And I'll tell you that there
used to be an app for that. It's called
Be My Eyes, but I never liked that app
because it required me to interact with
a stranger who was a volunteer on an app
who would read something for me. And I
am well known enough in the disability
community that I did not want to
necessarily go on an app where somebody
could see my name and where I was and
say, "Hey, tell me how to get around
this airport that I'm currently in by
myself. This just seems like a bad
idea." Or blind people having to have
someone read their credit cards. So now
the AI can do that for me. These are all
examples of ways that you can use AI as
an adaptive aid. Another one is people
with ADHD sometimes use it to track
medications. They'll talk to their AI
agent or they'll say, "Hey, here's what
I did four hours ago. What do I do
next?" And then there's people like
Jesse Laurens who you can find on
LinkedIn. She's also an accessibility
professional like I am. She's blinder
than I am. And she used AI to help her
take a cross-country trip on Amtrak.
Yeah. Like she basically used it to
take pictures, tell her where she was
going. It also helps her in her kids'
classroom to see her kids' artwork. I do
the same thing. Like this is an
application of AI that allows people to
interact with the world the way that a
sighted person would without having to be
a sighted person.
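An editor's sketch of the read-it-to-me workflow described above: take a photo, send it to a vision-capable model, and get the text back in a form that can be enlarged or spoken. The endpoint URL and model id below are assumptions for illustration only (the request shape follows the Anthropic Messages API's base64 image blocks); check the provider's current documentation before relying on either.

```python
import base64
import json

# Assumed endpoint and model id, for illustration only.
API_URL = "https://api.anthropic.com/v1/messages"
MODEL = "claude-sonnet-4-5"  # hypothetical model id

def build_read_aloud_request(image_bytes: bytes,
                             media_type: str = "image/jpeg") -> dict:
    """Package a photo plus a transcription prompt into a request body."""
    return {
        "model": MODEL,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                # The image travels as base64 alongside the instruction.
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": media_type,
                            "data": base64.b64encode(image_bytes).decode("ascii")}},
                {"type": "text",
                 "text": "Read out every piece of text visible in this "
                         "photo, largest to smallest. Do not summarize."},
            ],
        }],
    }

# The dict serializes cleanly to JSON, ready to POST with an API key.
body = build_read_aloud_request(b"\x89fake-image-bytes")
print(json.dumps(body)[:48])
```

In a real assistive flow, the model's reply would then be rendered at large print size or handed to a screen reader.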
>> You know, I love the range of examples
that you included there. One of the
things that has come up a little bit in
the Substack chat has been the range of
possible use cases around
neurodivergence. Not just ADHD, but also we
have autistic folks in the substack chat
who are talking about how they're using
AI. It seems like if I were to sort of
maybe I'll try slapping a principle or a
layer on it and see how you feel about
it. It feels like one of the things that
makes AI really compelling as an
adaptive aid or a support in these
situations is that it can fill in the
spaces that you need it to fill in and
disappear when you don't want it to be
there. You don't have the privacy
intrusions associated with another
person the way you were talking about.
If your sort of ADHD is all about
hyperactivity, maybe it's about focus,
right? And about how you can get focus.
If it's more about sort of the inability
to get into flow state, but you can be
on the task, then you can work on that.
And there's just different ways to get
engaged. I'd be curious if that
resonates for you as sort of a universal
take, or if it feels too prescriptive. I
think the way I would frame it is
actually that it puts adaptability in
the hands of the disabled person. One of
the major issues with adaptive aids and
with sort of our culture of
accessibility is that it often relies on
external forces to give you the access
that you need. So as an example for
somebody who needs a guide dog, you need
that dog with you all the time, every
day. You are also responsible for that
guide dog all the time, every day. And
that means that you are sort of
externally required to use something
that's not just right there and can be
put down. And so I think your example of
like ADHD focus and flow where you don't
need to take a pill necessarily if you
have something that can help you go
through the guideposts. It's allowing
you to take control of your
accessibility in a way that's really
meaningful. And I think that's true for
the blindness examples that I was giving
you as well. I don't have to rely on
another person. I don't have to ask
someone for help. I actually can take
personal control and autonomy and that's
very rare in accessibility.
>> So let's pull on that thread because I'm
really curious. I know that in your day
job you think about product and
accessibility a fair bit. I think about
product a lot. It feels like you might
have some perspective for builders, for
people who are constructing with AI how
they can think about accessibility
beyond just WCAG, because I had to
run WCAG checks for chatbots back in the day
and I feel like that
>> that can't just be the only answer,
right? Like it can't just be okay, well,
we tick the box, we're done, right?
>> I mean, it's not. And I'll give you the
number one example why it's not. Um,
WCAG only solves for one disability at a
time and there are 1 billion disabled
people in the world. One in four
Americans are disabled. Just as
>> I bet people didn't know that. I bet
most people watching didn't.
>> I bet most people had no idea. One in
four people in the United States are
disabled. And the disabled population in
the world is roughly the size of China.
>> In other words, there
are more disabled people than there are
users of ChatGPT right now.
>> Yes, that's correct. So, if you're
thinking about those numbers, then the
number one lie that you hear in building
conversations is that there are no
disabled people using your product
>> because there are absolutely disabled
people using your product. So, then the
next question is, well, what does
accessibility really mean if WCAG
doesn't solve for everything? And the
answer is to get to know your audience
and to get to know what your users can
and can't do. Now, not every single
product is made for every single
disability equally. I don't expect, for
example, a video game to be perfectly
accessible if it is a visual thing. But
I do expect you to have accessibility
controls so someone can try. Um, that is
the way that we get to things like
what's outside of WCAG. Well, what's
outside of WCAG is lots of things. And
so really look at problem solving
through logic rather than problem
solving through checkboxes. If you are
solving something with just audio, think
about whether or not a deaf person can
actually access the content.
>> Because at the end of the day, what
you're solving for is equal access to
information and equal access to
experience. And every experience might
look different depending on what
disability you need to think about.
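One place where "problem solving through logic" is mechanical enough to automate is color contrast. As a hedged sketch, the functions below implement the WCAG 2.x relative-luminance and contrast-ratio formulas; 4.5:1 is the WCAG AA threshold for normal-size text.

```python
# WCAG 2.x contrast math: relative luminance from linearized sRGB,
# then ratio (L1 + 0.05) / (L2 + 0.05) with the lighter color on top.

def _linearize(channel: int) -> float:
    """Convert one 0-255 sRGB channel to its linear-light value."""
    s = channel / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
# WCAG AA requires at least 4.5:1 for normal-size text.
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```

Checks like this cover only the mechanical slice; whether the surrounding experience is actually usable still takes the logic-first thinking described above.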
>> So I want to throw another question at
you that feels related and I'm sort of
curious for how you think about it. We
are going to be in a world in call it 10
months to 12 months where the Intel deal
with Nvidia is done and there are Intel
chips that are LLM friendly in a lot of
laptops which are going to enable what I
call local inference or local LLMs. From
a practical experience perspective, I
think about it as we are almost on the
verge of a world that feels like Cluely
all the time. So, the Cluely
experience is where you have this almost
glass-like overlay on the screen and
it talks at you in text while
you are talking or having a
conversation or whatever. I don't have
Cluely on, I promise, but it's just
always there and it's always on. It's
like a layer. It's very similar to what
Meta has come up with with their glasses
approach where they you put on the
glasses and it's like an always on
layer. I am a little bit worried or a
little bit curious maybe both that
having that kind of technology in
glasses, having that kind of technology
in laptops is going to very much offer
the opportunity to app builders to give
up on the accessibility problem and say,
"Well, the good news is, you know, the
new version of Cluely can just see the
screen and we don't have to worry about
this, or the glasses that you'll be
wearing can just sort of take care of
this for you and they just sort of punt
the problem down the road because we're
expecting an intelligence layer to catch
it. And I
>> Oh god, that's going to be so expensive
for you to fix if you don't do
accessibility at the beginning. So tech
debt for accessibility is very real.
People will often say, "Well, we'll just
fix it in post." And that's how you end
up with massive tech debt. So the first
thing that I would tell people is don't
build up the accessibility tech debt
because you will regret that life
choice. But I also think that the same
thing is true for accessibility in terms
of websites. Like overlays don't do the
job. They're not actually accessibility.
And so I think it's the same thing with
AI. AI overlays are not accessibility.
The user may choose to use an LLM as an
accessibility tool; that choice by the
user is the accessibility. But you can't force
a user to adapt using LLMs because I
don't think that's going to function
very well for a variety of reasons. One
of them being that you can't possibly
solve for every version of accessibility
when it comes to using an AI agent. I
guess maybe my last question and then
I'll leave it to you to sort of wrap up.
I I am curious. We are in a world where
it's not just developers building apps
anymore. It's also vibe coders who are
using services to build and there's sort
of accessibility implications there. I
know that tools like Lovable have made
big strides on the security front
recently on bringing backend into the
tool. I'm curious. I haven't seen any
telegraphed updates on accessibility
from those tools. Do you feel like that
is a Lovable or Bolt kind of level problem
to solve, so vibe coders get support for
that, or how would you frame that?
>> I do think that things like Lovable need
to be thinking about that because if you
start building it in on a base level
with that kind of a product, it opens
the door for people like vibe coders to
learn accessibility versus relying on
every single vibe coder to do the right
thing. And as much as I think all the
vibe coders want to do really cool
stuff, I want vibe coders to have the
tools and a little bit of a nudge to
cover everybody, including the users they can't
think of, because you can't think of
every product user, right? Like that's
not realistic even when you're just
thinking about non-disabled people. So
expecting everybody to know everything
doesn't work. And that's where I think
that larger companies have an ethical
responsibility to give people the
support to do that thing. So, I do think
that Lovable needs to build the nuts and
bolts, give people the opportunity to
learn these things because then you're
training people in skills that they may
not have or have even thought they
needed to get.
>> Have you ever handed a screenshot or
something like that to an LLM and asked
it to generate an accessibility
critique?
>> I have.
>> What happened?
>> It didn't catch everything.
>> Tell me more. So, it knew a lot,
understood, it understood things like
being able to see contrast, but because
it wasn't looking at the website itself,
it couldn't catch things like the link
wasn't an accessible link because it
can't see that through a screenshot. It
wasn't able to catch things like whether
or not the web page was able to be read
out loud using a screen reader because
if you're using a screenshot,
that information isn't available to the
LLM. I would be very curious to see how
to solve for screen reader use with an
LLM. I haven't really played around with
that a whole lot recently, but it's
definitely something I should look into
more. Maybe I'll report back another
time.
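The screenshot limitation described above is exactly what DOM-level tooling gets around. As a rough sketch (not a substitute for a real audit, which would also need ARIA semantics, computed styles, and a screen-reader pass), here is a stdlib-only Python check for two issues a screenshot can never reveal: images with no alt attribute and links with no accessible name.

```python
from html.parser import HTMLParser

class A11yAudit(HTMLParser):
    """Flag images with no alt attribute and links with no accessible name."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._open_links = []  # one has-name flag per currently open <a>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img":
            if "alt" not in attrs:  # alt="" is legal for decorative images
                self.issues.append("img missing alt text")
            elif (attrs["alt"] or "").strip() and self._open_links:
                # A non-empty alt also names the enclosing link.
                self._open_links[-1][0] = True
        elif tag == "a":
            self._open_links.append(
                [attrs.get("aria-label", "").strip() != ""])

    def handle_data(self, data):
        if self._open_links and data.strip():
            self._open_links[-1][0] = True  # visible text names the link

    def handle_endtag(self, tag):
        if tag == "a" and self._open_links:
            if not self._open_links.pop()[0]:
                self.issues.append("link with no accessible name")

audit = A11yAudit()
audit.feed('<p><a href="/x"><img src="pic.png"></a> <a href="/y">Docs</a></p>')
print(audit.issues)  # ['img missing alt text', 'link with no accessible name']
```

This is the kind of markup-level pass a browser agent can run against a live page, where a screenshot-only model has nothing to inspect.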
>> You could. I think one area that I would
be curious for your take on is agentic
browsers as accessibility reviewers. So,
two examples come to mind. Comet I
think is really interesting because
Comet lets you pull up the sidebar and
examine a living site that it will
navigate in any way you want. So as an
example, I have done a terrible job with
SEO optimization on my own personal site
and I was like I need a punch list and
so I asked Comet to go through my site
and navigate it and figure out the
issues with SEO and come back with a
punch list and it was able to look at
multiple pages and come back with a very
complete list. Oh, that's a really
interesting thought. I haven't used
Comet to try doing accessibility work
yet.
>> The other one that is going, it's in
research preview now, but we're all
going to get it soon, is Claude Haiku
4.5
in Chrome. So, not in the app.
>> And so, if you're in Chrome and you
install the extension, then Claude can act
as a browser agent for you. And
people are starting to use it to
automate QA testing of live sites
because Claude can go through and do
that kind of navigation.
>> I would say I would trust Claude because
I've had really good experiences
with Claude around WCAG and also with
accessibility in general and I've
noticed that that particular model is
well trained on disability and
accessibility
>> which sort of fits with Anthropic's
constitutional AI approach.
>> It does. I think they
definitely have the market cornered on that
particular aspect: looking at
disability as part of the ethos of who
they serve. And I think it's
interesting that you can kind of tell,
just from talking to the different
products, what comes up.
>> Cool. Any final words of wisdom?
>> Well, I think two things. One, don't
trust ChatGPT to write perfectly
accessible code. Please double check it.
It makes mistakes. And two, uh,
because one always puts in a plug at the
end of an interview, I have a book
called Being Seen, which is out now. And
I have a second book that's coming out
next fall called Dear Blind Lady. And it
basically answers all your questions
about disability, even questions like
these.
>> And stuff. Well, thank you for coming
on. I had a good conversation. I don't
think this gets talked about enough. I'm
glad we were able to have a
conversation.
>> I mean, fortunately, we live in the same
house.
>> Yeah, we talk about it a lot, but but
the world in general,
>> it's true. All right. Thanks for having
me on.
>> Of course. Talk soon.