What Exactly is "Disabling AI"?
Some Sort of Origin Story
I was having dinner with some 80-year-old friends last night. A pair I like to call "the kids." Both are academics, authors, and thinkers wrestling with the current state of AI and global affairs. We sat around a dinner table talking about things big and small but mostly wondering about the future. One of the pair kept us full of facts all night through a constant consult with the Perplexity app, using voice and an iPhone display with the type size close to max. It wasn't the moment I came up with Disabling AI, but it was the moment I knew its potential impact.
AI is upending everyone.
But we don’t talk about who it’s upending: people with disabilities. Older users of technology. Folks with little digital literacy. All with their own flair when it comes to navigating products now, but all at risk of being left out of the conversation if we don’t weave them into the consideration set as we build AI today.
They’re why design must deliver.
They’re why data must be scrutinized.
And they’re why I’m spending a second year researching and actively trying to alter the way we builders of digital experiences design AI.
How it all started.
In the Winter semester of 2024, my Visionary Voices cohort and I wrapped our heads around AI. It was a special project at CUNY SPS, where I’m now in my last year of an MA degree in Disability Studies. Visionary Voices was all about AI’s evolving influence across industries with the hope of raising critical questions about its limitations and how it could be more principled in its applications. And for us in the Humanities, it was a chance to discuss AI through our individual disciplines. Theatre. Disability. And Leadership.
We started meeting in the Fall and rallying each other to keep going despite full-time jobs and other regular coursework. Thanks to support from a wonderfully strong cohort, by the tail end of that year I had gathered 40+ research papers and spent the holiday break poring through work on GenAI, disability, data, and design. I even dove into Dr. Ashley Shew’s amazing work on technoableism. And after a few weeks of nerding out and digitally dog-earing PDFs, I came up for air with the idea for Disabling AI.
My thesis was that AI is inherently ableist, but that we could learn to change it by deconstructing data and design.
The outcome of that research is posted here: CUNY Visionary Voices
How it’s going.
Flash forward to the end of 2025, when I started prepping for a workshop: a hybrid presentation/interactive session for a design conference. I was staying in Japan for a month and found myself in the mountains of western Kyoto at the end of the year, before the New Year’s rush. To me, this particular spot in Kyoto is one of the most beautiful and Zen places on earth. The ideal spot to hunker down and jam on the narrative that’s about to drive my 2026. I’m not much of a public speaker, so I grabbed a workshop slot where I could facilitate a group discovering together. This presentation would be the glue holding the conversation together. So it had to be right.
The workshop is slated for February 2026 in Tokyo, and I’m super jazzed to hang with a gaggle of designers and creatives in a room, tackling big things and making new pals.
In my sesh, we’ll work in groups to learn how to assess GenAI image outputs for ableism and bias. There’ll be very analog-y instructional cards. There’ll be tips and stories. There’ll be a ton of examples specific to Japan.
I’m legit nervous as heck to launch this project, but I’m also driven by something else one of those 80-year-old friends told me last night. He has a Substack. He writes about peace. And as of yesterday, he had 11 followers.
And with that, welcome to Disabling AI.
********
Folks in Japan, come through to Spectrum Tokyo:
https://fest.spectrumtokyo.com/2026/session/en/jennifer-andrews