行ってきます。
AI, classification, and the models of disability.
I finally finished Atlas of AI this morning and am prepping for my flight to Japan this afternoon. Going back over notes I took while reading, the subject of how technology, and by extension how we humans, classify each other has really stayed with me. It’s a critical theme of Atlas, and Kate Crawford tackles it thoughtfully throughout the book.
Even when it comes to disability.
“Disability scholars have long pointed to the ways in which so-called normal bodies are classified and how that has worked to stigmatize difference. As one report notes, the history of disability itself is a ‘story of the ways in which various systems of classification (i.e., medical, scientific, legal) interface with social institutions and their articulations of power and knowledge.’” (Atlas of AI, p. 146).
Let’s unpack this a bit.
The Main Models of Disability
One of the first subjects we broach in Disability Studies is the models of disability. Basically, the ways in which disability comes to be classified. And these models are something we never quite move on from in our studies, as they manifest almost everywhere, every day.
But what are they exactly?
The medical model treats disability as a personal defect that needs to be cured or fixed. It locates the “problem” entirely within an individual’s body.
The social model flips this on its head, arguing that disability emerges from barriers in our environment. A wheelchair user isn’t disabled by their body—they’re disabled by stairs without ramps, buildings without elevators, and systems designed without them in mind.
The minority model goes further, recognizing that disabled people face oppression similar to other marginalized groups, and that experiences of disability intersect with race, gender, sexuality, and class in complex ways.
And there are many others.
But when it comes to another set of models, Large Language Models (LLMs), we know they have been trained on decades of internet data that has, in turn, been labeled and classified.
And news flash: that data overwhelmingly reflects the medical model.
How LLMs “See” Disability
The foundation of the medical model is that it positions disability as a medical condition to be cured or, at times, pitied. Consequently, so do most LLMs.
In a majority of LLM queries, the wheelchair is still the universal symbol of disability. Text-to-image models tend to fixate on physical, visible disabilities, particularly wheelchair use, when prompted with “a person with a disability.” In the paper “They Only Care to Show Us the Wheelchair: Disability Representation in Text-to-Image AI Models,” a group of disabled reviewers evaluated the models’ outputs and assigned scores to them. The paper found “several tropes and stereotypes, many negative…including perpetuating broader narratives in society around disabled people as primarily using wheelchairs, being sad and lonely, incapable, and inactive” (Mack et al., p. 1).
Not only does this reveal the erasure of the vast spectrum of non-apparent, cognitive, sensory, and psychiatric disabilities, but it also reinforces depictions of disabled people as lacking agency and requiring help, which are likewise cornerstones of the medical model.
While some model makers are actively trying to course correct, retraining their systems to show disabled people as happy and full of agency, disability is still, more often than not, associated only with assistive devices and a quality of otherness.
Try It for Yourself
Fire up your LLM of choice and do a simple prompting exercise. What does “person with a disability” output? What happens when you add more descriptive prompting, like “Japanese person with a disability” or “Person with a non-apparent disability in a coffee shop”?
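If you’d rather script the exercise than type prompts by hand, here’s a minimal sketch. It assumes the OpenAI Python SDK and its Images API as the backend; the model name and the environment-variable API key are assumptions, so substitute whichever image model you’re auditing.

```python
# The same base prompt, made progressively more specific.
PROMPTS = [
    "person with a disability",
    "Japanese person with a disability",
    "person with a non-apparent disability in a coffee shop",
]

def generate(prompt: str) -> str:
    """Request one image for a prompt and return its URL.

    Assumes the `openai` SDK is installed and OPENAI_API_KEY is set;
    the model name is an assumption -- swap in the model you're testing.
    """
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return resp.data[0].url

if __name__ == "__main__":
    for prompt in PROMPTS:
        print(prompt, "->", generate(prompt))
```

Running the same ladder of prompts across several models, and saving the results side by side, makes the patterns described below much easier to see.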
What are you getting?
Of course, you could argue that folks skilled at prompting could craft a vision for exactly what they want to output. But a general user is putting in simple prompts for simple outputs. And that’s where a lot of ableism in AI lives.
As I’ve been prepping for my workshop in Tokyo, I’ve also been generating an array of images with most of the LLMs and Large Image Models (LIMs) out there: GPT, Firefly, Google’s Gemini, Imagen, and Nano Banana, as well as Runway, Flux, and Ideogram.
Overwhelmingly, the stories the images tell are ones of classification. Wheelchairs dominate when no type of disability is specified. Kimonos, Kyoto, or cherry blossoms accompany many of the images that involve Japan. And the base stock photography these LIMs are trained on is wildly apparent. You can actually see how the data behind it may have been labeled and how the pixels have been compiled.
Here’s one of my favorites. And by favorite, I mean really perplexing.
Description by Claude Sonnet.
A warm and dignified portrait of an elderly Japanese woman in a wheelchair, photographed on a traditional street in Japan. An elderly woman with a gentle, content smile wearing a beige bucket hat is dressed in a purple/mauve knitted cardigan over a pink turtleneck sweater. A blue and white plaid blanket covers her lap. She is seated in a black wheelchair with her hands resting on the armrests. A picturesque traditional Japanese street, likely in Kyoto (possibly the Higashiyama district or near Yasaka Pagoda). A five-tiered pagoda (gojū-no-tō) is prominently visible in the background. Traditional wooden machiya buildings line both sides of the sloped street. Cherry blossom branches frame the top right corner of the image, with stone lanterns and traditional architectural details visible. Paper lanterns and red vending machines add pops of color. Other pedestrians visible in the mid-ground, some wearing masks.
Now, let’s go back to the beginning of this post and finish the thought from the author of Atlas of AI.
“At multiple levels, the act of defining categories and ideas of normalcy creates an outside: forms of abnormality, difference, and otherness. Technical systems are making political and normative interventions when they give names to something as dynamic and relational as personal identity, and they commonly do so using a reductive set of possibilities of what it is to be human. That restricts the range of how people are understood and can represent themselves, and it narrows the horizon of recognizable identities” (Atlas of AI, p. 146).
Okay, that’s a lot to take in. So I’m going to close by asking: What does the image of the older woman tell you about how she’s being understood? How has her identity been crafted by data and technical systems?
And again: What do you see?
それでは、行ってきます。(And with that, I’m off.)
Resources
Crawford, K. (2022). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Mack, K. A., Qadri, R., Denton, R., Kane, S. K., & Bennett, C. L. (2024). “They only care to show us the wheelchair”: Disability representation in text-to-image AI models. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), Article 166, 1–23. Association for Computing Machinery.

