鏡 Kagami.
Disabling AI through imagery.
There is, in reality, a virtual me.
This virtual me will not age, and will continue to play the piano for years, decades, centuries.
Will there be humans then?
Will the squids that will conquer the earth after humanity listen to me?
What will pianos be to them?
What about music?
Will there be empathy there? Empathy that spans hundreds of thousands of years.
Ah, but the batteries won’t last that long.
The legendary artist Sakamoto Ryuichi passed away in March 2023. A few months later, I went to his performance at The Shed in Hudson Yards, where we all sat in a dark room and listened to the mixed-reality version of him playing solo piano. The piece was called 「鏡」(Kagami), or “mirror.”
It was haunting. It was beautiful. But mostly, it was a deep reflection on the connection between humans and technology.
And that had always been Sakamoto’s jam.
On the way out, I stopped by the little bookstore in the lobby, where I bought some Kagami goods, including a sachet of a scent he designed. That’s also where I spotted a salmon-colored book with deep navy type called Atlas of AI. Now, it had nothing at all to do with Sakamoto or the exhibit, but someone knew it belonged there.
Books about AI are supposed to be dark and foreboding, aren’t they? You know, with robots and futuristic worlds with no humans and like multiple moons in the sky. But this one had a diagram on it that was totally not AI at all. It had brains, and trees, and mountains, and scales. It was pretty weird. So naturally, I bought it.
But then it sat on my bookshelf for three years.
A few weeks ago, I cracked it open. I think maybe I needed to hold a hardcover for a minute in this current world of digital disconnection. Admittedly, the first few pages were as weird as the cover and went into detail about a talking horse in Berlin.
(Wait, what? This couldn’t be a book about AI.)
But, I read on.
One of the beginning chapters talked a lot about training AI to see. And that’s where the lightbulb went off.
I’ll be facilitating a workshop in Tokyo in two weeks where we’ll explore how AI sees disability and break down what it’s trained on. While I’m saving that full experience for the room full of designers at the conference, I’ll start to unpack here how we as humans interpret and classify images, and how AI systems and models are trained on a somewhat false sense of truth.
I’ll let the Atlas of AI kick it off:
“Images are remarkably slippery things, laden with multiple potential meanings, irresolvable questions, and contradictions. Yet now it’s common practice for the first steps of creating a computer vision system to scrape thousands, or even millions, of images from the internet, create and order them into a series of classifications, and use this as a foundation for how the system will perceive observable reality. These vast collections are called training datasets, and they constitute what AI developers often refer to as ‘ground truth.’ Truth, then, is less about a factual representation or an agreed-upon reality and more commonly about a jumble of images scraped from the Internet.” (Atlas of AI, p. 96)
Whoa. I mean…
Most images coming out of AI, when you really lean in and deconstruct them, are indeed full of meaning and contradiction. But that’s not for the reasons we think.
What AI treats as the truth in images was basically born from a random collection of things “scraped from the Internet,” as Atlas of AI states. It’s not some magic sauce or mystical source data: it’s just what society (via the Internet) thinks of a whole lot of rando stuff. And when we start to apply this training-data “truth” to calling out ableism in AI, stuff gets real interesting.
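To make that “jumble of images becomes ground truth” idea concrete, here’s a toy sketch. Everything in it, the data, the tags, the names, is invented for illustration; no real pipeline is this simple. The point it shows is the one from the quote: the scraped tag simply *becomes* the label, so whatever skew the scrape carries is what the system learns as truth.

```python
from collections import Counter

# Invented stand-in for scraped web images and their alt text / tags.
scraped = [
    {"url": "img1.jpg", "alt": "person in wheelchair"},
    {"url": "img2.jpg", "alt": "athlete running"},
    {"url": "img3.jpg", "alt": "person in wheelchair"},
    {"url": "img4.jpg", "alt": "person in wheelchair"},
]

# The "classification" step: the tag that came with the scrape
# is adopted, unexamined, as the ground-truth label.
ground_truth = {item["url"]: item["alt"] for item in scraped}

# Whatever the internet over-represented now dominates the "truth"
# the model will be trained to reproduce.
label_counts = Counter(ground_truth.values())
print(label_counts.most_common(1))
```

No person, context, or consent enters the loop anywhere; the only “curation” is whatever the scrape happened to contain, which is exactly the tension the disability-representation research below digs into.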
In an academic study called “‘They Only Care to Show Us the Wheelchair’: Disability Representation in Text-to-Image AI Models,” we begin to understand how AI interprets disability by assessing imagery.
“…generative AI relies on the past to produce the future. Whereas disability representation advocacy focuses on moving forward by involving people with disabilities (e.g., casting calls and sensitivity consultants), we must contend with legacies of erasure as they quite literally make the data that make the images. We speculate a tension between respectful representation and available training data, since people with disabilities have long been absent from digitized data sources given disability-based segregation’s legality in the US until the last few decades.” (Mack et al., 2024, p. 13)
So if disability isn’t fully represented in data, or if there’s a skewed version of it in the training of AI, how are we actually telling its “truth”?
This provocation is why I think examining how large image models (LIMs) and large language models (LLMs) interpret disability could be a key to unlocking how we, as a society, can collectively alter our views of disability in the future. So, as we navigate the most disruptive technology of our time, one that is currently reshaping us as humans, it’s imperative that we learn to see what’s in front of us.
And, more importantly, question where it comes from.
I’ll be unpacking how to do that in the next few posts. But for now, with Sakamoto Ryuichi’s music and mirrors on my mind, I’ll close by simply asking:
What do you see?
Description:
An image showing a diverse group of people in a sleek, futuristic urban setting with elevated transit tubes, flying vehicles, and modern skyscrapers under a bright blue sky. The group includes: a person in tactical gear with augmented-reality eyewear; someone seated in what appears to be advanced mobility technology; a woman holding a tablet, wearing athletic prosthetics; a man in sunglasses with a white cane, accompanied by a robotic assistance device; a person seated in a wheelchair with futuristic design elements; a robotic dog-like companion; and others with various technological enhancements or accessories.
Resources
Crawford, K. (2022). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Foley, A., & Melese, F. (2025). Disabling AI: Power, exclusion, and disability. British Journal of Sociology of Education.
Mack, K. A., Qadri, R., Denton, R., Kane, S. K., & Bennett, C. L. (2024). “They only care to show us the wheelchair”: Disability representation in text-to-image AI models. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’24), Article 166, 1–23. Association for Computing Machinery.


