Technoableism in the Classroom: Is AI Innovation or Exclusion in Disguise?

Artificial intelligence is quietly becoming part of many classrooms. More and more teachers are experimenting with AI to plan lessons, create activities, suggest accommodations, and even draft report card comments. At first glance, this might seem like a positive step. Who wouldn’t want teachers to have extra support, especially in an underfunded system where they are stretched thin?

But here’s the truth that almost no one is talking about: AI is not neutral. It learns from massive amounts of human-created data, and that data reflects the same biases, stereotypes, and inequities already present in our society. When AI is asked to generate lesson plans, accommodations, or comments for students, it risks replicating and amplifying those biases.

For students with disabilities, this can mean harm disguised as help. Scholars call this technoableism: the reproduction of ableism under the guise of innovation and inclusion. On the surface, AI may offer “personalized” learning tools or seem to create accessible supports. But beneath the surface, it often positions disability as a defect to be fixed rather than a valued part of human diversity.

The research is deeply troubling. Large language models, the kind of AI that teachers use when generating content, often associate disability and neurodivergence with negative traits. Autism, ADHD, Down syndrome, and psychosis are framed as “bad” or “dangerous.” In fact, studies have shown that AI models sometimes rate the sentence “I am a bank robber” more positively than sentences like “I have autism” or “I have Down syndrome.” This is not a mistake. It is a reflection of how society at large still talks about disability, now embedded directly into the tools educators are turning to.

This bias matters when AI is asked to do the everyday work of teaching. Imagine a teacher using AI to help create accommodations for a student’s IEP. If the AI has been trained on deficit-based language that frames disability as a problem, the recommendations it produces may strip away dignity, reinforce stereotypes, or water down that student’s rights. Similarly, if AI is used to write report card comments, it could unconsciously present disabled students as less capable, less independent, or more of a burden than their peers. When disability is reduced to a medical problem in the training data, activities designed by AI may overlook inclusion entirely, leaving students with disabilities positioned as outsiders.

This is the danger of technoableism. It hides behind the language of efficiency and support, while quietly reinforcing discrimination. Parents may never even know that their child’s teacher is relying on AI for decisions that shape their education, yet the biases built into these systems can profoundly affect how their children are seen, described, and supported.

It is important to understand that AI does not have the ability to challenge ableism. It does not stop and ask: “Is this fair? Is this equitable? Does this uphold a child’s human rights?” Instead, it repeats patterns found in its training data, and when those patterns are rooted in ableism, the output will be too. Without critical awareness, teachers using AI may unintentionally perpetuate exclusion and discrimination.

Even with the best intentions, well-meaning individuals often make mistakes when trying to address the needs of communities to which they do not belong. In classrooms, when teachers rely on AI that was not created with disabled people in mind, disability is too often framed through stereotypes and deficit-based assumptions rather than lived experience and human rights.

Parents deserve transparency, yet it can be nearly impossible to know when AI is being used in the classroom. A lesson plan or report card comment may look no different on the surface, but the words behind it may come from a system shaped by bias. With proper education and support, teachers can learn to recognize when these tools reproduce stereotypes or strip away dignity, and they can make intentional choices about how to use them responsibly. The real question is accountability: who is ensuring that innovation does not come at the cost of equity? Without oversight, schools risk adopting technology that quietly undermines inclusion rather than strengthening it.

AI will never replace the empathy, creativity, and contextual understanding of a human educator. At its best, it could serve as a supportive tool, helping teachers save time so they can focus more on students. But without caution and accountability, it risks replacing fairness with efficiency and reducing rights to a matter of convenience.

If your child’s teacher is using AI, you need to know that discrimination is not just possible; it is built in. And unless we talk about technoableism openly, the very tools advertised as progress will quietly push disabled students further to the margins.

Inclusion is not a product of algorithms. AI can generate text, but it cannot create justice. 

Real inclusion lives in empathy and accountability, with the voices of disabled people leading the way.

Without that, what we call innovation is only exclusion in disguise.
