The fastest way to lose the AI debate is to win it with shame
If you want to understand where AI is going, stop asking who is "pure" and start asking why so many people feel they need it. The moral panic around tools like ChatGPT and Google Gemini has produced a familiar internet ritual: public confessions, purity tests, and the quiet threat of social exile for anyone who admits they typed a prompt. It feels righteous. It also fails at the one job ethics is supposed to do: reduce harm in the real world.
Shaming AI users doesn't slow down data centers, doesn't protect artists, doesn't fix broken schools, and doesn't stop employers from using automation as a pretext to cut labor. What it does do is push ordinary people into defensive camps, where they stop listening, stop sharing how they use these tools, and stop trusting anyone who claims to care about harm.
How "AI virginity" became a badge of virtue
In some online spaces, refusing to touch an LLM has become a kind of moral identity. People brag about never losing their "AI virginity," as if a single Gemini prompt permanently stained their character. That framing is emotionally satisfying because it turns a messy, systemic problem into a clean personal choice. It offers a simple story: good people abstain, bad people indulge.
But the world doesn't run on clean stories. It runs on incentives, scarcity, disability, burnout, loneliness, and institutions that increasingly treat humans as overhead. When ethics becomes a purity contest, it stops being a tool for navigating reality and becomes a tool for sorting people into "us" and "them."
That sorting is not a side effect. It is the point. Shame is a social technology. It creates belonging for the people doing the shaming, and it creates silence in the people being shamed.
What shame is really doing for the people who deploy it
In a conversation I had with Dr. Fatima, we kept circling the same uncomfortable truth: shame often functions as a shortcut to control when people feel powerless. AI is big, fast, and backed by enormous capital. Many people feel they cannot influence the companies building it, the schools adopting it, or the employers using it to squeeze labor. So the target shrinks to something manageable: the individual user.
Psychologically, that move makes sense. It gives anxious people a lever they can pull today. It also gives them a villain they can actually see. "The user" is right there, in your feed, admitting they used ChatGPT to draft an email or to calm down at 2 a.m. A data center is not in your feed. A procurement contract is not in your feed. A university's budget model is not in your feed.
Shame is also seductive because it feels like accountability without requiring strategy. You don't have to build coalitions, propose policy, negotiate tradeoffs, or sit with ambiguity. You just have to condemn.
Listening to AI users is not endorsement but intelligence gathering
There is a difference between empathizing with a person and approving of every system they touch. Listening to AI users is not "normalizing harm." It is how you learn what harms are actually happening, where the incentives are strongest, and what interventions might work.
Right now, many people use LLMs for reasons that are painfully ordinary. They are overwhelmed at work and need help getting organized. They are disabled and need a planning scaffold. They are isolated and want something that responds. They are students in institutions that have quietly replaced education with credentialing, and they are acting accordingly. They are teachers drowning in administrative load, watching leadership adopt AI "efficiency" while ignoring the human cost.
If you want to reduce AI-related harm, those motivations matter more than your preferred moral narrative. They tell you where the demand is coming from. They tell you what gaps society is failing to fill. They tell you what people will defend, even if you call them names.
The disability justice split is a warning sign
One of the most revealing fractures is happening inside disability justice communities. Some posters argue that using LLMs is inherently unethical. Others, often similarly disabled, describe using ChatGPT to decide what to eat, to plan a day around pain and fatigue, to draft messages when executive function collapses, or to find safer options when they are sleeping in a car.
You can dislike the technology and still recognize the reality it is being used to patch. When someone uses an LLM as a prosthetic for a world that refuses to accommodate them, shaming them is not solidarity. It is a demand that they absorb the cost of your politics.
A compassionate ethics asks a harder question: what would it take to make that person less dependent on a corporate chatbot? That question points outward, toward healthcare access, housing, income stability, community support, and assistive tools that are accountable to users rather than shareholders.
Education is where the moral panic meets institutional hypocrisy
Few arenas show the mismatch between rhetoric and reality as clearly as education. Students are scolded for using AI, sometimes with language that implies moral failure. Meanwhile, institutions adopt AI to streamline administration, reduce staffing, and standardize learning into something measurable and cheap.
Teachers, already undervalued, are asked to police AI use while their own jobs are quietly made more precarious by the same technology. They face piles of AI-generated assignments that feel soulless, but the deeper problem is not that students are uniquely lazy. It is that many students have correctly inferred that the degree has become symbolic, expensive, and detached from meaningful learning. When the system treats education as a transaction, students respond transactionally.
Shame cannot repair that. Only institutional redesign can. That means clearer assessment models, smaller class sizes where possible, better pay and support for educators, and honest policies that distinguish between acceptable assistance and misrepresentation. It also means admitting that "ban it" is not a plan when the institution itself is buying it.
Harm reduction beats purity politics
Harm reduction is often associated with public health, but the mindset travels well. It starts with a simple premise: people will do what they do, for reasons that make sense to them. Your job is to reduce the damage, not to stage a moral performance.
Applied to AI, harm reduction asks what guardrails can be built now, even in an imperfect world. It asks how to protect people who are vulnerable to manipulation, dependency, or delusion without treating them as stupid or sinful. It asks how to reduce ecological impact without pretending individual abstinence can substitute for regulation and infrastructure choices.
It also creates room for honesty. People are more likely to disclose risky or unhealthy patterns when they are not afraid of being humiliated. That matters for emerging concerns like AI-fueled paranoia, compulsive use, and the phenomenon some clinicians and researchers have begun discussing under the loose umbrella of "AI psychosis," where a person's interaction with a system can intensify delusional beliefs. You cannot intervene in what you refuse to hear.
What listening sounds like in practice
Listening is not a vibe. It is a method. It means asking users what they are doing with these tools, what problem they are trying to solve, and what they would use instead if they had better options. It means taking seriously the emotional functions LLMs can serve, especially for people who are lonely, anxious, or socially isolated.
It also means being precise about harms. "AI is evil" is not actionable. "My employer is using AI to increase workload without increasing pay" is actionable. "My school is replacing human support with chatbots" is actionable. "This model was trained on my work without consent" is actionable. Precision turns outrage into leverage.
And it means separating the user from the system. A person using ChatGPT to draft a cover letter is not the same moral object as a company using AI to surveil workers, or a platform using AI to flood the internet with low-quality content, or a vendor selling "AI tutors" as a substitute for human educators.
A practical playbook for compassionate AI ethics
Start by replacing purity tests with better questions. Ask whether a use case increases a person's agency or decreases it. Ask whether it replaces human support that should exist, or temporarily fills a gap that society refuses to address. Ask who profits, who pays, and who gets locked in.
Then focus your heat where it belongs. Push for transparency about training data and licensing. Push for energy reporting and realistic accounting of water and power use in AI data centers. Push for procurement rules in schools and government that require impact assessments, not just glossy demos. Push for labor protections so "AI productivity" does not become a euphemism for speedups and layoffs.
At the interpersonal level, treat disclosure as a gift. When someone admits they rely on an LLM to get through the day, they are telling you where the world is failing them. If your first response is disgust, you teach them to hide. If your first response is curiosity, you gain a chance to offer alternatives, boundaries, and support.
How to talk to someone who uses ChatGPT without turning it into a trial
If you want a script, keep it simple. Ask what they use it for. Ask what they like about it. Ask what worries them about it, if anything. Share your concerns in concrete terms, like privacy, accuracy, labor impacts, or environmental costs. Avoid implying they are morally defective for trying a tool that is aggressively marketed and embedded into everything.
If the person is using AI for companionship, tread carefully. Mockery can deepen isolation. A better move is to ask what they are getting from the interaction and whether there are human or community supports that could meet some of the same needs. You can dislike the product and still care about the person using it.
If the person is using AI to cheat, don't pretend shame will fix it. Ask what the incentive structure is. Ask what feels pointless. Ask what support is missing. Accountability can coexist with empathy, but it has to be paired with a credible path back to dignity.
The real fight is over systems, not souls
It is completely reasonable to hate what LLMs have done to parts of our intellectual and artistic landscape. It is reasonable to fear ecological damage from energy-hungry infrastructure. It is reasonable to be furious about consent, compensation, and the way AI can be used to degrade work and flood culture with sludge.
What is not reasonable is to treat individual users as the primary moral battleground. That approach flatters our sense of righteousness while leaving the underlying machinery untouched. It also guarantees that the people most in need of support will be the ones most punished for adapting: the burned out, the disabled, the isolated, the precarious.
If you want to build an AI politics that can actually win, start where people are, not where you wish they were, and remember that listening is not surrender; it is how you learn where to place your hands on the levers that move the world.