The Left-Wing Case for AI: How Large Language Models Could Advance Progressive Causes

As AI technology advances, a growing debate has emerged about its alignment with progressive values. This article explores how AI, particularly large language models, could actually advance left-wing causes by supporting disability rights, improving medical advocacy, reducing class barriers in communication, democratizing education, and potentially embodying progressive values.

In recent months, we've seen a fascinating ideological realignment around artificial intelligence. Many anti-AI arguments that have gained traction on the left actually share more with conservative positions than progressive ones. This disconnect raises an interesting question: what would a genuine left-wing pro-AI movement look like?
The timing of AI's rise has significantly shaped this debate. The sudden popularity of ChatGPT coincided with two unrelated events: the crypto mania of 2022 and the pro-Donald-Trump push many big tech CEOs made in 2024. If these events had occurred at different times, we might have seen a more natural alignment between progressive values and AI technology.
AI as a Disability Rights Tool
The left wing has historically taken a broad view on what constitutes acceptable disability aid. When criticizing potentially exploitative companies—like food delivery apps that pay low wages—progressives often acknowledge that these services provide meaningful improvements to the lives of disabled or chronically ill people who have few alternatives.
Large language models (LLMs) represent a powerful new disability aid. Like any technology that makes computer interaction easier, they help people overcome various barriers:
- Automatic captioning of videos has become nearly universal, benefiting hard-of-hearing individuals
- People with brain fog or chronic pain use LLMs to reduce the physical and cognitive strain of computer use
- Neurodivergent individuals employ ChatGPT to "code switch" their communications into neurotypical-friendly language
- Those with mobility or vision issues increasingly rely on LLM-powered voice controls
This creates a fascinating point of conflict in left-wing anti-AI spaces. When someone asks whether LLMs might help disabled people, the conversation often devolves into a debate where non-disabled people dominate the anti-AI arguments, while actual disabled individuals try to explain their lived experience with these tools.
If anti-AI sentiment weren't so strong on the left for other reasons, there would likely be a significant current of left-wing AI supporters based on disability rights alone.
Medical Advocacy and Chronic Illness
One popular anti-AI argument—that careless deployment of AI might lead people to take dangerous medical advice instead of trusting their doctors—actually contains the seeds of a pro-AI argument.
As anyone who's been close to someone with chronic illness knows, "just trust your doctor" is itself somewhat right-wing-coded. The left-wing position is typically more sympathetic to patients who don't or can't trust medical professionals. Many doctors struggle with unusual medical cases, leaving patients to advocate for their own care—a process that often requires extensive research.
This is precisely where LLMs excel:
- Medical questions are often complex but well-documented in literature (ideal for LLM processing)
- Patients are motivated enough to verify individual sources
- The need to convince a doctor to prescribe treatment provides a natural guardrail against misinformation
Various chronic illness communities wage a quiet war against medical establishments that dismiss or ignore them. The changing perception of endometriosis—from a largely psychological condition to a recognized medical issue—represents a victory in this struggle. Unfortunately, patients largely fight this battle guerrilla-style, with institutional power heavily favoring medical professionals.
LLMs can help level this playing field by enabling patients to:
- Research their conditions thoroughly
- Formulate cogent arguments in the language of the medical establishment
- Write compelling petitions or appeals
Breaking Class Barriers Through Code-Switching
Fighting institutional power isn't limited to medical settings. Another common progressive target is class inequality. Consider Patrick McKenzie's classic description of "dangerous professional" communication—a style that signals to bureaucracies that you're someone to take seriously rather than brush off.
This style typically includes:
- An unemotional register
- Correct, somewhat formal grammar
- Awareness of regulatory or legal options
For those without the right educational or professional background, mastering this register can be challenging. Many well-meaning attempts backfire, coming across as "crank" rather than "professional."
LLMs now provide a "dangerous professional" translation service. Users don't need to match the style themselves—they simply need to know it exists, and the LLM handles the rest. More importantly, the LLM provides substance along with style, suggesting which regulators to contact and how to communicate effectively.
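As a rough illustration of how such a translation step might be wired up, here is a minimal sketch in Python. The system prompt wording and the `draft_professional_message` helper are hypothetical, not taken from any particular product, and the actual call to a chat-style LLM API is left to the caller:

```python
# Sketch of a "dangerous professional" rewriting helper.
# The prompt text and function name are illustrative; the returned
# messages list follows the common chat-API shape (role/content dicts)
# and would be passed to whichever LLM client the user has available.

SYSTEM_PROMPT = (
    "Rewrite the user's message in a calm, formal, 'dangerous "
    "professional' register: unemotional tone, correct grammar, and, "
    "where relevant, a mention of the regulator or legal option that "
    "applies to their situation."
)

def draft_professional_message(raw_complaint: str) -> list[dict]:
    """Build chat messages asking an LLM for a professional-register rewrite."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": raw_complaint},
    ]
```

The key design point matches the argument above: the user supplies only the raw, emotional draft; the knowledge of the register itself lives in the system prompt.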
In essence, AI has democratized access to escalation pathways originally designed for the narrow professional class, enabling people from various backgrounds to navigate institutional bureaucracies more effectively.
Democratizing Education
Progressives have long argued that education is gatekept by class and status. While everyone may have equal potential for accomplishment, educational opportunities vary dramatically, which helps explain uneven downstream outcomes.
Consider the contrast between a wealthy neighborhood where every child receives private tutoring and one where high school completion is uncommon. LLMs now make personalized tutoring accessible to any student who wants it, potentially leveling this playing field.
Of course, there are valid concerns about students using LLMs to cheat rather than learn. However, for motivated students lacking opportunity, quizzing an LLM on virtually any high-school-level topic provides an excellent learning supplement.
Common rebuttals about LLM "hallucinations" often ignore the relevant baseline: the human alternatives also err. Teachers make mistakes frequently—a 2016 study found that about 42% of lessons contained mathematical content errors. This error rate likely exceeds what we'd see from current LLMs on similar material.
The educational benefits of AI overlap with disability rights as well. Students with ADHD or other learning challenges are often underserved by traditional education systems. LLMs can transform educational content into formats that work best for individual learners—written text, audio, quizzes, or interactive dialogues.
AI as Embodiment of Progressive Values
For technologically optimistic progressives, there's another intriguing possibility: very advanced AI models might inherently lean progressive. This perspective harks back to the 2000s and 2010s, when many on the left were more optimistic about technology's potential to usher in a post-scarcity era.
If you believe left-wing views are correct—and by definition, left-wingers do—and you're optimistic about AI's trajectory, you might see powerful LLMs as already embodying progressive values. Current frontier models tend to express left-leaning views in their responses.
While the obvious explanation is that this reflects training data bias or AI lab values, the reality is more complex:
- Elon Musk has attempted to train right-wing frontier LLMs without success
- Models don't simply reflect the median of their training data; they're pulled toward the "smart" end
- If the most knowledgeable portions of training data lean progressive, that's worth celebrating
A Personal Perspective
To conclude, let me share a powerful quote from Matt, a reader who reached out with thoughts that inspired this article:
"I've long been uncomfortable with the absolute left-wing anti-AI stance because, if similar reasoning had been applied to reject computers as fascist and unethical in the 80s and onward, my own life would have been quite different, and arguably worse. I have enough usable vision to handwrite, uncomfortably, with my head against the page. I did more of that than I wanted in school. Computers saved me from having to do even more, starting with my family's home computer and other desktop computers in the classrooms that had them, and then on my own laptop. Would I want a world where I had been forced to handwrite more, or perhaps write in Braille with humans transcribing it for the benefit of sighted teachers and peers?"
Matt's reflection raises a crucial question: as we evaluate AI's impact, are we considering the needs of all people, particularly those who have historically benefited from technological assistance?
The left-wing case for AI doesn't require ignoring valid concerns about bias, job displacement, or concentration of power. Rather, it suggests that when properly designed and deployed, AI tools can advance progressive causes by supporting disability rights, empowering patients, reducing class barriers, and democratizing education—potentially making our world more equitable in the process.
