
Over the past week, I’ve watched left-wing commentators on Bluesky, the niche short-form blogging site that serves as an asylum for the millennials driven insane by unfettered internet access, discuss the idea that “the left hates technology.” This conversation has centered on a few high-profile news events in the world of AI. A guy who works at an AI startup wrote a blog claiming that AI can already do your job. Anthropic, the company behind the AI assistant Claude, has raised $30 billion in funding. Someone claimed an AI agent wrote a mean blog post about them, and then a news website was found to have used AI to write about the incident, complete with AI-hallucinated quotes. Somewhere in this milieu of AI hype emerged the idea that being for or against “technology” is something that can be determined along political lines, crystallized by a blog on Monday that declared that “the left is missing out on AI.”
As a hard leftist and gadget lover, I find the idea that my political ideology is synonymous with hating technology confusing. Every leftist I know has a hard-on for high-speed rail or mRNA vaccines. But the “left is missing out” blog positions generative AI as the only technology that matters.
I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with and then losing to them. It appears to be an analysis of anti-AI thought primarily from academics, and specifically from the linguistics professor Emily Bender, who co-coined the term “stochastic parrots” to describe large language models, but it is unable to actually refute her argument.
“[Bender’s] view takes next-token prediction, the technical process at the heart of large-language models, and makes it sound like a simple thing — so simple it’s deflating. And taken in isolation, next-token prediction is a relatively simple process: do some math to predict and then output what word is likely to come next, given everything that’s come before it, based on the huge amounts of human writing the system has trained on,” the blog reads. “But when that operation is done millions, and billions, and trillions of times, as it is when these models are trained? Suddenly the simple next token isn’t so simple anymore.”
Yes, it is. It is still exactly as simple as it sounds. If I’m doing math billions of times, that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, which means that yes, these models are just fancy pattern-matching machines.
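To make that concrete, here’s a minimal sketch of what “generation” actually is, assuming the Hugging Face transformers and torch packages and the public GPT-2 checkpoint (the prompt and step count are arbitrary): the model scores every possible next token, the most likely one gets appended, and the identical step repeats.

```python
# Generation is next-token prediction run in a loop; nothing else happens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The left is missing out on", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                    # repeat the one simple step
        logits = model(ids).logits         # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedily take the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Scale changes how good the guesses get, not what the step is.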
The blog continues like this for so long that by the time I reached the end of the page I was longing for sweet, merciful death. The crux of the author’s argument is that academics have a monopoly on terms like “understanding” and “meaning,” and that they’re just too slow in their process of publishing and peer review to really grasp the potential value of AI.
“Training a system to predict across millions of different cases forces it to build representations of the world that then, even if you want to reserve the word ‘understanding’ for beings that walk around talking out of mouths, produce outputs that look a lot like understanding,” the blog reads, without presenting any evidence of this claim. “Or that reserving words like ‘understanding’ for humans depends on eliding the fact that nobody agrees on what it or ‘intelligence’ or ‘meaning’ actually mean.”
I’ll be generous and say that sure, words like “understanding” and “meaning” have definitions that are generally philosophical, but helpfully, philosophy is an academic discipline that goes all the way back to ancient Greece. There are actually a few commonly understood tests of understanding that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry,’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
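For contrast, here’s the entire amount of logic that test asks for, as a trivial sketch in ordinary code; a deterministic counting step gets it right every time, while a probabilistic next-token predictor can still confidently flub it.

```python
# Counting letters is a mechanical, deterministic operation.
word = "strawberry"
print(word.lower().count("r"))  # prints 3
```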
The essay presents a few other credible reasons to doubt that AI is the future and then doesn’t argue against them. The author points out that the tech sector has a credibility problem and says “it’s hard to argue against that.” Similarly, when the author doubles back to critique Bender, they say that she is “entitled to her philosophy.” If that’s the case, why did you make me read all this shit?
All of this blathering is in service to the idea that conservative sectors are lapping the left on being techno-optimists, but I don’t think that’s true either. It is true that the forces of capital have generally adopted AI as the future whereas workers have not, but this is not a simple left/right distinction. I’ve lived through an era when Silicon Valley presented itself as the gateway to a utopia where people work less and machines automate most of the manual labor necessary for our collective existence. But when tech companies monopolize an industry, as rideshare companies like Uber and Lyft have, the result isn’t less work and more relaxation; instead, people are forced to work more to compete with robots that are specifically coming for their jobs. Regardless of political leanings, people in general don’t like AI, while businesses as entities are increasingly forcing it on their workers and clients.
Instead of creating an environment for “Fully Automated Luxury Communism,” an incredibly optimistic idea articulated by British journalist Aaron Bastani in 2019, these technologies are creating Cyberpunk 2077. Hilariously, although the author of this blog references Bastani’s vision of an automated communist future as the position leftists should be taking, Bastani does not appear to be on board with generative AI.
Friend of Aftermath Brian Merchant points out something important about all this discourse: most of this conversation serves as advertising.
“We’re in the midst of another concerted, industry-led hype cycle, this time driven more visibly by Anthropic, which just landed a $30 billion investment round,” Merchant writes. “This time the hype must transcend multibillion dollar investment deals: It must also raise the stock of AI companies ahead of scheduled IPOs later this year and help lay the groundwork for federal funding and/or bailout backing.”
Part of the reason I made a hard left-wing turn was that I was burned by my own techno-optimism. I am part of a generation that believed it could change the world, and then was taught a harsh lesson about money and power. The first presidential election I voted in featured a platform of “Hope and Change” that did not deliver hope or change, and that administration embraced Silicon Valley in its ambitions. Techno-cynics are all just wounded techno-optimists.
In fact, it is following those two things, money and power, that has made me a critic of AI and of the claims of corporations like Anthropic and OpenAI. More than anything, understanding that tech companies will say things simply because it may benefit their bottom line has led me to my current political ideology. After President Barack Obama allied with Silicon Valley, these same companies were happy to suck up to President Trump. Asking “who benefits from this?” is what has shaped my criticism of AI and of the companies pushing these models. As far as I can tell, the proliferation of the technology mainly benefits the people making money off of it, whereas, say, a robust and fast train network would provide far more obvious benefits to working people in the country where I live.
Like Merchant, I do feel more and more like the Luddites were right, a view that is bolstered by leftist theory. But as Merchant has argued, the Luddites did not hate technology. They were skilled workers who understood the potential for technology to exploit them. So much of how technology integrates into my life also feels like exploitation; watching Brian Merchant destroy a consumer-grade printer with a sledgehammer at a book reading several years ago unlocked this understanding for me. Does that printer actually make printing easier, or is it primarily a device that eats up proprietary ink cartridges and begs me for more?
The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no. I’ve been working as a writer for the past decade, watching my industry shrivel up and die as a result of this technology, so you’ll excuse me if I, and the rest of the everyday people who stand to get screwed by AI, aren’t particularly excited by what it can offer society. One thing I do believe in is the words of Karl Marx: from each according to their ability, to each according to their need. The creation of a world where that is possible depends not on advanced technology but on human solidarity.