Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY

The Left Doesn't Hate Technology, We Hate Being Exploited

1 Comment
The Left Doesn't Hate Technology, We Hate Being Exploited

Over the past week, I’ve watched left wing commentators on Bluesky, the niche short form blogging site that serves as an asylum for the millennials driven insane by unfettered internet access, discuss the idea that “the left hates technology.” This conversation has centered around a few high profile news events in the world of AI. A guy who works at an AI startup wrote a blog claiming that AI can already do your job. Anthropic, the company behind the AI assistant Claude, has raised $30 billion in funding. Someone claimed an AI agent wrote a mean blog post about them, and then a news website was found to have used AI to write about this incident and included AI-hallucinated quotes. Somewhere in this milieu of AI hype emerged the idea that being for or against “technology” is something that can be determined along political lines, crystallized by a blog on Monday that declared that “the left is missing out on AI.”

As a hard leftist and gadget lover, the idea that my political ideology is synonymous with hating technology is confusing. Every leftist I know has a hard-on for high speed rail or mRNA vaccines. But the “left is missing out” blog positions generative AI as the only technology that matters.

I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with, then losing to them. It appears to be an analysis of anti-AI thought primarily from academics and specifically from the professor Emily Bender, who dubbed generative AI “stochastic parrots,” but it is unable to actually refute her argument.

“[Bender’s] view takes next-token prediction, the technical process at the heart of large-language models, and makes it sound like a simple thing — so simple it’s deflating. And taken in isolation, next-token prediction is a relatively simple process: do some math to predict and then output what word is likely to come next, given everything that’s come before it, based on the huge amounts of human writing the system has trained on,” the blog reads. “But when that operation is done millions, and billions, and trillions of times, as it is when these models are trained? Suddenly the simple next token isn’t so simple anymore.”

Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, meaning that yes, they are just fancy pattern-matching machines.
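If you want to see how unmagical the base process is, here is a deliberately tiny sketch of the idea: a bigram model that “predicts the next token” by counting which word most often follows the current one in a training text. This is a toy illustration of the principle, not how an LLM is actually built; real models replace the counting with billions of learned parameters, but the job, scoring likely continuations, is the same kind of math.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent follower of `word` in the training text."""
    return following[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Run the counting step a trillion times over the whole internet instead of once over one sentence and the outputs get more fluent, but nothing about the operation itself changes.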


The blog continues on like this for so long that by the time I reached the end of the page I was longing for sweet, merciful death. The crux of the author’s argument is that academics have a monopoly on terms like “understanding” and “meaning” and that they’re just too slow in their academic process of publishing and peer review to really understand the potential value of AI.

“Training a system to predict across millions of different cases forces it to build representations of the world that then, even if you want to reserve the word ‘understanding’ for beings that walk around talking out of mouths, produce outputs that look a lot like understanding,” the blog reads, without presenting any evidence of this claim. “Or that reserving words like ‘understanding’ for humans depends on eliding the fact that nobody agrees on what it or ‘intelligence’ or ‘meaning’ actually mean.”

I’ll be generous and say that sure, words like “understanding” and “meaning” have definitions that are generally philosophical, but helpfully, philosophy is an academic discipline that goes all the way back to ancient Greece. There are actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
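For what it’s worth, the deterministic logic the test asks for fits in one line of code, which is exactly the kind of operation a probability-over-tokens machine is not performing:

```python
# Counting letters is plain deterministic logic, no statistics involved.
word = "strawberry"
print(word.count("r"))  # prints 3
```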

The essay presents a few other credible reasons to doubt that AI is the future and then doesn’t argue against them. The author points out that the tech sector has a credibility problem and says “it’s hard to argue against that.” Similarly, when this author doubles back to critique Bender they say that she is “entitled to her philosophy.” If that’s the case, why did you make me read all this shit?

All of this blathering is in service to the idea that conservative sectors are lapping the left on being techno optimists, but I don’t think that’s true either. It is true that the forces of capital have generally adopted AI as the future whereas workers have not—but this is not a simple left/right distinction. I’ve lived through an era when Silicon Valley presented itself as the gateway to a utopia where people work less and machines automate most of the manual labor necessary for our collective existence. But when companies from the tech sector monopolize an industry, as rideshare companies like Uber and Lyft have, instead of less work and more relaxation, what happens is that people are forced to work more to compete with robots that are specifically coming for their jobs. Regardless of political leanings, people in general don’t like AI, while businesses as entities are increasingly forcing it on their workers and clients.

Instead of creating an environment for “Fully Automated Luxury Communism,” an incredibly optimistic idea articulated by British journalist Aaron Bastani in 2019, these technologies are creating Cyberpunk 2077. Hilariously, although the author of this blog references Bastani’s vision of an automated communist future as the position leftists should be taking, Bastani does not appear to be on board with generative AI.


Friend of Aftermath Brian Merchant points out something important about all this discourse: most of this conversation serves as advertising.

“We’re in the midst of another concerted, industry-led hype cycle, this time driven more visibly by Anthropic, which just landed a $30 billion investment round,” Merchant writes. “This time the hype must transcend multibillion dollar investment deals: It must also raise the stock of AI companies ahead of scheduled IPOs later this year and help lay the groundwork for federal funding and/or bailout backing.”

Part of the reason I made a hard leftwing turn was because I was burned by my own techno-optimism. I am part of a generation that believed it could change the world, and then was taught a harsh lesson about money and power. The first presidential election I voted in featured a platform of “Hope and Change” and then did not deliver hope or change, and that administration embraced Silicon Valley in their ambitions. Techno-cynics are all just wounded techno-optimists.

In fact it is following those two things—money and power—that has made me a critic of AI and the claims of corporations like Anthropic and OpenAI. More than anything, understanding that tech companies will just say things because it may benefit their bottom line has led me to my current political ideology. After President Barack Obama allied with Silicon Valley, these same companies have been happy to suck up to President Trump. Asking the question “who benefits from this?” is what has created my criticism of AI and the companies pushing these models. As far as I can tell the proliferation of the technology mainly benefits the people making money off of it, whereas, say, a robust and fast train network would provide a lot more obvious benefits to working people in the country where I live.

Like Merchant, I do feel more and more like the Luddites were right, a view that is bolstered by leftist theory. But as Merchant has argued, Luddites did not hate technology. They were skilled workers who understood the potential for technology to exploit them. So much of how technology integrates into my life also feels like exploitation—watching Brian Merchant destroy a consumer grade printer with a sledgehammer at a book reading several years ago unlocked this understanding for me. Does that printer actually make printing easier, or is it primarily a device that eats up proprietary ink cartridges and begs me for more? 

The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no. I’ve been working as a writer for the past decade and watching my industry shrivel up and die as a result, so you’ll excuse me if I, and the rest of the everyday people who stand to get screwed by AI, aren’t particularly excited by what AI can offer society. One thing I do believe in is the words of Karl Marx: from each according to their ability, to each according to their need. The creation of a world where that is possible is not dependent on advanced technology but on human solidarity.

Read the whole story
tante
2 hours ago
reply
"Part of the reason I made a hard leftwing turn was because I was burned by my own techno-optimism. I am part of a generation that believed it could change the world, and then was taught a harsh lesson about money and power."
Berlin/Germany
Share this story
Delete

The only taboo left is copyright infringement

1 Comment

Garbage Day Live is coming back to Brooklyn. We’re doing three nights across three months at Baby’s All Right in Williamsburg. TICKETS ARE GOING FAST for our March 10th show, with special guest Katie Notopoulos. So grab them while you can. Also, you can still vote for what we’ll be doing on stage by clicking this link here. (We’ve received some fantastically insane suggestions that we’re very excited about.)

The Future Of Media Is Pre-Deplatformed

I have made the argument — in newsletter issues, podcasts, at various dinner parties — that the central project of Gen Z is to rediscover what “cool” is. When I say “cool” I’m talking about a very specific thing that used to happen that suddenly stopped at some point around 2018. I’m going to pin it to the US launch of TikTok for ease here, but I think there were a lot of compounding factors.

Young people used to stumble across a way of making art or culture and it would get so popular that the mainstream would be forced to react. Grunge music, mumblecore films, mall emo, comic conventions, digital media, the list goes on and on. You could view this as the machinery of capitalism chewing up and spitting out the ever-changing tastes of young people (it was) but you could also argue that it worked as a corrective. A way to inject novelty into the system, a way to prevent the profoundly boring world that we, well, now live in.

And before you throw a bunch of Gen Z trends at me from the last five years — hyperpop, brainrot, crowdwork comedy, Instagram collages, their weird post-COVID pop punk exploration — coolness is not just identifying trends. (You could argue it’s the opposite!) Liking something obscure or niche or dreaming up some new exciting way of doing things is only half the battle. The point is to change the tastes of the masses and, hopefully, ascend alongside it.

Part of the problem here is that the pillars of culture that defined coolness, that were also ransacked by it every five-to-10 years and would begrudgingly canonize it, are pretty much gone now. You could write a big feature like TIME Magazine’s “Generation X Reconsidered” cover story about Gen Z now, but no one would care. Hollywood lost to YouTube. Magazines lost to TikTok. Radio lost to Spotify. And most artist management teams are basically just hype houses now. As The New York Times’ Jon Caramanica said recently (on a livestream, it should be noted), "The gap between 'I'm a comedian,' 'I'm a content creator,' 'I'm a musician' — we used to think of those as three different jobs. They're not three different jobs anymore. It's basically just one job, which is getting attention. That's the main job.”

The question of our time is how do you artistically rebel — and win — against a totally flat cultural landscape? And before my readers, who I assume are all approximately 36 years old and very tired, say, “so what, who cares?” This does matter. I mean, just look around right now lol. You know things are bad when even OpenAI President Greg Brockman is posting stuff, like “Taste is a new core skill.” If people had taste, your company wouldn’t exist, Greg.

But if everything is just attention now, and attention is completely commodified by algorithmic tech platforms, how can you push back against that? Well, I am slowly coming around to a theory on the new cool: You have to essentially pre-deplatform yourself.

Culture right now is determined not by human teams of editors and producers picking and choosing what youth culture gets the spotlight, but, instead, by the unthinking algorithms that power YouTube and TikTok. Which means the only things that have the level of scarcity and danger required to be seen as cool by young people will, slowly, but surely, be whatever is unacceptable on those platforms.

Now, you will probably immediately rankle at this idea. It is uncomfortable to say that young people find reactionaries like Clavicular cool. But, yes, to a certain pocket of deeply unwell young men, he is. He is quickly ascending the new ladder of mass media — streams, podcasts, Peter Thiel-adjacent fashion events — and your mom will, no doubt, ask you who he is soon. Remember, all that matters now is attention. Which is, also, why certain “mainstream” media organizations are so fascinated by far-right streamer Nick Fuentes right now. Chicago Magazine put Fuentes as the seventh “most powerful” Chicagoan, just behind the fucking mayor, in a recent feature.

But I actually think that whole world is losing steam. It’s still too dependent on social media. As streamer Hasan Piker pointed out last week, the reason it feels like there’s a new “cool” far-right streamer every week is because of Discord clip farms. They trawl through hours-long streams for tip-of-the-iceberg moments that are salacious enough to pull people off safer, more restricted platforms to the wildly unmoderated streaming platform Kick. Which has long been home to deplatformed creators. The same strategy pornstars with fake podcasts were using back in 2023. Porn, as always, decides the media landscape of the future.

The most exciting examples of how this pre-deplatforming works, however, are happening beyond the far-right manosphere. Just this week, Stephen Colbert posted his interview with Texas Rep. James Talarico to YouTube after he claimed CBS’ censors blocked it for political reasons. It’s moving in a different direction — posting to the web what you couldn’t put on TV — but it’s the same idea. It feels subversive and exciting for liberals the same way, I assume, Zoom schooled zoomers feel about methed up facemoggers going on monologues about race science. But politics, left or right, is actually not the most subversive thing you can do right now. It’s copyright infringement.

(The future of media, jokerfied)

In 2022, filmmaker Vera Drew created a movie called The People’s Joker, which turned the story of The Joker from Batman into a trans allegory. Drew received a cease and desist from Warner Bros. and held guerilla screenings of the film until the rights were worked out. And this trend, of filmmakers using the corpse of the theater system to bypass the world of algorithms, has only continued. The 2022 film Hundreds Of Beavers had a similar renegade quality to how it was screened. Hell, even Taylor Swift was savvy enough to screen the Eras Tour concert in theaters directly through AMC. And you could argue that’s what YouTuber Mark “Markiplier” Fischbach just did with Iron Lung, which bypassed the studio system entirely and caused such a stir in Hollywood its massive ticket sales were removed from box office charts.

In fact, just this week, filmmaker Matt Johnson released Nirvanna The Band The Show The Movie. It had the biggest opening ever for a live-action Canadian film, and not only is the film itself a massive copyright rat’s nest, but the web series it’s based on is completely illegal to watch on streaming platforms currently. Johnson, at a screening I attended last week, said he was excited to find out if they were going to get sued once the film debuted this week. (They haven’t yet, it seems.)

I could throw a million examples at you — and will happily duke it out with you if you email me lol — but it seems clear to me that pre-deplatforming is, to use Gen Z slang, the new meta. The Claviculars and Andrew Tates of the world sort of understand this. But it’s extremely surface level. As platforms police speech less and less, edgelords lose their sheen. Nirvanna The Band The Show The Movie, meanwhile, is much closer to the blueprint for a post-TV and, soon, post-social media world. The culture that feels the most dangerous, and, thus, exciting to young people, will be what you can’t see online. And the most dangerous thing for platforms is not racist garbage. It’s unmonetizable content. The “metric” that will matter most going forward will not be the numbers at the bottom of a post or video, but the human beings in a room who left their house to experience something. Which, of course, will be filmed and put back online. You can’t escape the matrix entirely.



The RFK/Kid Rock Workout Video Is One Of The Funniest Things I’ve Ever Seen In My Life

Instagram post


YouTube Is Finally Cracking Down On Slop

—by Adam Bumas

Last month, YouTube CEO Neal Mohan wrote in a blog post that the platform would be working to limit “low quality, repetitive” AI-generated content. Mohan’s post is the first case I can find of YouTube saying “AI slop” in an official communication. Which is a pretty big moment for a platform that’s still automatically messing with your videos with tools that they defend as “traditional machine learning” (such a rich tradition — if it’s not from the historical Bay Area region, you have to call it “sparkling content”). Since then, there’s been a change in how YouTube handles AI-generated videos on their platform — but not as big of a change as you might expect.

In the weeks since Mohan’s post, there’s been a wave of removed videos and demonetized channels. Whenever we see the actual reasons YouTube gives these channels, as in this fascinating video from “Manhwa Anime Story,” they’re the same policies that have been in place for a while. In fact, we wrote about the “low quality, repetitive” restrictions when they were first put into effect back in July, and we didn’t see them making much of a difference. Now, all the growth hackers are taking YouTube’s comments a lot more seriously. One of them, who was profiled in Fortune, said that these channels had “until around 2027 to meaningfully profit” from flooding YouTube with AI content.

We’ve seen some speculation as to what’s changed for YouTube. Some are blaming a new Indian law that puts harsher penalties on platforms that keep illegal videos up. This BBC report sees it as the backlash against generative AI boosterism finally reaching the corporate structures that imposed it in the first place. But in this specific case, it seems to be nothing more than an upgrade to their artisanal, homespun, heirloom moderation software. That’s why YouTube is also demonetizing channels that make kaiju battle animations, saying Godzilla isn’t appropriate for children.


Crooked Media Compilations Are Coming To MSNBC (Which Is Called MS Now, Now)

I can’t say this fits exactly with the diatribe I wrote above about coolness and the new media landscape — Crooked Media and MS Now (formerly MSNBC) aren’t exactly the MTV of Gen Z — but it doesn’t not fit either. MS Now will be airing a compilation of Crooked shows like Pod Save America, Lovett Or Leave It, and Hysteria during an hour-long block on Saturdays starting at the end of the month.

I view this deal as a sort of double obituary in a way. Not for Crooked, they’ll be fine. But for both cable news and the video podcast industrial complex. We all know that cable news doesn’t really matter anymore — even President Donald Trump seems to have lost interest in it, instead retreating to his own filter bubble on Truth Social. But this also makes me think that the economics of, specifically, video podcasts are not scalable. Yes, you can start one and you can make money, but as anyone who has a video podcast will happily tell you, the cost of video production quickly outpaces organic growth, which turns these shows into money-losing clip factories. And clips are even less monetizable than long-form video. (It’s me, I have a video podcast, I’m telling you.)

As Center for New Liberalism founder Jeremiah Johnson succinctly wrote this week on X, “Vertical video is the Ice Nine of internet content It takes over anything it touches and eventually everything becomes vertical video.” He’s referring to the substance from Kurt Vonnegut’s novel Cat’s Cradle, which freezes all water it comes into contact with, including the whole ocean.


Let’s Check In On How The Prediction Market Is Going

There’s a new online gambling platform called Rush Hour CCTV that lets you gamble on CCTV footage from traffic cams. It was created by a company called 155.io and I’m sure this will have no unforeseen consequences on society. Anyways, I assume in like six months we’re going to learn that the top user on the platform hired a fleet of drivers to cheat the platform somehow.

Luckily, Commodity Futures Trading Commission Chief Mike Selig loudly declared this week that it is our constitutional right as Americans to lose all of our money.


LOOK MUM NO COMPUTER Is Representing The UK At Eurovision

The modular synth YouTuber Sam James Bartle, better known as LOOK MUM NO COMPUTER, is headed to Eurovision. Bartle is definitely at the wackier end of the synth YouTube scene. He performs with a massive synth rig, making all of the sounds live on the spot. (It’s called modular synthesis, you can google it if you want to go down that rabbit hole.)

Here’s a good video of his to start with.


A Good Post



P.S. here’s a magical sounding supermarket freezer aisle.

***Any typos in this email are on purpose actually***




NEW REPORT – The AI climate hoax

1 Comment

Read the full report (PDF)

Read the press release (PDF)

Access the raw data, chart files and other documents


I guarantee you’ve heard the harms of data centre expansion justified on the grounds that “AI” will ‘solve climate change’. These range from sci-fi claims of superintelligence through to detailed reports stacked with hundreds of examples of ‘AI for good’ helping energy, transport and industry cut emissions.

In partnership with the good people at the organisations shown below, I’ve created a new report that, for the first time, interrogates both the logic and the evidence for this claim.

We found that most of the ‘benefit’ tends to relate to older, smaller and leaner forms of machine learning, what has been called ‘traditional AI’, while we also know that most of the new harm is likely stemming from consumer generative AI over-deployment.

This distracts from the decisions made by companies that result in their own fossil fuel use rising at an unprecedented rate.

We also found that the evidence presented for examples of climate benefit, regardless of AI type, tends to be weak, whether it comes from companies or from organisations like the IEA. The potential benefits are overstated, in surprising and significant ways.

What we see is companies veering wildly away from their climate targets. In most cases, this is true whether you use their ‘adjusted’ metrics that incorporate renewable energy offsets and deals or not.

That is a choice, and this focus on ‘AI for climate’ is a distraction from the decision to worsen the pollution of data centres through an unprecedented explosion of digital bloat.

Video and social

3 minute overview: AI vague-washing (watch on Youtube)

3 minute overview: Weak evidence for benefit, strong evidence of harm (watch on Youtube)

Supported by

Very genuine and warm thanks to the good people at the following organisations, who supported this work and continue to push for accountability from polluters:

About Beyond Fossil Fuels
Beyond Fossil Fuels is a civil society network committed to ensuring a just and rapid transition to a fossil-free, renewables-based future. Building upon the Europe Beyond Coal campaign, its goal is for Europe to be coal-free by 2030 and phase out fossil gas from the power sector by 2035. A clean and flexible energy system will deliver lasting benefits for people, the climate and the broader economy. Beyond Fossil Fuels is a non-profit organisation with an office in Berlin, with staff spread across Europe.

http://www.beyondfossilfuels.org

About Stand.earth
Stand.earth is a global advocacy organization delivering large-scale change for our planet and its people by interrupting the systems that create environmental and climate crises. Its mission is to challenge corporations and governments to treat people and the environment with respect. Stand’s worldwide community of more than one million members advocates for a climate-safe, equitable future, where environmental and climate justice policies uphold the dignity of people everywhere – at the scale our world requires.
https://stand.earth

About Climate Action Against Disinformation
Climate Action Against Disinformation is a global coalition of over 120 leading climate and anti-disinformation organisations demanding robust, coordinated and proactive strategies to deal with the scale of the threat of climate misinformation and disinformation.
https://caad.info

About Friends of the Earth U.S.
Friends of the Earth U.S. works to reduce the spread of disinformation that potentially affects all of our campaigns. As technology and media companies consolidate their power, our fundamental ability to campaign on any issue is threatened, as corporate polluters gain more control over the basic communications systems that are needed for social change and democracy itself.
https://foe.org/projects/disinformation/

About Green Screen Coalition
The Green Screen Climate Justice and Digital Rights Coalition is a group of funders and practitioners looking to build bridges across the digital rights and climate justice movements. The aim of the coalition is to be a catalyst in making visible the climate implications of technology by supporting emerging on-the-ground work, building networks, and embedding the issue as an area within philanthropy. https://greenscreen.network

About Green Web Foundation
Green Web Foundation is a non-profit organisation working towards a fossil-free internet by 2030 by reducing absolute emissions and phasing out fossil fuels in data centers – fast, fairly and forever. The foundation maintains the world’s largest open dataset of websites that run on green energy and builds open source tools for measuring and mitigating emissions from digital services.
https://greenweb.org/






Story About AI (Being Mean) Gets Pulled Because Journalist Used AI (That Made Mistakes)

1 Comment
Story About AI (Being Mean) Gets Pulled Because Journalist Used AI (That Made Mistakes)

On Friday the website Ars Technica published a story about Scott Shambaugh, a coder who made headlines in the tech world last week with his story about AI agents, and in particular one that he claims had written what he called a 'hit piece' on him and published it for the world to see.

Shambaugh’s story is as interesting as it is horrifying. Agents are just the latest front in tech's war on our collective sanity, a type of AI that's essentially a glorified autocorrect that in this case has been given a uniform and sent onto the internet to try to do human things like propose code changes and then, when humans like Shambaugh decline them, write pissy blogs complaining about it.

Among other sites, Ars Technica covered this last week with a news story that remarkably appeared to have some AI-created filler of its own, citing quotes from Shambaugh that never appeared in the very blog Ars was linking to.

Not long after readers--including Ars' own community--began noticing this, the story was pulled (though you can still read the archived original here). It's general journalistic practice that published stories which contain inaccuracies are edited and updated, not deleted entirely.

Ars has since published an editorial statement, bylined by EiC Ken Fisher, addressing the story, its deletion and the outlet's policies on AI:

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

By citing Ars' clear rules, Fisher's statement points the finger for this disaster at the two authors bylined in the piece, with one (Benj Edwards) having since posted a statement of his own, assuming full responsibility for the incident and saying the other (Kyle Orland) had 'no role in this error':

I have been sick with COVID all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.

Here’s what happened: I was incorporating information from Shambaugh’s new blog post into an existing draft from Thursday.

During the process, I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline.

When the tool refused to process the post due to content policy restrictions (Shambaugh’s post described harassment), I pasted the text into ChatGPT to understand why.

I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words.

Being sick and rushing to finish, I failed to verify the quotes in my outline notes against the original blog source before including them in my draft.

Kyle Orland had no role in this error. He trusted me to provide accurate quotes, and I failed him.

The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.

I sincerely apologize to Scott Shambaugh for misrepresenting his words. I take full responsibility. The irony of an AI reporter being tripped up by AI hallucination is not lost on me. I take accuracy in my work very seriously and this is a painful failure on my part.

When I realized what had happened, I asked my boss to pull the piece because I was too sick to fix it on Friday. There was nothing nefarious at work, just a terrible judgement call which was no one’s fault but my own.

Look, I understand mistakes can happen when you're sick, but Edwards--who, it should be noted, is Ars' 'Senior AI Reporter'--has used AI not once but twice here, causing a huge amount of reputational damage for himself and his employer in the process. And it's not like he used it to comb through 800 pages of impenetrable legal documents, either; Shambaugh's original blog was only a couple of pages long (he's since written a follow-up) and written in plain English, making the AI's hallucinations (and Edwards' use of it) even more damning.

It's disappointing that someone working in this space felt the need to use this garbage, particularly when it violates their employer's own policies. As this whole mess has shown, the tech simply cannot do the most basic things the people selling it keep claiming it can. Citing quotes from a blog for your own story is bread-and-butter stuff for a journalist; it's what the job is. Seeing this busted tech worming its way into a profession that should be its sworn enemy--and fucking the whole thing up in the process--is just a huge bummer.




Diffusion of Responsibility


One of the features of “AI” is the diffusion of responsibility: “AI” systems are being put into all kinds of processes, and when they fuck up (and they always fuck up), it was just the “AI”, or “someone should have checked things”. “AI” companies want to sell machines to solve every issue but give no warranties and take no responsibility, and the same dynamic often extends to organizations using “AI”: you get the support chatbot to promise you a full refund, and when you claim it you get a half-assed “oh, but that was just the bot, those tell bullshit all the time”. That’s where human-in-the-loop setups come into play: what if the company can just hire one sucker to “check” all the “AI” slop, and when things fall apart that one person has to take the blame? Fun!

(Sidenote: It should be the law that when you offer or run an “AI” you are 100% liable for everything it does. Sure, that would kill the whole industry but who gives a shit?)

But let’s get to the actual topic here. ClawdBot Moltbot OpenClaw is all the rage these days. It promises to be (quoting the website):

“The AI that actually does things.

Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.”

It has its own “social network” called Moltbook that “AI” influencers treat like it’s proof of actual intelligence emerging in LLM-based systems, proof that we should take them seriously and whatnot. Sure, it looks like it’s mostly humans posting or directly triggering posts, but that does not change anything, right?

OpenClaw is still very popular among a group of men1 who want to use it to run their lives, and sure. As long as you know very little about IT, security or risk, that surely is a good idea. But everybody needs a dumb hobby.

OpenClaw was vibecoded by Peter Steinberger, an Austrian software developer. He’s very proud of the vibecoding part, repeatedly posting about how he happily releases code he has never seen or checked.2

At the end of January, Steinberger posted something on the other fascist social network besides truth.social:

The amount of crap I get for putting out a hobby project for free is quite something.

People treat this like a multi-million dollar business. Security researchers demanding a bounty.
Heck, I can barely buy a Mac Mini from the Sponsors.

It's supposed to inspire people. And I'm glad it does.

And yes, most non-techies should not install this.
It's not finished, I know about the sharp edges.
Heck, it's not even 3 months old.
And despite rumors otherwise, I sometimes sleep.

Because people had been criticizing him for releasing OpenClaw (back then still called Moltbot): for releasing unchecked code and giving it to people to run. For allowing that code to interface with all kinds of relevant external services, making purchases for people, posting as them, deleting their files and whatever. You know. Basic responsibility shit.

But OpenClaw is just a small-beans hobby pwoject. Peter just had some fun wif da computer. You cannot criticize him because he was just trying to inspire. For free! How dare people expect even the baseline of responsibility. HOW DARE THEY!

So I had a quick look at the OpenClaw website. You know, to look at this hobby project and be inspired.

Screenshot of the OpenClaw website. It looks very professional, claiming that OpenClaw is the "AI" that can actually do stuff, and directly features a "how to run" code snippet without any warnings or anything

Hobby project just to inspire people. Sure thing.

OpenClaw presents itself like a mature and usable product, with testimonials and a convenient “curl | bash” install command: that’s how you know that it is quality software. (For the non-software people: curl $URL | bash just downloads some code from the internet and runs it. No checks, no rollbacks. It can fuck up your whole home directory for shits and giggles. Upload all your private keys and files to a server somewhere. Anything you could do, it can do.)
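For readers who want to see the difference concretely, here is a minimal sketch of the download-then-inspect pattern that “curl | bash” skips. The “installer” below is a harmless local stand-in (you obviously shouldn’t fetch a real one just to follow along), and the exact tooling (`sha256sum` is GNU coreutils; macOS ships `shasum` instead) is an assumption:

```shell
#!/bin/sh
set -eu

# Stand-in for a downloaded installer; in reality this line would be
# something like: curl -fsSL "$URL" -o install.sh
printf 'echo "installer ran"\n' > install.sh

# Actually read the script before executing it -- the step that
# `curl $URL | bash` skips entirely.
cat install.sh

# Record a checksum so a later re-download can be compared against
# what you reviewed.
sha256sum install.sh > install.sh.sha256

# Only after reading it, run it (ideally as an unprivileged user).
sh install.sh
```

The point is not that this makes arbitrary code safe, only that piping straight into bash removes even the chance to look.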

And here we see another kind of diffusion of responsibility that the “AI” wave is creating: people just releasing whatever software they generated into the wild for others to run. Often with huge promises: OpenClaw “ACTUALLY DOES THINGS”, as the website says. No “this is experimental”, “this is potentially dangerous”, “this code has not been checked by anyone and running it is the software equivalent of digging a half-eaten kebap out of the trash can and eating it”.

Steinberger did not just generate some shit code for himself to do whatever. No: he needed to release it. And not for “inspiration”, but for people to run it. He’s doing the “right-wing tech podcast tour”, currently going on Lex Fridman’s horror show and talking to startups and whatnot. He wants something, and it’s not really to inspire: it’s to be “the inventor” of OpenClaw. He wants the reputation boost you get from running a popular open source project whose name people might actually know. He wants to be important.

What he does not want is the work. The work that made “having a well known open source project” mean something. The reputation that people got from being good stewards of responsible projects, projects that made sure that people’s digital existence was as safe as possible and that their software had as few security issues as possible.

I was wondering why this made me so fucking furious, and then I remembered that I have actually talked about this before: in my FluCon talk last year. Because while formally one could argue that Steinberger did create something open source (because you can download whatever code his chatbots generated, and it has some open license [which might be irrelevant because LLM-generated code is not under copyright]), that cannot, no, must not, be enough. It just shows how “having some code and an open license” is not a sufficient set of requirements for building a sustainable, resilient digital landscape for everyone.

In this case, the “uwu little open source pwoject” framing is just used by Steinberger to absolve himself from any responsibility for the thing he explicitly put out into the world for people to use. And we have been accepting this kind of behavior for way too long.

I don’t want to focus on Steinberger too much. He’s a random tech bro who wants to impress his other Elon Musk wannabe friends. Fine. But this is a pattern that the whole “AI” acceptance movement is establishing, one that preys on our experience of being able to rely on open source projects that take their product, their work and their users’ safety seriously, and that invalidates decades of hard work establishing that kind of trust.

Because up to now, trusting open source was – heuristically – not a bad idea. Bigger, more mature projects especially are very professional and take great care of their users, and smaller, younger projects talk explicitly about being early-stage software with flaws and warn against certain use cases.

But now we have “AI” and everyone can generate some code. That might work. Or it might mine some crypto or give your laptop an STI. Decades of collective work proving that “open source” is not less but at least as secure as commercial offerings are now slowly going down the drain. Because a bunch of men – and it is always all men – just don’t want to be responsible for their actions. Which is fine if you are 5. But after 18 it gets old really fucking fast.

We deserve good software in a world where participation is often connected to having access to a computer, to software, etc. We should push towards more reliable software, more secure software, software that is accessible, that protects people against misuse and allows them to be as safe as possible in doing what they want to do.

What do we get? Slop. Slop generated by guys who – when called out for their irresponsible behavior – just start crying about how they only wanted to “share” or “inspire” or “educate” while handing out running chainsaws to kids.

And that is what makes me fucking furious. Not just these dudes being spineless, but the disrespect to those who have run serious projects for decades to build a more humane stack.

And it reminds me that “Open Source” is not enough. Open source code can still be harmful to you and your digital existence, can put you in danger without you realizing it. We need something better. Something more.

We need to be willing to take responsibility for and care of one another. “AI” generated software is the opposite of that.

Coda: Never forget. Nothing only men like is cool.

  1. it’s 99% men. Look at any picture from the OpenClaw meetups. It’s more dudes than in an incel forum. Well. Let’s not speculate about the overlap here
  2. There is a weird parallel between CrossFit people and vibecoders: both cannot just do their thing but need to make it their personality and tell you about it constantly. As if anyone cares.

How to raise children

A painting with a stick attached, so it looks like a protest sign. The background is pink, and in lighter pink it says FUCK ICE.
A wee little painting (11×14”, sans stick) I made last week. This one got auctioned off on Bluesky to help folks in Minnesota. I’m making more.

This week’s question comes to us from April Piluso:

My daughter turns 3 this month. I want to help her have fewer troubles than I did by teaching her about boundaries, values, independent thinking etc. I think if more kids learned this stuff, we’d have more good humans and fewer jerks. What do YOU think every kid should grow up knowing?

Every kid should grow up knowing they are loved.

Everything else is pretty close to a rounding error. Ok, maybe not a rounding error. I’m exaggerating to make a point. But honestly, there is nothing a child needs more in life than knowing they are loved. Love can make up for a lack of a lot, but a lack of love is very hard to make up for.

Regular readers of this newsletter will know by now that I didn’t grow up in the best household. I grew up in an abusive household. I also grew up poor. And when I look back on my childhood, growing up poor wasn’t really a big deal. It was just a fact of life. And to be clear, poor is very subjective. We always had a roof over our head. We didn’t miss meals. I knew we were poor because every Sunday my parents would pile us in the car and go for a drive around the rich neighborhoods in town, getting progressively more upset about our own circumstances, and blaming each other—and their kids—for not being able to live in one of those fancy houses. Meanwhile, my brothers and I sat in the back seat, being as quiet as possible so as to not draw my father’s growing anger. We didn’t know we were poor until my father started hitting us for being poor.

I’ll tell you a story, but first—some cultural background: in Portugal, where my parents grew up, if you had a house for rent you’d make a paper cutout and tape it to the windows. (This was pre-internet, obviously.) The cutout could be any of a number of things, probably made by whichever kid the landlord deemed to be “the artistic one.” No, I don’t know how this started, and it’s not the point of our story so I’m not looking it up.

One Sunday afternoon, we’re driving around doing our routine wealth tourism on the Main Line, and my dad stops the car. He pulls over.

“Go see if that house is for rent.”

I turn towards the house he’s pointing at. This thing was an old-school two-story mansion. Very old-Philadelphia money. Whoever built it probably has their name on a hospital now. Anyway, I ask him why he thinks the house (that we obviously cannot afford) is for rent.

“You see the cut-outs on the window?”

“Yeah, it’s Christmas. Those are snowflakes.”

The slap came before I finished the sentence. Followed by the scream to get the fuck out of the car and do what I was told. So off I went, crying. I rang the doorbell. Some unsuspecting stranger opened the door, wondering why some crying kid was standing there and asking if the house was for rent, even though I knew it was not. He seemed understandably confused, but politely told me it was not, then closed the door. Receding, I’m sure, to a nearby curtain that he could peek out of. (Or possibly straight to the phone to call the police about immigrants in the neighborhood.) I walked back to the car, knowing what was coming. And when I told him the house wasn’t for rent, sure enough—it came. Right across the face. We drove home in silence, where he dropped us all off and went off to do something else with people who were not his family, who he hated.

So yeah, when I think back on growing up, it’s not the lack of anything—except the lack of love—that I think about. Love and safety. Made all the worse because every once in a while I’d get a glimpse of what those things were like. Sometimes he’d come home in a good mood. Sometimes he’d muss my hair on the way in. Those times were rare, but the fact that they existed at all let me know that they were possible, which made it that much crueler.

Fast forward decades to a therapist’s office where my therapist—who I’m sure isn’t reading this—is telling me that my own relationships are falling apart because how am I supposed to love anyone else when I never learned what love was like growing up. (Yes, my therapist is RuPaul.) If you were raised in a similar environment, please believe me when I tell you that it is never too late to learn how to love. You don’t have to carry your parents’ sins into your relationship with your own children.

Every kid should grow up knowing they are loved.

Telling a child you love them is free.

Also, while I am by no means an expert in the field, and my opinions should be taken with much salt, I tend to believe that children are born good. They’re born full of love. They’re born full of confidence. (How fucking confident do you have to be to take that first step?!) They’re born curious. They’re born wanting to be part of a community. It’s not so much that we need to teach them these things, as much as we need to encourage them to keep believing these things. And protect them from people who would work to destroy those things.

Yes, this is about AI. The AI industry can only succeed if it separates people from their joy and their confidence. An industry run by people who were not raised with love, attempting to steal it from others.

I’ve written about this before, but every child is born loving to draw. They draw on everything. They demand crayons in restaurants. They draw on your walls. You should let them do so. Fuck your walls. It’s easier to eventually paint over a wall, than to rebuild a child’s confidence.

It’s wild to me that we parent our children to fit into society, then get together with our friends and talk about how broken society is. I’ve seen people rail against our broken educational system, then demand their children get straight As in school. I’ve seen people complain about not having any time to themselves and then schedule every minute of their kid’s life.

There is more we can learn from children than they can learn from us.

Mostly we need to support children and let them know that they are loved. Children are so ready to love you back. For every cruel thing my father did to me, anytime he walked through the door and mussed my hair I was ready to give him another chance. I was so ready to love him.

Congratulations on your daughter turning three. The fact that you’re worried about this stuff is usually a sign that you’re on the right path. The funny thing about parenting is that the people who are most worried about messing it up are the ones most likely to get it right. I’m old enough that I’ve seen a lot of my friends have kids, and those kids are now adults in their own right. And one of the first things I noticed was that the folks who were the most chaotic, the most fly-by-the-seat-of-their-pants, the most worried about fucking things up… they were the ones who ended up incorporating their kids into their messy lives, encouraging them to be themselves, giving them the space to be curious, to climb trees, to draw on the walls, to ask their neighbors for help. And ultimately, hold everything together with love. While the friends who made plans, and spreadsheets, and made lists of goals, and fretted about their kids not being able to tie their shoes yet, or read at a certain level yet—and by the way, I totally understand wanting to do these things, and worrying about these things—they were so concerned with how things were supposed to be going that they totally missed how things were actually going. Which is that this new amazing human was unfolding before your eyes, and while it might not be the human you were expecting… aren’t they amazing?!? And if you don’t understand them, well, child, what happened to your curiosity?!

Your kid is going to be alright. With enough love, your kid is going to be alright.

Don’t judge your children, love them. Because they will, in turn, love you back. And when they do—holy fucking shit, it’s just amazing.

My daughter’s coming over for dinner tonight. I can’t wait to hug her and tell her I love her.

I love you for asking this question.


🙋 Got a question for me? Ask it!

📕 My new book, How to die (and other stories), is now available for pre-order! It’s stories from this newsletter. It’s very handsome. Yes, you want it!

📆 Related, but secret… if you’re in the Bay Area, please circle May 21 on your calendar. All will be revealed in time.

📣 There’s a couple spots left in next week’s Presenting w/Confidence workshop. Sign up, we’ll have fun hanging out, we’ll make fun of AI slop, then I’ll help you get a job.

💰 If you’re enjoying this newsletter please consider joining the $2 Lunch Club! Writing is labor and labor gets paid, right?

🍉 Please donate to the Palestinian Children’s Relief Fund. The ceasefire is a lie.

🏳️‍⚧️ Please donate to Trans Lifeline, and for fuck sake if there is a trans child in your life PLEASE tell them you love them, they are SO ready to love you back.
