One of the features of “AI” is the diffusion of responsibility: “AI” systems are being put into all kinds of processes, and when they fuck up (and they always fuck up) it was just the “AI”, or “someone should have checked things”. “AI” companies want to sell machines to solve every issue but give no warranties and take no responsibility, and the same dynamic often extends to organizations using “AI”: You get the support chatbot to promise you a full refund, and when you try to claim it you get a half-assed “oh but that was just the bot, those tell bullshit all the time”. That’s where human-in-the-loop setups come into play: What if the company can just hire one sucker to “check” all the “AI” slop, and when things fall apart that one person has to take the blame. Fun!
(Sidenote: It should be the law that when you offer or run an “AI” you are 100% liable for everything it does. Sure, that would kill the whole industry but who gives a shit?)
But let’s get to the actual topic here. ClawdBot Moltbot OpenClaw is all the rage these days. It promises to be (quoting the website):
“The AI that actually does things.
Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.”
It has its own “social network” called Moltbook that “AI” influencers treat as proof of actual intelligence emerging in LLM-based systems, proof that we should take them seriously and whatnot. Sure, it looks like it’s mostly humans posting or directly triggering posts, but that does not change anything, right?
OpenClaw is still very popular among a group of men1 who want to use it to run their lives. And sure: as long as you know very little about IT, security or risk, that surely is a good idea. But everybody needs a dumb hobby.
OpenClaw was vibecoded by Peter Steinberger, an Austrian software developer. He’s very proud of the vibecoding part, repeatedly posting about how he happily releases code he has never seen or checked.2
At the end of January Steinberger posted something on the other fascist social network besides truth.social:

Because people had been criticizing him for releasing OpenClaw (back then still called Moltbot): For releasing unchecked code and giving it to people to run. For allowing that code to interface with all kinds of relevant external services: making purchases for people, posting as them, deleting their files, whatever. You know. Basic responsibility shit.
But OpenClaw is just a small beans hobby pwoject. Peter just had some fun wif da computer. You cannot criticize him because he was just trying to inspire. For free! How dare people expect even the baseline of responsibility. HOW DARE THEY!
So I had a quick look at the OpenClaw website. You know, to look at this hobby project and be inspired.

Hobby project just to inspire people. Sure thing.
OpenClaw presents itself like a mature and usable product, with testimonials and a convenient “curl | bash” install command: that’s how you know that it is quality software. (For the non-software people: `curl $URL | bash` just downloads some code from the internet and runs it. No checks, no rollbacks. It can just fuck up your whole home directory for shits and giggles. Upload all your private keys and files to a server somewhere. Anything you could do, it can do.)
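For the curious: the safer habit is boring. Download the script to a file, read it, then run it. A minimal sketch of the pattern; the installer below is a local stand-in, not OpenClaw’s actual script (with a real one you would first fetch it, e.g. `curl -fsSL "$URL" -o install.sh`):

```shell
#!/bin/sh
# "Download, inspect, then run" -- the boring alternative to `curl $URL | bash`.
# install.sh here is a local stand-in for a downloaded installer script.
set -eu

printf '%s\n' 'echo "installing..."' > install.sh  # stand-in for the download step

# 1. Actually read what it will do to your machine (interactively: less install.sh).
cat install.sh

# 2. If the project publishes a checksum, compare against it before running.
sha256sum install.sh

# 3. Only now execute it, knowing what it does.
sh install.sh
```

None of this makes a malicious script safe, but at least you saw it before it ran, and you have a file on disk you can point to afterwards.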
And here we see another kind of diffusion of responsibility that the “AI” wave is creating: People just releasing whatever software they generated into the wild for others to run. Often with huge promises: OpenClaw “ACTUALLY DOES THINGS” as the website says. No “this is experimental”, “this is potentially dangerous”, “this code has not been checked by anyone and running it is the software equivalent of digging a half eaten kebap out of the trash can and eating it”.
Steinberger did not just generate some shit code for himself to do whatever. No: He needed to release it. And not for “inspiration” but for people to run it. He’s currently doing the “right-wing tech podcast tour”: going on Lex Fridman’s horror show, talking to startups and whatnot. He wants something and it’s not really to inspire: It’s to be “the inventor” of OpenClaw. He wants the reputation boost you get from running a popular open source project whose name people might actually know. He wants to be important.
What he does not want is the work. The work that made “having a well known open source project” mean something. The reputation that people got from being good stewards of responsible projects that made sure that people’s digital existence was as safe as possible. That software had as few security issues as possible.
I was wondering why this made me so fucking furious and then I remembered that I did actually talk about this before: In my FluCon talk last year. Because while formally one could argue that Steinberger did create something Open Source (because you can download whatever code his chatbots generated and it has some open license [which might be irrelevant because LLM-generated code is not under copyright]), that cannot, no, must not, be enough. It just shows how “having some code and an open license” is not a sufficient set of requirements for building a sustainable, resilient digital landscape for everyone.
In this case the “uwu little open source pwoject” framing is just a way for Steinberger to absolve himself from any responsibility for the thing he explicitly put out into the world for people to use. And we have been accepting this kind of behavior for way too long.
I don’t want to focus on Steinberger too much. He’s a random tech bro who wants to impress his other Elon Musk wannabe friends. Fine. But this is a pattern the whole “AI” acceptance movement is establishing: one that preys on our experience of being able to rely on open source projects that take their product, their work and their users’ safety seriously, and that invalidates decades of hard work establishing that kind of trust.
Because up to now, trusting open source was – heuristically – not a bad idea. Especially bigger, more mature projects are very professional and take great care of their users, and smaller, younger projects talk explicitly about being early-stage software with flaws and warn against certain use cases.
But now we have “AI” and everyone can generate some code. That might work. Or it might mine some crypto or give your laptop an STI. Decades of collective work proving that “open source” is not less secure but at least as secure as commercial offerings are now slowly going down the drain. Because a bunch of men – and it is always all men – just don’t want to be responsible for their actions. Which is fine if you are 5. But after 18 it gets old really fucking fast.
We deserve good software in a world where participation is often connected to having access to a computer, to software, etc. We should push towards more reliable software, more secure software, software that is accessible, that protects people against misuse and allows them to be as safe as possible in doing what they want to do.
What do we get? Slop. Slop generated by guys who – when called out for their irresponsible behavior – just start crying about how they only wanted to “share” or “inspire” or “educate” while handing out running chainsaws to kids.
And that is what makes me fucking furious. Not just these dudes being spineless but the disrespect to those who have run serious projects for decades to build a more humane stack.
And it reminds me that “Open Source” is not enough. Open source code can still be harmful to you and your digital existence, can put you in danger without you realizing it. We need something better. Something more.
We need to be willing to take responsibility for and care of one another. “AI” generated software is the opposite of that.
Coda: Never forget. Nothing only men like is cool.
1. It’s 99% men. Look at any picture from the OpenClaw meetups. It’s more dudes than in an incel forum. Well. Let’s not speculate about the overlap here.
2. There is a weird parallel between CrossFit people and vibecoders: Both cannot just do their thing but need to make it their personality, and tell you about it constantly. As if anyone cares.
