Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY
2610 stories · 143 followers

How to do the work

Small black panels filled with screaming ducks in white outline on a pegboard wall.
I’m painting ducks.

This week’s question comes to us from Tony:

How do you keep doing a thing you love, that you’ve done for decades, when you hate what the industry for it has become (and your continued health insurance and ability to sleep indoors depend on it)?

Find somewhere else to do what you love.

Look, I don’t mean to sound glib. I could write a couple of pages about how hard it is to move into another industry, and that would of course be true. Starting over sucks, especially when you’ve dedicated decades of your life to something. I could write a couple of pages about how we need to be pragmatic in making decisions, and that would also be mostly true, especially in America, where your health insurance is currently tied to your employment and you might be carrying around substantial educational debt. I could write a couple of pages about how complicated the situation actually is, and that would also be pretty much true (although, more likely than not, when people tell you something is complicated, they just don’t want to accept the fallout from what they know is the right decision).

(Let’s take a moment here, pull up a chair, and just think about the phrase “educational debt” for a bit.)

I could tell you to be patient, but in all honesty, this industry has exhausted our patience. And second chances. And every benefit of the doubt. Also, I’m assuming you’re talking about the tech industry here, because my inbox is half-full of similar emails from folks like you. People who’ve been working in this industry for decades, who’ve put in the blood and the sweat and the tears. But it’s also half-full of people who just started in this industry, having taken on the debt needed to walk through the front door only to realize they were walking into the Willy Wonka Experience, if the Willy Wonka Experience was run by Nazis. And of course all of these emails are about the tech industry. A million think pieces have already been written about the tech industry’s heel turn; I don’t feel like I need to add to them, and I don’t have anything to add anyway. And I don’t think your email is asking me to do that, thankfully.

Your email is about doing the thing that you love, and I like to think there is always room to do the thing that we love. Somewhere.

Very early in my career I was lucky enough to have a boss who gave me the gift of telling me I was a terrible employee. They didn’t mean it as a gift, of course, and to be honest, I didn’t recognize it as a gift in the moment. But it stuck with me, and in time I had to acknowledge that it was a correct assessment. I am a terrible employee. I don’t like being told what to do. I have a very hard time not calling out bullshit coming out of someone’s mouth. I don’t like having my time monitored. But the thing that really made me a terrible employee is that I like to work. Honestly, I love working. I love doing things. Making things. Solving problems. I fold socks for fun, man! And working for other people was more about the appearance of work, and making sure certain people saw you putting on the appearance of work. And being put in situations where I was kept from working. None of this is meant to disparage people who enjoy being good employees. This is just how my brain works.

But I needed money for things like rent and food and records, so I had to figure out a way to earn that money without getting a job. So Erika and I built ourselves a little design company that worked the way we wanted to work. And while I’m not saying that was easy, it meant that we were in charge of our own decisions, both good and bad ones. And whenever we tried to blame management for something, well, it was just a Spider-Man meme. The first decade of our existence was tethered to tech, because that’s not only where the work was, but the work was also good. We were working for people who were at least attempting to do something positive. But we’d go in, we’d do the work, and we’d get out. It was basically a series of heists, except we left something beneficial behind. And while client services is mostly about relationships, and absolutely gets you involved in the inner entanglements of your client, there was something about coming in as an outsider, for a limited time, that worked for me. I can get along with anyone for a few months.

As tech changed, so did our relationship with it. But we’ve never stopped doing the thing we love doing. We’re a design shop. And while we may not have the same relationship with the industry as we once did, our hearts will always be with helping other designers. Some of whom still have a relationship with the industry. Some of whom believe they can still change things from the inside (although that number is dropping hard), some of whom are stuck in the industry because of debt, or visa issues. Some of whom are still clinging to hope that the industry will go back to what they hoped it would be. Some of whom have convinced themselves “it’s complicated.” And some of whom are beginning to look for lifeboats.

My love was never for an industry. My love was always for design, those who practice it, and the people whose lives we can improve with it.

What I’m saying is that there is the thing you love, and no one can take that away. And there’s the place where you were once able to do the thing you love, and that place is gone. And while it may be time to find a new place to practice that craft—which I acknowledge is hard as fuck—that place you’re leaving was never yours, and there is nothing you could’ve done to keep that place from dying.

I can’t stress this enough: there is nothing you can do to save the tech industry, and that is not your job.

The cruelest thing the tech industry ever did was to tell you that they cared about you. They built you nice campuses, they called you family, they gave you clothes with their name on them. They fed you, they washed your clothes, they got you to ride in their Pride floats. They made you feel like you had not just a job, but a community. And yes, they paid you well. The stupidest thing we ever did—and I say this with nothing but love for you in my heart—the stupidest thing we ever did was to believe it. It was neither true nor never-ending.

The same industry that once called you family is now using the fruits of your labor to commit war crimes. The same industry whose leaders once posted front-page missives to their sites about doing a better job in terms of diversity and inclusion are now selling their technology to fascists who use it to bomb schools.

The industry has decided what it wants to be.

At one point we all gravitated towards this industry because we wanted to be useful. And for a while we got to be useful. We got to design useful things. We got to build useful things. And it was amazing. We can, and should, take a moment to mourn that time because it was great! But that time appears to be over.

The good news—the very good news—is that our dismay, our frustration, comes from a desire to continue being useful. That desire to continue being useful is a feeling to hold onto, and to cherish, and to honor. That desire to continue being useful is what makes us human, and it’s incompatible with an industry that wants to exploit and murder other humans to maximize profit. And despite the savage way in which the tech industry is casting its workers aside, I’ve found that the percentage of those workers that want to continue being useful is high. Which begs the question, where can we be useful now?

There are still people out there building useful things. There are still people out there designing useful things. And there are still companies out there making useful things. They may be small, they may be unglamorous, there may be fewer amenities, and they may not pay as well, but there was always a premium for doing abattoir work for butchers who didn’t look too closely at where the meat was coming from.

The work we need to do, and want to do, hasn’t gone away; it’s still there. And the need for useful people certainly hasn’t gone away. There’s enough misery in the world that useful people certainly have their work cut out for them. Your town still needs you. Your city still needs you. Your neighbors still need you. Your kids’ school still needs you.

I don’t know what those needs are because those needs are very specific to where you are, and how you want to interact with those around you. But I think it starts with talking to people, because it always starts with talking to people. And let people know you want to continue being useful. Ask your neighbors what help they might need. Ask your developer friends what they’ve always wanted to build. Ask your designer friends what they’ve always wanted to make. Find out what’s missing in people’s lives. (By the way: I’ll give you a freebie here, from my own conversations in our local dog park. What people want most is the shit they used to find useful, before it all got enshittified. They want Google to work again. They want to watch TV without having to upgrade devices. They want a news source they can trust. They want a security camera that’s not an ICE agent. Kids are listening to vinyl, for fuck’s sake. That’s an amazing repudiation of the future the tech industry laid out for us. Vinyl!)

I am trying beyond all hope to end this newsletter on a positive note. But fuck. You brought up healthcare and housing costs. Healthcare has been a problem in this country forever. Housing costs have been a problem in this country for a long time. And the only way to fix either of those issues is to understand that we have more in common with our neighbors than we ever had with the assholes running the tech industry. And to work hand in hand with our neighbors to demand that those things improve for all. And we need to be useful enough to do this work with the understanding that it will be hard, it will take time, and we may never benefit from it ourselves. Because every time we decide that we’re willing to stay at the abattoir, no matter how bad it gets, we end up punting that problem further into a future which may no longer be there.

Fuck, I want to end this on a happy note. I will try.

I’m sorry this industry took a heel turn. The shittiest of heel turns. It absolutely sucks. But you should take solace in the fact that whatever it might have done to you, it didn’t take away your desire to be useful. It didn’t kill your desire to help others.

You get to keep doing the thing you love.

Now do it for people who will love you back. This is the work.


❤️ At the very bottom of last week’s newsletter I told people I’d had a shit week and asked them to say hi. I got so many emails! And EVERY 👏 SINGLE 👏 ONE 👏 OF 👏 THEM 👏 MEANT 👏 THE 👏 WORLD. Thank you. This week was much better, btw.


🙋 Got a question? Ask it. And please send me some questions about donuts or Saturday morning cartoons.

📚 I am beyond excited about this announcement: On May 11, Annalee Newitz, one of my very favorite writers, will be chatting with me about my new book at one of my favorite local bookstores. Space is limited, so please RSVP. And if you don’t live in SF, I very much expect you to fly in.

📓 Speaking of my new book, you can now buy it from your book monger of choice. And if you’re not in the US (congrats) that means you can prolly find it locally and not pay heinous international shipping.

📣 If you need whistles, hit me up.

🍉 Please donate to the Palestinian Children’s Relief Fund. Israel is insane.

🏳️‍⚧️ Please donate to Trans Lifeline, and for fuck’s sake, if there is a trans person in your life, please let them know they are loved.

tante
5 hours ago
"The same industry that once called you family is now using the fruits of your labor to commit war crimes. [...]

The industry has decided what it wants to be."
Berlin/Germany

Karsten Wildberger: Digital minister warns of dramatic job losses from AI

Entire industries will be changed by AI, but there is also growth potential, says Karsten Wildberger.
tante
5 hours ago
It is a problem that the digital minister has so little understanding of the subject.

Mass unemployment caused by "AI" is not a realistic scenario.
Berlin/Germany

Marc Andreessen is a philosophical zombie

A photo of Marc Andreessen’s head opened up, with nothing inside.
What inner life? | Image: Cath Virginia / The Verge, Getty Images

I admit, this is an innovation I did not see coming: Silicon Valley has invented the philosophical zombie from the classic thought experiment "lol how crazy would it be if there were a philosophical zombie."

Until recently, the philosophical zombie was a concept closely associated with Australian philosopher David Chalmers, who defines it as "someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether." Chalmers' zombie twin is identical to him functionally and psychologically - except that he feels nothing. This is different from a Hollywood zombie, which has "little capac …

Read the full story at The Verge.

tante
1 day ago
"I admit, this is an innovation I did not see coming: Silicon Valley has invented the philosophical zombie from the classic thought experiment "lol how crazy would it be if there were a philosophical zombie.""
Berlin/Germany

Companies go full AI — then the bill comes due


Around the world, the enterprise AI revolution rockets forth at full speed! Get rid of those annoying and expensive employees! Replace them with the magical truth machine!

And the huge push for Claude Code in the past few months! You can hardly log into Mastodon without seeing yet another tech luminary who’s chosen to replace his brain with a clockwork mouse. CLAUDE IS A GAME CHANGER. CLAUDE HAS TURNED THE CORNER. THE WORLD IS DIFFERENT NOW. CLAUDE IS A NEW PARADIGM. Yeah, thanks.

Unfortunately, software as a service costs money. The end of the quarter’s coming up — and a few companies aren’t so happy at the bill. This stuff is expensive, and maybe you can’t actually afford to go full Gas Town.

Consultants have been talking up AI cost control since last year. But companies weren’t worrying so much about AI costs in the far distant past of six months ago.

The Wall Street Journal ran the headline yesterday: “You’ve Finally Figured Out AI at Work — Now Comes the Bill.” They’re still very gung-ho about the AI revolution — but they’ve just noticed this stuff does, in fact, have a price tag. One that goes up when you use more of it. [WSJ, archive]

Ed Zitron has been talking to people at Microsoft and has seen documents. Even Microsoft is worrying about its AI usage. You know, one of the AI vendors: [Bluesky, archive]

hearing microsoft is reorganizing its AI team under the banner of “the Copilot System.” Also hearing that teams are under pressure to reduce AI token use, remit is that there needs to be “fiscal responsibility in AI ops” and that Claude Code usage is being reduced in favour of Copilot CLI.

If a company as large as Microsoft — the only hyperscaler building out AI from cashflow — is having to do token austerity, this shit must cost so much more than we think

Microsoft will gladly pay you tomorrow for a token today.

This is happening a lot further afield than Microsoft. Here are some comments from the trenches:

  • “We’re getting pushed to use AI for coding a lot and even with paid licenses to Copilot, I’ve burnt through the monthly quota in a day multiple times.” [Bluesky, archive]
  • “Yep, at my work for more than a year they’ve been pushing ‘AI all the things!’ And now suddenly we’re hearing OMG the cost! Directives haven’t changed to me. Still AI all the things; I just hear grumbling from above.” [Mastodon, archive]
  • “That’s when the next email came. We are using AI too much. The bill is too high. So, the original directive stands (AI first!) but they’re capping us at a very, very low token limit. Literally about 10% of what we’d become accustomed to. Execs literally sold the company on 10x’ing our output then throttled us to 10% AI usage.” [Grumpy Gamer, archive]

Use AI or else! No, you’re using too much! Also, produce ten times the features anyway!

Compare when we all went cloud. Which was more useful than AI. But then we noticed that AWS does, in fact, cost money.

Let’s assume the corporations keep their AI spend under some sort of control. That’s fine for 2026. Probably.

If you follow Pivot to AI, you know what comes next. 2027 will be just a bit nastier. I stress I could be wrong on the precise timing, but I’m pretty sure 2027 is when the venture capital subsidy for the AI vendors runs completely dry.

That’s when prices go up about ten times so the vendors can even cover their running costs. If the vendors survive.

Imagine your SaaS vendor calls and says “hey matey, your bill’s ten times as much next month. Sorry, bro!” You should expect some squawking.

You’ll be pleased to know that Microsoft, the software company that started as a dev tools company, has a solution! Here’s what Ed Zitron found Microsoft is planning: [Bluesky, archive]

One of the solutions proposed — I am not kidding — is “writing scripts to automate repetitive tasks.” It’s really funny imagining a software engineer being like “woah … like automating the boring stuff, you might say?”

The AI bubble will pop — though not as soon as any of us would like — and there will be work in the surviving companies for people who can do things halfway properly instead. Where there’s muck, there’s brass. But there’s so, so much muck.

tante
1 day ago
Companies are realizing that pushing people to use "AI" is expensive (even at the subsidized pricing going on right now)
Berlin/Germany

Pricier hardware: AI corporations are blocking the way out of the cloud

Owning your own hardware means self-determination, and high costs are turning that into a luxury. That suits those causing the scarcity just fine. An IMHO by Jürgen Geuter.
tante
12 days ago
Companies grabbing all the hardware doesn’t just make computers expensive: it is a direct attack on our ability to build independent infrastructures.

The "means of computation" are being wrested from us so that access can be rented back to us.
Berlin/Germany

Nothing to Declare


Making big public statements is always fun, and people who think themselves important love doing it as a way of trying to influence public opinion and/or politics. Declarations are a way for institutions and individuals to organize and try to shine some light on important issues.

We’ve seen many such things in the AI space, one of the more ludicrous examples being the big push to pause “AI” development (signed by a bunch of billionaires developing “AI”). With “AI” having sucked up all the air in the room, what else could we declare things about?

Usually those declarations have a very short shelf-life, don’t really do much, and are mostly harmless.

So there is a new declaration in town called “The Pro-Human AI Declaration”. And before we go into the details, let’s look at who is pushing this for a second, because the declaration keeps talking about how broad their coalition is.

And boy, is that tent big. It’s big enough to proudly list Steve Bannon, one of the architects of the new wave of fascism, as its second individual supporter. So it’s a declaration white nationalists and fascists can get on board with. Glenn Beck, another right-wing talking head, was also happy to put his name there.

But it’s not just right-wing grifters and fascists: Yoshua Bengio, Stuart Russell, and many other academics from the field of computer science also got that part covered. Richard Branson of the Virgin Group is probably one of the bigger corporate names, there’s SAG-AFTRA leadership, and a whole lot of religious organizations, all sharing the stage with … well, fascists.

There also is a lot of organizational support: again SAG-AFTRA is joined by a whole bunch of ethical “AI” groups, religious groups, and of course the Center for Humane Technology, known to jump on every bandwagon that gets Tristan Harris some airtime. It’s also a vehicle for the Future of Life Institute, a longtermist secular church based on eugenic thinking. The Future of Life Institute shouldn’t be touched with a ten-foot pole by anyone interested in actual human flourishing: FLI doesn’t care about you or anyone; they want to make sure that future digital beings somehow get spawned and that their bits are happy. They would be ridiculous if they didn’t keep weaseling themselves into the discussion about “AI” and other technologies. Like how they got themselves onto this declaration together with a bunch of fascists.

But maybe the case is just too good. Too important. I did write about being able to form alliances recently, after all, didn’t I? (Just a quick note on that: There can never be an alliance with fascists. It doesn’t matter what they ask for. What they fight for. All you do with fascists is whatever it takes to stop them, by all means necessary.)

So what are the important statements here? There aren’t too many, so let’s have a quick look, shall we?

1. Keeping Humans in Charge

Human Control Is Non-Negotiable: Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems.

Meaningful Human Control: Humans should have authority and capacity to understand, guide, proscribe, and override AI systems.

No Superintelligence Race: Development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in.

Off-Switch: Powerful AI systems must have mechanisms that allow human operators to promptly shut them down.

No Reckless Architectures: AI systems must not be designed so that they can self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.

Independent Oversight: Highly autonomous AI systems where controllability is not obvious require pre-development review and independent oversight: genuine authority to understand, prohibit, and override, not industry self-regulation.

Capability Honesty: AI companies must provide clear, accurate and honest representations of their systems’ capabilities and limitations.

Ahh, the good old “human in the loop” shebang. Or, as we who have lived in this world call it: the “take the fall for the decisions an automated machine made” trick.

There’s not much to this: “Humans” must be in control. What this omits is the question of which humans. Say today a few people (basically all of them very rich men) who are considered human are in control. Is that what they mean? Because sure as shit I am not in control. Maybe you are, I don’t know you. The claim that “humans” need control pretends that we are all one homogeneous mass, but there are differences in power and access. Every monstrosity humanity ever committed was committed under human control. “Human control” feels great because everyone reading it thinks it means them. It does not.

Then there’s the sci-fi angle. Nobody is allowed to build a machine god, and it needs an off-switch so we can kill it before [INSERT RANDOM SCI-FI MOVIE REFERENCE]. Sure, whatever. We could write the same passage about rabid unicorns, because we have basically the same ability to bring those to life as we have with “Superintelligence”. The “self-replicating” thing goes in the same direction. These are just science fiction references meant to make everyone feel afraid. But that’s not real. It’s not addressing any of the actual harms that “AI” systems do – of which there are many – and it shifts the whole discourse to talking about nothing, really. Sure, I also watched The Matrix, it was a cool movie. But we shouldn’t base our politics on it (except for understanding that trans rights are human rights, of course!).

I am a bit more charitable with the last two aspects: Sure, oversight is good. All companies need way more oversight. But not by setting up a comfy oversight board for a few handpicked researchers, investors, and activists who get flown to nice meetings to debate whatever. Actual oversight needs teeth, and the willingness to use them. And demanding that companies must be honest would be neat. Good luck with an “AI” sector where basically every CEO is a compulsive liar. But sure.

2. Avoiding Concentration of Power

No AI Monopolies: AI monopolies that concentrate power, stifle innovation, and imperil entrepreneurship must be avoided.

Shared Prosperity: The benefits and economic prosperity created by AI should be shared broadly.

No Corporate Welfare: AI corporations should not be exempted from regulatory oversight or receive government bailouts.

Genuine Value Creation: AI development should prioritize solving real problems and creating authentic value.

Democratic Authority Over Major Transitions: Decisions about AI’s role in transforming work, society, and civic life require democratic support, not unilateral corporate or government decree.

Avoid Societal Lock-In: AI development must not severely limit humanity’s future options or irreversibly limit our agency over our future.

This one is fun. Because if we took it seriously, this would be a demand for democratic socialism or even democratic communism. Which I am totally on board with. But that’s probably not what these folks are aiming for.

TBH the fact that all these paragraphs add “AI” is cute, because if you strip it out of the text, this is a list of demands to rein in the tech sector and corporations in general. But that’s not on the agenda.

“No AI monopolies”. So other monopolies are okay? “Genuine Value Creation” and “solving real problems”. Yeah sure. But that also applies to fucking Microsoft or whoever invented vapes.

This whole part is pointing out things that are wrong with capitalism. Welcome, comrades (no, not you, Steve Bannon, or your fascist buddies).

But it’s very explicitly not saying that. Because this is “AI”. “AI” is special. All the things labeled as problematic here happen every day. They have absolutely nothing to do with “AI”. This is just to get some buy-in by connecting the actual experience of people having to live in late-stage capitalism somehow to “AI”. But the thing breaking us is capitalism.

Another question: Who gets to decide what “genuine value” is? The dude who invented vapes can point at how he’s making all that money and how it creates economic activity. Is the advertising industry “creating authentic value”? What does that even mean? Are they arguing for full democratic control of the economy? Again: big fan, let’s go, but in their reading, isn’t that a contradiction of the demand not to “stifle innovation, and imperil entrepreneurship”?

You gotta pick one: democratic control, or unchained innovation and markets. There is more tension between those two things than in me reading that smart and famous people signed a declaration with Steve fucking Bannon.

3. Protecting the Human Experience

Defense of Family and Community Bonds: AI should not supplant the foundational relationships that give life meaning—family, friendship, faith communities, and local connections.

Child Protection: Companies must not be allowed to exploit children or undermine their wellbeing with AI interactions creating emotional attachment or leverage.

Right to Grow: AI companies should not be allowed to stunt children’s physical, mental or social growth or deprive them of essential experiences for healthy development during critical periods.

Pre-Deployment Safety Testing: Like drugs, chatbots must undergo pre-deployment testing for increased suicidal ideation, exacerbation of mental health disorders, escalation of acute crisis situations, and other known harms.

Bot-or-Not Labeling: AI-generated content that could reasonably be mistaken for human-generated must be clearly labeled as such.

No Deceptive Identity: AI should clearly and correctly identify itself as artificial, nonhuman, and not a professional, and it should not claim experiences it lacks.

No Behavioral Addiction: AIs should not cause addiction or compulsive use through manipulation, sycophantic validation, or attachment formation.

I could almost repeat what I wrote about the last segment: Yeah sure, but this has very little to do with “AI”. “AI” is built to create addiction/dependency, but most other digital systems at least try to do that, too. Sure, with “AI” it’s no longer Meta who is your dealer but OpenAI, but in the end both companies see you as just a resource to exploit. Nothing “AI” here.

Yes, companies should not build systems exploiting children. Who’s with me burning Roblox to the ground, or every free-to-play game in existence?

But there is something with which I can calm the signatories (who all thought that putting their name next to Steve Bannon’s was a good idea): The idea that “AI” systems need to be labeled is already the law in the EU. It’s interesting, though, that their demands are way less strict than existing law: A bot only needs to label its output if it could be mistaken as human. So a bot making hiring decisions in some backend doesn’t need to be disclosed? That’s super weak.

And just a small nitpick here: “it should not claim experiences it lacks”: No “AI” has any experiences. They are buckets of floating point numbers with delusions of grandeur. Living beings existing in the world have experiences. Numbers do not.

4. Human Agency and Liberty

No AI Personhood: AI systems must not be granted legal personhood, and AI systems should not be designed such that they deserve personhood.

Trustworthiness: AI must be transparent, accountable, reliable, and free from perverse private or authoritarian interests.

Liberty: AI must not curtail individual liberty, freedom of speech, religious practice, or association.

Data Rights and Privacy: People should have power over their personal data, with rights to access, correct, and delete it from active systems, AI training sets, and derived inferences.

Psychological Privacy: AI should not be allowed to exploit data about the mental or emotional states of users.

Avoiding Enfeeblement: AI systems should be designed to empower, rather than enfeeble their users.

I also think that “AI” systems deserve no personhood. And since I believe that “the corporation” is, for all intents and purposes, one of the first forms of “AI”, we should strip corporations of a lot of the person rights we have given them. Who’s with me? You cannot say that “AI” shouldn’t get personhood and then claim that corporations can have it. Personhood is for people. But that’s me being radical again.

Now, I don’t know how “perverse” found its way into this document (probably via one of the many, many, many religious groups involved here), but it surely reads a bit as if they are afraid that their “AI” might be too queer. But maybe I am being too uncharitable here.

“AI” is supposed to be accountable. But who is accountable? A person is accountable for their actions. But we already said that “AI” is not a person. What does it mean for an “AI” to be “accountable” then?

We go on with “free speech”. Of course. But which understanding of it? In Germany, denying the existence of the Holocaust is illegal. Should your “AI” be allowed to deny the existence of the Holocaust? For freeze peach?

But it’s cute that under “data rights and privacy” they added a claim for GDPR.

“Psychological privacy”: Here’s the kicker. “AI” doesn’t exploit. The people who build and run it do. But you do not talk about them.

“Avoiding enfeeblement”: Yeah, big fan of that. But if you look at the studies about cognitive offloading and its effects, and if you actually mean it, you can join me and my army of friends of Ned Ludd in burning down a whole bunch of data centers. “AI”s are built to take away your agency and capability.

5. Responsibility and Accountability for AI Companies

No Liability Shield: AI must not be able to act as a liability shield, preventing those deploying it from being legally responsible for their actions.

Developer Liability: Developers and deployers bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls, with statutes of limitation that account for harms emerging over time.

Personal Liability: There should be criminal penalties for executives responsible for prohibited child-targeted systems or ones causing catastrophic harm.

Independent Safety Standards: AI development shall be governed by independent safety standards and rigorous oversight.

No Regulatory Capture: AI companies must not be allowed undue influence over rules that govern them.

Failure Transparency: If an AI system causes harm, it should be possible to ascertain why as well as who is responsible.

AI Loyalty: AI systems performing functions in professions with fiduciary duties, such as health, finance, law, or therapy, must fulfill all of those duties, including mandated reporting, duty of care, conflict of interest disclosure, and informed consent.

Now. Here’s a surprise: I agree with a lot of what is written here. I have actually written about “that asterisk” before. We accept a level of “AI” companies delivering absolute garbage and putting it on us to “check” it. We would not accept that dynamic anywhere else. If I went to the supermarket to buy milk and all the cartons had a post-it stuck to them saying “might be full of rat poison, you better check”, that store would just be closed.

“AI” currently is in the status of a supermarket that doesn’t know (and doesn’t care enough to check) if it’s delivering rat poison or not. Awesome.

I agree that “AI” companies should be liable for their product. I also think that people deploying and integrating these products should be liable.

And I also agree that we mustn’t let “AI” companies write or influence legislation.

This is the least bad part of the document. (You know, the document that fascists also support and signed, with respectable people having no problem with that.)


The document itself isn’t that interesting. A bunch of half-assed, somewhat contradictory demands that read a bit as if a chatbot helped write them. A whole bunch of problems being correctly identified and then quickly attributed to “AI” instead of capitalism. Sure, it’s hard to challenge one’s beliefs, but still: Aggregating all this should have led to some insight and thinking. Unless … this was written with a chatbot, right?

More interesting is definitely who signed it. Who thought that it was a good idea to integrate fascists and white nationalists, whose whole aim is to destroy the rules-based order that the world used to be in?

We see many organizations trying to show their relevance by being on this – sorta empty – paper. But they are also legitimizing a lot of problematic stuff here, the fascists and the Future of Life Institute being just the first things that caught my eye. And I am scared to look into the religious organizations and all the “ethical AI” orgs, because usually when you start poking around, something filthy comes up.

The whole document and activity is based on the assumption that “AI” is special. Needs special rules. Special approaches. But that ain’t true.

We need to regulate tech. We need to regulate how it is being used against us, our wellbeing, and our rights in this world. But it doesn’t matter if the abuse committed by corporations and governments is done through a stochastic model, some other form of automation, or just people: “AI”s mustn’t stifle our development and freedoms? Yeah, neither should people.

The grown-up approach to regulating tech does not lie in building special rules for special technologies. That’s a sucker’s game that tech loves: it can drown the discourse in jargon and gain outsized influence because “only tech bros understand that stuff”. Fuck that.

Regulate outcomes. The tech doesn’t matter. For the person hurt the tool that was used to hurt them isn’t that relevant. We need to make sure that we are reducing, ideally eliminating the hurt.


Coda: I can’t get over how the whole document basically argues for democratic socialism/communism and the abolition of capitalism. Looking at the organizations who pushed it and probably wrote most of it, it shows a lack of self-awareness and critical thinking that is kinda cute. Like a baby right before it learns object permanence.

tante
16 days ago
"Regulate outcomes. The tech doesn’t matter. For the person hurt the tool that was used to hurt them isn’t that relevant. We need to make sure that we are reducing, ideally eliminating the hurt."
Berlin/Germany