Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY
2608 stories · 142 followers

Marc Andreessen is a philosophical zombie

A photo of Marc Andreessen’s head opened up, with nothing inside.
What inner life? | Image: Cath Virginia / The Verge, Getty Images

I admit, this is an innovation I did not see coming: Silicon Valley has invented the philosophical zombie from the classic thought experiment "lol how crazy would it be if there were a philosophical zombie."

Until recently, the philosophical zombie was a concept closely associated with Australian philosopher David Chalmers, who defines it as "someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether." Chalmers' zombie twin is identical to him functionally and psychologically – except that he feels nothing. This is different from a Hollywood zombie, which has "little capac …

Read the full story at The Verge.

Read the whole story
tante
6 hours ago
"I admit, this is an innovation I did not see coming: Silicon Valley has invented the philosophical zombie from the classic thought experiment "lol how crazy would it be if there were a philosophical zombie.""
Berlin/Germany

Companies go full AI — then the bill comes due


Around the world, the enterprise AI revolution rockets forth at full speed! Get rid of those annoying and expensive employees! Replace them with the magical truth machine!

And the huge push for Claude Code in the past few months! You can hardly log into Mastodon without seeing yet another tech luminary who’s chosen to replace his brain with a clockwork mouse. CLAUDE IS A GAME CHANGER. CLAUDE HAS TURNED THE CORNER. THE WORLD IS DIFFERENT NOW. CLAUDE IS A NEW PARADIGM. Yeah, thanks.

Unfortunately, software as a service costs money. The end of the quarter’s coming up — and a few companies aren’t so happy at the bill. This stuff is expensive, and maybe you can’t actually afford to go full Gas Town.

Consultants have been talking up AI cost control since last year. But companies weren’t worrying so much about AI costs in the far distant past of six months ago.

The Wall Street Journal ran the headline yesterday: “You’ve Finally Figured Out AI at Work — Now Comes the Bill.” They’re still very gung-ho about the AI revolution — but they’ve just noticed this stuff does, in fact, have a price tag. One that goes up when you use more of it. [WSJ, archive]

Ed Zitron has been talking to people at Microsoft and seen documents. Even Microsoft is worrying about AI usage. You know, one of the AI vendors: [Bluesky, archive]

hearing microsoft is reorganizing its AI team under the banner of “the Copilot System.” Also hearing that teams are under pressure to reduce AI token use, remit is that there needs to be “fiscal responsibility in AI ops” and that Claude Code usage is being reduced in favour of Copilot CLI.

If a company as large as Microsoft — the only hyperscaler building out AI from cashflow — is having to do token austerity, this shit must cost so much more than we think

Microsoft will gladly pay you tomorrow for a token today.

This is happening a lot further afield than Microsoft. Here are some comments from the trenches:

  • “We’re getting pushed to use AI for coding a lot and even with paid licenses to Copilot, I’ve burnt through the monthly quota in a day multiple times.” [Bluesky, archive]
  • “Yep, at my work for more than a year they’ve been pushing ‘AI all the things!’ And now suddenly we’re hearing OMG the cost! Directives haven’t changed to me. Still AI all the things; I just hear grumbling from above.” [Mastodon, archive]
  • “That’s when the next email came. We are using AI too much. The bill is too high. So, the original directive stands (AI first!) but they’re capping us at a very, very low token limit. Literally about 10% of what we’d become accustomed to. Execs literally sold the company on 10x’ing our output then throttled us to 10% AI usage.” [Grumpy Gamer, archive]

Use AI or else! No, you’re using too much! Also, produce ten times the features anyway!

Compare when we all went cloud. Which was more useful than AI. But then we noticed that AWS does, in fact, cost money.

Let’s assume the corporations keep their AI spend under some sort of control. That’s fine for 2026. Probably.

If you follow Pivot to AI, you know what comes next. 2027 will be just a bit nastier. I stress I could be wrong on the precise timing, but I’m pretty sure 2027 is when the venture capital subsidy for the AI vendors runs completely dry.

That’s when prices go up about ten times so the vendors can even cover their running costs. If the vendors survive.

Imagine your SaaS vendor calls and says “hey matey, your bill’s ten times as much next month. Sorry, bro!” You should expect some squawking.

You’ll be pleased to know that Microsoft, the software company that started as a dev tools company, has a solution! Here’s what Ed Zitron found Microsoft is planning: [Bluesky, archive]

One of the solutions proposed — I am not kidding — is “writing scripts to automate repetitive tasks.” It’s really funny imagining a software engineer being like “woah … like automating the boring stuff, you might say?”

The AI bubble will pop — though not as soon as any of us would like — and there will be work in the surviving companies for people who can do things halfway properly instead. Where there’s muck, there’s brass. But there’s so, so much muck.

Read the whole story
tante
6 hours ago
Companies are realizing that pushing people to use "AI" is expensive (even at the subsidized pricing going on right now)
Berlin/Germany

Pricier hardware: AI corporations are blocking the way out of the cloud

Owning your own hardware means self-determination – and high costs are turning that into a luxury. Those causing the scarcity are quite happy with that. An IMHO column by Jürgen Geuter (digital sovereignty, servers)
Read the whole story
tante
11 days ago
Companies grabbing all the hardware doesn't just make computers expensive: it is a direct attack on our ability to build independent infrastructures.

The "means of computation" are being wrested from us so that access can be rented back to us.
Berlin/Germany

Nothing to Declare


Making big public statements is always fun, and people who consider themselves important love doing it as a way of trying to influence public opinion and/or politics. Such statements are a way for institutions and individuals to organize and to shine some light on important issues.

We’ve seen many such things in the “AI” space, one of the more ludicrous examples being the big push to pause “AI” development (signed by a bunch of billionaires developing “AI”). With “AI” having sucked up all the air in the room, what else could we declare things about?

Usually those declarations have a very short shelf-life, don’t really do much and are mostly harmless.

So there is a new declaration in town, called “The Pro-Human AI Declaration”. And before we go into the details, let’s look at who is pushing this for a second, because the declaration keeps talking about how broad its coalition is.

And boy, is that tent big. It’s big enough to proudly list Steve Bannon, one of the architects of the new wave of fascism, as its second individual supporter. So it’s a declaration white nationalists and fascists can get on board with. Glenn Beck, another right-wing talking head, was also happy to put his name there.

But it’s not just right-wing grifters and fascists: Yoshua Bengio, Stuart Russell and many other academics from the field of computer science also signed, so that part is covered too. Richard Branson of the Virgin Group is probably one of the bigger corporate names, there’s SAG-AFTRA leadership, and a whole lot of religious organizations, all sharing the stage with … well, fascists.

There also is a lot of organizational support: again, SAG-AFTRA is joined by a whole bunch of ethical “AI” groups, religious groups and of course the Center for Humane Technology, known to jump on every bandwagon that gets Tristan Harris some airtime. It’s also a vehicle for the Future of Life Institute, a longtermist secular church based on eugenic thinking. The Future of Life Institute shouldn’t be touched with a ten-foot pole by anyone interested in actual human flourishing: FLI doesn’t care about you or anyone; they want to make sure that future digital beings somehow get spawned and that those beings end up happy. They would be merely ridiculous if they didn’t keep weaseling their way into the discussion about “AI” and other technologies. Like how they got themselves onto this declaration, together with a bunch of fascists.

But maybe the cause is just too good. Too important. I did write about being able to form alliances recently, after all, didn’t I? (Just a quick note on that: there can never be an alliance with fascists. It doesn’t matter what they ask for. What they fight for. All you do with fascists is whatever it takes to stop them, by any means necessary.)

So what are the important statements here? There aren’t too many, so let’s have a quick look, shall we?

1. Keeping Humans in Charge

Human Control Is Non-Negotiable: Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems.

Meaningful Human Control: Humans should have authority and capacity to understand, guide, proscribe, and override AI systems.

No Superintelligence Race: Development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in.

Off-Switch: Powerful AI systems must have mechanisms that allow human operators to promptly shut them down.

No Reckless Architectures: AI systems must not be designed so that they can self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.

Independent Oversight: Highly autonomous AI systems where controllability is not obvious require pre-development review and independent oversight: genuine authority to understand, prohibit, and override, not industry self-regulation.

Capability Honesty: AI companies must provide clear, accurate and honest representations of their systems’ capabilities and limitations.

Ahh, the good old “human in the loop” shebang. Or, as we who have lived in this world call it: the “take the fall for the decisions an automated machine made” trick.

There’s not much to this: “humans” must be in control. What this omits is the question of which humans. Say today a few people (basically all of them very rich men) who are considered human are in control. Is that what they mean? Because I sure as shit am not in control. Maybe you are, I don’t know you. The claim that “humans” need control pretends that we are all one homogeneous mass, but there are differences in power and access. Every monstrosity humanity ever committed was under human control. Human control feels great because everyone reading it thinks it means them. It does not.

Then there’s the sci-fi angle. Nobody is allowed to build a machine god, and it needs an off-switch so we can kill it before [INSERT RANDOM SCI FI MOVIE REFERENCE]. Sure, whatever. We could write the same passage about rabid unicorns, because we have basically the same ability to bring those to life as we have with “Superintelligence”. The “self-replicating” thing goes in the same direction. These are just science fiction references to make everyone feel afraid. But that’s not real. It’s not addressing any of the actual harms that “AI” systems do – and there are many – and it shifts the whole discourse towards talking about nothing, really. Sure, I also watched The Matrix, it was a cool movie. But we shouldn’t base our politics on it (except for understanding that trans rights are human rights, of course!).

I am a bit more charitable with the last two aspects: sure, oversight is good. All companies need way more oversight. But not by setting up a comfy oversight board for a few handpicked researchers, investors and activists who get flown to nice meetings to debate whatever. Actual oversight needs teeth, and the willingness to use them. And demanding that companies be honest would be neat. Good luck with an “AI” sector where basically every CEO is a compulsive liar. But sure.

2. Avoiding Concentration of Power

No AI Monopolies: AI monopolies that concentrate power, stifle innovation, and imperil entrepreneurship must be avoided.

Shared Prosperity: The benefits and economic prosperity created by AI should be shared broadly.

No Corporate Welfare: AI corporations should not be exempted from regulatory oversight or receive government bailouts.

Genuine Value Creation: AI development should prioritize solving real problems and creating authentic value.

Democratic Authority Over Major Transitions: Decisions about AI’s role in transforming work, society, and civic life require democratic support, not unilateral corporate or government decree.

Avoid Societal Lock-In: AI development must not severely limit humanity’s future options or irreversibly limit our agency over our future.

This one is fun. Because if we took it seriously this would be the demand for democratic socialism or even democratic communism. Which I am totally on board with. But that’s probably not what these folks are aiming for.

TBH the fact that all these paragraphs add “AI” is cute, because if you strip it out of the text this is a list of demands to rein in the tech sector and corporations in general. But that’s not on the agenda.

“No AI monopolies”. So other monopolies are okay? “Genuine Value Creation” and “solving real problems”. Yeah sure. But that also applies to fucking Microsoft or whoever invented vapes.

This whole part is pointing out things that are wrong with capitalism. Welcome comrades (no, not you Steve Bannon and your fascist buddies).

But it’s very explicitly not saying that. Because this is “AI”. “AI” is special. All the things labeled as problematic here happen every day, and they have absolutely nothing to do with “AI”. This is just to get some buy-in by somehow connecting “AI” to the actual experience of people having to live in late-stage capitalism. But the thing breaking us is capitalism.

Another question: who gets to decide what “genuine value” is? The dude who invented vapes can point at how he’s making all that money and how it creates economic activity. Is the advertising industry “creating authentic value”? What does that even mean? Are they arguing for full democratic control of the economy? Again: big fan, let’s go, but in their reading, isn’t that a contradiction of not wanting to “stifle innovation, and imperil entrepreneurship”?

You gotta pick one: Democratic control or unchained innovation and markets. Those things have more tension than me reading that smart and famous people sign a declaration with Steve fucking Bannon.

3. Protecting the Human Experience

Defense of Family and Community Bonds: AI should not supplant the foundational relationships that give life meaning—family, friendship, faith communities, and local connections.

Child Protection: Companies must not be allowed to exploit children or undermine their wellbeing with AI interactions creating emotional attachment or leverage.

Right to Grow: AI companies should not be allowed to stunt children’s physical, mental or social growth or deprive them of essential experiences for healthy development during critical periods.

Pre-Deployment Safety Testing: Like drugs, chatbots must undergo pre-deployment testing for increased suicidal ideation, exacerbation of mental health disorders, escalation of acute crisis situations, and other known harms.

Bot-or-Not Labeling: AI-generated content that could reasonably be mistaken for human-generated must be clearly labeled as such.

No Deceptive Identity: AI should clearly and correctly identify itself as artificial, nonhuman, and not a professional, and it should not claim experiences it lacks.

No Behavioral Addiction: AIs should not cause addiction or compulsive use through manipulation, sycophantic validation, or attachment formation.

I could almost repeat what I wrote about the last segment: yeah, sure, but this has very little to do with “AI”. “AI” is built to create addiction/dependency, but most other digital systems at least try to do that, too. Sure, with “AI” it’s no longer Meta who is your dealer but OpenAI, but in the end both companies see you just as a resource to exploit. Nothing “AI”-specific here.

Yes, companies should not build systems exploiting children. Who’s with me burning Roblox to the ground, or every free-to-play game in existence?

But there is something with which I can calm the signatories (who all thought that putting their name next to Steve Bannon’s was a good idea): the idea that “AI” systems need to be labeled is already the law in the EU. It’s interesting, though, that their demands are way less strict than existing law: a bot only needs to label its output if it could be mistaken as human. So a bot making hiring decisions in some backend doesn’t need to be disclosed? That’s super weak.

And just a small nitpick here: “it should not claim experiences it lacks”. No “AI” has any experience. They are buckets of floating point numbers with delusions of grandeur. Living beings existing in the world have experiences. Numbers do not.

4. Human Agency and Liberty

No AI Personhood: AI systems must not be granted legal personhood, and AI systems should not be designed such that they deserve personhood.

Trustworthiness: AI must be transparent, accountable, reliable, and free from perverse private or authoritarian interests.

Liberty: AI must not curtail individual liberty, freedom of speech, religious practice, or association.

Data Rights and Privacy: People should have power over their personal data, with rights to access, correct, and delete it from active systems, AI training sets, and derived inferences.

Psychological Privacy: AI should not be allowed to exploit data about the mental or emotional states of users.

Avoiding Enfeeblement: AI systems should be designed to empower, rather than enfeeble their users.

I also think that “AI” systems deserve no personhood. And since I believe that “the corporation” is, for all intents and purposes, one of the first forms of “AI”, we should strip corporations of a lot of the personhood rights we have given them. Who’s with me? You cannot say that “AI” shouldn’t get personhood and then claim that corporations can have it. Personhood is for people. But that’s me being radical again.

Now I don’t know where “perverse” found its way into this document (probably one of the many, many, many religious groups involved here) but it surely reads a bit as if they are afraid that their “AI” might be too queer. But maybe I am too uncharitable here.

“AI” is supposed to be accountable. But who is accountable? A person is accountable for their actions. But we already said that “AI” is not a person. What does it mean for an “AI” to be “accountable” then?

We go on with “free speech”. Of course. But which understanding of it? In Germany, denying the existence of the Holocaust is illegal. Should your “AI” be allowed to deny the existence of the Holocaust? For freeze peach?

But it’s cute that under “data rights and privacy” they added a demand for what is basically the GDPR.

“Psychological privacy”: here’s the kicker. “AI” doesn’t exploit. The people who build and run it do. But you do not talk about them.

“Avoiding enfeeblement”: yeah, big fan of that. But if you look at studies about cognitive offloading and its effects, and you actually mean it, you can join me and my army of friends of Ned Ludd in burning down a whole bunch of data centers. “AI”s are built to take away your agency and capability.

5. Responsibility and Accountability for AI Companies

No Liability Shield: AI must not be able to act as a liability shield, preventing those deploying it from being legally responsible for their actions.

Developer Liability: Developers and deployers bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls, with statutes of limitation that account for harms emerging over time.

Personal Liability: There should be criminal penalties for executives responsible for prohibited child-targeted systems or ones causing catastrophic harm.

Independent Safety Standards: AI development shall be governed by independent safety standards and rigorous oversight.

No Regulatory Capture: AI companies must not be allowed undue influence over rules that govern them.

Failure Transparency: If an AI system causes harm, it should be possible to ascertain why as well as who is responsible.

AI Loyalty: AI systems performing functions in professions with fiduciary duties, such as health, finance, law, or therapy, must fulfill all of those duties, including mandated reporting, duty of care, conflict of interest disclosure, and informed consent.

Now, here’s a surprise: I agree with a lot of what is written here. I have actually written about “that asterisk” before. We accept a level of “AI” companies delivering absolute garbage and putting it on us to “check” it. We would not accept that dynamic anywhere else. If I went to the supermarket to buy milk and all the cartons had a post-it stuck to them saying “might be full of rat poison, you better check”, that store would just be closed.

“AI” currently is in the status of a supermarket that doesn’t know (and doesn’t care enough to check) if it’s delivering rat poison or not. Awesome.

I agree that “AI” companies should be liable for their product. I also think that people deploying and integrating these products should be liable.

And I also agree that we mustn’t let “AI” companies write or influence legislation.

This is the least bad part of the document. (You know, the document that fascists also support and signed, with respectable people having no problem with that.)


The document itself isn’t that interesting. A bunch of half-assed, somewhat contradictory demands that read a bit as if a chatbot helped write them. A whole bunch of problems being correctly identified and then quickly attributed to “AI” instead of capitalism. Sure, it’s hard to challenge one’s beliefs, but still: aggregating all this should have led to some insight and thinking. Unless … this was written with a chatbot, right?

More interesting is definitely who signed it. Who thought that it was good to integrate fascists and white nationalists, whose whole aim is to destroy the rules-based order that the world used to be in?

We see many organizations trying to show their relevance by being on this – sorta empty – paper. But they are also legitimizing a lot of problematic stuff here, the fascists and the Future of Life Institute being just the first things that caught my eye. I am scared to look into the religious organizations and all the “ethical AI” orgs, because usually when you start poking around, something filthy comes up.

The whole document and activity is based on the assumption that “AI” is special. Needs special rules. Special approaches. But that ain’t true.

We need to regulate tech. We need to regulate how it is being used against us, our wellbeing, our rights in this world. But it doesn’t matter whether the abuse committed by corporations and governments is done through a stochastic model, some other form of automation, or just people: “AI”s mustn’t stifle our development and freedoms? Yeah, neither should people.

The grown-up approach to regulating tech does not lie in building special rules for special technologies. That’s a sucker’s game that tech loves, because it can drown the discourse in jargon and gain outsized influence because “only tech bros understand that stuff”. Fuck that.

Regulate outcomes. The tech doesn’t matter. For the person hurt, the tool that was used to hurt them isn’t that relevant. We need to make sure that we are reducing, ideally eliminating, the hurt.


Coda: I can’t get over how the whole document basically argues for democratic socialism/communism and the abolition of capitalism. Looking at the organizations that pushed it and probably wrote most of it, it shows a lack of self-awareness and critical thinking that is kinda cute. Like a baby right before it learns object permanence.

Read the whole story
tante
15 days ago
"Regulate outcomes. The tech doesn’t matter. For the person hurt the tool that was used to hurt them isn’t that relevant. We need to make sure that we are reducing, ideally eliminating the hurt."
Berlin/Germany

Artisanal care


When we moved into our apartment we hired a contractor to build bespoke cupboards for a few niches that we wanted to use optimally. He built perfectly fitting, nice cupboards that make those areas look nice and clean while allowing us to store all kinds of stuff. And he took great care doing it.

Even after the job had started he kept explaining to us why certain decisions we thought were smart might not have been so smart, and offered solutions – often without charging extra. He wanted to build something that lasts and that makes us happy – while being fairly paid, of course.

This is a privileged position: being able to afford to hire an expert, and to find one in your vicinity, is not a given for everyone. And not everything needs to be bespoke. Sometimes a well-thought-out, well-built off-the-shelf item is absolutely fine, and a bespoke solution wouldn’t add anything but cost.

But what this extreme illustrates is the level of care that goes into building something well: the care a person takes in doing a good job, but also the understanding of one’s work for someone else as a form of care towards that person. If you ask me to do something for you, my doing it means that I also take care of you to a certain degree. We have even codified that for many products: someone selling you something needs to give you a warranty or other guarantees about whatever they sold you – mostly in order to protect people against capitalism’s tendency to make everything worse.


Software is an interesting product category in that a lot of software these days is no longer optional: it’s not a video game you can use or not, it’s the infrastructure that allows you to get paid, access government services, sign your kid up for school, etc. These days we are mostly forced to use software whether we like it or not – often software we cannot have any control over. This also means that building software should be increasingly focused on people’s wellbeing and security: if you force people into something, you need to make sure they are taken care of.

Of course we know that that’s not how things are. The quality of the software we all interact with each day is perceptibly worse than it used to be (we use MS 365 at work, and boy is that a form of torture inflicted on us by one of the biggest and most powerful companies in the world). Windows (and to a degree macOS) is getting so bad that people are actively looking into running Linux. 1.0 versions of software mean neither that they work nor that they are feature-complete; they are just whatever MVP compiled at the previously defined release date. You can always patch things later, right? Well, patches have gotten so bad that many people actively avoid installing them, fearing what they might break.

It would be deeply unfair to just chalk that up to “AI” making software worse. Software was doing badly before, with standards of quality largely nonexistent. But “AI” and the promise that you can just magically create software are pouring gasoline on the fire: we are generating code way beyond our ability to ever review it and ensure its security. The developer of OpenClaw, the very hyped “AI” agent thing, kept proudly shouting on X, the everything-Nazi app, that he’s constantly releasing code he never checked into the wild and into people’s personal infrastructure and data.

It’s the most visible rejection of care in software: people proudly saying how much the software they work on is “vibe coded” or “written by Claude”, underlining that there is no pride in or responsibility for the work one not only puts out but actually forces on people. It’s like a chef loudly stating that they didn’t even taste the menu they now want you to pay 200 EUR for. Like a doctor saying that they just prescribe you whatever medicine the computer tells them to, without even looking at your test results. It’s a statement making clear that you do not give a shit. That you do not care.


Not every good can be produced artisanally: furniture or clothes take a long time and skill to make, and in order to make them well you need good and expensive materials. That doesn’t scale up to the actual current need (which is way below the need that companies try to trick you into), and it’s too expensive for most people. (That is a decision we as societies made when we forced people into jobs whose pay isn’t adequate for a decent life. We could choose differently.)

But at least for software, that is different. Digital goods (aside from “AI”, which works differently) are characterized by zero marginal cost, meaning your costs don’t really go up when “producing” another copy of a piece of software you already wrote.

This is where I see FLOSS – free and open source software – having its niche. Because in that space we can build situations and contexts where an artisanal mode of production, a mode based on care for the product and the people using it, is feasible. Where no millionaire or billionaire needs to squeeze out another few extra percent of profit every year by degrading the quality more and more.

I think FLOSS should embrace the small-scale, artisanal mindset: build a stack of sustainable, high-quality components while the rest of the world sees how low the race to the bottom in software quality can go.

Writing software for people is important. It can also be fun. Especially when you get the time and (head)space to move carefully and intentionally. In that situation, software is just one way to show care for one another and act upon that display of love and respect.

We all deserve more software made with actual love for people.

Read the whole story
tante
15 days ago
"People proudly saying how much the software they work on is “vibe coded” or “written by Claude” underlining how there is no pride in or responsibility for the work one not only put out but actually forces on people."
Berlin/Germany

Anthropic is untrustworthy



Read the whole story
tante
16 days ago
"Anthropic is a company started by people who left OpenAI. What did they do there, why did they leave, how was Anthropic supposed to be different, and how is it actually different?"
Berlin/Germany