Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

Digital Embedding Day


In December, various German NGOs and companies proclaimed the “Digital Independence Day”. On DI.Day, as it is commonly abbreviated, people are not only supposed to be motivated to move the digital aspects of their existence away from primarily US-based platforms and providers to other platforms (e.g. in the EU); concrete tips and how-tos are also provided on how exactly to free yourself from specific platforms.

I can absolutely understand the general thrust; a few days ago I described here myself which migrations I carried out over the past year. More concrete guides like these are helpful in any case, giving people and groups the chance to shape their digital environment better. To shape it at all again. To see themselves no longer merely as “users” but as shapers. Because the digital is, in itself, almost arbitrarily malleable, a property that increasingly seems to be fading from memory.

However, the framing is problematic: “Independence Day”. So the goal is to make yourself independent. But independent from whom, exactly?

At first glance there is a slightly nationalist spin here: the wish to no longer have to rely on foreign/non-European providers. But in my opinion that would be an unfair reading of DI.Day. A lot of it is also about open source and all that, so generally about being independent in some way. But here’s a heretical question: Is that even possible? Do we even really want that?

To exist as a human being means to be dependent. As children we depend on our parents and caregivers, later on our social environment, our job, the work that many strangers around us do so that things can run at all: We are never independent.

I would go even further. The way we live within dependencies is what makes us strong. Certain kinds of mutual dependency are the basis of solidarity and social cohesion. Understanding that we all need one another is the first step towards understanding how much we all owe each other a good life.

That does not mean that all dependencies are equal: I am a fan of thinking more consciously about dependencies and getting rid of certain ones wherever possible. But that must not lead to chasing the liberal chimera of the completely independent individual who operates rationally on the market or whatever.

A better framing, in my opinion, would be that of embeddedness. Being human means being embedded in relationships and dependencies. And the task is to shape these as well as possible, to reduce specific dependencies and replace them with others that are fairer, more humane, more respectful.

We are never independent. But that does not mean that we should accept certain dependencies. A good life is a life within good social connections. Good dependencies, that is.

tante
37 seconds ago
"Wir sind niemals unabhängig. Aber das heißt nicht, dass wir bestimmte Abhängigkeiten akzeptieren sollten. Ein gutes Leben ist ein Leben in guten sozialen Verbindungen. Guten Abhängigkeiten halt." (zum DI.day)
Berlin/Germany

AI Bubble



This cartoon is drawn by new guest artist Jamie Sale, who did a terrific job.


TRANSCRIPT OF CARTOON

This cartoon has four panels. Each of the panels shows a businessman in a suit grinning as he speaks to us.

PANEL 1

A close up of a businessman grinning. In the background, a bright blue sky with fluffy clouds.

MAN: A.I. is the defining tech of our time! Microsoft and Amazon and Facebook and Google have spent almost a trillion dollars on A.I.!

PANEL 2

The camera has pulled back a little. We can see the man is holding a bubble blower, bubbles streaming from it.

MAN: Has A.I. made a profit? Not yet, but… Someday we’ll figure out something A.I. can do that actually makes money! It definitely might could happen!

PANEL 3

The man continues grinning, pumping his fist, as the air around him turns gray and forbidding and the bubbles stream out.

MAN: In the meantime, we have to prepare! By spending more billions building more A.I. data centers so we can spend trillions more so that someday A.I. can do… Um…

PANEL 4

We can now see that the man is talking to a huge bubble floating in the air. The bubble has been packed full of ordinary-looking people, shoved in like sardines in a can. They look panicked and unhappy.

MAN: Anyway, A.I. is certainly possibly maybe not going to pop and take down the whole economy! You’ve got nothing to worry about!

CHICKEN FAT WATCH

“Chicken fat” is old-fashioned cartoonist lingo for little extras in the art.

Panel 2 – In a tiny window in a cloud is a tiny, teeny silhouette of a spy with binoculars.

Panel 3 – One of the bubbles has a mouse in it.

Panel 4 – One of the bubbles has a “for rent” sign.


The A.I. Bubble | Patreon

tante
2 hours ago
The AI Bubble
Berlin/Germany

Hiding behind translations


After a lot of turmoil, with their most vocal user base protesting how Mozilla keeps pushing “AI” into the browser in many weird ways, they have now released the “AI Killswitch” they have been talking about for a while.

Which is good. Those features should have been “opt-in” from the start, and it’s kinda weird how users of a browser that frames itself as the resistance against “Big Tech” have to fight it tooth and nail to keep it from pushing Big Tech’s vision of a slop future onto them. But I digress.

Mozilla’s blog post reminded me of why I always put “AI” in scare quotes: I do not think that “AI” means any specific or defined technology or type of artifact; it is mostly an empty signifier. It means whatever you want it to mean at any given point. An LLM. An Excel macro. A bunch of people in a call center in India. A bunch of slides in a slide deck full of false promises.

(If “AI” means anything, it means the assignment of agency to a supposedly existing piece of technology. So, mostly, a disenfranchisement of human beings.)

Mozilla outlines the different kinds of features (all called “AI”) that the kill switch allows you to disable:

Screenshot of the "AI Killswitch" post on the mozilla website. Text: "At launch, AI controls let you manage these features individually: Translations, which help you browse the web in your preferred language. Alt text in PDFs, which add accessibility descriptions to images in PDF pages. AI-enhanced tab grouping, which suggests related tabs and group names. Link previews, which show key points before you open a link. AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral."

And this made me think. Because these features are very much not the same.

Sure, all of them might, technology-wise, be built on similar neural-network foundations, but from the user’s perspective they are qualitatively different.

“Translations” and even “Alt text in PDFs” are basically accessibility features: They try to empower you, the user, to access information you otherwise might not be able to, at the cost of probably getting a low-quality version of it. I think there is a good case to be made for integrating that kind of functionality into a browser: Browsers are tools for information retrieval. (Let’s leave aside for a second the question of whether it is possible to build these systems on LLMs or similar models ethically. Even though I don’t believe it is possible without exploiting the work of many Internet users against their consent.) But is that “AI”? The “wonderful” future of the “agentic web” and all the retro-futurist leanings about smart fridges that it entails? It feels smaller, less grandiose. As I said, it’s a feature that in general makes sense. A button you can click to get some version of the web page in front of you that you otherwise wouldn’t have had. And it saves you from having to paste the URL into one of the many translation engines we’ve had on the web for decades.

Let’s jump to the last point: “AI Chatbot in the Sidebar”. Not much I need to tell you there; that is what many people will call “AI” and “intelligence”, because the biggest marketing campaign ever has turned a stochastic word generator into the avatar of what many people consider intelligence. There are very different statistics about how popular these things actually are and how much people really use them, but let’s even say that these things are very popular and used a lot (which I am not convinced of, looking at how people at work or in my social circles use “AI” – if they do at all): This is not a lot of work for Mozilla. Extensions that integrate other tools into the Firefox sidebar have existed for a long, long time. This is just another one of those, just way more insecure than what’s usually going on there.

But “AI-enhanced tab grouping” and “AI link previews” are a bit of a different beast IMO. They are more deeply integrated into the browser itself and want to shape more of how you use it in general. But I wonder: who actually uses that kind of stuff? Who uses tab groups? It’s a very advanced feature that’s also not very discoverable. I use it, and I know a few very technical people who do, but most do not even know of its existence. And sure, you can try to make it a bit easier to slap a label on a tab group, but is that worth the effort? For that small a user base? I also ran a quick poll on Mastodon (a very tech-savvy and -interested crowd) asking how many people even know that “AI link previews” exist, and more than 60% had no idea.

So of the three “AI-ish” features beyond the accessibility ones, one is basically just an embedded external tool, while two are things Mozilla actually works on, doing product development and implementation itself. But those are features that feel like they only target very small groups. And not in a positive “this is a marginalized group that we are trying to support” way but in an “only a few power users might even be aware that this exists” kind of way.

Now, features sometimes grow and take time to find an audience. I still remember how long it took for people to embrace tabs in browsers, for example. (And many people still have horrible workflows with them, where the tab bar just grows and grows with the same tabs; if I have to watch some people use a browser, I need a few rounds of therapy afterwards.)

But I wondered why all these things are being summarized as “AI” when they are so radically different. Firefox having a sidebar that allows you to interact with Mastodon does not make Firefox a “Fediverse browser”, for example. So why does a chatbot sidebar get to define what the browser is?

Of course it’s a bit of marketing. Mozilla hasn’t for a long time been able to stand firm against hype and focus on normal engineering and development. It’s FOMO to a degree.

But it also reminded me of the way that conversations with Mozilla about their “AI” focus keep going: Whether it’s on Mastodon or Reddit or in some other very Firefoxy, open-source-aligned community, the polls always show that users predominantly do not want Mozilla to focus on “AI” but on improving the browser, on picking up the policy work that they dropped a few years ago, on fighting for the open web and the people living in it. The Mozilla response is usually: But the users want “AI”.

But do they? And if they do, what “AI”?

Do most users really ask Mozilla to build an “AI” label generator for a feature they don’t even know exists? Is that what their data shows?

Or is it that some people use the built-in translations, and by labeling those “AI” Mozilla claims that people really want “AI link previews”?

This is another example of the term “AI” hiding more than it explains. But it is also a pattern I see more and more, where “AI” companies point at one specific feature people actually might use (for better or worse) and use that to legitimize all kinds of other shit that has nothing to do with it except maybe implementation details.

If we want to have useful conversations about systems, features, their uses and their impacts we should just drop the “AI” label. It’s not useful, it’s only poisoning discourse and making us all dumber – even if we do not use chatbots. (But double if you do.)

tante
3 hours ago
The term "AI" is worse than useless because it hides more than it explains. Mozilla Firefox's "AI killswitch" shows how
Berlin/Germany

The job losses are real — but the AI excuse is fake


Both of these statements are true:

  1. Across the whole US economy, there’s not really a visible effect of AI on hiring and job mix;
  2. Some sectors are absolutely devastated directly by AI.

But also:

  1. Nobody cares if it was technically AI or not that took their job;
  2. The wider economy is visibly screwed already.

Even the most mainstream financial press is starting to admit that claiming your layoffs are AI at work is a fake excuse to sound good to investors.

Here’s a headline from Fortune: “AI layoffs are looking more and more like corporate fiction that’s masking a darker reality, Oxford Economics suggests”. That darker reality is that the economy is already screwed. But we’ll get to that: [Fortune, archive]

The primary motivation for this rebranding of job cuts appears to be investor relations. The report notes that attributing staff reductions to AI adoption “conveys a more positive message to investors” than admitting to traditional business failures, such as weak consumer demand or “excessive hiring in the past.” By framing layoffs as a technological pivot, companies can present themselves as forward-thinking innovators rather than businesses struggling with cyclical downturns.

… When asked about the supposed link between AI and layoffs, Cappelli urged people to look closely at announcements. “The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping. And they’re saying it because that’s what they think investors want to hear.”

A report from Yale’s Budget Lab says there isn’t evidence of economy-wide effects from AI: [FT, archive]

The labour market doesn’t feel great, so it feels correct that AI is taking people’s jobs. But we’ve looked at this many, many different ways, and we really cannot find any sign that this is happening.

Broadly, across economic sectors, AI isn’t visibly affecting the job market. But it’s a good layoff excuse: [CNBC]

Stephany said there isn’t much evidence from his research that shows large levels of technological unemployment due to AI.

“Economists call this structural unemployment, so the pie of work is not big enough for everybody anymore and so people will lose jobs definitely because of AI, I don’t think that this is happening on a mass scale,” he said.

So if that’s true, why are there all these layoffs? It’s broader and long-running economic problems. You can start at the end of the zero interest rate policy.

From 2007 to 2009, we had the global financial crisis. The US economy was so damaged by the crash, the Fed lowered interest rates to near-zero for most of a decade just to keep the money moving. You could borrow money almost free! So companies went as big as they could on the free money. They over-hired just in case they needed the workers.

Then in 2022, inflation hit and the Fed put interest rates back up. Suddenly, things were not going so great. Come to 2024–25, and companies are throwing out employees they hired prospectively like they’re surplus unsold stock.

So people are in fact losing their jobs. But don’t say “no-one is losing their jobs to AI.” That’s not actually true. Some sectors really have been devastated by AI specifically.

Brian Merchant at Blood in the Machine – which you should all read – has been hammering on this theme. He’s got an ongoing project, “AI Took My Job,” talking to workers who were indeed fired directly for AI.

Translators in particular — businesses think bad machine translation is good enough. Duolingo’s AI-generated content quality is so bad that a lot of paying customers have left. Freelance translators can hardly find work these days, specifically because chatbots mash out translations. [Blood in the Machine]

Content moderators are an AI target too, because companies really do not care about the job at all. They’ll do it with any old rubbish, and now they are! [Blood in the Machine]

As for the vibe-coding push and computer software really just not working any more: I don’t have a smoking-gun link, but it’s clear that quality is job number 55 or so.

MBAs loathe employees. Any employees. They despise you. AI promises the one thing MBAs want more than anything — firing people — so they’re all-in. That it doesn’t work does not matter.

MBAs also assume any job they don’t understand must be simple, so they put out sloppy trash that any consumer can see is obviously terrible. But the product won’t lose its customer base for the next year — probably — so that’s a problem for several quarters from now.

Sometimes managers forget AI’s just the excuse, and they fire entry level workers assuming AI can replace them — when it can’t. Sometimes they realise they shot themselves in the foot. [Register]

What will happen is that companies will realise the bots can’t do the jobs. But this will take a year or two. Then the companies that survive will rehire people. They’ll try to do it at lower pay, of course. [Register]

Liz Fong-Jones at Honeycomb, and formerly of Google, says: [Bluesky]

AI today is literally not capable of replacing the senior engineers they are laying off. corps are in fact getting less done, but they’re banking on making an example of enough people that survivors put their heads down and help them implement AI in exchange for keeping their jobs … for now.

When the AI bubble pops — which I’m still guessing at next year — that will mark the start of Great Depression 2. It’s going to be bad.

But it’s bad already — without a few huge tech companies swapping the same 10 billion dollars around, the economic numbers would officially be in recession.

But the real economy where you and I live is already in trouble. The numbers in the real US economy are so bad, President Trump fired the US Commissioner of Labor Statistics because of a jobs report he didn’t like. That’s how you know it’s going great! [BBC]

For now, it’s mutual aid time. If you’re still in work, send some money to the people who aren’t. They need it.

And, of course, we must mention our good friends at Stop Gen AI, who help redistribute money to those affected by AI-related, or AI-excuse, job cuts. Go support Stop Gen AI. [Stop Gen AI]


It’s pledge week at Pivot to AI! If you enjoyed this post, and our other posts, please do put $5 into the Patreon. It helps us keep Pivot coming out daily. Thank you all.

tante
3 days ago
"“AI layoffs are looking more and more like corporate fiction that’s masking a darker reality, Oxford Economics suggests”. That darker reality is that the economy is already screwed."
Berlin/Germany

Restaurant VAT: Data analysis shows hardly any savings for diners

Since January, restaurateurs have been paying significantly less VAT. Yet there have hardly been any price cuts, as the SPIEGEL data analysis shows. Some chain locations have even made their dishes more expensive.

tante
5 days ago
Oh, so the VAT cut for the restaurant sector wasn’t passed on to customers at all? Who could possibly have seen that coming?
Berlin/Germany

Cursor lies about vibe-coding a web browser with AI


Here’s an awesome tweet from Michael Truell, CEO of Anysphere, who make vibe code editor Cursor: [Twitter, archive]

We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.

It’s 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.

What an achievement! Three million lines is a large application. And Cursor built it completely autonomously! Allegedly.

This is Cursor using a framework with multiple chatbot agents. Imagine a slightly saner version of Steve Yegge’s Gas Town. The bot herder, Wilson Lin from Cursor, says: [Cursor, archive]

The agents ran for close to a week, writing over 1 million lines of code across 1,000 files.

The vibe code blogs, and some excessively gullible tech press, were bowled over! [e.g., Fortune]

So there’s only a minor problem with that Cursor announcement — every claim in it is a lie.

Cursor made one fatal mistake: they put the code up where other developers could see it.

The browser’s dependencies include html5ever, a component of the experimental Servo web browser; cssparser, another component of Servo; and a JavaScript interpreter that was already coded beforehand by Wilson Lin and included in the project by hand — not written from scratch by AI agents for this project. And a pile of other components that were absolutely not written from scratch either. [GitHub, archive]

It’s not clear how many of the long list of dependencies are used. Where the code doesn’t include other projects as dependencies, there’s a lot of what looks very like it was copied from existing code. Original theft! Do not steal!

None of this is written from scratch. The bot is putting together existing parts like Lego.
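To make concrete how much of the heavy lifting those dependencies do, here is a minimal sketch of my own, not code from the Cursor project, of what “HTML parsing” and “CSS parsing” amount to once html5ever and cssparser are in Cargo.toml: a handful of calls into Servo’s components hands you a parsed DOM and a CSS token stream.

// Illustrative sketch only (my code, not Cursor's): parsing HTML and CSS by
// leaning on the Servo crates html5ever and cssparser. Cargo.toml would need
// html5ever, markup5ever_rcdom and cssparser; exact versions are assumptions.

use cssparser::{Parser, ParserInput};
use html5ever::parse_document;
use html5ever::tendril::TendrilSink;
use markup5ever_rcdom::RcDom;

fn main() {
    // "HTML parsing": html5ever builds the whole DOM tree for us.
    let html = "<html><body><p>Hello, world</p></body></html>";
    let dom = parse_document(RcDom::default(), Default::default())
        .from_utf8()
        .read_from(&mut html.as_bytes())
        .expect("html5ever does the parsing, not the agent");
    println!(
        "document node has {} children",
        dom.document.children.borrow().len()
    );

    // "CSS parsing": cssparser tokenizes the stylesheet; we just count the
    // top-level tokens it hands back.
    let css = "p { color: red; margin: 1em; }";
    let mut input = ParserInput::new(css);
    let mut parser = Parser::new(&mut input);
    let mut tokens = 0;
    while parser.next().is_ok() {
        tokens += 1;
    }
    println!("cssparser produced {} top-level tokens", tokens);
}

Layout, text shaping and paint are harder, of course, but the parsing layers the announcement brags about are largely other people’s crates doing the work.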

The supplied code for Cursor’s browser didn’t compile. When someone finally got it to work, it did indeed have rendering issues! The same rendering issues Servo has. But Servo is entirely in Rust, so that’s where GPT went looking for some Rust browser code it could use.

Servo maintainer Gregory Terzian was not impressed: [Hacker News]

The actual code is worse; I can only describe it as a tangle of spaghetti. As a Browser expert I can’t make much, if anything, out of it. In comparison, when I look at code in Ladybird, a project I am not involved in, I can instantly find my way around the code because I know the web specs.

What Lin and Cursor achieved was to show that an AI agent can generate millions of lines of code that’s lifted from other projects, and that don’t compile, let alone work.

Cursor’s fake browser is a marketing fraud to sell you vibe coding. This is standard. The guys selling you this trash will assure you it’s the best thing there’s ever been for coding. They will tell you impossible things that fail to check out when you spend a minute looking into them.

The vibe bros’ evidence for the power of vibe coding is “I feel faster.” They skip over the bit where it doesn’t work. They’ll make a big headline claim and try to walk it back when they get caught.

The target audience is CEOs who don’t realise these people are, to a man, brazen liars, and venture capitalists who want some hype to sell.

Never believe the hype. Always demand numbers on how effective vibe coding is. They all turn tail when you ask for checkable numbers.


It’s pledge week at Pivot to AI! If you enjoyed this post, and our other posts, please do put $5 into the Patreon. It helps us keep Pivot coming out daily. Thank you all.

tante
6 days ago
"What Lin and Cursor achieved was to show that an AI agent can generate millions of lines of code that’s lifted from other projects, and that don’t compile, let alone work."
Berlin/Germany