Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

Cursor lies about vibe-coding a web browser with AI


Here’s an awesome tweet from Michael Truell, CEO of Anysphere, the company that makes the vibe-coding editor Cursor: [Twitter, archive]

We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.

It’s 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.

What an achievement! Three million lines is a large application. And Cursor built it completely autonomously! Allegedly.

This is Cursor using a framework with multiple chatbot agents. Imagine a slightly saner version of Steve Yegge’s Gas Town. The bot herder, Wilson Lin from Cursor, says: [Cursor, archive]

The agents ran for close to a week, writing over 1 million lines of code across 1,000 files.

The vibe code blogs, and some excessively gullible tech press, were bowled over! [e.g., Fortune]

So there’s only a minor problem with that Cursor announcement — every claim in it is a lie.

Cursor made one fatal mistake: they put the code up where other developers could see it.

The browser’s dependencies include html5ever, a component of the experimental Servo web browser; cssparser, another Servo component; and a JavaScript interpreter that Wilson Lin had written beforehand and included in the project by hand — not written from scratch by AI agents for this project. And a pile of other components that were absolutely not written from scratch either. [GitHub, archive]

It’s not clear how many of the long list of dependencies are used. Where the code doesn’t include other projects as dependencies, there’s a lot of what looks very like it was copied from existing code. Original theft! Do not steal!

None of this is written from scratch. The bot is putting together existing parts like Lego.
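For a sense of what “putting together existing parts” looks like in practice, here’s a hypothetical Cargo.toml fragment for a Rust project assembled this way (the crate names are the real Servo components named above; the version numbers are illustrative, not taken from Cursor’s actual repo):

    [dependencies]
    # HTML parsing: html5ever is the Servo project's HTML parser
    html5ever = "0.27"
    # CSS parsing: cssparser is the Servo project's CSS tokenizer/parser
    cssparser = "0.34"

Two lines in a manifest and the hard parts of parsing arrive pre-written. That’s perfectly normal Rust development; the problem is calling the result “from scratch.”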

The supplied code for Cursor’s browser didn’t compile. When someone finally got it to work, it did indeed have rendering issues! The same rendering issues Servo has. But Servo is entirely in Rust, so that’s where GPT went looking for some Rust browser code it could use.

Servo maintainer Gregory Terzian was not impressed: [Hacker News]

The actual code is worse; I can only describe it as a tangle of spaghetti. As a Browser expert I can’t make much, if anything, out of it. In comparison, when I look at code in Ladybird, a project I am not involved in, I can instantly find my way around the code because I know the web specs.

What Lin and Cursor achieved was to show that an AI agent can generate millions of lines of code that are lifted from other projects and that don’t compile, let alone work.

Cursor’s fake browser is a marketing fraud to sell you vibe coding. This is standard. The guys selling you this trash will assure you it’s the best thing there’s ever been for coding. They will tell you impossible things that fail to check out when you spend a minute looking into them.

The vibe bros’ evidence for the power of vibe coding is “I feel faster.” They skip over the bit where it doesn’t work. They’ll make a big headline claim and try to walk it back when they get caught.

The target audience is CEOs who don’t realise these people are, to a man, brazen liars, and venture capitalists who want some hype to sell.

Never believe the hype. Always demand numbers on how effective vibe coding is. They all turn tail when you ask for checkable numbers.


It’s pledge week at Pivot to AI! If you enjoyed this post, and our other posts, please do put $5 into the Patreon. It helps us keep Pivot coming out daily. Thank you all.

tante
6 hours ago
"What Lin and Cursor achieved was to show that an AI agent can generate millions of lines of code that’s lifted from other projects, and that don’t compile, let alone work."

Insurers don’t want to cover AI


AI everywhere is — as you know — the future of business and civilisation! Except your company may have issues insuring against AI causing problems.

AIG, Great American, WR Berkley, and other insurers are asking permission from regulators to just … not cover AI. [FT, archive]

Berkley wants to exclude: [National Law Review, 2025]

any actual or alleged use, deployment, or development of Artificial Intelligence by any person or entity.

The list includes almost any way in which a business could touch AI. Berkley’s definition of AI seems to mean machine learning or generative AI:

any machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, including, without limitation, any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content.

You can buy AI-specific coverage. Armilla offer AI coverage that’s remarkably difficult to actually claim on. But you can tell your boss you got coverage!

OpenAI and Anthropic have business insurance — but they both have multiple huge lawsuits in progress, which insurers don’t want to cover.

OpenAI is looking at setting investor funds aside in a special liability fund. Anthropic is paying its copyright settlement out of general company funds. [FT, archive]

If you want to pursue the fabulous AI future — have fun out there! But it’ll be your own backside on the line.

tante
1 day ago
Of course insurers won't cover your genAI experiments. That's just too much risk.
HarlandCorbin
1 day ago
Good! Let the companies pushing this crap assume their own risk!
tpbrisco
1 day ago
You know that we taxpayers will wind up footing the cost, right?

Winning the wrong game


Studies upon studies show that actual, measurable productivity gains from “AI” (which these days basically means chatbots) are really hard to come by, and that “workslop” (the extra work created for the rest of the organization when one person using “AI” lowers the quality of their work) eats up a lot of what might have been gained. Many critics feel somewhat relieved: when the main narrative of “AI” (massively increased productivity) fails, this surely marks the end of this bubble, and we might be able to get back to talking about actual problems.

I am not so sure.

Not because I think that some tweak to some LLM will suddenly make them that much more reliable or useful, but because it was never really about productivity.

It’s something I alluded to in my “AI” talk at 2023’s Re:Publica: It doesn’t matter how good these systems are in reality, because that’s not what your boss cares about.

“AI” is a tool to disenfranchise labor.

That’s the job. If “AI” is actually more expensive than paying actual people actual wages, that’s still a good investment for capital, because it is about breaking up the structures, networks and organizations that help workers organize and fight for labor standards and fairer wages.

When Coca-Cola creates another bad ad using “AI”, the fact that it’s garbage and expensive isn’t the point. It’s all just an investment into no longer needing to pay people for their expertise, work and time.

In December 2024 Ali Alkhatib wrote:

“I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.”

This is the macro-level view. It’s about Google, Microsoft and Amazon digging themselves even deeper into not only your personal life but also every economic workflow and process, in order to charge rent on that established dependency.

But on a smaller (and a bit short-sighted) level, CEOs are looking towards “AI” not so much as an actual replacement for labor but as leverage to push down costs in the long term by crushing labor power.

So showing that productivity doesn’t measurably increase is winning the wrong game. “AI” is not an attack on our capabilities, skills or experience; it’s an attack on our collective and individual power.

tante
1 day ago
"If “AI” is actually more expensive than paying actual people actual wages that’s still a good investment for capital because it is about breaking up the structures, networks and organizations that help workers organize and fight for labor standards and fairer wages."

Generative AI is an expensive edging machine


Huffing Gas Town, Pt. 2: If I Could, I Would Download Claude Into A Furby And Beat It To Death With A Hammer

Years ago, I decided I was going to cover the world of cryptocurrency with a fairly open mind. If you are part of an emerging tech industry, you should be very worried when I start doing this lol. Because it only took me a few weeks of using crypto, talking to people who work in the industry, and covering the daily developments of that world to end up with some very specific questions. And the answer to those questions boiled down to crypto being a technology that was, on some level, deeply evil or deeply stupid. Depending on how in on the scam you are.

While I don’t think AI, specifically the generative kind, is a one-to-one with crypto, it has one important similarity: it only succeeds if its makers can figure out a way to force the entire world to use it. I think there’s a word for that!

(If you want, tell me the kind of dystopia you’re trying to create and I can help build it for you.)

And so I have tried over the last few years to thread a somewhat reasonable middle ground in my coverage of AI. Instead of immediately throwing up my hands and saying, “this shit sucks ass,” I’ve continually tried to find some kind of use for it. I’ve ordered groceries with it, tried to use it to troubleshoot technical problems and to design a better business plan for Garbage Day, and used it as a personal coach, a therapist, and a video editor. And I can confidently say it has failed every time. And I’ve come to realize that it fails in the exact same way every single time. I’m going to call this the AI imagination gap.

I don’t think I’m more creative than the average person, but I can honestly say I’ve been making something basically my entire life. As a teenager I wrote short stories, played in bands, drew cartoons for the school paper, and did improv (#millennial), and I’ve been lucky enough to be able to put those interests to use either personally or professionally in some way ever since. If I’m not writing, I’m working on music or standup, if I’m not doing those things, I’m podcasting (it counts!), or cooking, or some other weird little hobby I’m noodling on. Jack of all trades, etc.

Every time I’ve tried to involve AI in one of my creative pursuits it has spit out the exact same level of meh. No matter the model, no matter the project, it simply cannot match what I have in my head. Which would be fine, but it absolutely cannot match the fun of making the imperfect version of that idea that I may have made on my own either. Instead, it simulates the act of brainstorming or creative exploration, turning it into a predatory pay-for-play process that, every single time, spits out deeply mediocre garbage. It charges you for the thrill of feeling like you’re building or making something and, just like a casino — or online dating, or pornography, or TikTok — cares more about that monetizable loop of engagement, of progress, than it does about the finished product. What I’m saying is generative AI is a deeply expensive edging machine, but for your life.

My breaking point with AI started a few months ago, after I spent a week with ChatGPT trying to build a synth setup that it assured me over and over again was possible. Only on the third or fourth day of working through the problem did it suddenly admit that the core idea was never going to actually work. Which, from a business standpoint, is fine for OpenAI, of course. It kept me talking to it for hours. And, similarly, last night, after another fruitless round of vibe coding an app with Claude, I kept pressing it over and over to think of a better solution to a problem I’m having. I knew, in my bones, that it was missing a more obvious, easier solution, and after the fifth time I reframed the problem it actually got mad at me!

(You can’t be talking to me like that, Claude.)

If we are to assume that this imagination gap, this life edging, this progress simulator, is a feature and not a bug — and there’s no reason not to, this is how every platform makes money — then the “AI revolution” suddenly starts to feel much more insidious. It is not a revolution in computing, but a revolution in accepting lower standards. I had a similar moment of clarity watching a panel at Bitcoin Miami in 2022, where the speakers started waxing philosophical on what they either did or did not realize was a world run on permanent, automated debt slavery. In the same way, if AI succeeds, we will have to live in a world where the joy of making something has turned into something you have to pay for. And if it really succeeds, you won’t even care that what you’re using an AI to make is total dog shit. Most frightening of all, these AI companies already don’t care about how dangerous a world like this would be.

OpenAI head Sam Altman is having another one of his spats with Elon Musk this week. And responding to a post Musk made highlighting deaths related to ChatGPT-psychosis, Altman wrote, “Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right.” Continuing in his cutest widdle tech CEO voice, “It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

It’s hard, guys. All OpenAI wants is to make a single piece of software that can swallow the entire internet, and devour the daily machinations of our lives, and make us pay to interface with our souls, and worm its way into the lives of everyone on Earth. They can’t be blamed when it starts killing a few of its most vulnerable users! And they certainly can’t be blamed for not understanding that all of this is connected. Learning, creativity, self-discovery, pride in our accomplishments, that’s what makes us human. And if we lose that — or worse, give it up willingly — we lose everything.


Subscribe to Premium to read the rest.

Become a paying subscriber of Premium to get access to this post and other subscriber-only content.

A subscription gets you

  • Paywalled weekend issue
  • Monthly Garbage Intelligence report
  • Discord access
  • Discounts on merch and events
tante
6 days ago
"What I’m saying is generative AI is a deeply expensive edging machine, but for your life."

Garbage Day nails it.

Artist


The post Artist appeared first on The Perry Bible Fellowship.

tante
6 days ago
PBF with a banger again
jlvanderzwan
5 days ago
Nicholas Gurewitch don't miss

The essence of (AI) hype: an anaesthetic for the mind

AI will save us all - or destroy us after all? How hype debates like this cloud our brains. An IMHO by Jürgen Geuter (AI, IMHO)
tante
8 days ago
For Golem, I've been thinking about "hype" and how its "EVERYTHING IS SUPER AMAZING" narrative makes it harder for us to think about technologies and their use