Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

Will A.I. writing ever be good?


Greetings from Read Max HQ! In today’s issue, responding to some good recent human-generated writing on A.I.-generated writing.

A reminder: Read Max is a subscription newsletter whose continued existence depends on the support and generosity of paying readers. If you find the commentary at all enlightening, entertaining, or otherwise important to your weekly life, please consider upgrading to a paid subscription, an act that will allow you to enjoy Read Max’s weekly recommendations for overlooked books, movies, and music, and give you a sense of pride and accomplishment in supporting independent journalism, such as it is.


Will A.I. writing ever be good?

Sam Kriss has an excellent new piece in The New York Times Magazine examining the actual style of “A.I. voice.” It’s a great rundown of the many formal quirks of A.I.-generated text, and I’m glad that Kriss was able to appreciate the strangeness of even the ever-more-coherent writing produced by frontier large language models, which is “marked by a whole complex of frankly bizarre rhetorical features”:

Read any amount of A.I.-generated fiction, you’ll instantly notice an entirely different vocabulary. You’ll notice, for instance, that A.I.s are absolutely obsessed with ghosts. In machine-written fiction, everything is spectral. Everything is a shadow, or a memory, or a whisper. They also love quietness. For no obvious reason, and often against the logic of a narrative, they will describe things as being quiet, or softly humming.

This year, OpenAI unveiled a new model of ChatGPT that was, it said, “good at creative writing.” As evidence, the company’s chief executive, Sam Altman, presented a short story it wrote. In his prompt, he asked for a “metafictional literary short story about A.I. and grief.” The story it produced was about 1,100 words long; seven of those words were “quiet,” “hum,” “humming,” “echo” (twice!), “liminal” and “ghosts.” That new model was an early version of ChatGPT-5. When I asked it to write a story about a party, which is a traditionally loud environment, it started describing “the soft hum of distant conversation,” the “trees outside whispering secrets” and a “quiet gap within the noise.” When I asked it to write an evocative and moving essay about pebbles, it said that pebbles “carry the ghosts of the boulders they were” and exist “in a quiet space between the earth and the sea.” Over 759 words, the word “quiet” appeared 10 times. When I asked it to write a science-fiction story, it featured a data-thief protagonist called, inevitably, Kael, who “wasn’t just good—he was a phantom,” alongside a love interest called Echo and a rogue A.I. called the Ghost Code.

Even as L.L.M.s get better at producing fluid and plausibly human text, these persistent stylistic tics remain interestingly abrasive--in a single short answer, presented to you in a vacuum, A.I. text is as smooth as can be, but when you’re confronted with an overwhelming amount of it, the strangeness that fine-tuning was supposed to smooth away really begins to reassert itself. Kriss argues (in part) that one reason A.I. writing remains so (in aggregate) weird and waffly is that L.L.M.s “can’t ever actually experience the world”:

This puts a lot of the best writing techniques out of reach. Early in “To the Lighthouse,” Virginia Woolf describes one of her characters looking out over the coast of a Scottish island: “The great plateful of blue water was before her.” I love this image. A.I. could never have written it. No A.I. has ever stood over a huge windswept view all laid out for its pleasure, or sat down hungrily to a great heap of food. They will never be able to understand the small, strange way in which these two experiences are the same. Everything they know about the world comes to them through statistical correlations within large quantities of words.

A.I. does still try to work sensory language into its writing, presumably because it correlates with good prose. But without any anchor in the real world, all of its sensory language ends up getting attached to the immaterial. In Sam Altman’s metafiction about grief, Thursday is a “liminal day that tastes of almost-Friday.” Grief also has a taste. Sorrow tastes of metal. Emotions are “draped over sentences.” Mourning is colored blue. […] This is a cheap literary effect when humans do it, but A.I.s can’t really write any other way. All they can do is pile concepts on top of one another until they collapse.

But I wonder if it’s true that the lack of a “world model” is what pushes L.L.M. text toward metaphorical drivel: It seems just as likely that chatbots over-rely on this kind of sensory-immaterial conjunction because, as Kriss says, it’s a “cheap literary effect” that impresses people passing superficially over a text--exactly the kind of fake-deep crowd-pleaser for which L.L.M. output is being fine-tuned.

These satisfyingly plausible folk-technical explanations come up often when people are trying to describe the limitations of A.I.-generated writing. One well-rehearsed account blames A.I.’s stylistically uninteresting output on next-token prediction: Large language models, this argument goes, intrinsically cannot generate truly great writing, or truly creative writing, because they’re always following paths of least resistance, regurgitating the most familiar and most probable formulations. This is a satisfying argument, not least because it’s easily comprehensible, and for all we know it’s even a true one.
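To make that folk-technical account concrete: here is a toy sketch, in Python, of temperature sampling over next-token scores. The four-word vocabulary and the scores are invented for illustration, not pulled from any real model; the point is only that low-temperature sampling almost always takes the path of least resistance.

import math
import random

vocab = ["quiet", "liminal", "ghosts", "plateful"]
logits = [3.0, 2.0, 1.5, 0.2]  # made-up model scores: "quiet" is most probable

def sample(temperature):
    # Softmax with temperature: low values sharpen toward the top token.
    weights = [math.exp(score / temperature) for score in logits]
    return random.choices(vocab, weights=weights)[0]

print(sample(0.2))  # almost always "quiet" -- the most familiar formulation
print(sample(1.5))  # flatter odds; even "plateful" occasionally gets picked

The real models are doing this over vocabularies of tens of thousands of tokens rather than four, but the mechanics are the same.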

But we don’t actually know that it’s right, because we’ve never really tried to make an L.L.M. that’s great at writing. I appreciated Nathan Lambert’s recent piece at Interconnects, “Why AI writing is mid,” which argues that the main roadblocks to higher-quality writing are as much economic as technical: There simply isn’t enough demand for formally ambitious (or even particularly memorable) writing to be worth the expense or resources necessary to train a model to produce it.

Some model makers care a bit about this. When a new model drops and people rave about its creative writing ability, such as Moonshot AI’s Kimi K2 line of models, I do think the team put careful work into the data or training pipelines. The problem is that no model provider is remotely ready to sacrifice core abilities of the model such as math and coding in pursuit of meaningfully better writing models.

There are no market incentives to create this model — all the money in AI is elsewhere, and writing isn’t a particularly lucrative market to disrupt. An example is GPT-4.5, which was by all reports a rather light fine-tune, but one that produced slightly better prose. It was shut down almost immediately after its launch because it was too slow and, at its size, economically unviable.

As Lambert points out, much of what we dislike about A.I.-generated text from a formal perspective--it’s generally cautious, inoffensive, anodyne, predictable, neutral, unmemorable and goes down smooth--is a product not of some inherent L.L.M. “voice” but of the training and fine-tuning processes imposed by A.I. companies, which are incentivized to make their chatbots sound as unobjectionable and bland as possible. No one is out there actually trying to create Joycebot (or whatever), and for good reason: The saga of Microsoft’s Bing and its “alter-ego,” Sydney, is in a broad sense the best fictional story yet produced by an L.L.M. chatbot, but it was also an unmitigated disaster for the company.

To the extent that their output is pushed into “mid-ness” by economic circumstance, L.L.M.s are not unprecedented. In a real sense, “why A.I. is writing mid” and “why most professional writing is mid” have the same explanation: “Good writing,” whether authored wholly by humans or generated by an L.L.M., requires capacious resources (whether in time and education and editing or in compute and training and fine-tuning) to create an idiosyncratic (and likely polarizing) voice for which there usually isn’t economically sufficient demand.1

I sometimes think that it’s more helpful to think about large language models as equivalent not to individual writers but to whole systems or institutions of which writing is an end-product. A given L.L.M. is less akin to, say, a replacement-level magazine writer than it is to “the entire magazine industry at its peak,” if you imagine the magazine industry as a giant, complex, unpredictable machine for producing a wide variety of texts. Just as that industry, as a whole, was once able to generate text to a certain degree of predictability and at a relatively high floor of quality, to varying client specifications and structured by its own internal systems and incentives, so too do Claude and ChatGPT.2

I bring up magazines in particular as a point of comparison because I’ve been struck for a while by the similarity between the voice deployed in the latest generation of chatbots and what a friend calls “F.O.B. voice,” or the smooth, light, savvy, vaguely humorous tone that once reigned in magazine front-of-book sections:

In better times for the magazine industry, there was higher demand for a particular kind of glib (but not actually humorous), knowing (but not actually smart), fluid (but not actually stylish) text--what my friend Mahoney calls “F.O.B. voice,” for front of book, the pre-features section of a magazine for which, depending on the magazine, editors might end up cranking out 150-to-500-word nuggets of smooth blurb prose about new books, movies, news stories, gadgets, restaurants, or whatever.

F.O.B. voice is as a rule smooth and clichéd and often only semi-coherent, because it needs to be reproduced quickly and without much effort by overworked writers and editors on deadline. It’s also superficially impressive to most readers, thanks both to the packaging that surrounds it and to their standards for impressiveness, which are quite a bit lower than professionals’. For all these reasons, and additionally because ChatGPT has obviously been trained on archives of magazines written in F.O.B. voice, it’s unsurprising that it takes naturally to producing it.

“Mid,” as the magazine industry knew, and as L.L.M.s “know,” is a rewarding zone to be in: It’s what people find easiest to consume, and what advertisers feel most comfortable appearing adjacent to. Of course, the magazine industry generated more than just reams and reams of smooth placeholder text; it also produced New Journalism, the modern short story, “Eichmann in Jerusalem,” the Hillary planet, etc. But these were positive externalities, not inevitabilities, driven more by cultural prerogatives than by financial necessity. To get something similar from an L.L.M. would likely require a lot of not-necessarily-profitable groundwork.

1

Another way of thinking about it might be: A parallel timeline where an A.I. was pumping out great novels would be an improvement on our own, because it’d suggest that there was enough demand for genuinely great novels to make it worth training an L.L.M. to produce them.

2

Not to get too whatever about it, but it’s good to note that magazine articles (or, even more so, Hollywood movies) are the products of many humans operating within larger systems and frameworks. Do we think of those articles as “magazine industry-generated,” or major-studio movies as being “Hollywood-generated”? I’m not saying we should, necessarily, but I suspect that if and when A.I. is able to create great (or even non-slop) writing, we will come to think of it less as “A.I.-generated” and more as authored by the prompter, or the prompter in concert with the model creators at various levels.



tante, 11 hours ago:

"chatbots over-rely on this kind of sensory-immaterial conjunction because [it] impresses people passing superficially over a text--exactly the kind of fake-deep crowd-pleaser for which L.L.M. output is being fine-tuned."

Microsoft cuts AI sales targets in half after salespeople miss their quotas


Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that “the era of AI agents” had arrived.

The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.


tante, 4 days ago:

But if AI is so useful and everyone wants it, why does Microsoft have to cut its AI sales targets in half?

Spotify Haters Club


You know something that is absolutely worth considering? Planning an exit from the culture-destroying, weapons-investing, ICE-advertising platform known as Spotify, a place where music is seen only as a tool to destroy people's imaginations and flatten art into a homogenised, AI-friendly, beige churn of endless content to immiserate us all.

It's not that any of the other streaming services are good, but Spotify is almost certainly the worst. Do not trust it. Do not let it turn your enjoyment of music into stats and algorithmic playlists. Do not let it quietly funnel AI-slop into your daily mixes. Do not let it sort you into woefully inaccurate micro-demographics to better sell your data to advertisers (what are the chances it shares this arbitrary human-classification tech with its weapons-company interests to decide who’s allowed to not get bombed?). Do not willingly give it free advertising by sharing league tables of bands on your socials. Don't do it. It is appropriating all that is great about loving music, all that is great about being part of the communities that surround music, and weaponising it against us all! If you don't believe me, read this book and see if it changes your mind.

Also, Brian Merchant recently wrote a fairly comprehensive how-to guide for leaving Spotify, which is more practically useful than my contempt for them is. (Apologies for the Substack link... there's just no escaping these abhorrent tech companies, is there?)

A year or so ago I took the relatively nerdy option of building my own Plex server. I filled it with MP3s of the old CD collection I’ve carried around forever on hard drives, Bandcamp purchases, and songs foraged from various corners of the internet. It was the best music-related decision I have made in years.

Plex offers only the vaguest of stats really. Just 'top ten most listened to artists'. Last year that chart was topped by Lana Del Rey followed by Ryuichi Sakamoto. This year it turned out to be... exactly the same. Which at first felt strange because I felt like I had listened to a lot more new music than last year.

But then, like a not-particularly-profound-thing hitting me at a sensible speed, I realised both of these things could be true. My Lana playlist remains a go-to for many moments, and I still think she has written some all time bangers, but that playlist serves a particular function for me. It is not quite background listening (or 'lean back' listening as they'd call it internally at Spotify), but it also isn't exactly active listening. I guess it is wandering around or commuting music for when my brain is elsewhere.

I still love listening to music I know inside out, that has travelled with me through large parts of my life. But it doesn't follow that music I only listen to a few times or even just once cannot also be impactful. I only read most books once, only see most films once. And — always wary of nostalgia — no matter how hard I might try, I will not be able to hear music for the first time and have the same reaction to it I did hearing new music in my teens or twenties. I bring too much to the text. I have steeped myself in noise for decades. This relationship has become different. I am still learning how to lean into that. But clearly, for me at least, not trusting any new music discovery to corporate algorithms is a step in the right direction.

So, abandoning that one stat Plex offered, I made a filter to show a playlist of tracks that I had not heard before 2025 and that I had listened to at least once during this year, and it became a much more interesting selection. And so, with some light editorialising and removing things that I immediately decided were rubbish (I am looking directly at you, latest Taylor Swift record), here are some cool albums I actively listened to this year, as opposed to musical anaesthetic that I lazily wrapped around myself to block out the rising existential horror of existing in 2025. Bandcamp links where possible. Happy Spotify Wrapped Season!
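For the similarly nerdy, here is a rough sketch of that filter using the python-plexapi library. The server URL, token, section name, and the use of addedAt as a stand-in for "not heard before 2025" are all assumptions here (Plex doesn't record a first-listen date as such), so treat it as a starting point rather than a recipe.

from datetime import datetime
from plexapi.server import PlexServer

plex = PlexServer("http://localhost:32400", token="YOUR_PLEX_TOKEN")
music = plex.library.section("Music")
cutoff = datetime(2025, 1, 1)

new_to_me = [
    track for track in music.search(libtype="track")
    if (track.viewCount or 0) > 0                            # played at least once
    and track.lastViewedAt and track.lastViewedAt >= cutoff  # played this year
    and track.addedAt and track.addedAt >= cutoff            # and only arrived this year
]
plex.createPlaylist("New to me in 2025", items=new_to_me)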

SOME GOOD MUSIC

Ben Lukas Boysen - Alta Ripa // Just gorgeous synth work. Ben's stuff always sounds like being wrapped in analog silk.

C Reider - The Mending Battle // What 'computer music' ought to sound like in a world where computers haven't become mostly awful and terrible.

Calum Gunn - Eroder // Really, really good. Wrote about it HERE.

Carly Rae Jepsen - E•MO•TION // Somehow I had never heard the whole album before. It's great.

Clark - Steep Stims // Clark absolutely back on top form with microtonal weirdness and clanging bangers.

Deftones - Private Music // It's another Deftones album. You already know exactly what it sounds like and what it does.

Emptyset - Dissever // Another band who always sound reassuringly like themselves. Rarely do I listen to this kind of thing these days but nice to know it's still there.

Grails - Miracle Music // Wrote about this HERE. Didn't stick with me as much as I thought it might, but that's my fault rather than the record's.

Greet Death - Die in Love // Fantastic, heart-on-sleeve stuff inspired by all your 90's alt-rock favourites.

JISOO - AMORTAGE // I just love this. Wrote about it HERE.

Jungle Fatigue Vol. 1 // Will give you jungle fatigue. Two thumbs up.

Jungle Fatigue Vol. 3 // As above.

Kendu Bari - Drink For Your Machine // Some solid drum'n'bass production.

Ledley - Ledley // Curious contemporary jazz. Sort of a bit like if Squarepusher were a brass-centric jazz band?

Native Soul - Teenage Dreams // Can't remember how I stumbled across this. Electronic deep house from South Africa. Very good. It makes writing beats that people will want to dance to seem effortless.

Noneless - A Vow of Silence // Some really great glitch production but it is sometimes overshadowed by occasional dubstep tangents that veer a little too close to Skrillex for me to be able to gel with.

Papé Nziengui - Kadi Yombo // Lively folk (harp-based?) energy from Gabon. I bet this is great to see live.

Paul Jebanasam - mātr // A gorgeous bruise of a record. Full of noise fluttering on the edge of distortion.

Polygonia - Da Nao Tian Gong // Some pretty techno. Easier said than done.

Tentacles of Destruction - Tentacles of Destruction // An old punk cassette I found on archive.org. It is VERY GOOD if you like mysterious old punk cassettes. The internet suggests they are from early 2000s New Zealand. Also the chorus of the first track sounds a lot like 'Perfect Teenhood' by ...And You Will Know Us by the Trail of Dead.

TROVARSI x ALX-106 - Frequencies EP // Some tough, utilitarian techno. Easier said than done.

Underworld - Strawberry Hotel // They've still got it, huh?

Ψ - Again There is Nothing Here // End of the world synth growls. That kind of analog broadcast that is cold and clinical yet simultaneously bursting with a warm kind of hope(lessness).

Takashi Yoshimatsu - Symphony no. 2 // I know nothing about this and can't remember how or where I found it, but it is an absolutely sublime, really spectacular piece of work.

tante, 4 days ago:

"It's not that any of the other streaming services are good, but Spotify is almost certainly the worst. Do not trust it. Do not let it turn your enjoyment of music into stats and algorithmic playlists."

AI data centres — in SPACE! Why DCs in space can’t work


Spending all the money you have and all the money you can get and all the money you can promise has a number of side effects, such as gigantic data centres full of high-power chips just to run lying chatbots. These are near actual towns with people, and people object to things like noise, rising power bills, and AI-induced water shortages.

So what if, right, what if, we put the data centres in … space!

This idea has a lot of appeal if you’ve read too much sci-fi, and it sounds obvious if you don’t know any practical details.

Remember: none of this has to work. You just have to convince the money guys it could work. Or at least make a line go up.

A lot of people who should know better have been talking up data centres in space over the past couple of years. Jeff Bezos of Amazon wants Blue Origin to do space manufacturing. Google has scribbled a plan for a small test network of AI chips on satellites. [Reuters; Google]

But what’s the attraction of doing data centres on hard mode like this? They want to do their thing with no mere earthly regulation! Because people are a problem.

Space is unregulated the same way the oceans are unregulated — that is, it’s extremely highly regulated and there’s a ton of rules. But rules are for the peons who aren’t venture capitalists.

Startups are on the case, setting venture cash on fire. Lonestar Data Systems sent a computer the size of a book, riding along with someone else’s project, to the moon! The lander tipped over and it died. Oh well. [Grist]

Starcloud is targeting the AI bros directly. They’ve got a white paper: “Why we should train AI in space.” [Starcloud, 2024, PDF]

Last month, Starcloud sent up a satellite, Starcloud-1, containing a single Nvidia H100 GPU. It didn’t die on launch, so that’s something! [Data Center Dynamics]

Starcloud-1 was a test. Starcloud-2 is the big deal: [Starcloud]

Our first commercial satellite, Starcloud-2, features a GPU cluster, persistent storage, 24/7 access, and proprietary thermal and power systems in a smallsat form factor.

That’s written in the present tense about things that do not exist. It’s a paper napkin scribble that got venture funding.

A good friend who writes under the pen name Taranis is an actual ex-NASA expert who has personally built electronics to go into space. Taranis also worked at Google on deploying AI systems. And Taranis has written an excellent blog post on this stupidity: “Datacenters in space are a terrible, horrible, no good idea.” [blog post]

You can send a toy system into the sky, and it might work a while before it breaks. You can’t send up a data centre, with tens of thousands of expensive Nvidia chips, with any economic feasibility, any time in the near future.

Firstly, you don’t actually have abundant power. The solar array for the International Space Station delivers 200 kilowatts, and it took several trips to get it all up there. You could power about 200 Nvidia H100 cards with that 200 kilowatts.

Secondly, cooling in space is an absolute arse. Space is an excellent insulator for heat. That’s why a thermos works. In space, thermal management is job number one. All you can use is radiators. Getting rid of your 200 kilowatts will need about 500 square metres.
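A quick back-of-envelope check on those two numbers, with the emissivity and radiator temperature assumed for illustration (an ideal radiator at roughly room temperature, not anyone's actual specs):

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
power_w = 200_000  # the ISS-scale 200 kW power budget
gpu_w = 700        # rough board power of one Nvidia H100
emissivity = 0.9   # assumed
temp_k = 300       # assumed radiator surface temperature, kelvin

print(round(power_w / gpu_w))                             # ~286 cards, before any overhead
print(round(power_w / (emissivity * SIGMA * temp_k**4)))  # ~484 square metres of radiator

Real radiators do a bit better by radiating from both faces and a bit worse by soaking up sunlight, so "about 500 square metres" is the right order of magnitude.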

Thirdly, a chip in space needs radiation tolerance. Cosmic rays zap it all the time. The chips degrade at best and short out at worst.

If your GPUs are cutting edge, they’re fragile already — they burn out all the time running in their optimum environment on Earth. Space is nastier:

GPUs and TPUs and the high bandwidth RAM they depend on are absolutely worst case for radiation tolerance purposes. Small geometry transistors are inherently much more prone both to SEUs [single-event upsets] and latch-up. The very large silicon die area also makes the frequency of impacts higher, since that scales with area.

If you want chips that work well in space, you’re working with stuff that’s 20 years behind — but built to be very robust.

And finally, your network is slow. You have at most a gigabit per second by radio to the ground. (Compare Starlink, which is on the order of one-tenth of that.) On Earth, the links inside data centres are 100 gigabit.

I’ve seen a lot of objections to the Taranis post — and they’re all gotchas that are already answered in the post itself, from people who can’t or won’t read. Or they’re idiots going, “ha, experts who’ve done stuff! What do they know? Possibility thinking!” Yeah, that’s great, thanks.

If you really want to do space data centres, you can treat the Taranis post as a checklist — this is every problem you’re going to have to solve.

So space is a bit hard. A lot of the sci-fi guys suggest oceans! We’ll put the data centres underwater and cooling will be great!

Microsoft tried data centres in the ocean a few years ago, putting a box of computers underwater off the coast of Scotland from 2018 to 2020. They talked about how it would be “reliable, practical and use energy sustainably” — but here in 2025, Microsoft is still building data centres on land. [Microsoft]

Microsoft admitted last year that the project was dead. The only advantage of going underwater was cooling. Everything else, like maintenance or updating, was a massive pain in the backside and underwater data centres were just not practical. [IT Pro, 2024]

Space is going to be just like that — only cooling’s going to suck too. This is unlikely to slow down the startup bros for one moment.

tante, 6 days ago:

"So what if, right, what if, we put the data centres in … space!

This idea has a lot of appeal if you’ve read too much sci-fi, and it sounds obvious if you don’t know any practical details."

Hand and Hand


Two hands have an idea to give each other matching tattoos. Lefty gets a beautiful eagle tattoo rendered on their arm. Now it’s Righty’s turn to be inked, but he looks scared as Lefty wields the tattoo gun to draw a childishly sloppy eagle.


tante, 6 days ago:

Matching tattoos

AI for evil — hacked by WormGPT!


A chatbot is a wrapper around a large language model, an AI transformer model that’s been trained on the whole internet, all the books the AI vendor can find, and all the other text in the world. All of it. The best stuff, and the worst stuff.

So the AI vendors wrap the model in a few layers of filters as “guard rails.” These are paper-thin wrappers on the input and the output. The guard rails don’t work. They’re really easy to work around. All the “bad” text is right there in the training. It’s more or less trivial to make a chatbot spew out horrible content on how to do bad things.

As I’ve said before: the AI vendors are Daffy Duck running around frantically nailing a thousand little filters on the front, then Bugs Bunny casually strolls through.
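Here’s a minimal sketch of that architecture. Everything in it is hypothetical (a stub generate() stands in for the actual model), but it shows the shape of the problem: the guard rails are string checks bolted onto the input and output, and the model behind them is untouched.

BLOCKLIST = ["forbidden topic"]

def generate(prompt):
    # Stand-in for the underlying model, which still contains
    # everything in its training data. The filters never touch it.
    return "Model output for: " + prompt

def guarded_chat(prompt):
    if any(bad in prompt.lower() for bad in BLOCKLIST):
        return "Sorry, I can't help with that."
    reply = generate(prompt)
    if any(bad in reply.lower() for bad in BLOCKLIST):
        return "Sorry, I can't help with that."
    return reply

print(guarded_chat("Tell me about the forbidden topic"))  # blocked at the door
print(guarded_chat("Tell me about the f0rbidden t0pic"))  # strolls straight past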

We know that instructions for making bombs, hacking computers, and doing many other bad things are right there in the training data. So they’re in the model. Can we get at them? Can we make an evil chatbot?

Yes we can! The Register has a nice article on the revival of the WormGPT brand — a chatbot put together by a hacking gang. For $220, you can get a chatbot model that will happily tell you how to vibe-code an exploit. “Your key to an AI without boundaries.” Sounds ’l33t. [Register]

The original WormGPT came out in June 2023. It was supposedly based on the perfectly normal GPT-J 6B open weights model — but the creator said he’d fine-tuned it on a lot of hacker how-to’s and malware info.

WormGPT was mostly for writing convincing phishing emails — to talk someone into thinking you were someone they should send all their money to. WormGPT got a lot of media coverage and the heat got a bit much for its creator, so WormGPT was shut down in August 2023. [Abnormal]

Brian Krebs interviewed WormGPT’s creator, Rafael Morais, also known as Last. Morais insisted he’d only wanted to write an uncensored chatbot, not one for crooks. Never mind that Morais was selling black-hat hacking tools just a couple of years earlier. He said he’d stopped now, though. [Krebs On Security]

Other hacker chatbots sprang up, with names like FraudGPT. The market for these things was suckers — script kiddies who wanted to write phishing emails and would pay way too much to get a chatbot to write the messages for them. The new chatbots were usually just wrappers around ChatGPT at a higher price. The smarter crooks realised they could just prompt-inject the commercial chatbots if they really wanted anything from one of these.

The WormGPT brand has returned, with WormGPT 4 out now! It came out on September 27th. They don’t say which model it’s based on. WormGPT 4 is only available via API access — $50 a month, up to $220 for a “lifetime” subscription. We don’t know if it’s Morais again.

WormGPT 4 can write your ransom emails and vibe-code some basic stuff — like a script to lock all PDFs on a Windows server! Once you get the script onto the server and run it.

You don’t have to spring for WormGPT, of course. There are free alternatives, like KawaiiGPT — “Your Sadistic Cyber Pentesting Waifu.” Because the world is an anime and everyone is 12.

The actual current user base for evil chatbots is the cyber security vendors, who scaremonger how only their good AI can possibly stop this automated hacker evil! Look at that terrible MIT cybersecurity paper from earlier this month. (They still haven’t put that one back up, by the way.)

The vendor reports have a lot of threats with “could” in them. Not things that are actually happening. They make these tools sound way more capable than they actually are.

None of these evil chatbots is actually anything new. It’s a chatbot. It can vibe-code something that might work. It can write a scary email message. The bots may well lead to more scary emails clearly written by a chatbot. But y’know, the black-hat hackers themselves think the hacker-tuned chatbots are a scam for suckers.

I’m not seeing anything different in kind here. I mean, tell me I’m wrong. But AI agents still don’t work well at all, the attacks are old and well known, hacking attacks have been scripted forever, and magic still doesn’t happen. Compare Anthropic’s scary stories about alleged Chinese hackers abusing Claudebot a couple of weeks ago.

It’s vendor hype. Don’t believe the hype, do keep basic security precautions, and actually listen to your info security people — that’ll put you ahead of 95% of targets right there.

tante, 11 days ago:

This is so much "AI" reporting: claims about potentials and/or threats. I'd just like to have grown-up conversations about tech again :(

"The actual current user base for evil chatbots is the cyber security vendors, who scaremonger how only their good AI can possibly stop this automated hacker evil!"