Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

Something from nothing


I am not a talented person; I’ve never been called “gifted” or anything like that. Anything I can do, anything I have achieved, took a lot of work and stubbornness. Don’t get me wrong. I am not saying that I’m “self-made” and that my position in society, my access to resources, etc. had nothing to do with it – quite the opposite. As a white heterosexual cis-man in Germany I started life on easy mode. But I do not come from a wealthy background or one with large networks and access to power. I am not associated with any organization that gives me “respectability” or “relevance”. Just a dude with a website who sometimes writes a few things that luckily people read and that got me some opportunities.

But those opportunities did take a long time to materialize. Like, for the first 5 to 10 years of me writing anything, nobody gave a fuck. Which is probably good, there must be many dumb takes in there. But you learn and grow (hopefully), and today I have a modicum of visibility and a handful of people who read what I write and sometimes even pass it on to others. But it took 15 to 20 years to get there. It was a lot of work.

When we talk about “AI” these days, we usually are not really talking about material systems and their actual properties. We talk about a vision, a narrative. About a hope maybe? The hope that we have machines now to get something from nothing.

In the beginning of 2025, Mikey Shulman (CEO of Suno, one of those websites where you can generate AI muzak) was a guest on a podcast and – maybe accidentally – framed the conversation on “AI” perfectly in two terms. Shulman was talking about his service and how it was democratizing music and whatnot. Let’s look at a quote from the interview:

“And so that is first and foremost giving everybody the joys of creating music and this is a huge departure from how it is now. It’s not really enjoyable to make music now“

Did you see the two relevant words? Here’s another quote:

“I think the majority of people don’t enjoy the majority of the time they spend making music.“

Now you might think that this statement is a bit … ridiculous. Most people make music for fun (because very few people can live off of it). They make it because they like it or because they just need to express something and music is their language. Music is a business – sure – but it’s also a hobby (remember those: things people do just because they enjoy them, without making money off of them?). But joy/fun is not the word I mean. It’s the distinction of creating and making.

Shulman calls using his slop-machine creating (which brings joy) and what other people do making (which nobody likes). I find this distinction revealing but also in a way counterintuitive: Isn’t it this big cultural norm to take pride in your work? That “hard work” is something dignified? Is that not the whole foundation of the narrative of the “founder”/”entrepreneur” (I always feel like I need to take a shower after having typed that word) and how important they are?

Creating in this understanding is exactly the idea of getting something from nothing. Just think it and it exists. There is no process, no obstacle, no materiality, no challenge or struggle. Just the pure joy of creation. I won’t even get into the religious undertones of that distinction and how using “AI” is framed as godlike even though that would probably be fruitful as well. In this reading the “joy” comes from having the artifact and all that comes from that (like being able to sell it, use it, present it to others in order to gain social recognition, etc.). The joy is maximum unbound productivity. Because that’s what everything we do is for, right? Producing. Making a number go up. Creating means decoupling objects from the process of their making, decoupling them from the resources needed for them to be made, decoupling from the work that others needed to put into the systems enabling the creation. It’s the whole “do not let the real world, the other people in it, touch me” thing that defines so much of tech bro logic.

In contrast making is about the process. Sure, there’s also something at the end, an artifact, something you might want to have. But the process itself is always part of it. Making is not just about having a thing; it’s about the transformative experience of being in this world, interacting with its objects, properties, with other people in order to bring something into existence that means something to you. And every one of those processes leaves a mark on you. Something you learned, something you’ll remember. That cool trick you found for doing something smarter. Or (as it often was and is in my case) a wound or scar of where you fucked up. Making is part of what allows you to become you, to be you. You are in the process, not just the object at the end and its potential use or sale.

And I think this distinction shows why “AI” and capitalism are so deeply intertwined and why there probably is not really a leftist version of it: Capitalism wants to produce more and more at as little cost as possible to create growth and therefore value for those owning capital. “AI” is exactly that: put in basically nothing and get out something that might be passable enough to sell. From a capitalist viewpoint you really cannot get better than that.

But even when thinking about how there is a leftist case to be made for automation (which I think there is) does that fit? Is the leftist case for automation “a lot of shitty products”? No – the point would be to produce the high quality goods people need in a way that is sustainable while still giving people more time to do things they enjoy. Like making music.

“AI” claims that making things is for suckers. But a leftist case for automation only makes sense if it opens up time and space for people to spend their time on things they enjoy. To participate in processes that enhance their lives, their connections to others.

I think that the focus on creating is just the little capitalist devil sitting on our shoulders telling us to produce more.

I think the most radical act today is to just make something. Especially if you are not good at it or if it’s a bit of a struggle. Draw if you’re not good at it. Play the piano even though you’re not great. Make something just for the fun of being in this world, touching it, being in it. Becoming you. Let this radicalize you a bit.

Fuck creation. Love making.

tante
1 day ago
A few remarks on "AI" usage and the narrative surrounding it. I think it's about the difference between disconnected creating and embodied making.
Berlin/Germany

Using “AI” to manage your Fedora system seems like a really bad idea


IBM owns Red Hat which in turn runs Fedora, the popular desktop Linux distribution. Sadly, shit rolls downhill, so we’re starting to see some worrying signs that Fedora is going to be used as a means to push “AI”. Case in point, this article in the Fedora Magazine:

Generative AI systems are changing the way people interact with computers. MCP (model context protocol) is a way that enables generative AI systems to run commands and use tools to enable live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system for understanding your system and getting help troubleshooting it!

↫ Máirín Duffy and Brian Smith at Fedora Magazine

This “linux-mcp-server” tool is developed by IBM’s Red Hat, and of course, IBM has a vested interest in further increasing the size of the “AI” bubble. As such, it makes sense from their perspective to start pushing “AI” services and tools all the way down to the Fedora community, ending up with articles like this one. What’s sad is that even in this article, which surely uses the best possible examples, it’s hard to see how any of it could possibly be any faster than doing the example tasks without the “help” of an “AI”.

In the first example, the “AI” is supposed to figure out why the computer is having Wi-Fi connection issues, and while it does figure that out, the solutions it presents are really dumb and utterly wrong. Most notably, even though this is an article about running these tools on a Fedora system, written for Fedora Magazine, the “AI” stubbornly insists on using apt for every solution, which is a basic, stupid mistake that doesn’t exactly instill confidence in any of its other findings being accurate.

The second example involves asking the “AI” to explain how much disk space the system is using, and why. The “prompt” (the human-created “question” the “AI” is supposed to “answer”) is bonkers long – it’s a 117-word monstrosity, formatted into several individual questions – and the output is so verbose and takes such a scattershot approach that following up on everything is going to take a huge amount of time. Within that same time frame, it would’ve been not only much faster, but also much more user-friendly to just open Filelight (installed by default as part of KDE), which creates a nice diagram that instantly shows you what is taking up space, and why.

The third example is about creating an update readiness report for upgrading from Fedora 42 to Fedora 43, and its “prompt” is even longer at 190 words, and writing that up with all those individual questions must’ve taken more time than to just… do a simple dry-run of a dnf system upgrade which gets you like 90% of the way there. Here, too, the “AI” blurts out so much information, much of which is entirely useless, that going through it all takes more time than just manually checking up on a dnf dry run and peeking at your disk space usage.
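Just to be concrete about what that manual route looks like: below is a minimal Python sketch (mine, not from the article or from linux-mcp-server) that pulls the two pieces of information the readiness-report example is really after, free disk space and pending updates. It assumes a Fedora box with dnf on the PATH; the real pre-flight for a release jump would be the dnf system-upgrade plugin.

```python
#!/usr/bin/env python3
"""Minimal sketch of the manual check described above -- not the article's
workflow, just a plain approximation of it. Assumes Fedora with dnf installed."""
import shutil
import subprocess

# Free space on the root filesystem (one thing the 190-word "prompt" asks for).
usage = shutil.disk_usage("/")
print(f"/: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")

# "dnf check-update" lists pending updates without changing anything.
# Exit code 0 means nothing is pending, 100 means updates are available.
result = subprocess.run(["dnf", "check-update"], capture_output=True, text=True)
lines = [l for l in result.stdout.splitlines() if l.strip()]
print(f"dnf check-update: {len(lines)} non-empty output lines (exit {result.returncode})")

# The actual dry-run the author alludes to would use the system-upgrade plugin,
# e.g. `sudo dnf system-upgrade download --releasever=43`, which also surfaces
# dependency problems before you commit to the upgrade.
```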

All this effort to set all of this up, and so much effort to carefully craft complex “prompts”, only to end up with clearly wrong information, and way too much superfluous information that just ends up distracting you from the task you set out to accomplish. Is this really the kind of future of computing we’re supposed to be rooting for? Is this the kind of stuff Fedora’s new “AI” policy is supposed to enable?

If so, I’m afraid the disconnect between Fedora’s leadership and whatever its users actually use Fedora for is far, far wider than I imagined.

tante
1 day ago
"IBM owns Red Hat which in turn runs Fedora, the popular desktop Linux distribution. Sadly, shit rolls downhill, so we’re starting to see some worrying signs that Fedora is going to be used a means to push “AI”."
Berlin/Germany

Here are 12 photographs of eggs... you can bet on


Think About Supporting Garbage Day!

We’re currently giving away a 30-day free trial of Garbage Day—tell your friends! Regular support is $5 a month or $45 a year and you get Discord access, two additional weekly issues, and monthly trend reports. It’s normally a bargain, but now even more so since it’s free! Hit the button below to find out more or claim the free trial.

Financialize Everything

You are, no doubt, being inundated with news about “prediction markets” right now. The two buzziest being Kalshi and Polymarket. Last month, both markets hit new volume records and the former recently raised $1 billion and inked a partnership with CNN, while the latter just got permission from the Commodity Futures Trading Commission to relaunch in the US after being banned here for the last four years.

A big part of the Kalshi and Polymarket push right now is thanks to the genuinely clever “prediction market” branding, making them both sound like some kind of actual scientific polling platform. But really they just offer event contract betting. Kalshi’s homepage has a popular pool running right now called, “Who will be the first to leave the Trump Cabinet?” Secretary of Homeland Security Kristi Noem is leading. Weird, I’d probably go with Hegseth. Hmm maybe I should put some money down… wait no. And Polymarket has a big one at the moment called, “Time 2025 Person of the Year.” The popular bet is it’s going to be “artificial intelligence.” Really? Over Charlie Kirk. Hmm…

There are all kinds of allegations flying around that Polymarket is being used for insider trading. And its main X account has realized that posting apocalyptic financial news is great for engagement and, one would assume, perfect for inspiring people to gamble big, hoping to outrun the market collapse they keep posting on X about. But you know you’ve entered supervillain territory when even Grimes is calling you evil, writing on X yesterday, “You should not be able to bet on human suffering. The nihilistic gambling arena should not be allowed to publish their own headlines to spread [fear, uncertainty, and doubt] on the wellbeing of the people.”

Morals aside, these prediction markets are the dream of the post-COVID NFT mania. Unlike the NFT frenzy, though, they aren’t trying to turn JPGs into digital assets; they’re trying to commodify our opinions. But yes, crypto is involved here. Kalshi is a traditional betting market, but is launching a bridge to the Solana blockchain ecosystem soon. Polymarket runs on the Polygon blockchain and pays out in the USDC stablecoin. And just like the early 2020s crypto boom, there is an entire ecosystem of influencers that want to convince you to buy in. Kalshi has a whole army of users that post their wins to X and even livestream them. And 60 Minutes did an interview last week with a Polymarket user that goes by Domer, a former professional poker player, who made around $3 million last year. Polymarket also seems to have quietly acquired the @NewsWire_US X account, which has more than 120,000 followers and now links to the platform’s “breaking news” tab. “Kalshi versus Polymarket is the first time we’ve seen armies of loosely affiliated paid influencers battling on behalf of their companies,” Neeraj Agrawal, the communications director for crypto nonprofit Coin Center, wrote on X last week.

(Citadel Securities Conference)

Last month, Tarek Mansour, the co-founder of Kalshi, gave the audience at the Citadel Securities conference a chilling glimpse of where this is all headed (if we let it). “The long-term vision is to financialize everything and create a tradable asset out of any difference in opinion,” he said on stage to a crowd of poor souls who, I guess, think that sounds dope.

Mansour’s “financialize everything” line is, in many ways, a condensed version of something Meta CEO Mark Zuckerberg said on a podcast last spring. A comment I come back to often because I believe he accidentally stated the fundamental driving philosophy of Big Tech. A perfect, succinct, unfathomably embarrassing snapshot of how a bunch of very wealthy losers view themselves:

“There’s this stat that I always think is crazy. The average American has three friends, three people they consider friends. And the average person has demand for meaningfully more. I think it’s like 15 friends or something,” he told podcast host Dwarkesh Patel, while talking about the rise of AI companions. “I think that there are all these things that are better about physical connections when you can have them, but the reality is that people just don't have the connection and they feel more alone a lot of the time than they would like.”

Researcher Paul Fairie, on X at the time, had an even tighter summary of Zuckerberg’s worldview, “The average American has three eggs, but has demand for 15. So here are 12 photographs of eggs. I am a business man.”

The “here are 12 photographs of eggs” philosophy is everywhere you look. Not just at AI companies, but every large tech service. All of these platforms have inserted themselves into the cracks of modern life and want you to pay them — with your time, data, or actual money — for a hollow digital imitation of something we used to get from the other human beings in our lives. Or as X user r0sylns wrote recently, “Groceries? Get em delivered. Books? Buy em on amazon. Fuck libraries and bookstores. Stop buying CDs, vinyls, and DVDs, it's all on the cloud! Movie theaters? Obsolete. Subscribe to 10 different platforms instead! Stay inside. Be afraid of your neighbors. Work til you die.”

These “prediction markets” take Zuckerberg’s “here are 12 photographs of eggs” philosophy to its logical endpoint. A way to capture one of the few parts of the human experience they haven’t been able to ingest into their mega-platforms. Here are 12 photographs of opinions, bet on which ones will come true. It’s hard to imagine a better metaphor for late-stage Silicon Valley: Pay us a cut to imagine the future for us. An industry completely devoid of new ideas asking users to gamble on what might happen next.


The following is NOT a paid ad. It’s a good ol’ fashioned promo4promo. If you’re interested in promo swaps OR paid advertising, email us at josh@garbageday.email and let’s talk. Thanks!

While you’re still figuring out what the hell is happening on Instagram, your competitors are already testing it.

That’s why 30,000+ social pros, including those from Apple, Disney, and Netflix, read Geekout — the free weekly drop from Matt Navarra.

Here’s what you’ll get:

  • 📰 All the breaking social media news — No more finding out late on LinkedIn

  • ⚙️ Platform updates decoded fast — Know exactly what Meta, TikTok, and X just changed

  • 🧠 Expert takes from Matt Navarra — Get the inside line from the consultant brands trust

  • 🔍 Real insight, not recycled headlines — Commentary that helps you act, not just skim

  • 📬 1 Friday email = total catch-up — Less scrolling, more knowing what matters

Smart, funny, zero fluff. Geekout turns social chaos into your competitive edge.

Try Geekout. Also, it's free.


A Good TikTok

@emmamastone

the “just a reminder that this is not realistic” and it’ll be like someone making hot cocoa and watching a movie before bed


Minnesota Was Promised To Somalis 3,000 Years Ago

Last week, President Donald Trump lashed out at the Somali community in Minnesota. During a cabinet meeting, he told reporters, “I don’t want them in our country. I’ll be honest with you, okay? Somebody said, 'Oh, that’s not politically correct.' I don’t care. I don’t want them in our country. Their country is no good for a reason. Their country stinks, and we don’t want them in our country.”

Conservatives are fixated on Minnesota right now, both for the brewing scandal over COVID fraud and Minneapolis Mayor Jacob Frey’s showdown with Immigration and Customs Enforcement. White House Press Secretary Karoline Leavitt doubled down on Trump’s rant, accusing the Minnesota Somali community of stealing over $1 billion in taxpayer funds. As with everything the Trump White House says, take that with a massive, massive grain of salt.

As racist as this all is, the Somali community has found a very, very funny way to respond. TikTok, Instagram, and X are filling up with videos claiming that Minnesota was actually promised to Somalians centuries ago. Example embedded below:

@empireusa

THE PROMISE LAND#somalitiktok #usa #minnesota

The trend is a double-whammy, both making fun of Trump and, also, working as a pretty searing parody of Israeli propaganda. Did you know that the word “Minnesota” actually comes from two Somali words?


Yes, Heated Rivalry Did Start As A Stucky Fic

—by Adam Bumas

Just to remind everyone, a sizable chunk of the publishing industry is being supported by Wattpad and Archive of Our Own fanfics that have been flipped into original fiction. Now it’s spreading into Hollywood too. HBO Max’s new sleeper hit is Heated Rivalry, a show based on books about two hockey players who are secretly in love. There’s finally a major production tapping into the enormous, frenzied hockey “real people fandom” — but the actual relationship between the show and fanfic is a bit more complicated.

(Archive of Our Own)

Fans noticed that last week’s episode of Heated Rivalry dressed one of its main characters in clothes strikingly similar to those of Steve Rogers (aka Chris Evans’ Captain America). Over the weekend, it led to a debate over whether the entire story started out as a fanfic of “Stucky” (Steve Rogers and Bucky Barnes, one of AO3’s most popular pairings). It only got more contentious when Salon asked the books’ author Rachel Reid about it, with the headline “Sorry, the Heated Rivalry gay Marvel fanfic origin story isn’t true”.

Confusingly, the article confirms it is true. Dedicated fans already have archived links to the Stucky fanfic, and Reid wrote on her blog about the process of adapting the fanfic into an original novel. The headline says otherwise because Reid claims she wrote the book as an original work first, then changed the male leads to Steve and Bucky because she “thought it had to be fanfic” to be published on AO3. Some fans are dubious, but it’s still a hell of an official story to say, “this hockey romance needed MCU superheroes to find an audience.”


The Fascinating Implications Of The Rizzler’s AI Ad

@itztherizzler

The future is now. It’s Air time. @Air #therizzler #air #aura #fyp #rizz

OK, yes, AI bad. We all agree. But here’s an interesting use case I hadn’t really considered. The Rizzler posted an ad for the creative operations software Air last month. The video may have some real footage at the beginning, but the bulk of the minute-long sponsored post is just The Rizzler deepfaked or AI-inserted into various movies. It’s pretty dumb, but it did half a million views, so I’m sure everyone involved feels pretty good about it.

The interesting dimension here, the thing I hadn’t totally considered, is that The Rizzler exists at a really weird intersection of fame. Where he is a very famous child, but, also, definitely not making the kind of money serious movie stars make (though, from what I’ve heard recently, movie stars aren’t making serious money these days either). So, in this very specific instance, AI is, with consent, outsourcing what would have been child labor. A serious concern for child influencers like The Rizzler, for whom the only real thing they can actually monetize is their likeness.


This 6’9” Looksmaxxer With A LMTN Facecard Is Actually A 6’6” Fraudmaxxer

@chadified_ggm8

Here’s why I wear height inserts when going out as a 6’6 man. This is my perspective and my experience not advice. (Satire) #ltn #mtn #htn... See more

Last week, TikTok user @chadified_ggm8 revealed that he’s not actually 6’9”, but just 6’6” and wears lifts because he has a “LMTN facecard.” “LMTN” means “low to mid tier normie.” For those of you who do not have internet-induced psychosis, this is a very tall man experiencing such extreme body dysmorphia that he thinks he needs to be nearly seven feet tall to attract women.

But @chadified_ggm8 is not the only guy on TikTok who is making content like this. One of the bigger accounts for this stuff is a user that goes by @syrianpsycho or K. Shami, who makes all kinds of videos targeting insecure men (he is also selling them supplements, obvz).

What’s notable here is that incels don’t typically post face online. And I’m actually kind of optimistic about the fact that young men steeped in red pill theory are opening their cameras because it’s possible other people will tell them that, one, they’re talking like crazy people and, two, they look fine and don’t need to smash their jaws or wear three-inch lifts.


Jokers Grinky


Did you know Garbage Day has a merch store?

You can check it out here!



P.S. here’s the only place you should go when your wife becomes a sex addict and starts cheating on you with everybody.

***Any typos in this email are on purpose actually***

tante
4 days ago
"These “prediction markets” take Zuckerberg’s “here are 12 photographs of eggs” philosophy to its logical endpoint. A way to capture one of the few parts of the human experience they haven’t been able to ingest into their mega-platforms. Here are 12 photographs of opinions, bet on which ones will come true."
Berlin/Germany

Will A.I. writing ever be good?


Greetings from Read Max HQ! In today’s issue, responding to some good recent human-generated writing on A.I.-generated writing.

A reminder: Read Max is a subscription newsletter whose continued existence depends on the support and generosity of paying readers. If you find the commentary at all enlightening, entertaining, or otherwise important to your weekly life, please consider upgrading to a paid subscription, an act which will allow you to enjoy Read Max’s weekly recommendations for overlooked books, movies, and music, and give you a sense of pride and accomplishment and of supporting independent journalism, such as it is.

Subscribe now

Will A.I. writing ever be good?

Sam Kriss has an excellent new piece in The New York Times Magazine examining the actual style of “A.I. voice.” It’s a great rundown of the many formal quirks of A.I.-generated text, and I’m glad that Kriss was able to appreciate the strangeness of even the ever-more-coherent writing produced by frontier large language models, which is “marked by a whole complex of frankly bizarre rhetorical features”:

Read any amount of A.I.-generated fiction, you’ll instantly notice an entirely different vocabulary. You’ll notice, for instance, that A.I.s are absolutely obsessed with ghosts. In machine-written fiction, everything is spectral. Everything is a shadow, or a memory, or a whisper. They also love quietness. For no obvious reason, and often against the logic of a narrative, they will describe things as being quiet, or softly humming.

This year, OpenAI unveiled a new model of ChatGPT that was, it said, “good at creative writing.” As evidence, the company’s chief executive, Sam Altman, presented a short story it wrote. In his prompt, he asked for a “metafictional literary short story about A.I. and grief.” The story it produced was about 1,100 words long; seven of those words were “quiet,” “hum,” “humming,” “echo” (twice!), “liminal” and “ghosts.” That new model was an early version of ChatGPT-5. When I asked it to write a story about a party, which is a traditionally loud environment, it started describing “the soft hum of distant conversation,” the “trees outside whispering secrets” and a “quiet gap within the noise.” When I asked it to write an evocative and moving essay about pebbles, it said that pebbles “carry the ghosts of the boulders they were” and exist “in a quiet space between the earth and the sea.” Over 759 words, the word “quiet” appeared 10 times. When I asked it to write a science-fiction story, it featured a data-thief protagonist called, inevitably, Kael, who “wasn’t just good—he was a phantom,” alongside a love interest called Echo and a rogue A.I. called the Ghost Code.

Even as L.L.M.s get better at producing fluid and plausibly human text, these persistent stylistic tics remain interestingly abrasive--in a single short answer, presented to you in a vacuum, A.I. text is as smooth as can be, but when you’re confronted with an overwhelming amount of it, the strangeness that’s been fine-tuned out really begins to re-assert itself. Kriss argues (in part) that one reason A.I. writing remains so (in aggregate) weird and waffly is that L.L.M.s “can’t ever actually experience the world”:

This puts a lot of the best writing techniques out of reach. Early in “To the Lighthouse,” Virginia Woolf describes one of her characters looking out over the coast of a Scottish island: “The great plateful of blue water was before her.” I love this image. A.I. could never have written it. No A.I. has ever stood over a huge windswept view all laid out for its pleasure, or sat down hungrily to a great heap of food. They will never be able to understand the small, strange way in which these two experiences are the same. Everything they know about the world comes to them through statistical correlations within large quantities of words.

A.I. does still try to work sensory language into its writing, presumably because it correlates with good prose. But without any anchor in the real world, all of its sensory language ends up getting attached to the immaterial. In Sam Altman’s metafiction about grief, Thursday is a “liminal day that tastes of almost-Friday.” Grief also has a taste. Sorrow tastes of metal. Emotions are “draped over sentences.” Mourning is colored blue. […] This is a cheap literary effect when humans do it, but A.I.s can’t really write any other way. All they can do is pile concepts on top of one another until they collapse.

But I wonder if it’s true that the lack of a “world model” is what pushes L.L.M. text toward metaphorical drivel: It seems just as likely that chatbots over-rely on this kind of sensory-immaterial conjunction because, as Kriss says, it’s a “cheap literary effect” that impresses people passing superficially over a text--exactly the kind of fake-deep crowd-pleaser for which L.L.M. output is being fine-tuned.

These satisfyingly plausible folk-technical explanations come up often when people are trying to describe the limitations of A.I.-generated writing. One well-rehearsed account blames A.I.’s stylistically uninteresting output on next-token prediction: Large language models, this argument goes, intrinsically cannot generate truly great writing, or truly creative writing, because they’re always following paths of less resistance, and regurgitating the most familiar and most probable formulations. This is a satisfying argument, not least because it’s easily comprehensible, and for all we know it’s even a true one.
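To see what that folk-technical account is pointing at mechanically, here is a toy sketch (mine, not Kriss’s or any real model; the prefix and the probabilities are invented purely for illustration) comparing “always pick the most probable next token” with temperature sampling:

```python
import math
import random

# A made-up next-token distribution after the prefix "the night was" --
# illustrative numbers only, not taken from any real model.
next_token_probs = {
    "quiet": 0.40,        # the safe, most probable continuation
    "dark": 0.30,
    "humming": 0.15,
    "electric": 0.10,
    "carnivorous": 0.05,  # rare, "creative" continuation
}

def greedy(probs):
    """Path of least resistance: always take the single most probable token."""
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0):
    """Temperature sampling: higher temperature flattens the distribution,
    giving low-probability (more surprising) tokens a real chance."""
    scaled = {tok: math.exp(math.log(p) / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    r = random.random() * total
    acc = 0.0
    for tok, weight in scaled.items():
        acc += weight
        if r <= acc:
            return tok
    return tok  # float-rounding fallback

print("greedy:", [greedy(next_token_probs) for _ in range(5)])       # always "quiet"
print("t=1.0 :", [sample(next_token_probs) for _ in range(5)])       # mostly safe words
print("t=1.8 :", [sample(next_token_probs, 1.8) for _ in range(5)])  # more surprises
```

Greedy decoding in the sketch always lands on the safest word; raising the temperature buys surprise at the cost of predictability, which is roughly the trade-off the folk explanation gestures at.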

But we don’t actually know that it’s right, because we’ve never really tried to make an L.L.M. that’s great at writing. I appreciated Nathan Lambert’s recent piece at Interconnects “Why AI writing is mid,” which argues that the main roadblocks to higher-quality writing are as much economic as technical: There simply isn’t enough demand for formally ambitious (or even particularly memorable) writing to be worth the expense or resources necessary to train a model to produce it.

Some model makers care a bit about this. When a new model drops and people rave about its creative writing ability, such as MoonShot AI’s Kimi K2 line of models, I do think the team put careful work into the data or training pipelines. The problem is that no model provider is remotely ready to sacrifice core abilities of the model such as math and coding in pursuit of meaningfully better writing models.

There are no market incentives to create this model — all the money in AI is elsewhere, and writing isn’t a particularly lucrative market to disrupt. An example is GPT 4.5, which was to all reports a rather light fine-tune, but one that produced slightly better prose. It was shut down almost immediately after its launch because it was too slow and economically unviable with its large size.

As Lambert points out, much of what we dislike about A.I.-generated text from a formal perspective--it’s generally cautious, inoffensive, anodyne, predictable, neutral, unmemorable and goes down smooth--is a product not of some inherent L.L.M. “voice” but of the training and fine-tuning processes imposed by A.I. companies, which are incentivized to make their chatbots sound as unobjectionable and bland as possible. No one is out there actually trying to create Joycebot (or whatever), and for good reason: The saga of Microsoft’s Bing and its “alter-ego,” Sydney, is in a broad sense the best fictional story yet produced by an L.L.M. chatbot, but it was also an unmitigated disaster for the company.

To the extent that their output is pushed into “mid-ness” by economic circumstance, L.L.M.s are not unprecedented. In a real sense, “why A.I. is writing mid” and “why most professional writing is mid” have the same explanation: “Good writing,” whether authored wholly by humans or generated by an L.L.M., requires capacious resources (whether in time and education and editing or in compute and training and fine-tuning) to create an idiosyncratic (and likely polarizing) voice for which there usually isn’t economically sufficient demand.1

I sometimes think that it’s more helpful to think about large language models as equivalent not to individual writers in the specific but to whole systems or institutions of which writing is an end-product. A given L.L.M. is less akin to, say, a replacement-level magazine writer than it is to “the entire magazine industry at its peak,” if you imagine the magazine industry as a giant, complex, unpredictable machine for producing a wide variety of texts. Just as that industry, as a whole, once was able to generate text to a certain degree of predictability and at a relatively high floor of quality, to varying client specifications and structured by its own internal systems and incentives, so too do Claude or ChatGPT.2

I bring up magazines in particular as a point of comparison because I’ve been struck for a while at the similarity between the voice deployed in the latest generation of chatbots and what a friend calls “F.O.B. voice,” or the smooth, light, savvy, vaguely humorous tone that once reigned in magazine front-of-book sections:

In better times for the magazine industry, there was higher demand for a particular kind of glib (but not actually humorous), knowing (but not actually smart), fluid (but not actually stylish) text--what my friend Mahoney calls “F.O.B. voice,” for front of book, the pre-features section of a magazine for which, depending on the magazine, editors might end up cranking out 150-to-500-word nuggets of smooth blurb prose about new books, movies, news stories, gadgets, restaurants, or whatever.

F.O.B. voice is as a rule smooth and clichéd and often only semi-coherent, because it needs to be reproduced quickly and without much effort by overworked writers and editors on deadline. It’s also superficially impressive to most readers, thanks both to the packaging that surrounds it, and to their standards for impressiveness, which are quite a bit lower than professionals’. For all these reasons, and additionally because it’s obviously been trained on archives of magazines written in F.O.B. voice, it’s unsurprising that ChatGPT takes naturally to producing in F.O.B. voice.

“Mid,” as the magazine industry knew, and as L.L.M.s “know,” is a rewarding zone to be in: It’s what people find easiest to consume, and what advertisers feel most comfortable appearing adjacent to. Of course, the magazine industry generated more than just reams and reams of smooth placeholder text; it also produced New Journalism, the modern short story, “Eichmann in Jerusalem,” the Hillary planet, etc. But these were positive externalities, not inevitabilities, driven more by cultural prerogatives than by financial necessity. To get something similar from an L.L.M. would likely require a lot of not-necessarily-profitable groundwork.

1

Another way of thinking about it might be: A parallel timeline where an A.I. was pumping out great novels would be an improvement to our own, because it’d suggest that there was enough demand for genuinely great novels to make it worth training an L.L.M. to do so.

2

Not to get too whatever about it, but it’s good to note that magazine articles (or, even more so, Hollywood movies) are the products of many humans operating within larger systems and frameworks. Do we think of those articles as “magazine industry-generated,” or major-studio movies as being “Hollywood-generated”? I’m not saying we should, necessarily, but I suspect that if and whenever A.I. is able to create great (or even non-slop) writing, we will come to think of it less as “A.I.-generated” and more as authored by the prompter, or the prompter in concert with the model creators at various levels.



tante
5 days ago
"chatbots over-rely on this kind of sensory-immaterial conjunction because[IT] impresses people passing superficially over a text--exactly the kind of fake-deep crowd-pleaser for which L.L.M. output is being fine-tuned."
Berlin/Germany

Microsoft drops AI sales targets in half after salespeople miss their quotas


Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”

The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.




tante
9 days ago
But if AI is so useful and everyone wants it, why does Microsoft have to cut its AI sales targets in half?
Berlin/Germany

Spotify Haters Club


You know something that is absolutely worth considering? Planning an exit from the culture-destroying, weapons-investing, ICE-advertising platform known as Spotify, a place where music is seen only as a tool to destroy people's imaginations and flatten art into a homogenised, AI-friendly, beige churn of endless content to immiserate us all.

It's not that any of the other streaming services are good, but Spotify is almost certainly the worst. Do not trust it. Do not let it turn your enjoyment of music into stats and algorithmic playlists. Do not let it quietly funnel AI-slop into your daily mixes. Do not let it sort you into woefully inaccurate micro-demographics to better sell your data to advertisers (what are the chances it shares this arbitrary human-classifying tech with its weapons-company interests to decide who’s allowed to not get bombed?). Do not willingly give it free advertising by sharing league tables of bands on your socials. Don't do it. It is appropriating all that is great about loving music, all that is great about being part of the communities that surround music, and weaponising it against us all! If you don't believe me, read this book and see if it changes your mind.

Also, Brian Merchant recently wrote a fairly comprehensive how-to guide for leaving Spotify which is more practically useful than my contempt for them is. (Apologies for the Substack link... there's just no escaping these abhorrent tech companies, is there?)

A year or so ago I took the relatively nerdy option of building my own Plex server. I filled it with mp3s of my old cd collection I’ve carried around forever on hard drives, bandcamp purchases, and songs foraged from various corners of the internet. It was the best music-related decision I have made in years.

Plex offers only the vaguest of stats really. Just 'top ten most listened to artists'. Last year that chart was topped by Lana Del Rey followed by Ryuichi Sakamoto. This year it turned out to be... exactly the same. Which at first felt strange because I felt like I had listened to a lot more new music than last year.

But then, like a not-particularly-profound-thing hitting me at a sensible speed, I realised both of these things could be true. My Lana playlist remains a go-to for many moments, and I still think she has written some all time bangers, but that playlist serves a particular function for me. It is not quite background listening (or 'lean back' listening as they'd call it internally at Spotify), but it also isn't exactly active listening. I guess it is wandering around or commuting music for when my brain is elsewhere.

I still love listening to music I know inside out, that has travelled with me through large parts of my life. But it doesn't follow that music I only listen to a few times or even just once cannot also be impactful. I only read most books once, only see most films once. And — always wary of nostalgia — no matter how hard I might try, I will not be able to hear music for the first time and have the same reaction to it I did hearing new music in my teens or twenties. I bring too much to the text. I have steeped myself in noise for decades. This relationship has become different. I am still learning how to lean into that. But clearly, for me at least, not trusting any new music discovery to corporate algorithms is a step in the right direction.

So abandoning that one stat Plex offered, I made a filter to show a playlist of tracks that I had not heard before 2025 and that I had listened to at least once during this year, and it became a much more interesting selection. And so, with some light editorialising and removing things that I immediately decided were rubbish (I am looking directly at you, latest Taylor Swift record), here are some cool albums I actively listened to this year, as opposed to musical anaesthetic that I lazily wrapped around myself to block out the rising existential horror of existing in 2025. Bandcamp links where possible. Happy Spotify Wrapped Season!
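For anyone wanting to reproduce that filter, here is a rough sketch using the python-plexapi library (the server URL and token are placeholders, and "added this year and played at least once" is only an approximation of "not heard before 2025", since a strict check would need the full play history):

```python
from datetime import datetime
from plexapi.server import PlexServer

# Placeholders -- point these at your own server and token.
PLEX_URL = "http://localhost:32400"
PLEX_TOKEN = "YOUR_TOKEN"

plex = PlexServer(PLEX_URL, PLEX_TOKEN)
music = plex.library.section("Music")

cutoff = datetime(2025, 1, 1)
new_to_me = []
for track in music.searchTracks():
    # Approximation: added to the library this year and played at least once.
    # A strict "never heard before 2025" check would need the play history.
    if track.addedAt and track.addedAt >= cutoff and (track.viewCount or 0) > 0:
        new_to_me.append(track)

for track in sorted(new_to_me, key=lambda t: t.viewCount or 0, reverse=True):
    print(f"{track.grandparentTitle} - {track.title} ({track.viewCount} plays)")
```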

SOME GOOD MUSIC

Ben Lukas Boysen - Alta Ripa // Just gorgeous synth work. Ben's stuff always sounds like being wrapped in analog silk.

C Reider - The Mending Battle // What 'computer music' ought to sound like in a world where computers haven't become mostly awful and terrible.

Calum Gunn - Eroder // Really, really good. Wrote about it HERE.

Carly Rae Jepsen - E•MO•TION // Somehow I had never heard the whole album before. It's great.

Clark - Steep Stims // Clark absolutely back on top form with microtonal weirdness and clanging bangers.

Deftones - Private Music // It's another Deftones album. You already know exactly what it sounds like and what it does.

Emptyset - Dissever // Another band who always sound reassuringly like themselves. Rarely do I listen to this kind of thing these days but nice to know it's still there.

Grails - Miracle Music // Wrote about this HERE. Didn't stick with me as much as I thought it might, but that's my fault rather than the record's.

Greet Death - Die in Love // Fantastic, heart-on-sleeve stuff inspired by all your 90's alt-rock favourites.

JISOO - AMORTAGE // I just love this. Wrote about it HERE.

Jungle Fatigue Vol. 1 // Will give you jungle fatigue. Two thumbs up.

Jungle Fatigue Vol. 3 // As above.

Kendu Bari - Drink For Your Machine // Some solid drum'n'bass production.

Ledley - Ledley // Curious contemporary jazz. Sort of a bit like if Squarepusher were a brass-centric jazz band?

Native Soul - Teenage Dreams // Can't remember how I stumbled across this. Electronic deep house from South Africa. Very good. It makes writing beats that people will want to dance to seem effortless.

Noneless - A Vow of Silence // Some really great glitch production but it is sometimes overshadowed by occasional dubstep tangents that veer a little too close to Skrillex for me to be able to gel with.

Papé Nziengui - Kadi Yombo // Lively folk (harp-based?) energy from Gabon. I bet this is great to see live.

Paul Jebanasam - mātr // A gorgeous bruise of a record. Full of noise fluttering on the edge of distortion.

Polygonia - Da Nao Tian Gong // Some pretty techno. Easier said than done.

Tentacles of Destruction - Tentacles of Destruction // An old punk cassette I found on archive.org. It is VERY GOOD if you like mysterious old punk cassettes. The internet suggests they are from early 2000s New Zealand. Also the chorus of the first track sounds a lot like 'Perfect Teenhood' by ...And You Will Know Us by the Trail of Dead.

TROVARSI x ALX-106 - Frequencies EP // Some tough, utilitarian techno. Easier said than done.

Underworld - Strawberry Hotel // They've still got it, huh?

Ψ - Again There is Nothing Here // End of the world synth growls. That kind of analog broadcast that is cold, clinical yet simultaneously bursting with a warm kind of hope(lessness).

Takashi Yoshimatsu - Symphony no. 2 // I know nothing about this and can't remember how or where I found it, but it is an absolutely sublime, really spectacular piece of work.

tante
9 days ago
"It's not that any of the other streaming services are good, but Spotify is almost certainly the worst. Do not trust it. Do not let it turn your enjoyment of music into stats and algorithmic playlists."
Berlin/Germany