Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

This is Peak Featurecide


TechCrunch:

Substack continues to double down on video amid TikTok’s uncertain future in the U.S. The company announced on Monday that it’s rolling out a scrollable video feed in its app, making it the latest platform to introduce a TikTok-like feed.

Given the timing of the launch, Substack is likely aiming to capitalize on the potential void left by TikTok if it faces a ban in the United States.

The move comes a month after Substack announced that it would start allowing creators to monetize their videos on the platform and let them publish video posts directly from the Substack app.

Substack used to be about writing. Publishing. The kind of longform work that thrived because it dared to slow down. The pitch invited readers to sit with a thought, not swipe past it. But that Substack is vanishing in the eternal pivot to video - tiny, meaningless loops, endlessly scrolled and instantly forgotten.

Their new feature - a vertical scroll of bite-sized clips - isn't innovation. Please, for the love of all that's holy, don't call it that.

It’s Featurecide: the slow killing of a product’s soul in pursuit of every trend that moves the needle on engagement metrics, no matter how disconnected it is from the original mission.

Chasing TikTok users doesn’t build a better platform for writers. It builds a different platform entirely. The value proposition collapses when your infrastructure for thought becomes optimized for the attention economy. You can’t serve two masters. You either build a tool for writers or you build an app for dopamine hits. Once you choose the latter, you’ve already traded your audience.

We've heard it before: new formats, more discoverability, growth.

But what gets discovered when a platform devalues depth?

What grows when creators are nudged to repackage their ideas into 30-second loops?

It's the same noise. Over, and over again.

Sure, some writers use video. And some use podcasts. Multimedia isn’t the enemy. But when the interface tells users, constantly, that faster is better and shorter is smarter, you can’t pretend it's a neutral container for expression. Design is direction. And Substack is pointing away from writing.

Substack's success - for all my dislike of the platform - came from being different. From treating writers as more than just content creators, as something closer to public intellectuals, artists, craftspeople. But you can only hold that line if you're willing to say no to the algorithm. And it looks like they aren’t.

Substack was supposed to be a refuge from the noise. Now it wants to be the noise.

When you start designing for virality instead of value, the value bleeds out. Not overnight. Not right away. But inevitably...

🍕
My goal this year is to make Westenberg and my news site, The Index, my full-time job. The pendulum has swung pretty far back against progressive writers, particularly trans creators, but I'm not going anywhere.

I'm trying to write as much as I can to balance out a world on fire. Your subscription directly supports permissionless publishing and helps create a sustainable model for writing and journalism that answers to readers, not advertisers or gatekeepers.

Please consider signing up for a paid monthly or annual membership to support my writing and independent/sovereign publishing.
tante
9 hours ago
"The value proposition collapses when your infrastructure for thought becomes optimized for the attention economy. You can’t serve two masters. You either build a tool for writers or you build an app for dopamine hits. Once you choose the latter, you’ve already traded your audience."
Berlin/Germany

The water's fine?


Remember when the Online Design Community was a supportive one? No, I’m not sure I remember it anymore either.

Thank you for reading, folks! If you know someone you think might also enjoy my comics, please share this with them and help spread the word!


For paying subscribers only, today’s post also comes with a bonus first-look at a brand new DT! comic not scheduled to be shared publicly for a little while. You also get to claim your own DT! avatar. Why not upgrade and join us?



tante
1 day ago
This comic on the online design community applies 100% to the tech sector.
Berlin/Germany

How to make a book

Painting of three weird Victorian children playing with a puppet. Pink text above and below that says GIVE TRANS PEOPLE DRUGS AND MONEY.
This is Give Trans People Drugs and Money. Just finished it. It’s made of wax and razor blades and feelings. It’s 46x30. It’s born of the idea that we are here to help those who need us, without question, without imposing our morals or ideology on them, and without adding additional hurdles.

This week’s question comes to us from anonymous:

I’ve got a really good idea for a book, but how do I find a publisher?

TL;DR: Mirrors are $10 at your local hardware store.

There are five books out there with my name on them (and you better believe I’m linking to them at the end of this newsletter). The first two were originally done with a publisher, and I ended up buying the rights back years later. The third was about to be done with a publisher, but they changed their minds. I wasted some time trying to find another publisher, and ended up publishing it myself. It sold better than the first two. And the last two, I just did myself.

But let’s start at the beginning.

Back in 2010 I pitched my first book to a publisher. I was pretty happy, and a little bit shocked, when they said yes. Looking back, there were two big reasons why.

First, hurray, a book. I knew I wanted to write a book. I was fairly sure I could string words together into sentences, then sentences into paragraphs, and paragraphs into chapters, but I had no idea how to turn all that into a book, with a proper cover, and an ISBN, and things like an index. I knew how to make zines, but a book felt… proper. So it was exciting to have someone show up who knew how to do those things.

Second, the validation of having an actual publisher of books, an arbiter of quality, want to publish my book felt incredible. At that point, I was still fairly green, very uncomfortable thinking of myself as a writer, and very self-conscious about whether I even deserved to take up precious time on a press. So having someone with a publishing pedigree show up and grant me these things, like Oz the Great Wizard granting the scarecrow a diploma, was incredibly validating. But like Oz the Great Wizard granting the scarecrow a diploma, it was also bullshit.

I was submitting to authority for protection. In my mind, a publisher would take care of me. They’d help me make my book better. They’d help me find my audience. They’d make sure the book got in front of people who’d buy it. They’d do all of the things that authors generally don’t like doing, and allow me to focus on what I wanted to do, which was writing. Getting that gold star from an authority figure feels really good. This goes beyond authors and publishers, of course. We get jobs at big organizations because it feels like there’s a sense of protection within a large organization. (It’s false.) We elect authoritarian leaders because we want someone to protect us from what scares us. (They don’t, and we shouldn’t.) We go to church because it’s satisfying to think there’s a big dad in the sky looking out for us. (If there is, motherfucker is asleep.)

But the reality was that a publisher doesn’t really help you do any of those things. An editor helps you make your book better, a proofreader helps you make it correct, an indexer (should you need one) helps your readers find things in your book, a designer helps you make it legible, and a printer will help you make it an actual book. All a publisher does is gather all of those workers under one roof so they can exploit them.

Are all publishers assholes? As with everything, there are exceptions. If you’re a publisher, and it makes you feel better to believe that you’re the exception, I implore you to behave in a way that makes it true.

As far as finding your audience goes—this part will hurt—connecting with your audience is on you, regardless. The first question from every publisher I’ve talked to has been about my follower count, which was a sign that I’d be doing the marketing. I honestly believe this is how it should be, though. No one knows your audience like you. But if you think entering into a relationship with a publisher will relieve you of that burden, you are wrong. You will still have to hunt for your own food, but with a publisher in tow, they’ll be demanding 75–80% of that meal. Publishers don’t want to hunt, but they demand to be fed, and they eat first.

You don’t need a publisher to tell you your book is worthwhile. You never did. You already told me you had a really good idea for a book. I’m guessing I’m not the first person you’ve told this to. I’m also guessing the first person you told verified what you already knew—that this was a good idea. As did the second, third, and fourth person. A publisher will only needlessly add to the pile of information you already have, and do you the favor of taking most of your money for the privilege. If you want more verification that your book is worth writing, go to a bookstore. See who else has written a book. Bill Clinton wrote seven. Henry Kissinger has written over a dozen. Bono has written a book. Kara Swisher has written two. Child, Steven Seagal has written a novel. Go to your boss’ desk and see what book is sitting on it. I guarantee it’s shit. (Unless it’s mine.) Worthwhile doesn’t come into play. Write your book.

Every Drag Race fan is familiar with RuPaul’s sign-off phrase, “If you can’t love yourself, how’re you gonna love anyone else?” A minor variation on that phrase might be “If you can’t love yourself, you’re gonna spend your life looking for somebody to tell you your book is good enough.” So let me save you some time—your book is good enough. Humility is expensive. Love yourself. Go make it.

I want everyone to write a book. But I want everyone to write the book they’ve always actually wanted to write, not the one they thought they had to write, or the one they thought would help their career. That’s just another form of appealing to authority for protection and validation. If you’re writing a book to prove how smart you are, you’re gonna have a miserable time of it. Write a book that makes people feel smart for reading it. Write a book that makes people feel joy and pain. Write a book that tells the stories that need to be told, lest they be forgotten.

The silver lining on the current everything dilemma is that we can all stop writing books about KPIs, managing teams, leveling up, and biohacking your bloodboy. As writers of books, we can now freely admit that we never wanted to write books about KPIs, managing teams, leveling up, and biohacking your bloodboy. As readers of books, we can now freely admit that we never wanted to read books about KPIs, managing teams, leveling up, and biohacking your bloodboy. We are free to write trash. (You have always been free to write trash.) You are free to read trash. (You have always been free to read trash.)

I don’t want to read about affinity marketing, I want to read about raccoons taking over the federal government. I want to read about how Laika, the Soviet space dog, didn’t really die in space but instead landed on the far side of the moon, met a moon dog, started a family, only to have it all fall apart because moon dogs are non-monogamous and Laika couldn’t handle it. I want to read about the guy who owned a bouncy castle rental business and set fire to all his rivals to improve his Google rankings. I want to read about how T-girls hacked their way into a police station and turned it into a dance club that also made really good grilled cheeses. I want to read books about robots opening noodle shops. I want to read about the day that all of the billionaires mysteriously disappeared and we tried to figure out why for maybe five minutes before moving on.

Books about science, real science, are still ok. Please keep writing and reading those.

All that said, let’s get practical about how you can make a book. And since my brain and your brain work differently, I’m going to tell you how I do, and you can take what works for you, discard what doesn’t, and fill in your own joyous blanks.

First off, get yourself an editor. A good editor is someone who helps you shape your book, takes it apart, puts it back together, and isn’t afraid to be honest with you. A good editor is on the same page (ha ha, pun) as you about your goal. They need to be willing to have tough conversations with you. Your BFF cannot be your editor, even if your BFF is an editor. The good, and also bad, news is that in the year of our skylord 2025, you won’t have to work too hard to find an editor who needs work. Search on LinkedIn (I know, buddy) for “freelance editor” or just get on Bluesky and ask “Who wants to edit a book for money?” Lord, you will get replies. Yes, you are paying this person; you are paying all these people. You will know you’ve found the right editor not when you feel like you could be friends, but when you’ve found someone you’re a little bit afraid of letting down.

Additional good news: editors tend to hang out with the rest of the book nerds you’ll need, such as a proofreader, an indexer (if it’s that type of book), and a designer. (Full disclosure, as a designer, I’ve never had to hire a designer to do this, which is great because designers are… difficult. This also means that I’m prone to rewriting things as I lay them out, which is an insane way to work, and I don’t recommend it.) You may also need an illustrator. These should not be the same person. Again, you’ll be able to find them on LinkedIn or Bluesky. Or, here’s an idea… go to a bookstore and find a book you think is well laid-out. Two or three pages in you’ll see a list of people who worked on it. Look them up. Odds are they are all unemployed now, or at least underemployed and happy for the work. Hit ‘em up.

Obviously, this means you’ll need a little bit of money up front to pay these people. Which can feel daunting, and might have you running back to a publisher. But the world is full of authors, and musicians, and other folks who signed deals in desperation just to get their book or their record made, and now get quarterly checks for 6¢. Publishers count on this desperation.

For the making of the actual book, I use IngramSpark. They’ll make your book and distribute it as well if you want, which means it shows up in all the online bookshops, and your local bookshop can also order it to put on its shelves. The IngramSpark UI is a hellscape. (If you grew up using Debabelizer you’ll feel right at home.) But good news: the book designer you hired is/should be familiar with it, and you can pay them to do all that. But in a nutshell, you will be uploading two PDFs: the guts and a cover. A few days later they will send you a digital proof, and a few days after you approve that you’ll be holding an actual book in your hands. If your book is roughly the size of my books, you’ll have spent about $7 for that book you’re holding.

I’m obviously glossing over some of the details here, like the fact that you will fuck up the Ingram thing about five times before you get it right, but you will eventually get it right. It may take asking for help, which is a great and brave thing to do, and you should never feel bad or weird about it.

I’ll end with this: the majority of trade publishers make their books the exact same way I just described, using the same tools. (If the last page of the book contains a QR code, it came from IngramSpark.) The quality available to them is the same quality available to you. This is why I don’t think the phrase “self-publishing” is applicable anymore. It’s just publishing. And the only difference between you making your own book and a publisher making your book is that you’ve seized the means of production.

You get to eat what you hunt.

And never, ever, ever, feel self-conscious about promoting your work.


🖐️ Got a question? Ask it! I might just answer it.

☕ Travis Baldree wrote an amazing essay about their publishing journey for Legends & Lattes which goes into a lot of the details that I skimmed above, and also gives you a second point of view on publishing, which is always helpful.

📚 As promised, here’s where you can buy not just my books, but also Erika’s books which are even better.

🔎 I’ve got a Presenting w/Confidence workshop coming up and it’s scheduled so that folks in Australia, New Zealand, Japan, and Singapore can join in the morning. And if you’re on the West Coast it’s in the afternoon. So let’s hang out.

🍉 The ceasefire is over, and let’s be honest, it was never real. Kids in Palestine need our help.

tante
1 day ago
"If you’re writing a book to prove how smart you are, you’re gonna have a miserable time of it. Write a book that makes people feel smart for reading it. Write a book that makes people feel joy and pain. Write a book that tells the stories that need to be told, lest they be forgotten."
Berlin/Germany

Stop Sharing The Ghibli AI Slop, What Is Wrong With You


Nobody needs to see this shit

The post Stop Sharing The Ghibli AI Slop, What Is Wrong With You appeared first on Aftermath.



tante
4 days ago
"If you're trying to dunk on the practice by linking to articles or examples that showcase the work, inadvertently flooding people's timelines with examples of this ghoulish, stolen work, stop.

Nobody wants to see that shit. Nobody needs to see it."
Berlin/Germany

Right-Wing Codes and Ciphers: How to Recognize Right-Wing Language

To understand how right-wing extremists tick, you have to know their codes. The most important terms, in one list.
tante
5 days ago
Very fitting that taz includes "Souveränität" (sovereignty) in its list of right-wing ciphers.
#digitaleSouveränität and all that
Berlin/Germany

The Phony Comforts of AI Optimism


A few months ago, Casey Newton of Platformer ran a piece called "The phony comforts of AI skepticism," framing those who would criticize generative AI as "having fun," damning them as "hyper-fixated on the things [AI] can't do."

I am not going to focus too hard on this blog, in part because Edward Ongweso Jr. already did so, and in part because I believe that there are much larger problems at work here. Newton is, along with his Hard Fork co-host Kevin Roose, actively engaged in a cynical marketing campaign, a repetition of the last two hype cycles where Casey Newton blindly hyped the metaverse and Roose pumped the bags of a penguin-themed NFT project.

The cycle continues, from Roose running an empty-headed screed about what he “believes” about the near-term trajectory of artificial intelligence — that AGI will be here in the next two years, that we are not ready, but also that he cannot define it or say what it does — to Newton claiming that OpenAI’s Deep Research is "the first good agent" despite the fact that his own examples show exactly how mediocre it is.

You see, optimism is easy. All you have to do is say "I trust these people to do the thing they'll do" and choose to take a "cautiously optimistic" (to use Roose's terminology) view on whatever it is that's put in front of you. Optimism allows you to think exactly as hard as you'd like to, using that big, fancy brain of yours to make up superficially intellectually-backed rationalizations about why something is the future, and because you're writing at a big media outlet, you can just say whatever and people will believe you because you're ostensibly someone who knows what they're talking about. As a result, Roose, in a piece in the New York Times seven months before the collapse of FTX, was able to print that he'd "...come to accept that [crypto] isn't all a cynical money-grab, and that there are things of actual substance being built," all without ever really proving anything.

Roose's "Latecomer's Guide To Cryptocurrency" never really makes any argument about anything, other than explaining, in a "cautiously optimistic" way, the "features" of blockchain technology, all without really having to make any judgment but "guess we'll wait and see!" 

While it might seem difficult to write 14,000 words about anything — skill issue, by the way — Roose's work is a paper-thin, stapled-together FAQ about a technology that still, to this day, lacks any real use cases. Three years later, we’re still waiting for those “things of actual substance,” or, for that matter, any demonstration that it isn’t a “cynical money-grab.”

Roose's AGI piece is somehow worse. Roose spends thousands of words creating flimsy intellectual rationalizations, writing that "the people closest to the technology — the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving," and that "the people with the best information about A.I. progress — the people building powerful A.I., who have access to more-advanced systems than the general public sees — are telling us that big change is near."

In other words, the people most likely to benefit from the idea (and not necessarily the reality) that AI is continually improving and becoming more powerful are those who insist that AGI — an AI that surpasses human ability, and can tackle pretty much any task presented to it — is looming on the horizon.  

The following quote is most illuminating:

This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my A.I. portfolio or a guy who took too many magic mushrooms and watched “Terminator 2.”

I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.

Roose's argument, and I am being completely serious, is that he has talked to some people — some of them actively investing in the thing he's talking about who are incentivized to promote an agenda where he tells everybody they're building AGI — and these people have told him that a non-specific thing is happening at some point, and that it will be bigger than people understand. Insiders are "alarmed." Companies are "preparing" (writing blogs) for AGI. It's all very scary.

But, to quote Roose, "...even if you ignore everyone who works at A.I. companies, or has a vested stake in the outcome, there are still enough credible independent voices with short A.G.I. timelines that we should take them seriously."

Roose's entire argument can be boiled down to "AI models are much better," and when he says "much better" he means "they are able to get high scores on benchmarks," at which point he does not mention which ones, or question the fact that they (despite him saying these exact words) have had to "create new, harder tests to measure their capabilities," which can be read as "make up new tests to say why these models are good." He mentions, in passing, that hallucinations still happen, but "they're rarer on newer models," a statement he does not back up with evidence.

Sidenote: Although this isn’t a story about generative AI, per se, we do need to talk about the benchmarks used to test AI performance. I’m wary of putting too much stock in them, because they’re easily gamed, and quite often they’re not an effective way of showing actual progress. One good example is SimpleQA, which OpenAI uses to test the hallucination rate of its models.

This is effectively a long quiz that touches on a variety of subjects, from science and politics, to TV and video games. An example question is: “Which Dutch player scored an open-play goal in the 2022 Netherlands vs Argentina game in the men’s FIFA World Cup?”

If you’re curious, OpenAI’s GPT-4.5 model — its most expensive general-purpose LLM yet — flunked 37% of these questions. Which is to say that it confidently made up an answer more than one-third of the time.

There’s a really good article from the Australian Broadcasting Corporation that explains why this approach isn’t particularly useful, based on interviews with academics at Monash University and La Trobe University.

First, it’s gamable. If you know the answers ahead of time — and, given that you’re testing how close an answer resembles a pre-written “correct” answer, you absolutely have to — you can optimize the model to answer these questions correctly.
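
To make that concrete, here is a toy sketch of what a SimpleQA-style benchmark boils down to: a fixed answer key plus a string-matching grader. This is illustrative only (it is not OpenAI's actual grading code, and the reference answer shown is just an example), but it shows why a model optimized against a known key can saturate the score without knowing anything at all:

```python
# Toy SimpleQA-style grader: a fixed answer key plus string matching.
# Illustrative sketch only; this is not OpenAI's evaluation code.

ANSWER_KEY = [
    ("Which Dutch player scored an open-play goal in the 2022 "
     "Netherlands vs Argentina game in the men's FIFA World Cup?",
     "Wout Weghorst"),  # reference answer shown here for illustration
    # ...hundreds more fixed (question, reference answer) pairs...
]

def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def grade(model) -> float:
    """Fraction of questions whose answer matches the reference exactly."""
    hits = sum(normalize(model(q)) == normalize(ref) for q, ref in ANSWER_KEY)
    return hits / len(ANSWER_KEY)

# A vendor that has seen the fixed key can simply memorize it:
memorized = dict(ANSWER_KEY)
def gamed_model(question: str) -> str:
    return memorized.get(question, "no idea")

print(grade(gamed_model))  # 1.0: a perfect score, learned rather than earned
```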

There’s no accusation that OpenAI — or any other vendor — has done this, but it remains a possibility. There’s an honor system, and honor systems often don’t work when there are billions of dollars on the line, there are no real consequences for cheating, and there’s no easy way for people to find out whether a vendor cheated on a test. Moreover, as the ABC piece points out, these benchmarks don’t actually reflect the way people use generative AI.

While some people use ChatGPT as a way to find singular, discrete pieces of information, people also use ChatGPT — and other similar LLMs — to write longer, more complex pieces that incorporate multiple topics. Put simply, OpenAI is testing for something that doesn’t actually represent the majority of ChatGPT usage. 

In his AGI piece, Roose mentions that OpenAI’s models continue to score higher and higher marks on the International Math Olympiad test. While that sounds impressive, it’s worth remembering that this is just another benchmark, and thus susceptible to the same kind of exploitation as any other benchmark.

This is, of course, important context for anyone trying to understand the overall trajectory of AI, and whether these models are improving, or whether we’re any closer to reaching AGI. And it’s context that’s curiously absent from the piece. 

He mentions that "...in A.I., bigger models, trained using more data and processing power, tend to produce better results, and today’s leading models are significantly bigger than their predecessors." He does not explain what those results are, or what products they lead to, largely because they haven’t led to any. He talks about how "...if you really want to grasp how much better A.I. has gotten recently, talk to a programmer," then fails to quote a single programmer.

I won't go on, because the article is boring, thinly-sourced and speciously-founded.

But it's also an example of how comfortable optimism is. Roose doesn't have to make actual arguments – he makes statements, finds one example of something that confirms his biases, and then moves on. By choosing the cautiously-optimistic template, Roose can present "intelligent people that are telling him things" as proof that confirms what he wants to believe, which is that Dario Amodei, the CEO of Anthropic, who he just interviewed on Hard Fork, is correct when he says that AGI is mere years away.

Roose is framing his optimism as a warning – all without ever having to engage with what AGI is and the actual ramifications of its imminence. If he did, he would have to discuss concepts like personhood. Is a conscious AI system alive? Does it have rights? And, for that matter, what even is consciousness? There’s no discussion of the massive, world-changing consequences of a (again, totally theoretical, no proof this exists) artificial intelligence that's as smart and capable (again, how is it capable?) as a human being.

Being an AI optimist is extremely comfortable, because Roose doesn't even have to do any real analysis — he has other people to do it for him, such as the people that stand to profit from generative AI's proliferation. Roose doesn't have to engage with the economics, or the actual capabilities of these models, or even really understand how they work. He just needs enough to be able to say "wow, that's so powerful!"

Cautious optimism allows Roose to learn as little as necessary to write his column, knowing that the market wants AI to work, even as facts scream that it doesn't. Cautious optimism is extremely comfortable, because — as Roose knows from boosting cryptocurrency — there are few repercussions for being too optimistic.

I, personally, believe that there should be.

Here's a thing I wrote three years ago — the last time Roose decided to boost an entire movement based on vibes and his own personal biases.

The tech media adds two industry-unique problems - the fear of being wrong, and the fear of not being right. While one might be reasonable for wanting to avoid the next Theranos, one also does not want to be the person who said that social media would become boring and that people would leave it en masse. This is the nature of career journalism - you want to be right all the time, which means taking risks and believing both your sources and your own domain expertise - but it is a nature that cryptocurrency has taken advantage of at scale.

I hate that I've spent nearly two thousand words kvetching over Roose's work, but it's necessary, because I want to be abundantly clear: cautious optimism is cowardice.

Criticism — skepticism — takes a certain degree of bravery, or at least it does so when you make fully-formed arguments. Both Roose and Newton, participating in their third straight hype-cycle boost, frame skepticism as lazy, ignorant and childish.

To quote Casey:

...this is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.

Newton doesn't actually prove that anyone has raised a ceiling, and in fact said:

...I fear, though, will be that “AI is fake and sucks” people will see a $200 version of ChatGPT and see only desperation: a cynical effort to generate more revenue to keep the grift going a few more months until the bottom drops out. And they will continue to take a kind of phony comfort in the idea that all of this will disappear from view in the next few months, possibly forever.

In reality, I suspect that many people will be happy to pay OpenAI $200 or more to help them code faster, or solve complicated problems of math and science, or whatever else o1 turns out to excel at. And when the open-source world catches up, and anyone can download a model like that onto their laptop, I fear for the harms that could come.

This is not meaningful analysis, and it's deeply cowardly on a number of levels. Newton does not prove his point in any way — he makes up a person that combines several ideas about generative AI, says that open source will “catch up,” and that also there will be some sort of indeterminate harm. It doesn't engage with a single critic’s argument. It is, much like a lot of Newton’s work, the intellectual equivalent of saying “nuh uh!”

Newton delights in his phony comforts. He proves his points in the flimsiest ways, knowing that the only criticisms he'll get are from the people he's steadfastly othered, people that he will never actually meaningfully engage with. He knows that his audience trusts him, and thus he will never have to meaningfully engage with the material. In fact, Newton isn't really proving anything — he is stating his own assumptions, giving the thinnest possible rationale, and then singling out Gary Marcus because he perceives him as an easy target.

This is, to repeat myself, extremely comfortable. Newton, like Roose, simply has to follow whatever the money is doing at any given time, learn enough about it to ask some questions in a podcast, and then get on with his day. There is nothing more comfortable than sitting on the podcast of a broadsheet newspaper and writing a newsletter for 150,000 people with no expectation that you'd ever have to push back on anything.

And I cannot be clear enough how uncomfortable it is being a skeptic or a cynic during these cycles, even before people like Casey Newton started trying to publicly humiliate critics like Gary Marcus.


My core theses — The Rot Economy (that the tech industry has become dominated by growth), The Rot-Com Bubble (that the tech industry has run out of hyper-growth ideas), and that generative AI has created a kind of capitalist death cult where nobody wants to admit that they're not making any money — are far from comfortable.

The ramifications of a tech industry that has become captured by growth are that true innovation is being smothered by people who neither experience real problems nor know how (or want) to fix them, and that the products we use every day are being made worse for a profit. These incentives have destroyed value-creation in venture capital and Silicon Valley at large, lionizing those who are able to show great growth metrics rather than those creating meaningful products that help human beings.

The ramifications of the end of hyper-growth mean a massive reckoning for the valuations of tech companies, which will lead to tens of thousands of layoffs and a prolonged depression in Silicon Valley, the likes of which we've never seen.

The ramifications of the collapse of generative AI are much, much worse. On top of the fact that the largest tech companies have burned hundreds of billions of dollars to propagate software that doesn't really do anything that resembles what we think artificial intelligence looks like, we're now seeing that every major tech company (and an alarming number of non-tech companies!) is willing to follow whatever it is that the market agrees is popular, even if the idea itself is flawed.

Generative AI has laid bare exactly how little the markets think about ideas, and how willing the powerful are to try and shove something unprofitable, unsustainable and questionably-useful down people's throats as a means of promoting growth. It's also been an alarming demonstration of how captured some members of the media have become, and how willing people like Roose and Newton are to defend other people's ideas rather than coming up with their own.

In short, reality can fucking suck, but a true skeptic learns to live in it.

It's also hard work. Proving that something is wrong — really proving it — requires you to push against the grain and batter your own arguments repeatedly. Case in point: my last article about CoreWeave was the product of nearly two weeks of work, where, alongside my editor, we pored over the company’s financial statements trying to separate reality from hype. Whenever we found something damning, we didn’t immediately conclude it validated our original thesis — that the company is utterly rotten. We tried to find other explanations that were as plausible as, or more plausible than, our own hypothesis — “steelmanning” our opponent, because being skeptical demands a level of discomfort.

Hard work, sure, but when your hypotheses are vindicated by later reporting by the likes of Semafor and the Financial Times, it all becomes worthwhile. I’ll talk about CoreWeave in greater depth later in this post, because it’s illustrative of the reality-distorting effects of AI optimism, and how optimism can make people ignore truths that are, quite literally, written in black ink and published for all the world to see. 

An optimist doesn't have to prove that things will go well — a skeptic must, in knowing that they are in the minority, be willing to do the hard work of pulling together distinct pieces of information in something called an "analysis." A skeptic cannot simply say "I talked to some people," because skeptics are "haters," and thus must be held to some higher standard for whatever reason.

The result of a lack of true skepticism and criticism is that the tech industry has become captured by people that are able to create their own phony and comfortable realities, such as OpenAI, a company that burned $5 billion in 2024 and is currently raising $40 billion, the majority of it from SoftBank, which will have to raise $16 billion or more to fund it.

Engaging with this kind of thinking is far from comfortable, because what I am describing is one of the largest abdications of responsibility by financial institutions and members of the media in history. OpenAI and Anthropic are abominations of capitalism, bleeding wounds that burn billions of dollars with no end in sight for measly returns on selling software that lacks any real mass market use case. Their existence is proof that Silicon Valley is capable of creating its own illogical realities and selling them to corporate investors that have lost any meaningful way to evaluate businesses, drunk off of vibes and success stories from 15 or more years ago.

What we are witnessing is a systemic failure, not the beginnings of a revolution. Large Language Models have never been a mass market product — other than ChatGPT, generative AI products are barely a blip on the radar — and outside of NVIDIA (and consultancy Turing), there doesn't appear to be one profitable enterprise in the industry, nor is there any sign any of these companies will ever stop burning money.

The leaders behind the funding, functionality, and media coverage of the tech industry have abdicated their authority so severely that the consensus is that it's fine that OpenAI burns $5 billion a year, and it's also fine that OpenAI, or Anthropic, or really any other generative AI company has no path to profitability. Furthermore, it's fine that these companies are destroying our power grid and our planet, and it's also fine that they stole from millions of creatives while simultaneously undercutting those creatives in an already-precarious job market.

The moment it came out that OpenAI was burning so much money should've begun an era of renewed criticism and cynicism about these companies. Instead, I received private messages that I was "making too big a deal" out of it.

These are objectively horrifying things — blinking red warning signs that our markets and our media have reached an illogical point where they believe that destruction isn't just acceptable, but necessary to make sure that "smart tech people" are able to build the future, even if they haven't built anything truly important in quite some time, or even if there’s no evidence they can build their proposed future.

I am not writing this with any comfort or satisfaction. I am fucking horrified. Our core products — Facebook, Google Search, Microsoft Office, Google Docs, and even basic laptops — are so much worse than they've ever been, and explaining these things unsettles and upsets me. Digging into the fabric of why these companies act in this way, seeing how brazen and even proud they are of their pursuit of growth, it fills me full of disgust, and I'm not sure how people like Roose and Newton don't feel the same way.

And now I want to show you how distinctly uncomfortable all of this is.


Last week, I covered the shaky state of AI data center provider CoreWeave — an unprofitable company riddled with onerous debt, with 77% of its $1.9 billion of 2024 revenue coming from Microsoft and NVIDIA. CoreWeave lost $863 million in 2024, and when I published this analysis, some people suggested that its "growth would fix things," and that OpenAI's deal to buy $11.9 billion of compute over five years was a sign that everything would be okay.

Since then, some things have come out.

To summarize (and repeat one part from my previous article):

  • CoreWeave is set to lose $15 billion this year. Its projected revenue, according to The Information, is only $4.6 billion (see the arithmetic sketched after this list).
  • To service its future revenue, CoreWeave must also build more data centers, which will mean it needs more debt, and the terms of DDTL 2.0 (its largest loan) mean that any debt it raises must be used to repay it.
  • CoreWeave's revenue is highly concentrated, and its future is almost entirely dependent on OpenAI's ability to pay (in that it needs the money, and its loans are contingent on the creditworthiness of its contracts, according to the Financial Times).
  • CoreWeave's entire data center buildout strategy — over a gigawatt of capacity — is in the hands of Core Scientific, a company that doesn't appear to have built an AI data center before and makes the majority of its money from mining and selling crypto.
  • To afford to build out the data centers necessary to serve OpenAI, CoreWeave needs to spend tens of billions of dollars it does not have, and may not have access to depending on the terms of their loans and the overall state of the market.
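
Before going further, it is worth putting those figures side by side. A minimal back-of-the-envelope sketch, using only the numbers quoted in this piece (spreading the OpenAI contract evenly across its five years is my simplifying assumption, not the article's):

```python
# Figures as quoted in this piece, in billions of USD. Averaging the
# OpenAI contract evenly across its five years is a simplification.
coreweave_burn_2025 = 15.0     # "set to lose $15 billion this year"
coreweave_revenue_2025 = 4.6   # projected revenue, per The Information
openai_deal_total = 11.9       # compute contract, spread over five years

openai_deal_per_year = openai_deal_total / 5
print(f"OpenAI deal, averaged per year: ${openai_deal_per_year:.1f}B")
print(f"Projected burn vs. projected revenue: "
      f"${coreweave_burn_2025:.1f}B vs. ${coreweave_revenue_2025:.1f}B")
print(f"Dollars lost per dollar of revenue: "
      f"{coreweave_burn_2025 / coreweave_revenue_2025:.2f}")
```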

I'm afraid I'm not done explaining why I'm uncomfortable.

Let me make this much simpler.

  • The majority of CoreWeave's future revenue appears to come from OpenAI. In any case, assuming the deal goes as planned, CoreWeave will still burn $15 billion in 2025.
  • To service the revenue that OpenAI will bring, CoreWeave is required to aggressively expand. It currently lacks the liquidity to do so, and further loans will be contingent on its contracts, which are contingent on CoreWeave's ability to aggressively expand.
  • OpenAI's future expansion is contingent both on CoreWeave and Stargate's ability to deliver compute.
  • OpenAI's ability to pay CoreWeave is contingent on its ability to continue raising money, as it is set to lose over $14 billion in 2025. It does not anticipate being profitable until 2030, and does not have any explanation as to how it will become so other than “we will build Stargate.”
  • OpenAI's ability to continue raising money is contingent on SoftBank providing it, just as Stargate's future is contingent on SoftBank's ability to both give OpenAI money and contribute $19 billion to Stargate.
  • SoftBank's ability to give OpenAI money is contingent on its ability to raise debt.
  • SoftBank's ability to raise debt is going to be dictated by investor sentiment about the future of AI.
  • Even if all of these things somehow happen, both CoreWeave and OpenAI are businesses that lose billions of dollars a year with no tangible evidence that this will ever change.

Okay, simpler.

CoreWeave's continued existence is contingent on its ability to borrow money, pay its debts, and expand its business, which is contingent on OpenAI's ability to raise money and expand its business, which is contingent on SoftBank's ability to give it money, which is contingent on SoftBank's ability to borrow money.

OpenAI is CoreWeave. CoreWeave is OpenAI. SoftBank is now both CoreWeave and OpenAI, and if SoftBank buckles, both CoreWeave and OpenAI are dead. For this situation to work even for the next year, these companies will have to raise tens of billions of dollars just to maintain the status quo.
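
The same chain can be written down as a toy dependency graph. The names and edges come straight from the paragraphs above; the code is nothing more than an illustration of the single point of failure:

```python
# Toy model of the dependency chain described above. Names and edges
# come from the article; the rest is illustrative scaffolding.
DEPENDS_ON = {
    "CoreWeave": ["OpenAI"],        # needs OpenAI's payments to expand
    "OpenAI":    ["SoftBank"],      # needs SoftBank's money to pay
    "SoftBank":  ["debt markets"],  # needs to raise debt to provide it
}

def survives(entity: str, failed: set[str]) -> bool:
    """An entity stands only if everything it depends on stands."""
    if entity in failed:
        return False
    return all(survives(dep, failed) for dep in DEPENDS_ON.get(entity, []))

# If SoftBank buckles, both CoreWeave and OpenAI go down with it:
print(survives("CoreWeave", failed={"SoftBank"}))  # False
print(survives("OpenAI", failed={"SoftBank"}))     # False
```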

There is nothing comfortable about my skepticism, and in fact I'd argue it's a huge pain in the ass. Being one of the few people that is willing to write down the numbers in stark, objective detail is a frustrating exercise — and it's isolating too, especially when I catch strays from Casey Newton claiming he's taking "detailed notes" about my work as a punishment for the sin of "doing basic mathematics and asking why nobody else seems to want to."

It isn't comfortable to continually have to explain to people who are all saying "AI is the future" that the majority of what they are discussing is fictional, because it reveals how many people believe things based entirely on someone they trust saying it's real, or being presented a flimsy argument that confirms their biases or affirms their own status quo.

In Newton and Roose's case, this means that they continue being the guys that people trust will bring them the truth about the future. This position is extremely comfortable, as it doesn't require them to be correct, only convincing and earnest.

I don't fear that we're "not taking AGI seriously." I fear that we've built our economy on top of NVIDIA, which is dependent on the continued investment in GPUs from companies like Microsoft, Amazon and Google, one of which has materially pulled back from data center expansion. Outside of NVIDIA, nobody is making any profit off of generative AI, and once that narrative fully takes hold, I fear a cascade of events that gores a hole in the side of the stock market and leads to tens of thousands of people losing their jobs.

Framing skepticism as comfortable is morally bankrupt, nakedly irresponsible, and calls into question the ability of those saying it to comprehend reality, as well as their allegiances. It's far more comfortable to align with the consensus, to boost the powerful in the hopes that they will win and that their victories will elevate you even further, even if your position is currently at the very top of your industry.

While it's possible to take a kinder view of those who peddle this kind of optimism — that they may truly believe these things and can dismiss the problems as surmountable — I do not see at this time in history how one can logically or rationally choose to do so.

To choose to believe that this will "all just work out" at this point is willful ignorance and actively refusing to engage with reality. I cannot speak to the rationale or incentives behind the decision to do so, but to do so with a huge platform, to me, is morally reprehensible. Be optimistic if you'd like, but engage with the truth when you do so.

I leave you with a quote from the end of HBO's Chernobyl: "where I once would fear the cost of truth, now I only ask — what is the cost of lies?"

tante
7 days ago
"Cautious optimism is extremely comfortable, because — as Roose knows from boosting cryptocurrency — there are few repercussions for being too optimistic."
Berlin/Germany