Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY
2524 stories · 139 followers

(DT! comic) A.I. FOMO can FO

1 Comment

This strip is also up on Design Week!

Know someone who might enjoy this comic? Help spread the word and share this with them! I need all the help I can get… THANK YOU!


For paying subscribers, today’s post also comes with extra stuff! Fridays are when I sometimes ramble on about random things, share behind-the-scenes stuff, show WIP versions of future strips, chat about other projects I’m working on, or dredge up some old stuff from my cartooning past. You also get some music recommendations and can claim your own DT! avatar. Why not upgrade and join us?



tante
1 day ago
I feel this very deeply.
Berlin/Germany

US stablecoin: EU considers a public blockchain for the digital euro

1 Comment
The EU is rethinking its strategy for the digital euro after the US passed comprehensive stablecoin legislation. (Digital euro, economy)
tante
8 days ago
The Venn diagram of crypto bros and AI bros is a circle. The "AI" hype is now bringing blockchain nonsense back to the surface (as I predicted it would).
Berlin/Germany

Moguls Moving Money Isn’t the Same as Building a Business

1 Comment

One key point about entrepreneurship (which I covered in my lengthy piece yesterday) is worth amplifying, because society is routinely lied to about what actually constitutes building a business.

Put simply: a person who moves money around is not the same as someone who actually makes something. It is not impossible that a money-mover is adding value — I have seen it happen! — but rearranging capital is not, in and of itself, the same thing as actually inventing, or being innovative, or building something from scratch.

I point this out because I’ve spent my career enabling creative people. Whether it’s artists and writers, or coders and makers, my heart is with the people who make things with a soul. Sometimes they make stuff just because that’s what makes their heart stir. Sometimes it’s so they can sell enough of their work to be able to pay the bills. And yep, sometimes it’s so they can start a big business! All of those things seem like valid reasons for creative people to exercise that urge to build something new, and to profit from their effort in doing so.

Frankly, I don’t give a shit what happens to the guys who move the money around to enable the makers. Get the dollars into the accounts and then get the hell out of the way. Try not to crash the economy while you do it.

But what makes me absolutely furious is that the greediest, most do-nothing cohort of the money-movers have spent decades creating the myth that now they are the builders. They think they are the creative ones, the inventors, the ones who see the future. They’ve taken to writing grand pronouncements about how society ought to run, and how they can see the future — based solely on what they might write checks for.

How the VCs boiled the frog

Over the last two decades, the loudest and most prominent venture capital investors have gone from saying they simply provide resources to founders who come up with great ideas, to major VC firms now having extremist political manifestos on their websites, which they promote through coordinated media operations. These campaigns are designed to recruit compliant subordinates as “founders” in order to carry out the agenda of the money-movers. This is nothing like the prior ideal of enabling creative people who just have a genius idea that they want to get out into the world.

Part of this effort has also been building the distortion that the only way a new business happens is through venture capital funding. Venture funding is, compared to other sources, an extreme form of betting on new businesses that was only ever supposed to be one narrow kind of high-risk, high-reward funding, complemented by many, many other sources. And any of these other much more reasonable options might be more likely ways of building a sustainable business. But the tycoons who made their money in VC have warped the public dialogue so much that the idea of getting something like a bank loan to start a company sounds as anachronistic as a horse and buggy. New companies and even the word “startup” itself have become virtually synonymous with venture capital and the extremist agenda of the most vocal cohort of that community.

Worse, this reframing of capital-as-creativity has captured politicians and regulators at every level. Why does Jeff Bezos need three billion dollars in handouts to build a headquarters in NYC, based on a fake promise to hire Amazon workers that he was always going to hire in the city anyway? Because compliant chumps like then-governor Andrew Cuomo mistakenly think moving money around to billionaires is what constitutes building a business. Do you know how many mom-and-pop small businesses in NYC you could have saved if you put three billion dollars in subsidies into helping those who are squeezed out of their spaces by greedy landlords, instead of the HQ2 boondoggle?

What is really a business, then?

So: beware of people conflating pushing pennies around with actually doing the hard work of building a business. Beware of those who pretend that venture capital and the VCs who run that industry are the voices of entrepreneurship — or that they’re even on the side of entrepreneurs.

And beware of anyone who thinks innovation is about one lone genius or big piles of money. It’s about communities, creativity, and the joyful optimism of coming together to do hard work.

tante
9 days ago
"Worse, this reframing of capital-as-creativity has captured politicians and regulators at every level."
Berlin/Germany

AI doomsday and AI heaven: live forever in AI God

1 Comment and 2 Shares

I get a lot of requests for a piece on the AI 2027 paper. That’s the one that draws a lot of lines going up on a graph to show we’re getting a genuine artificial general intelligence by 2027! Then it kills us all. [AI 2027]

But the important thing about AI 2027 is that it comes straight from the hardcore AI doomsday cult nutcases.

Here in the AI bubble, you’ll see a lot of AI doomsday cultists. Their origin is the “rationalist” subculture from lesswrong.com, which is also a local Bay Area subculture. That’s nothing to do with any other historical philosophy called “rationalism”, but it’s what they call themselves.

The rationalists have spent twenty years hammering on how we need to avert AI doomsday and build the good AI instead. That’s science fiction, but they’re convinced it’s science.

I’m annoyed that this is important, because these guys are influential in Silicon Valley. They were funded by Peter Thiel for ten years, and a whole lot of the powerful tech guys buy into a lot of these ideas. Or a version of the ideas that feeds their egos.

A huge chunk of AI bubble company staff are AI doomsday cultists. Sam Altman isn’t actually a true believer, so the cultists tried to fire him from OpenAI in 2023. Ilya Sutskever and some fellow cultists went off to form Safe Superintelligence. Anthropic is full of the cultists, which is why they keep putting out papers about how the chatbot totally wants to kill you.

The rationalists also started the Effective Altruism movement. Which sounds like a nice idea — except they consider it obvious that the most altruistic project for humanity is averting the AI doomsday. This is the most effective possible altruism.

What is “rationality”?

The shortcut to a complete explanation of the rationalists is the new book More Everything Forever by Adam Becker. This book is great, and I’m not just saying that ’cos I’m in it. I cannot over-recommend it. It’s awesome. [Amazon UK; Amazon US]

Rationalism claims to be a system to make you into a better thinker. Armed with these tools, your brain will be superior and win in the real world. Sounds cool!

Rationalism was founded by a fellow called Eliezer Yudkowsky. You’ll see him quoted in the press as an “AI researcher” or similar. The media call him up cos he came up with AI doomsday as we know it.

What Yudkowsky actually does is write blog posts. He wrote up his philosophy as a million or so words of blog posts from 2006 to 2009. This collection is called The Sequences.

A lot of rationalists have not in fact read the Sequences. But the whole AI doomsday thing percolates straight from the Sequences. So the texts are useful to understand where the ideas come from.

The goal of rationality

Explaining all of rationality would take a vastly longer post than this. Read More Everything Forever. I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!

So let’s deal with one tiny sliver today.

The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.

Yudkowsky’s entire philosophy was constructed backwards from that goal. Being super smart obviously leads to that, see.

But. Yudkowsky realised it might go wrong if the AI didn’t care about humans and human values. Yudkowsky believes there is no greater threat to humanity than a rogue artificial super-intelligence taking over the world and treating humans as just raw materials.

So Yudkowsky has spent the years since The Sequences hammering on the AI doomsday and trying to avert it. He wrote the Sequences to convince people about the dangers of the bad AI: [LessWrong]

it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

Scientists, philosophers, and even theologians may read the following and start yelling at the screen.

“Singletons Rule OK” — in the future, there will be a single superintelligent AI local-god computer. Whichever super-AI comes first basically just takes over everything. This is why it’s very important to make sure it values humans. Yudkowsky calls that “Friendly AI.” [LessWrong]

it’s obvious that a “winner-take-all” technology should be defined as one in which, ceteris paribus, a local entity tends to end up with the option of becoming one kind of Bostromian singleton — the decisionmaker of a global order in which there is a single decision-making entity at the highest level.

“Beyond The Reach of God” explains how this friendly-AI superintelligence, our local AI God, will prevent all human death from then on: [LessWrong]

on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe.

… A superintelligence — a mind that could think a trillion thoughts without a misstep — would not be intimidated by a challenge where death is the price of a single failure.

“Timeless Identity” posits that the future AI God will not just prevent death, it’ll revive every past person it can: [LessWrong]

“Why would future civilizations bother to revive me?” (Requires understanding either economic growth diminishing the cost, or knowledge of history and how societies have become kinder over time, or knowing about Friendly AI.)

With this digital immortality plan, you might think we’re just talking about copies of you, not you. But Yudkowsky’s got you covered — “Identity Isn’t In Specific Atoms” reassures you that you are a pattern of information, you are not the particular atoms in your brain and body, because all subatomic particles of a particular kind are literally identical: [LessWrong]

Quantum mechanics says there isn’t any such thing as a ‘different particle of the same kind’, so wherever your personal identity is, it sure isn’t in particular atoms, because there isn’t any such thing as a ‘particular atom’.

“Three Dialogues on Identity” tries to get across to you how, in the fabulous future when you are running as a process on the mind of the AI God, your experiences as a human emulation living in the Matrix are just as real as your experiences in this world made of atoms. If emulated you eats an emulated banana, it’s you eating a banana: [LessWrong]

Rest assured that you are not holding the mere appearance of a banana. There really is a banana there, not just a collection of atoms.

Let’s look at “Timeless Identity” again — how any copy of you, at any time, in any many-worlds quantum branch, is you. There’s no such thing as the “original” and the “copy”. Your copies are also you. The same you, not a different you.

Also, you should sign up for cryonics and freeze your brain when you die, because the future AI God can definitely retrieve your information from the freezer-burned cell mush that was once your brain. Yudkowsky is extremely into cryonics.

If you don’t understand the post, that’s because:

It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them.

Now, you might think that’s a cult talking about esoteric doctrines for true believers.

We’re talking about living forever. What about the heat death of the universe, huh? “Continuous Improvement” tells us how forever means forever — because we just might escape the heat death of the universe with new physics we don’t know yet! [LessWrong]

There is just… a shred of reasonable hope, that our physics might be much more incomplete than we realize, or that we are wrong in exactly the right way, or that anthropic points I don’t understand might come to our rescue and let us escape these physics (also a la Greg Egan).

So I haven’t lost hope. But I haven’t lost despair, either; that would be faith.

Yeah, Yudkowsky talked like Sephiroth a lot in the Sequences.

Do the rationalists believe?

The current AI bubble doomsday squad do believe this stuff. Anyone who talks about “alignment”, that’s actually a rationalist jargon word meaning Friendly AI God versus AI doomsday.

This all sounds like science fiction. Because it is science fiction. The rationalists take science fiction — overwhelmingly from anime, because Bay Area rationalists are the hugest weebs on earth — and they want to make anime real.

Imagine living in the mind of AI God with these bozos … forever.

Should you believe any of this? I mean, it’d be fun. If there’s another copy of me out there in the quantum universe or running on the mind of the future AI God, I sincerely hope he has a fun time. He’s a good chap, he deserves it.

But all of this is functionally irrelevant to you and me. Because it doesn’t exist and nobody has any idea how to make it exist. Certainly not these dweebs. We have some rather more pressing material realities to be getting on with.

Does Yudkowsky still believe?

I was arguing a bit on Bluesky about this with Professor Beth Singler, who’s someone you should listen to. She thinks Yudkowsky doesn’t really believe all that stuff any more and he’s no longer into digital immortality. She bases this on an interview she did with him a short time ago and his recent posts on Twitter. [Bluesky]

It’s true that Yudkowsky is very pessimistic about AI doomsday now. He thinks humanity is screwed. The AI is going to kill us. He thinks we need to start bombing data centres.

And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

Singler considers that Yudkowsky has more or less given up on digital immortality, and now he just says “sign up for cryonics.” But I think that means Yudkowsky still believes his original statements from the late 2000s, because his vision of cryonics is the digital immortality idea.

The deepest love for humanity, or a portion thereof

While we’re talking about what rationalists actually believe, I’d be remiss not to mention one deeply unpleasant thing about the rationalist subculture: they are really, really into race and IQ theories and scientific racism. Overwhelmingly.

This started early. There’s Yudkowsky posts from 2007 pushing lists of race theorist talking points. You can see the Peter Thiel influence. The Silicon Valley powerbrokers are also very into the race science bit of rationalism. [LessWrong; LessWrong]

Scott Alexander Siskind of SlateStarCodex, a writer who a whole lot of centrists read and love, now tells his readers how they need to get into the completely discredited race theorist Richard Lynn. Scott complains that activists tried to get Lynn cancelled. The mask doesn’t come any further off than this. [Astral Codex Ten]

There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won. [Effective Altruism Forum, 2024; polemics.md]

Now, you might look at rationalism’s embrace of fancy racism and question just how committed they are to the good of all humanity.

If you want a rationalist to go away and stop bothering you, in real life or online, ask him how much he’s into the race science bit. If he claims he isn’t, ask him if he sits down at the table with the rationalists he knows are into the race science bit, and ask him why he sits down with those guys. He’ll probably vanish, with a slight risk he starts preaching fake racist statistics at you.

tante
12 days ago
David Gerard on the "AI doomers"/"rationalists" and their beliefs.

In the end they are a bunch of eugenicist, racist losers afraid of death.
Berlin/Germany

How to not build the Torment Nexus

2 Comments and 3 Shares

A very large chaotic painting. The very top says NO GODS NO MASTERS. Below that there's a whole scene of naked people in line to kiss Satan's ass. Then Bugs Bunny and a beheaded Mickey Mouse show up. Plus some pink camo guillotine blades. Honestly, it's a lot.
This is No Gods No Masters, 2024, painted by me, in wax. 65x78”

Join the $2 Lunch Club!


This week’s question comes to us from Will Hopkins:

When your job and healthcare depends on building the Torment Nexus, but you actually learned the lesson from the popular book Don't Build the Torment Nexus, how do you keep your soul intact and try to put less torment into the world?

Oh good, I was looking for a question that’s going to piss off every one of my readers, and here it is.

I mean, the answer to the question is right there in the question, which of course you already knew, even as you typed it out. If you don’t want to add more torment to the world you simply don’t build the Torment Nexus. That’s basic math. If you have too many eggs, going to the store for more eggs only results in having even more eggs than you started with.

What you’re actually looking for, I believe, is someone to absolve you of building the Torment Nexus because you took a job at the Torment Nexus Factory. Which is a thing I cannot do. Not that I don’t understand your need for income and health insurance—I very much do—but absolution is the realm of priests and other con artists. But hey, you’re the one who brought souls into the conversation. So let’s talk about souls.

Specifically, let’s talk about the soul of tech. And yes, I know industries don’t have souls, but honestly, neither do people. What they do have—or lack—is an ethical core. A way they want to interact with the world and the people that they come in contact with. For example, not too long ago, tech was seen as an industry of progress and innovation. Tech was a sector that made us think of humankind moving forward, possibly into some happy Star Trek-like future where no one needed money and pie magically appeared in your wall if you said “Magic wall, pie me!” One might argue that not too long ago the soul of tech bent towards the positive. And, yes, people in the global South building your iPhones and mining the rare earth elements necessary to make a bunch of your tech shit work might very well argue with that assessment. They’d be right to do so. But the vibe, at least once you put the blinders on, was a positive one.

You could feel good working in tech knowing that you were helping Aunt Mabel in Kansas City see baby pictures of her niece and nephew in San Mateo, helping people complete mundane tasks online, helping a student find information for a paper, helping a farmer order or sell hay, and even giving some fun weirdos a stage. What you were definitely not doing, however, was building the Torment Nexus. (Again, the people of the global South would disagree. Again, they’d be correct to do so.)

When I think back to the stuff that excited me about “the web” so many years ago, that’s the stuff that pulled me in. Good vibes! Connecting the world! All of which made sense because I was coming in as the industry was nascent and figuring itself out. It was full of positivity. As nascent industries tend to be. (Think back to how excited people used to be about building railroads in the Gilded Age! The actual Gilded Age, not the TV show. Although, yeah, also the TV show.) As industries mature, they tend to get a little boring. And as industries age, and start seeing their own collapse over the horizon, they tend to get… defensive. Bitter. Conservative. Fucking hostile. (Yes, I’m talking about people too.) Tech, which has always made progress in astounding leaps and bounds, is just speedrunning the cycle faster than any industry we’ve seen before. It’s gone from good vibes, to a real thing, to unicorns, to let’s build the Torment Nexus in record time. All in my lifetime.

I was lucky (by which I mean old) to enter this field when I felt, for my own peculiar reasons, that it was at its most interesting. And as it went through each phase, it got less and less interesting to me, to the point where I have little desire to interact too much with it now. Other than sending my newsletter, reading my Below Decks recaps, and the occasional peek at Bluesky. In fact, when I think about all the folks I used to work on web shit with and what they’re currently doing, the majority are now woodworkers, ceramicists, knitters, painters, writers, etc. People who make things tend to move on when there’s nothing left to make. Nothing to make but the Torment Nexus.

Of course, the reason they get to do those things is because this shit pays really well. Or at least used to pay really well. It still does, compared to most jobs. Just not at Gold Rush 2.0 levels anymore. And the odds of you making your first million before you’re 35 aren’t what they used to be. Although a select few will, of course. You, as a worker on the Torment Nexus Team, have more in common with the people you’re feeding into the Torment Nexus than you do with the Torment Nexus Leadership team, who have started feeding their own into the Torment Nexus.

I guess what I’m saying is that it’s getting close to impossible to be in this industry—at the moment—without being on the Torment Nexus Team. And lest you think “at the moment” is load-bearing… well, I wouldn’t lean too hard on it. I don’t see shit improving too soon. Industries in decline tend to pick up speed, not reverse course, and their death moan comes when they shift from making things to extracting value.

This is all just a long-winded way of saying that while your current job, and healthcare, depends on building the Torment Nexus, your best bet might be to start thinking of fields that don’t require building the Torment Nexus to earn a living. And while spending considerable time and energy (and probably going into considerable student loan debt) to enter a field that wasn’t building the Torment Nexus when you decided this was how you wanted to earn a living can be maddening, and depressing, and anger-inducing… we need to judge this field by what it’s currently doing, and not the vibes of the past. None of this is meant to make you feel good, because it doesn’t feel good. But keeping your foot on the accelerator and hoping the BRIDGE OUT sign is a lie only leads to worse outcomes than pulling over and rerouting, as annoying as that might be.

Since your question was specifically about keeping your soul intact, I will do you the favor and the kindness of answering you honestly. You cannot keep your soul intact while building the Torment Nexus. The Torment Nexus is, by definition, a machine that brings torment onto others. It destroys souls. And a soul cannot take a soul and remain whole. It will leave a mark. A memory. A scar. Your soul will not remain intact while you’re building software that keeps track of undocumented workers. Your soul will not remain intact while building surveillance software whose footage companies hand over to ICE. Your soul will not remain intact while you build software that allows disinformation to spark genocides. Your soul will not remain intact while you hoover up artists’ work to train theft-engines that poison the water of communities in need. Your soul will eventually turn into another thing altogether. An indescribable thing.

I acknowledge that there are people working at the Torment Nexus factory for whom it would be hard to leave. For example, people on H-1B visas have their residency tied to their job. The current healthcare fuckery will most surely bring the term “pre-existing condition” back into play. (America is a cruel and sick place.) Switching jobs will trigger jackassery with your health insurance, which might be covering your entire family. I’m sure there are more examples of this that are just as real, and if you are one of these people I absolutely feel for you. And as much as I feel for you, I also believe that if you’re a person of good conscience (and why would you have asked this question if you weren’t) building the Torment Nexus will have the same negative effect on your soul. So while I can’t in good conscience scream at you to put down your tools and leave your job, I will instead tell you to take care of your souls the best you can.

I’d argue it’s more ethical to do shit work at the Torment Nexus factory than to do good work at the Torment Nexus factory. But the most ethical move of all, of course, is to not work at the Torment Nexus factory at all. I just realize that’s easier for some folks than others.

I do believe, however, that the majority of folks working at the Torment Nexus factory are there because they’ve convinced themselves that it’s ok to be there, or that the Torment Nexus isn’t that bad, or that their proximity to the Torment Nexus will protect them from the Torment Nexus. Or, quite honestly, they just don’t give a fuck. They’re getting their bag. After all, that Vision Pro sitting on your office shelf gathering dust didn’t pay for itself.

To anyone who’s about to email me and let me know “they have a right to earn a living” please make sure to append “…in the manner in which I’ve become accustomed” to the end of that sentence because, honestly, that’s what you’re really saying. But no one has the right to live a hundred times better than anyone else. Equality also means this.

But surely the real problem is a systemic one and you can’t blame the workers for participating. After all, they are just trying to feed themselves and get healthcare. And, that is correct, of course. What’s also correct is that systems are powered by people. They rely on the labor of people to function, but also on the despair of people believing that propping up the system is the only option available to them. Which is fucking dire. For the Torment Nexus to get built it needs to convince you that your only option for survival is to build the Torment Nexus, much like an AI company telling us that the only path to solving the climate crisis is to use what’s left of Earth’s resources to power its AI in the hope that it will come up with a solution to the climate crisis.

Ultimately, the names of everyone who built the Torment Nexus will be engraved on the Torment Nexus, or possibly on a plaque below the Torment Nexus. Or possibly on a beacon in space roughly where Earth used to be, sending out a repeating signal to other civilizations saying “Don’t build the Torment Nexus!” That list won’t have categories. It won’t be broken up into “people who wanted to build the Torment Nexus,” “people who were tricked into building the Torment Nexus,” and “people who just really needed healthcare.”

It’ll just be a list of people who were conned into believing they had no other options.

As a final sliver of hope (and trust that I’m doing this for my own benefit as much as yours) I would remind you that this isn’t our first encounter with the Torment Nexus. We’ve built it before. Like the devil, another collector of souls, it’s come to us under many names: The Spanish Inquisition, The Dutch East India Company, The English Navy, Portuguese slave forts, Jim Crow, The Holocaust, Japanese Internment Camps, too many genocides to mention—including the one America is currently funding in Gaza, and as the spirit of Ursula K Le Guin keeps reminding us—the divine right of kings. And trust that this is by no means an exhaustive list. Turns out human beings really like building the Torment Nexus.

But it also turns out that human beings are good at defeating the Torment Nexus.


🙋 Got a question you’d like a really fucking depressing answer to? Ask it.

📣 The next Presenting w/Confidence workshop is scheduled for August 14 & 15 and it’s filling up fast. Come learn how to talk people out of building the Torment Nexus.

📙 I’m currently reading Karen Hao’s Empire of AI, and holy shit. This book is worth your time.

💰 If you’re “enjoying” this newsletter join the $2 Lunch Club and support independent writing that’s not on Substack, which is definitely a part of the Torment Nexus.

🍩 Last Sunday a neighbor brought homemade donuts to the dogpark and it was amazing. Don’t underestimate the healing power of bringing donuts.

💩 You already know Alan Dershowitz is a POS, but David Roth tells the story in a way that’s worth reading.

🍉 Please donate to the Palestinian Children’s Relief Fund.

🏳️‍⚧️ Please donate to Trans Lifeline.

tante
23 days ago
Berlin/Germany
1 public comment
rocketo
12 days ago
“You, as a worker on the Torment Nexus Team have more in common with the people you’re feeding into the Torment Nexus than you do with the Torment Nexus Leadership team, who would have no problem feeding you into the Torment Nexus.”
seattle, wa

Jussi Pakkanen: Let's properly analyze an AI article for once

1 Comment

Recently the CEO of GitHub wrote a blog post called Developers reinvented. It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM-generated summary of the actual post, which would be ironic if it wasn't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and most stretched logical leaps I have seen in years, maybe decades. If you are ever in need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you.

But before we begin, a detour to the east.

Statistics and the Soviet Union

One of the great wonders of statistical science of the previous century was, without a doubt, the Soviet Union. They managed to invent and perfect dozens of ways to turn data to their liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

Only ever report percentages

The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five-year period". That certainly sounds a lot better than "in the last five years our factory made 700 pairs of shoes as opposed to 100", or even "7 shoes instead of 1". If you are really forward-thinking, you can even cut down shoe production in the five-year periods when you are not being measured. That makes the stats even more impressive, even though in reality many people have no shoes at all.
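The arithmetic of the shoe example is worth spelling out, since it shows how a tiny absolute change dresses up as a huge relative one. A minimal sketch, using the made-up numbers from above:

```python
def percent_growth(old: float, new: float) -> float:
    """Relative growth in percent: the only number the ministry reports."""
    return (new - old) / old * 100

# Both versions of the story come out as the same impressive "600% growth".
factory = percent_growth(100, 700)  # 700 pairs of shoes instead of 100
hovel = percent_growth(1, 7)        # 7 shoes instead of 1
print(factory, hovel)               # 600.0 600.0
```

Without the absolute numbers there is no way to tell the two scenarios apart, which is exactly the point of reporting only percentages.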

The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

Creative comparisons

The previous section said that the manufacturing of shoes has grown. Can you tell what it is not saying? That's right: growth over what? It is implied that the comparison is to the previous five-year plan. But it is not. Apparently a common baseline in these cases was the production figures of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

Some of you might wonder why 1913 and not 1916, which was the last full year before the Bolsheviks took over. Simply because 1913 was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" in some year of the 1980s Soviet Union, now you know what that actually meant.

"Better" measurements

According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of the final processed grains. When it became apparent that the USSR could not compete with the imperial scum, they changed their measurement to "wet weight". This included the mass of everything that came out of the nozzle of a harvester: stalks, rats, mud, rain water, dissidents and so on.
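The same trick in numbers. These figures are entirely hypothetical, but they show how switching the definition inflates the reported yield without a single extra grain being grown:

```python
# Hypothetical harvest, reported two ways.
dry_grain_tonnes = 100   # processed grain: the honest "dry weight"
stalks_mud_water = 80    # everything else that comes out of the harvester nozzle
wet_weight_tonnes = dry_grain_tonnes + stalks_mud_water

# An 80% "increase" in wheat production from bookkeeping alone.
reported_growth = (wet_weight_tonnes - dry_grain_tonnes) / dry_grain_tonnes * 100
print(wet_weight_tonnes, reported_growth)  # 180 80.0
```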

Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

To business then

The actual blog post starts with this thing that can be considered a picture.

What message would this choice of image tell about the person using it in their blog post?

  1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity (or that perspective is a thing, but we'll let that pass).
  2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

Are these the sort of traits a person in charge of the largest software development platform on Earth should have? No, they are not.

To add insult to injury, the image seems to have been created with the Studio Ghibli-style image generator; Hayao Miyazaki has famously described AI-generated animation as "an insult to life itself". Cultural misappropriation is high on the list of core values at Github HQ, it seems.

With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

Easily. As with a child.

Let's look at this "study" (and I'm using the word in the loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278 (at a 95% confidence level with a 5% margin of error), but, you know, one order of magnitude one way or the other, who cares about that? Certainly not business big-shot movers and shakers. Like Stockton Rush, for example.
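For the curious, the 278 figure can be reproduced by hand. Assuming the calculator uses the standard approach (Cochran's formula at 95% confidence, 5% margin of error, maximum-variance p = 0.5, with the finite population correction for N = 1000), a quick sketch:

```python
import math

def required_sample_size(population: int, z: float = 1.96,
                         margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with finite population correction."""
    n0 = z**2 * p * (1 - p) / margin**2   # infinite-population sample size (~384)
    n = n0 / (1 + (n0 - 1) / population)  # shrink for a finite population
    return math.ceil(n)

print(required_sample_size(1000))  # 278 -- versus the study's 22
```

Even under these generous assumptions, 22 participants is nowhere near enough to say anything about developers in general.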

The math above assumes an unbiased sampling. The post does not even attempt to answer whether that is the case. It would mean getting answers to questions like:

  • How were the 22 people chosen?
  • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
  • Did they have any personal financial incentive on making their new AI tools look good?
  • Were they under any sort of duress to produce the "correct" answers?
  • What was/were the exact phrase(s) that was asked?
  • Were they the same for all participants?
  • Was the test run multiple times until it produced the desired result?
That last one is an age-old trick where you run a test with random outcomes over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish only the last one. In "the circles" this is known as data set selection (better known elsewhere as cherry-picking).

Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced these results, that is exactly how I would do it. (I would not actually do it because I have a spine.)
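To illustrate how cheap the rerun-until-it-works trick is, here is a toy simulation (all numbers invented): 22 "participants" who each flip a fair coin, i.e. a study with no real effect at all, rerun until one batch happens to look like a strong endorsement:

```python
import random

def fake_study(n: int, rng: random.Random) -> int:
    """One 'study': n participants, each with no real preference (a fair coin)."""
    return sum(rng.random() < 0.5 for _ in range(n))

rng = random.Random(42)  # fixed seed so the demo is reproducible
for attempt in range(1, 100_001):
    positives = fake_study(22, rng)
    if positives >= 16:  # "73% of developers embrace the tool!" -- publish this run
        break

print(f"run #{attempt}: {positives}/22 in favor")
```

A result this lopsided shows up by chance every few dozen reruns, which is why the number of discarded runs matters just as much as the published one.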

Moving on. The main headline-grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), you find that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing, but my reading is that this is not a grandstanding hurrah for all things AI, but more of an "I guess this is something I'll have to get used to" kind of submission. That is not evidence, certainly not of the clear type. It is an opinion.

The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science education has existed. Learning the syntax of most programming languages takes a few lessons; the rest of the semester is spent actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

You might also ponder: if the author is this out of touch with reality on such a simple issue, how far off base might the rest of his statements be? In fact the statement is so wrong at such a fundamental level that it was probably generated with an LLM.

A magician's shuffle

As nonsensical as the Twitter post is, we have not yet even mentioned its biggest misdirection. You might not have noticed it. I certainly did not until I read the actual post. See if you can spot it.

Ready? Let's go.

The actual fruit of this "study" boils down to this snippet.

Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

Let that sink in. For the last several years the main supposed advantage of AI tools has been that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" cannot find any such advantage and starts backpedaling into something completely different. Just as we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

Really? Only the most advanced agentic capabilities, you say? That is a bold statement to make, given that a leading cause of software project failure is scope creep. This is one area where human beings have a decades-long track record of beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! More spectacular!" is a tough rallying cry to sell.

To conclude, the actual findings of this "study" seem to be that:

  1. AI does not improve developer productivity or skills
  2. AI does increase developer ambition

This is strictly worse than the current state of affairs.
tante (23 days ago, Berlin/Germany):
An analysis of Github's recent studies on "AI" agents for software development