Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY

AI doomsday and AI heaven: live forever in AI God


I get a lot of requests for a piece on the AI 2027 paper. That’s the one that draws a lot of lines going up on a graph to show we’re getting a genuine artificial general intelligence by 2027! Then it kills us all. [AI 2027]

But the important thing about AI 2027 is that it comes straight from the hardcore AI doomsday cult nutcases.

Here in the AI bubble, you’ll see a lot of AI doomsday cultists. Their origin is the “rationalist” subculture from lesswrong.com, which is also a local Bay Area subculture. That’s nothing to do with any other historical philosophy called “rationalism”, but it’s what they call themselves.

The rationalists have spent twenty years hammering on how we need to avert AI doomsday and build the good AI instead. That’s science fiction, but they’re convinced it’s science.

I’m annoyed that this is important, because these guys are influential in Silicon Valley. They were funded by Peter Thiel for ten years, and a whole lot of the powerful tech guys buy into a lot of these ideas. Or a version of the ideas that feeds their egos.

A huge chunk of AI bubble company staff are AI doomsday cultists. Sam Altman isn’t actually a true believer, so the cultists tried to fire him from OpenAI in 2023. Ilya Sutskever and some fellow cultists went off to form Safe Superintelligence. Anthropic is full of the cultists, which is why they keep putting out papers about how the chatbot totally wants to kill you.

The rationalists also started the Effective Altruism movement. Which sounds like a nice idea — except they consider it obvious that the most altruistic project for humanity is averting the AI doomsday. This is the most effective possible altruism.

What is “rationality”?

The shortcut to a complete explanation of the rationalists is the new book More Everything Forever by Adam Becker. This book is great, and I’m not just saying that ’cos I’m in it. I cannot over-recommend it. It’s awesome. [Amazon UK; Amazon US]

Rationalism claims to be a system to make you into a better thinker. Armed with these tools, your brain will be superior and win in the real world. Sounds cool!

Rationalism was founded by a fellow called Eliezer Yudkowsky. You’ll see him quoted in the press as an “AI researcher” or similar. The media call him up cos he came up with AI doomsday as we know it.

What Yudkowsky actually does is write blog posts. He wrote up his philosophy as a million or so words of blog posts from 2006 to 2009. This collection is called The Sequences.

A lot of rationalists have not in fact read the Sequences. But the whole AI doomsday thing percolates straight from the Sequences. So the texts are useful to understand where the ideas come from.

The goal of rationality

Explaining all of rationality would take a vastly longer post than this. Read More Everything Forever. I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!

So let’s deal with one tiny sliver today.

The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.

Yudkowsky’s entire philosophy was constructed backwards from that goal. Being super smart obviously leads to that, see.

But. Yudkowsky realised it might go wrong if the AI didn’t care about humans and human values. Yudkowsky believes there is no greater threat to humanity than a rogue artificial super-intelligence taking over the world and treating humans as just raw materials.

So Yudkowsky has spent the years since The Sequences hammering on the AI doomsday and trying to avert it. He wrote the Sequences to convince people about the dangers of the bad AI: [LessWrong]

it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

Scientists, philosophers, and even theologians may read the following and start yelling at the screen.

“Singletons Rule OK” — in the future, there will be a single superintelligent AI local-god computer. Whichever super-AI comes first basically just takes over everything. This is why it’s very important to make sure it values humans. Yudkowsky calls that “Friendly AI.” [LessWrong]

it’s obvious that a “winner-take-all” technology should be defined as one in which, ceteris paribus, a local entity tends to end up with the option of becoming one kind of Bostromian singleton — the decisionmaker of a global order in which there is a single decision-making entity at the highest level.

“Beyond The Reach of God” explains how this friendly-AI superintelligence, our local AI God, will prevent all human death from then on: [LessWrong]

on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe.

… A superintelligence — a mind that could think a trillion thoughts without a misstep — would not be intimidated by a challenge where death is the price of a single failure.

“Timeless Identity” posits that the future AI God will not just prevent death, it’ll revive every past person it can: [LessWrong]

“Why would future civilizations bother to revive me?” (Requires understanding either economic growth diminishing the cost, or knowledge of history and how societies have become kinder over time, or knowing about Friendly AI.)

With this digital immortality plan, you might think we’re just talking about copies of you, not you. But Yudkowsky’s got you covered — “Identity Isn’t In Specific Atoms” reassures you that you are a pattern of information, you are not the particular atoms in your brain and body, because all subatomic particles of a particular kind are literally identical: [LessWrong]

Quantum mechanics says there isn’t any such thing as a ‘different particle of the same kind’, so wherever your personal identity is, it sure isn’t in particular atoms, because there isn’t any such thing as a ‘particular atom’.

“Three Dialogues on Identity” tries to get across to you how, in the fabulous future when you are running as a process on the mind of the AI God, your experiences as a human emulation living in the Matrix are just as real as your experiences in this world made of atoms. If emulated you eats an emulated banana, it’s you eating a banana: [LessWrong]

Rest assured that you are not holding the mere appearance of a banana. There really is a banana there, not just a collection of atoms.

Let’s look at “Timeless Identity” again — how any copy of you, at any time, in any many-worlds quantum branch, is you. There’s no such thing as the “original” and the “copy”. Your copies are also you. The same you, not a different you.

Also, you should sign up for cryonics and freeze your brain when you die, because the future AI God can definitely retrieve your information from the freezer-burned cell mush that was once your brain. Yudkowsky is extremely into cryonics.

If you don’t understand the post, that’s because:

It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them.

Now, you might think that’s a cult talking about esoteric doctrines for true believers.

We’re talking about living forever. What about the heat death of the universe, huh? “Continuous Improvement” tells us how forever means forever — because we just might escape the heat death of the universe with new physics we don’t know yet! [LessWrong]

There is just… a shred of reasonable hope, that our physics might be much more incomplete than we realize, or that we are wrong in exactly the right way, or that anthropic points I don’t understand might come to our rescue and let us escape these physics (also a la Greg Egan).

So I haven’t lost hope. But I haven’t lost despair, either; that would be faith.

Yeah, Yudkowsky talked like Sephiroth a lot in the Sequences.

Do the rationalists believe?

The current AI bubble doomsday squad do believe this stuff. When anyone talks about “alignment”, that’s actually a rationalist jargon word meaning Friendly AI God versus AI doomsday.

This all sounds like science fiction. Because it is science fiction. The rationalists take science fiction — overwhelmingly from anime, because Bay Area rationalists are the hugest weebs on earth — and they want to make anime real.

Imagine living in the mind of AI God with these bozos … forever.

Should you believe any of this? I mean, it’d be fun. If there’s another copy of me out there in the quantum universe or running on the mind of the future AI God, I sincerely hope he has a fun time. He’s a good chap, he deserves it.

But all of this is functionally irrelevant to you and me. Because it doesn’t exist and nobody has any idea how to make it exist. Certainly not these dweebs. We have some rather more pressing material realities to be getting on with.

Does Yudkowsky still believe?

I was arguing a bit on Bluesky about this with Professor Beth Singler, who’s someone you should listen to. She thinks Yudkowsky doesn’t really believe all that stuff any more and he’s no longer into digital immortality. She bases this on an interview she did with him a short time ago and on his recent posts on Twitter. [Bluesky]

It’s true that Yudkowsky is very pessimistic about AI doomsday now. He thinks humanity is screwed. The AI is going to kill us. He thinks we need to start bombing data centres.

And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

Singler’s view is that Yudkowsky has more or less given up on digital immortality, and now he just says “sign up for cryonics.” But I think that means Yudkowsky still believes his original statements from the late 2000s, because his vision of cryonics is the digital immortality idea.

The deepest love for humanity, or a portion thereof

While we’re talking about what rationalists actually believe, I’d be remiss not to mention one deeply unpleasant thing about the rationalist subculture: they are really, really into race and IQ theories and scientific racism. Overwhelmingly.

This started early. There are Yudkowsky posts from 2007 pushing lists of race theorist talking points. You can see the Peter Thiel influence. The Silicon Valley powerbrokers are also very into the race science bit of rationalism. [LessWrong; LessWrong]

Scott Alexander Siskind of SlateStarCodex, a writer who a whole lot of centrists read and love, now tells his readers how they need to get into the completely discredited race theorist Richard Lynn. Scott complains that activists tried to get Lynn cancelled. The mask doesn’t come any further off than this. [Astral Codex Ten]

There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won. [Effective Altruism Forum, 2024; polemics.md]

Now, you might look at rationalism’s embrace of fancy racism and question just how committed they are to the good of all humanity.

If you want a rationalist to go away and stop bothering you, in real life or online, ask him how much he’s into the race science bit. If he claims he isn’t, ask him if he sits down at the table with the rationalists he knows are into the race science bit, and ask him why he sits down with those guys. He’ll probably vanish, with a slight risk he starts preaching fake racist statistics at you.

tante (7 hours ago):
David Gerard on the "AI doomers"/"rationalists" and their beliefs.

In the end they are a bunch of eugenicist, racist losers afraid of death.

How to not build the Torment Nexus


A very large chaotic painting. The very top says NO GODS NO MASTERS. Below that there's a whole scene of naked people in line to kiss Satan's ass. Then Bugs Bunny and a beheaded Mickey Mouse show up. Plus some pink camo guillotine blades. Honestly, it's a lot.
This is No Gods No Masters, 2024, painted by me, in wax. 65x78”

Join the $2 Lunch Club!


This week’s question comes to us from Will Hopkins:

When your job and healthcare depends on building the Torment Nexus, but you actually learned the lesson from the popular book Don't Build the Torment Nexus, how do you keep your soul intact and try to put less torment into the world?

Oh good, I was looking for a question that’s going to piss off every one of my readers, and here it is.

I mean, the answer to the question is right there in the question, which of course you already knew, even as you typed it out. If you don’t want to add more torment to the world you simply don’t build the Torment Nexus. That’s basic math. If you have too many eggs, going to the store for more eggs only results in having even more eggs than you started with.

What you’re actually looking for, I believe, is someone to absolve you of building the Torment Nexus because you took a job at the Torment Nexus Factory. Which is a thing I cannot do. Not that I don’t understand your need for income and health insurance—I very much do—but absolution is the realm of priests and other con artists. But hey, you’re the one who brought souls into the conversation. So let’s talk about souls.

Specifically, let’s talk about the soul of tech. And yes, I know industries don’t have souls, but honestly, neither do people. What they do have—or lack—is an ethical core. A way they want to interact with the world and the people that they come in contact with. For example, not too long ago, tech was seen as an industry of progress and innovation. Tech was a sector that made us think of humankind moving forward, possibly into some happy Star Trek-like future where no one needed money and pie magically appeared in your wall if you said “Magic wall, pie me!” One might argue that not too long ago the soul of tech bent towards the positive. And, yes, people in the global South building your iPhones and mining the rare earth elements necessary to make a bunch of your tech shit work might very well argue with that assessment. They’d be right to do so. But the vibe, at least once you put the blinders on, was a positive one.

You could feel good working in tech knowing that you were helping Aunt Mabel in Kansas City see baby pictures of her niece and nephew in San Mateo, helping people complete mundane tasks online, helping a student find information for a paper, helping a farmer order or sell hay, and even giving some fun weirdos a stage. What you were definitely not doing, however, was building the Torment Nexus. (Again, the people of the global South would disagree. Again, they’d be correct to do so.)

When I think back to the stuff that excited me about “the web” so many years ago, that’s the stuff that pulled me in. Good vibes! Connecting the world! All of which made sense because I was coming in as the industry was nascent and figuring itself out. It was full of positivity. As nascent industries tend to be. (Think back to how excited people used to be about building railroads in the Gilded Age! The actual Gilded Age, not the TV show. Although, yeah, also the TV show.) As industries mature, they tend to get a little boring. And as industries age, and start seeing their own collapse over the horizon, they tend to get… defensive. Bitter. Conservative. Fucking hostile. (Yes, I’m talking about people too.) Tech, which has always made progress in astounding leaps and bounds, is just speedrunning the cycle faster than any industry we’ve seen before. It’s gone from good vibes, to a real thing, to unicorns, to let’s build the Torment Nexus in record time. All in my lifetime.

I was lucky (by which I mean old) to enter this field when I felt, for my own peculiar reasons, that it was at its most interesting. And as it went through each phase, it got less and less interesting to me, to the point where I have little desire to interact too much with it now. Other than sending my newsletter, reading my Below Decks recaps, and the occasional peek at Bluesky. In fact, when I think about all the folks I used to work on web shit with and what they’re currently doing, the majority are now woodworkers, ceramicists, knitters, painters, writers, etc. People who make things tend to move on when there’s nothing left to make. Nothing to make but the Torment Nexus.

Of course, the reason they get to do those things is because this shit pays really well. Or at least used to pay really well. It still does, compared to most jobs. Just not at Gold Rush 2.0 levels anymore. And the odds of you making your first million before you’re 35 aren’t what they used to be. Although a select few will, of course. You, as a worker on the Torment Nexus Team, have more in common with the people you’re feeding into the Torment Nexus than you do with the Torment Nexus Leadership team, who have started feeding their own into the Torment Nexus.

I guess what I’m saying is that it’s getting close to impossible to be in this industry—at the moment—without being on the Torment Nexus Team. And lest you think “at the moment” is load-bearing… well, I wouldn’t lean too hard on it. I don’t see shit improving too soon. Industries in decline tend to pick up speed, not reverse course, and their death moan comes when they shift from making things to extracting value.

This is all just a long-winded way of saying that while your current job, and healthcare, depends on building the Torment Nexus, your best bet might be to start thinking of fields that don’t require building the Torment Nexus to earn a living. And while spending considerable time and energy (and probably going into considerable student loan debt) to enter a field that wasn’t building the Torment Nexus when you decided this was how you wanted to earn a living can be maddening, and depressing, and anger-inducing… we need to judge this field by what it’s currently doing, and not the vibes of the past. None of this is meant to make you feel good, because it doesn’t feel good. But keeping your foot on the accelerator and hoping the BRIDGE OUT sign is a lie only leads to worse outcomes than pulling over and rerouting, as annoying as that might be.

Since your question was specifically about keeping your soul intact, I will do you the favor and the kindness of answering you honestly. You cannot keep your soul intact while building the Torment Nexus. The Torment Nexus is, by definition, a machine that brings torment onto others. It destroys souls. And a soul cannot take a soul and remain whole. It will leave a mark. A memory. A scar. Your soul will not remain intact while you’re building software that keeps track of undocumented workers. Your soul will not remain intact while building surveillance software whose footage companies hand over to ICE. Your soul will not remain intact while you build software that allows disinformation to spark genocides. Your soul will not remain intact while you hoover up artists’ work to train theft-engines that poison the water of communities in need. Your soul will eventually turn into another thing altogether. An indescribable thing.

I acknowledge that there are people working at the Torment Nexus factory for whom it would be hard to leave. For example, people on H-1B visas have their residency tied to their job. The current healthcare fuckery will most surely bring the term “pre-existing condition” back into play. (America is a cruel and sick place.) Switching jobs will trigger jackassery with your health insurance, which might be covering your entire family. I’m sure there are more examples of this that are just as real, and if you are one of these people I absolutely feel for you. And as much as I feel for you, I also believe that if you’re a person of good conscience (and why would you have asked this question if you weren’t) building the Torment Nexus will have the same negative effect on your soul. So while I can’t in good conscience scream at you to put down your tools and leave your job, I will instead tell you to take care of your souls the best you can.

I’d argue it’s more ethical to do shit work at the Torment Nexus factory than to do good work at the Torment Nexus factory. But the most ethical move of all, of course, is to not work at the Torment Nexus factory at all. I just realize that’s easier for some folks than others.

I do believe, however, that the majority of folks working at the Torment Nexus factory are there because they’ve convinced themselves that it’s ok to be there, or that the Torment Nexus isn’t that bad, or that their proximity to the Torment Nexus will protect them from the Torment Nexus. Or, quite honestly, they just don’t give a fuck. They’re getting their bag. After all, that Vision Pro sitting on your office shelf gathering dust didn’t pay for itself.

To anyone who’s about to email me and let me know “they have a right to earn a living” please make sure to append “…in the manner in which I’ve become accustomed” to the end of that sentence because, honestly, that’s what you’re really saying. But no one has the right to live a hundred times better than anyone else. Equality also means this.

But surely the real problem is a systemic one and you can’t blame the workers for participating. After all, they are just trying to feed themselves and get healthcare. And, that is correct, of course. What’s also correct is that systems are powered by people. They rely on the labor of people to function, but also on the despair of people believing that propping up the system is the only option available to them. Which is fucking dire. For the Torment Nexus to get built it needs to convince you that your only option for survival is to build the Torment Nexus, much like an AI company telling us that the only path to solving the climate crisis is to use what’s left of Earth’s resources to power its AI in the hope that it will come up with a solution to the climate crisis.

Ultimately, the names of everyone who built the Torment Nexus will be engraved on the Torment Nexus, or possibly on a plaque below the Torment Nexus. Or possibly on a beacon in space roughly where Earth used to be, sending out a repeating signal to other civilizations saying “Don’t build the Torment Nexus!” That list won’t have categories. It won’t be broken up into “people who wanted to build the Torment Nexus,” “people who were tricked into building the Torment Nexus,” and “people who just really needed healthcare.”

It’ll just be a list of people who were conned into believing they had no other options.

As a final sliver of hope (and trust that I’m doing this for my own benefit as much as yours) I would remind you that this isn’t our first encounter with the Torment Nexus. We’ve built it before. Like the devil, another collector of souls, it’s come to us under many names: The Spanish Inquisition, The Dutch East India Company, The English Navy, Portuguese slave forts, Jim Crow, The Holocaust, Japanese Internment Camps, too many genocides to mention—including the one America is currently funding in Gaza, and as the spirit of Ursula K Le Guin keeps reminding us—the divine right of kings. And trust that this is by no means an exhaustive list. Turns out human beings really like building the Torment Nexus.

But it also turns out that human beings are good at defeating the Torment Nexus.


🙋 Got a question you’d like a really fucking depressing answer to? Ask it.

📣 The next Presenting w/Confidence workshop is scheduled for August 14&15 and it’s filling up fast. Come learn how to talk people out of building the Torment Nexus.

📙 I’m currently reading Karen Hao’s Empire of AI, and holy shit. This book is worth your time.

💰 If you’re “enjoying” this newsletter join the $2 Lunch Club and support independent writing that’s not on Substack, which is definitely a part of the Torment Nexus.

🍩 Last Sunday a neighbor brought homemade donuts to the dogpark and it was amazing. Don’t underestimate the healing power of bringing donuts.

💩 You already know Alan Dershowitz is a POS, but David Roth tells the story in a way that’s worth reading.

🍉 Please donate to the Palestinian Children’s Relief Fund.

🏳️‍⚧️ Please donate to Trans Lifeline.


Jussi Pakkanen: Let's properly analyze an AI article for once


Recently the CEO of Github wrote a blog post called Developers reinvented. It was reposted with various clickbait headlines like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one reads like an LLM-generated summary of the actual post, which would be ironic if it weren't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the worst reasoning and most strained logical leaps I have seen in years, maybe decades. If you are ever in need of an example of how not to write a "scientific" text on any given subject, this is the disaster area for you.

But before we begin, a detour to the east.

Statistics and the Soviet Union

One of the great wonders of statistical science of the previous century was without a doubt the Soviet Union. They managed to invent and perfect dozens of ways to turn data to your liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those people do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

Only ever report percentages

The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five-year period". That certainly sounds a lot better than "in the last five years our factory made 700 pairs of shoes as opposed to 100" or even "7 shoes instead of 1". If you are really forward-thinking, you can even cut down shoe production in those five-year periods when you are not being measured. It makes the stats even more impressive, even though in reality many people have no shoes at all.
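The arithmetic is easy to check: both of those absolute numbers produce exactly the same headline percentage, which is the whole trick. A minimal sketch, using the shoe figures from the example above:

```python
def growth_pct(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100

# Both factories can report the same glorious 600% growth...
print(growth_pct(100, 700))  # 600.0
print(growth_pct(1, 7))      # 600.0
# ...but one made 700 pairs of shoes and the other made 7.
```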

The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

Creative comparisons

The previous section said the manufacturing of shoes has grown. Can you tell what it is not saying? That's right, growth over what? It is implied that the comparison is to the previous five year plan. But it is not. Apparently a common comparison in these cases was the production amounts of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

Some of you might wonder: why 1913, and not 1916, the last year before the Bolsheviks took over? Simply because 1913 was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" some year in 1980s Soviet Union, now you know what that actually meant.

"Better" measurements

According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed out the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of final processed grains. When it became apparent that the USSR could not compete with imperial scum, they changed their measurements to "wet weight". This included the mass of everything that came out from the nozzle of a harvester, such as stalks, rats, mud, rain water, dissidents and so on.

Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

To business then

The actual blog post starts with this thing that can be considered a picture.

What message would this choice of image tell about the person using it in their blog post?

  1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity (or that perspective is a thing, but we'll let that pass).
  2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".
Are these the sort of traits a person in charge of the largest software development platform on Earth should have? No, they are not.

To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself. Cultural misappropriation is high on the list of core values at Github HQ it seems.

With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

Easily. As with a child.

Let's look at this "study" (and I'm using the word in its loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278, but, you know, one order of magnitude one way or another, who cares about those? Certainly not business big shot movers and shakers. Like Stockton Rush, for example.
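The calculator's number can be reproduced by hand. These tools typically use Cochran's formula with a finite population correction, assuming a 95% confidence level and a 5% margin of error (the usual defaults; I'm assuming that's what the calculator used):

```python
import math

def required_sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Cochran's formula with finite population correction.

    z: z-score for the confidence level (1.96 for 95%)
    p: assumed proportion (0.5 is the worst case, maximising variance)
    margin: acceptable margin of error
    """
    n0 = z**2 * p * (1 - p) / margin**2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

print(required_sample_size(1000))  # 278
```

With those defaults the answer for a population of one thousand is indeed 278, over twelve times the study's sample of 22.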

The math above assumes an unbiased sampling. The post does not even attempt to answer whether that is the case. It would mean getting answers to questions like:

  • How were the 22 people chosen?
  • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
  • Did they have any personal financial incentive on making their new AI tools look good?
  • Were they under any sort of duress to produce the "correct" answers?
  • What was/were the exact phrase(s) that was asked?
  • Were they the same for all participants?
  • Was the test run multiple times until it produced the desired result?
The last one is an age-old trick where you run a test with random results over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish the last one. In "the circles" this is known as data set selection.

Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced these results, that is exactly how I would do it. (I would not actually do it because I have a spine.)

Moving on. The main headline grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), what you find is that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing but my reading is that this is not a grandstanding hurrah of all things AI, but more of a "I guess this is something I'll have to get used to" kind of submission. That is not evidence, certainly not of the clear type. It is an opinion.

The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science has existed. Learning the syntax of most programming languages takes a few lessons; the rest of the semester is spent on actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

You might also ponder that if the author is so out of touch with reality in this simple issue, how completely off base the rest of his statements might be. In fact the statement is so wrong at such a fundamental level that it has probably been generated with an LLM.

A magician's shuffle

As nonsensical as the Twitter post is, we have not yet even mentioned the biggest misdirection in it. You might not even have noticed it. I certainly did not until I read the actual post. See if you can spot it.

Ready? Let's go.

The actual fruit of this "study" boils down to this snippet.

Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

Let that sink in. For the last several years the main supposed advantage of AI tools has been the fact that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" cannot find any such advantages and starts backpedaling into something completely different. Just as we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

Really? Only the most advanced agentics, you say? That is a bold claim given that the leading reason for software project failure is scope creep. This is the one area where human beings have a decades-long track record of beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! Spectacularer!" is a tough rallying cry to sell.

To conclude, the actual findings of this "study" seem to be that:

  1. AI does not improve developer productivity or skills
  2. AI does increase developer ambition

This is strictly worse than the current state of affairs.
Read the whole story
tante
10 days ago
reply
An analysis of Github's recent studies on "AI" agents for software development
Berlin/Germany
Share this story
Delete

The future of MAGA after Trump

1 Comment

Think About Supporting Garbage Day!

It’s $5 a month or $45 a year and you get Discord access, the coveted weekend issue, and monthly trend reports. What a bargain! Hit the button below to find out more.

The Battle Of The Bastards

I try really hard not to predict the future. I swear lol. But I am increasingly convinced that we are watching the end of something.

The MAGA coalition that’s been building for a decade is clearly fracturing, or, at the very least, morphing into something new. The apparently uneasy alliance between the manosphere, QAnon, Project 2025, reactionary Silicon Valley, and our country’s various white nationalist groups that President Donald Trump rode to victory on last year is pulling apart. And before I get to what I think will happen next, I want to break down where we’re currently at.

The first big crack in the MAGA movement, funnily enough, was King of the Boston Yah Doods, Dave Portnoy. The Barstool Sports CEO lashed out at Trump in April after the president announced his idiotic tariff plan, dubbing the disastrous stock market reaction, “Orange Monday.” And Portnoy has continued to publicly gripe about Trump’s deranged economic policy ever since.

The second fissure in the movement was Elon Musk’s departure from the Department of Government Efficiency (DOGE) in May. Musk started feuding with Trump over tax credits and, at one point, accused Trump of being on Jeffrey Epstein’s client list. Musk also had several very public meltdowns, seemingly coming out of his months-long haze to discover that the entire world now thinks he’s a Nazi. Strange, I heard the ketamine come down is usually pretty smooth. The drama got so bad that the All In podcast, the de facto Reich cabinet of Silicon Valley, wouldn’t even address it and may have even briefly broken up over it.

The same week Musk lost the game of thrones, Theo Von, a dog that got turned into a human via a magic spell and is now the world’s third-biggest podcaster, posted an impassioned video to X, titled, “What are we doing?” in which he accused the US of being complicit in the Palestinian genocide. Von is not a particularly deep thinker and hasn’t totally turned left over it, but he has continued to focus on Palestine, even pressing Vice President JD Vance on the genocide when Vance was on his show in June.

(Photo by Adam Gray/Getty Images)

All of these tensions came to a head this month, following the hapless rollout of the Trump administration’s Epstein Files investigation and The Wall Street Journal’s bombshell that Trump had written a letter for Epstein’s 50th birthday. Trump has since lost the support of Joe Rogan, as well as the weird mutant growth on Rogan’s stomach, podcaster Andrew Schulz. Rogan condemned Trump’s deportation scheme and also accused the administration of “gaslighting” their followers on the president’s connections to Epstein, while Schulz told listeners that none of this was “what he voted for.” And the vast universe of smaller MAGA influencers that run defense for Trump every day online are struggling, as well.

As I wrote earlier this month, they’re desperate for a new conspiracy theory, one that can suck up all the oxygen around the Epstein scandal. They’re currently oscillating between, “The Epstein stuff is a Democrat smear campaign,” and some variation of “it’s actually ok if the president is a pedophile.” Neither are really sticking. And random chatter on X isn’t much of a match for a mainstream media finally no longer pulling their punches, following the cancellation of Stephen Colbert’s The Late Show. Even South Park is going for the jugular now. The Republicans are trying to contain the situation, of course, sending the House of Representatives home until September, hoping that squashes the discourse. Except Trump, hilariously, continues to find new ways to make things much worse.

This week, Trump said he “never had the privilege” of going to Epstein’s island. Odd way to put it, big dog. And just yesterday, aboard Air Force One, Trump seemingly confirmed that an underaged Virginia Giuffre, who previously accused Epstein of sexual exploitation in 2011, was trafficked or, “stolen,” as Trump so eloquently put it, by Epstein from his spa at Mar-A-Lago. Trump is also reportedly mulling pardons for both Ghislaine Maxwell and Sean “Diddy” Combs for some reason??? Oh, also, there’s apparently more security footage out there of Epstein’s cell that isn’t missing a minute.

Throughout all of this chaos, though, there is one group that is clearly still consolidating power: the Project 2025 crowd. Russell Vought is now both the director of the Office of Management and Budget (OMB) and the semi-official head of DOGE. According to the Project 2025 tracker, Vought and his cronies are about halfway through the roadmap for dismantling American democracy they constructed last year. And another author of Project 2025, The Heritage Foundation's Paul Dans, just announced he’s planning to run against Lindsey Graham in South Carolina. Dans, aptly, told NBC News that it would be a “battle for the future of MAGA.”

So what is the future of MAGA? Well, it’s time to say something that would have been unthinkable even three months ago. There is a very good chance that we are watching the MAGA movement decouple from Trump in front of our very eyes. And as delicious as the schadenfreude would be if Trump was impeached by a government controlled by his fellow Republicans, it would actually signal a far more dangerous political reality. The direct result of Project 2025’s architects doing the arithmetic and realizing they no longer need the president’s cult of personality to maintain power. Jettisoning their loud, elderly liability, hoping Peter Thiel’s favorite special boy, Vice President JD Vance, can stabilize things and bring the warring tribes — the podcasters, the venture capitalists, the Christofascists — back into the fold and let them continue their coup. But like I said, I try not to predict the future. It’s just getting hard to shake the feeling that we’re barreling towards a new darker era that will make the Trump years look quaint by comparison.


The following is a paid ad. If you’re interested in advertising, email me at ryan@garbageday.email and let’s talk. Thanks!

Pre-order YEAR ZERO: A CHAPO TRAP HOUSE COMICS ANTHOLOGY

The folks at the Chapo Trap House podcast (posts occasionally cited in this very newsletter) are proud to present their first foray into the world of funnybooks!

Featuring all-new stories from Will Menaker, Felix Beiderman, Amber A'Lee Frost, Chris Wade, and Matt Christman, Year Zero is the first of three oversized books.

Five Scintillating Tales of Madness… crossing time, genres, and good taste. With art from comics superstars Simon Roy, Justin Greenwood, David Cousens, Ken Knudtsen, and Dean Kotz, it’s an anthology book like no other!

Pre-order is ONLY available until August 1, right here.


A Good Post


New Dumb Crypto Scandal Dropped

—by Adam Bumas

(kick.com/adinross)

Shocking news from the crypto world. A theoretically funny memecoin seems to have been a scam the whole time. The latest offender is a Solana-based coin called “360noscope420blazeit”, or just MLG for short. The token’s value zoomed up and back down again within 24 hours earlier this week, but it’s been around since last year. Before the spike, the most attention it got was in January after endorsements by offshore-casino-made-flesh Adin Ross and multiple members of FaZe Clan — once the world’s biggest esports team, now a cautionary tale about internet success, and the same kind of nostalgic punchline as saying, “MLG,” is.

Speaking of cautionary tales, Ross and FaZe’s Richard Bengston (aka FaZe Banks) are now trading blame over the alleged scam. Bengston has stepped down as CEO of the company, but blamed Ross for everything in a leaked message. Ross responded on X “shit sad asf, if you guys think that mlg being rugged was me im sorry to tell u it wasn’t.”

So who was it? Our finest minds are stumped. Banks called in to Ross’ Kick stream on Tuesday, apologizing for his accusation. Both of them agreed that a neutral party — namely, YouTuber Coffeezilla, who’s best known for exposing the Hawk Tuah crypto scam — would be best suited to assign the blame. Either way, I’m pretty sure all this is exactly what Satoshi Nakamoto wanted.


An Internet Without “Kids”

It’s been barely a week since the UK’s Internexit. What was meant to protect children from seeing pornography has devolved into a Byzantine tangle of verification systems blocking users from basic internet services. British users this morning woke up to notifications telling them that if they don’t let Spotify scan their ID it will delete their accounts. You know things are bad when the UK’s closest equivalent to Trump, Nigel Farage, is demanding the whole thing be repealed.

Per Politico, UK regulators are threatening US platforms with criminal charges if they don’t comply with the Online Safety Act. I asked the UK room in the Garbage Day Discord, which was already called Starmer’s Posting Gulag lmao, and they said there isn’t any kind of age verification for Garbage Day. It would have to come from Beehiiv. So, if any UK readers want me to describe what the internet looks like, I’ll do my best. Also, anecdotally, I’ve heard the new restrictions don’t seem to work if you’re on roaming data in the UK.

But it’s not just Great Britain that is trying to carve off a chunk of the internet. Australian regulators want YouTube to block children under 16 from using the platform. The country’s version of the Online Safety Act is already targeting other platforms like Facebook, but had previously said YouTube would be exempt.

And if you’re reading this from outside the UK or Australia and thinking, “so what,” well, here’s your reminder that the internet only really works as a global system: YouTube is rolling out an AI feature that will identify users that are under 18. If the AI incorrectly identifies you as a child, you’ll have to upload your ID to prove you’re an adult.


Everyone’s Being Really Weird About This Random Japanese Woman

(x.com/@eigenrobot)

Alright, so, I wasn’t going to deal with this because it was mainly just a thing for deeply unwell X users, but I’ve seen it make the jump to other platforms. A Japanese model and salarywoman who goes by SAO has gone very very very viral this week. (Everything in this section is a link to X by the way.) Elon Musk is even sharing — thankfully SFW — AI-generated videos of her now. I’m also going to give SAO the benefit of the doubt and say maybe she doesn’t totally clock the baseline level of racism on the platform. Which hopefully explains why she is sharing some “fan” works about her that are super offensive.

If you’re struggling to understand why this random Japanese woman has gone viral — and is getting pitted against Sydney Sweeney — you basically have to understand that a massive chunk of the far right on X are obsessed with anime and obsessed with Japan. And they think that she looks like an anime character and are now in love with her. This is causing them a lot of distress because they’re all extremely racist.


Sigh, Let’s Talk About The Zinc Thing

Here’s another thing I wasn’t going to fully cover, but it, too, has jumped containment. There are a lot of men — typically gay men — that are trading “zinc stacks” on Reddit. This isn’t a totally new thing, but Reddit’s algorithm is pushing this stuff more aggressively right now. I first came across these guys via a couple weight-loss and exercise subreddits over the winter.

There are all kinds of, you know, basic health benefits for taking zinc supplements, of course, but there is another side effect that these guys are very excited about. If you’re British, please send your local MP a copy of your ID before reading the following screenshot:

So yeah, a whole bunch of dudes are taking zinc to do big cums. Now you know. Have a great week, everyone!


A Good First Beat


Did you know Garbage Day has a merch store?

You can check it out here!



P.S. here’s a subtle message from The Dropkick Murphys.

***Any typos in this email are on purpose actually***


Friction and not being touched


The journalist Karen Hao – who published an absolutely fantastic book about OpenAI called “Empire of AI” recently – coined (as far as I know) one of the best terms for describing modern “AI” systems: Everything Machines.

“AI” systems are not framed as specific tools that solve specific problems in specific ways but as a solution in itself: There is nothing “AI” cannot do; if it fails, we just failed it by not prompting it right, or not building large enough data centers, or not waiting another 6 months until these stochastic systems will totally be able to do whatever was needed. Pinky swear.

This trick disconnects the physical and technical realities of the capabilities (and lack thereof) of connectivist “AI” systems built on stochastic correlations between patterns in data from what these systems are narratively positioned to be able to do. LLMs cannot really do people’s jobs – they might be able to kinda do very small parts, especially if quality is not a criterion – but they are always presented as such: Whether it’s by doomers who predict massive unemployment and poverty or by apologists who predict an age of leisure. The Everything Machine can do – as the name implies – everything. Solve everything.

Lately I’ve been thinking about friction a lot. Not the physical force that allows us to walk and all that – even though that also is neat – but cognitive and social friction and its function.

In tech circles friction is seen as bad, everything needs to be frictionless. Every interaction with anything needs to be smooth and uninterrupted. Which usually means the path to you parting with your money/attention needs to be as seamless as possible. It’s the logic of casinos: Don’t let gamblers see natural light or a clock so nothing disturbs the efficient process of moving money from a bunch of people to the casino owner.

But it’s not only nefarious reasons that make people design frictionlessness: Errorless learning, for example, wants to remove the friction that making a mistake and being corrected creates, in the hope of generating better outcomes. (It is no coincidence, though, that B. F. Skinner specifically – known for the “Skinner box”, the conditioning apparatus whose principles a lot of modern game and app design is based on to keep people hooked – is a big intellectual force in that space.) Friction can be annoying. Don’t we all just want things to work?

Sure. But friction is not just “things not working properly”, it can also be read as being touched. Just as crowded spaces create friction by other people being in my way while moving, a process with friction makes me feel other people’s work, their point of view and how it differs from mine, makes me feel their needs and wants. Friction is part of what being in, being part of society is.

The idea of frictionlessness has very narcissistic, “player character” vibes: You don’t experience friction if the whole world is built around you and your needs. When you get whatever you want when you want it. That is the Utopia of Frictionlessness: To never be touched by anyone or anything really. Because being actually touched, being inconvenienced, being emotionally moved, having your mind and perception changed means acknowledging your fellow human beings around you, realizing their differences to you and recognizing their value. It means seeing others to a certain degree as your equal. You might be richer, more influential, but we all have bodies that take up space for example. No matter how rich you are, when we all need to share space everyone will take up some.

But especially if you are rich, you can change the equation. You can pay people to keep others away from you. Keep “your space” protected. Can get your demands met at any time. Can influence politics in your favour – which is how we end up not taxing the super rich, casting our societies into socially, economically and politically destructive inequality. Frictionlessness is individualistic and isolating, about disconnecting from the world in any way that does not cater to your specific need and want and demand.

And this brings me back to “AI” as Everything Machines. Because that narrative (and the technologies that are deployed under that concept) is basically the crystallization of never having to be touched.

If “AI” solves everything, can do everything, you never have to acknowledge anyone else ever again. You never have to think about how we can change our mode of being in this world in a way that is sustainable for this planet and its ecosystem. You never have to think about how to talk to this other person that you need something from.

Modern “AI” systems are sycophantic. Their conversational mode, their mode of interaction is based on frictionlessness. On keeping the conversation going regardless of what’s being said or its meaning or truth even. When interacting with an “AI” system you get to feel like the only person that matters: Everything caters to you and your whims. This has been described as a technology that allows everyone to have the experience of having servants or worse without feeling guilty about it – and there’s something to that analysis. But it’s also about you disconnecting from the world and the people in it.

We live in what is often called the “loneliness epidemic“. People increasingly feel isolated and alone. While the term gained traction at the start of the COVID-19 pandemic, when social distancing rules enforced that kind of disconnect, the seed for all of it was planted way before: The economic situation forces more and more people to work longer and longer hours, leaving less time for activities with others (think sports clubs or whatever), activities that also keep increasing in cost. COVID just poured gasoline on that fire.

One way that some people combat this feeling of loneliness is by increasingly talking to chatbots. Digital communication has for a long time been a way for people with a small social circle to feel connected to someone; many have found large parts of what they call their friends on reddit, Discord or wherever. “AI” has taken that and made it a whole different thing: You can get a chatbot that is there just for you, that reacts to whatever you want. That never confronts you with something you don’t want to hear or that challenges you. A frictionless relationship with … yourself, if we want to frame it nicely?

And in the end it feels like that is the narrative promise of “AI”: To never ever have to be touched by anyone. Not people you right now might need to employ to keep your business running. Not your neighbours who might want to remove cars from the streets when you want to park in front of your house just to get in there quicker. Not the environment itself that keeps showing you the consequences of your actions as a member of this species. Nothing.

The Utopia of “AI” is the Dystopia of never being touched by anything.


ChatGPT is going to kill God


I hate generative AI. I hate how it’s destroying writing pedagogy and giving students even more excuses not to read (because they can just read a “summary”). I hate how whiny and defensive AI users are about the pathetic little ways they’ve integrated it into their lives. If I could push a button and permanently delete it from existence, I would. If I could go back in time and prevent it from being invented, I would.

The reason I hate it is not just that its output is mediocre bullshit. It’s that it is an active attack on everything I value — literacy, analysis, thought. I find the thought that I would use ChatGPT actively degrading and humiliating. I have never touched it, not even as a joke, not even to prove it sucks, not even — as Beatrice has been doing — to see what it’s showing my students. I admit that this is somewhat irresponsible of me, to indulge my revulsion in this way. This technology — which is unimaginably expensive and resource-intensive and which has not developed anything approaching a plausible profit model — is of course inevitable, “here to stay.” Nothing could interrupt the progress of a technology that requires hundreds of billions of dollars to be shovelled into the furnace year after year after year without returning anything. (Can you tell I’m angry? Can you tell I’m sick of hearing these thoughtless clichés?)

But one must eventually face it. One must eventually give it some thought. Beatrice has been helping, in her Substack posts and in our continual chat thread. She also pointed me to the work of Jan Mullen on AI as externalized attention, which takes the kind of insanely overambitious long view of AI that I find appealing as an aficionado of political theology-style genealogies. Mullen compares the rise of AI to the rise of literacy and notes that the latter was, for most of its history, primarily a tool of state control. In a wide-ranging interview, Mullen wonders aloud whether the form of control AI is creating will line up with our idea of “the state” — but is absolutely clear that the purpose of generative AI is to manipulate and control us, to take away our power and agency, the power and agency that humanity somehow managed to wring out of the technologies of literacy.

I was particularly interested in a moment where Mullen postulates that the new model of control “will be distinctly post-literate — and, as a result, post-legal.” This triggered my political theology instincts and I immediately asked: does that mean it will also be post-monotheistic? Long-time readers know I love Jan Assmann’s theory of monotheism. For Assmann, what monotheism does is not primarily or most importantly to reduce the number of gods, but to introduce a new kind of god — an exclusive God, one in relation to whom all other gods are false. A crucial technology for stabilizing the claims of this exclusive God is of course the written scriptural text, which remains a durable deposit even as day-to-day religious practice inevitably drifts away from the strict demands of the original revelation. The monotheistic God, in contrast to previous pantheons with their loosey-goosey translatability and porosity and their ever-shifting body of mythical tales, is a God of the letter because he is a God of law.

So if ChatGPT destroys literacy and law, then ChatGPT is going to kill God. By this I don’t mean that the bearded guy in the sky is going to be found dead of a gunshot wound to the chest, but that monotheistic religion as we have traditionally understood it will not be able to function in a post-literacy regime.

Note that I am not claiming that ChatGPT will take away people’s ability to read in the sense of deciphering letters — obviously its functioning depends on that. But it is killing the notion of a stable, permanent, authoritative text and undercutting the skills needed to engage with that kind of text. Note also that I am well aware that the vast majority of monotheistic believers throughout history were illiterate (in either the “can’t decipher letters” or “can’t make sense of a complex text” sense). But the elite leaders definitely were fully literate, and in fact that was the source of their authority. One cannot say the same of the type of religious leaders who are most thriving in the meme-ified, Trumpified bastard child of Christianity that sets the tone for American religious practice today. Compared to even a generation ago, literal knowledge of what the Bible says at all has been radically set aside. Some combination of personal charisma and “vibes” — above all, opposition to what they imagine progressivism to be — are the source of authority, leading to obvious absurdities like the rejection of compassion as a Christian virtue.

I am a harsh critic of the dumbed down “seeker-sensitive” model of Christianity I was raised in. But in comparison with what passes for Christian teaching today, it is intellectually sophisticated and morally demanding. This is not to say it was simply “better” (after all, it paved the way for what’s happening now) or that the answer is to get back to the Bible. It’s just to mark how far we’ve fallen. What seemed like a vacuous, popified version of Christianity at the time now seems like a robust theological ethos. The haphazard methods of “Bible study” now seem fit for a graduate seminar.

Speaking of the evangelical milieu, I’ve always resisted Luhrmann’s thesis (in When God Talks Back) that evangelicals cultivate an internal voice that they identify with God. When I was growing up, I didn’t know what people meant when they said that Jesus was their best friend or that God was telling them to do something — and I assumed they didn’t know either, that it was just a weird kind of in-group signalling. For my part, Jesus was not my best friend and God didn’t tell me even one single thing. I was all alone up there in my head, always, and I assumed that everyone else was, too. The thought that everyone I grew up around was suffering from a low-grade self-induced psychosis is difficult to cope with.

Now that people are turning to ChatGPT for spiritual insight, though, I wonder if I finally have to admit it. Obviously here again we are dealing with what seems like a quantitative change — people are using the machine to shuffle religious clichés, where previously they just half-consciously did it themselves. But the qualitative difference is that the insights of “Buddy Christ” could always be corrected against the unchanging text of Scripture. When there is no longer an external anchor like that, when the divine revelation is “customized” for each and every reader, something has changed. Again, this is not to say that what happens to be in the Bible is necessarily “better” than any given ChatGPT transcript — presumably it’s often worse. But a point of leverage has been lost. Counterargument is no longer possible in the same way. And insofar as that point of leverage, that external source of authority, was how “God” functioned in traditional monotheism, that means God is dead.

The new regime appears to be broadly polytheistic. There is of course always the God of the uncriticizable self, who behaves with the same irritable intolerance as the monotheistic God in the face of even the faintest suggestion that one might consider behaving or thinking differently. But there are many other sites of veneration, an overlapping consensus of podcast hosts and memesters and conspiracists that each believer can mix and match to drive themselves insane in the most relevant and appealing way. Joe Rogan belongs to this pantheon, along with Jordan Peterson and any number of other luminaries whose endlessly prattling voices people love more than their own families. Donald Trump is of course a very powerful god in this polytheistic milieu, but as the boos to his vaccine advocacy and the pushback to his Epstein caginess show, he is not all-powerful, his word is not quite law. And the fact that Trump’s words lack all self-consistency means that there are many Trumps as well, that Trump can be whatever his follower imagines him to be.

I cannot see what the believers see when they look at Trump. All I can see is the most degenerate loser ever to live, the most despicable piece of shit imaginable. Maybe this is of a piece with my revulsion at ChatGPT — and indeed, my failure to develop a personal relationship with Jesus Christ. I’m too hung up on facts and reality and meaning and consistency. I’m too literal. That’s why I am unsuited to the emerging world, why I reject it more forcefully than it would ever even bother to reject me. I’m just too literal.




