
UFC and the beating heart of Trumpism



The Manosphere’s Victory Lap

There’s been a lot of ink spilled over the last few months attempting to explain what exactly happened to America. Possibly even more than the last time this happened, 10 years ago. Because unlike 2016, it doesn’t feel like fascism is some kind of invasive force seeping in from the dark corners of the web; now, instead, it has become the air we breathe, like cultural microplastics. 4chan slang, red pill ideology, and Substacker race science have become so prevalent that even when you try to avoid it, your behavior is still defined in opposition to it.

And while there are a myriad of ways this amorphous world of hypermasculine fascism arose, there is one chief architect. A man who has spent years connecting the various nodes of comedy, sports, influencers, platforms, and politicians that power it. And that man is Dana White.

White, the president and CEO of Ultimate Fighting Championship (UFC) and, as of January, a Meta board member, created the new masculinity underpinning Trump’s second administration. And, over the weekend, all of the spheres of influence that it touches came together to welcome podcasters and alleged human traffickers Andrew and Tristan Tate to Las Vegas. The event, UFC 313, was a victory lap for the newly energized manosphere. A showy display of power meant to prove that awful men never have to face consequences ever again.

(Photo by Ian Maule/Getty Images)

White bought UFC in 2001 after being, he claims, chased out of Boston by mobster Whitey Bulger. (Yeah, you and everyone’s uncle, Dana. Every fucking meathead in Boston born before 1985 tells some version of this same story). That’s when he hired podcaster Joe Rogan to be a commentator for UFC matches. That’s also when White met President Donald Trump, who, at the time, was hosting UFC matches at his venues in Atlantic City and Las Vegas. White would go on to officially endorse Trump during the 2016 Republican National Convention, telling the crowd, “Donald was the first guy that recognized the potential that we saw in the UFC and encouraged us to build our business.” It’s been memory-holed, but Republicans weren’t actually very excited about professional mixed martial arts at the time. Sen. John McCain once referred to it as “human cockfighting” lol.

Both White and Rogan got more vocal in their support for Trump during his first administration. Rogan always treated Trump as more of a fun character, while White told The Hill in 2018 that he and Trump had become even closer since Trump entered the White House. But the real turning point was during the pandemic. Which is when White connected with a team of YouTubers that may have been more instrumental than even Rogan in helping Trump get elected a second time.

The Nelk Boys are a group of American and Canadian YouTubers who started out making prank videos. In 2015, a few of them were arrested after telling cops that they had coke in their car (it was Coca-Cola). Around 2020, White was introduced to the Nelk Boys channel through his son, Dana Jr. During the pandemic, the Nelk Boys were flown out to UFC’s Fight Island in Abu Dhabi. And right before the election that year, the Nelk Boys were invited onto Air Force One to shoot a video with Trump. Trump also went on their podcast Full Send after the election, but the episode was removed by YouTube because he couldn’t stop ranting about election interference. In 2022, Elon Musk made an appearance on Full Send and, in 2023, Trump did another interview with them.

Last October, the Nelk Boys were instrumental in building a youth coalition for Trump’s campaign. They launched a voter registration program, threw a music festival, and bought ads on other bro-y podcasts, like Kill Tony, which records in Rogan’s comedy club, This Past Weekend w/ Theo Von, and BS w/ Jake Paul. If you woke up on November 6th and wondered where the hell all these guys came from, the answer is they’re very popular with young men on YouTube and are getting very rich from supporting Trump and, most importantly, supporting UFC.

Interestingly enough, White admitted in November that he kind of regrets going all in on Trump last year, telling The New Yorker, “I want nothing to do with this shit. It’s gross. It’s disgusting.” He’s happy Trump won, but it seems like stumping was a bit more involved than he expected. But I also think it’s an important detail here. The end goal for White has always been about enriching himself and UFC. One of the more insightful things I’ve read about White’s motives was actually a Reddit comment from a few years ago in a thread asking what the deal was with UFC and the Nelk Boys. “I think it's a strategic relationship on Dana's part (and the Nelk boys too, probably),” the user wrote. “He's knows they're extremely popular with young men and Dana wants to make sure UFC doesn't go the way of boxing; being viewed primarily by middle-aged/older men.”

Which is how White ended up personally greeting the Tate brothers in Vegas last weekend. Apparently Mario Lopez is a fan of the Tates as well; he was filmed saying hi. Also in attendance were Trump-appointed FBI Director Kash Patel, who wants to hire the UFC to train agents, and Mel Gibson. The two fistbumped at one point during the match.

As with every other villain in Trump World, I think it’s important not to give White too much credit here. He’s not some kind of mastermind, but he has been instrumental in changing American masculinity, making it inseparable from conservatism and, of course, inseparable from the UFC brand. And now he and every other weird muscle man who comes to UFC fights are all aligned in their hatred of women and deep desire to feel masculine and powerful. A sea of, usually, very bald men in tight shirts who want to hurt the world and be celebrated for it. But if you drill further down into their ideology you’ll also find the same thing every time: someone who is trying to get rich, has failed at everything they’ve tried, and realized that manipulating sad internet men was the easiest way to do it.




The Last Decade Of American Politics Summed Up In One Post


An Elon Musk Conspiracy Theory Possibly Confirmed

Vivian Wilson, Elon Musk’s transgender daughter, seemingly confirmed a theory that’s been floating around for the last few weeks about Musk. “My assigned sex at birth was a commodity that was bought and paid for. So when I was feminine as a child and then turned out to be transgender, I was going against the product that was sold,” she wrote on Threads yesterday. “That expectation of masculinity that I had to rebel against all my life was a monetary transaction.”

If you haven’t been following this, a graphic has been going around claiming that all of Musk’s known children were assigned male at birth and that Musk was likely paying for sex-selective in vitro fertilization to ensure that he only had male children. Which would explain how quickly his politics shifted when Wilson came out as trans in 2020. He has since disowned Wilson and has told reporters he considers her dead.


Bernie’s On Tour

Sen. Bernie Sanders did what every single — and I mean all of them — Democrat should have done two months ago, and hit the road on a national campaign against the Trump administration and DOGE. He is, so far, the only Democrat doing it. Though, there’s some chatter that Rep. Alexandria Ocasio-Cortez will be joining him soon and organizing her own events. A lot of readers have been emailing me asking what exactly the Democrats can do and, well, this is as good of an idea as any, frankly. The internet is flooded with fascist garbage (see above), so they should go outside, connect with actual people, and, in the process, make a lot of content they can flood the zone back with.

Also, click here for a wild video of a senior citizen that showed up to watch Sanders speak rocking a Hasan Piker shirt. Bands like The Armed and Laura Jane Grace have been opening for Sanders, which is also pretty cool. But it’s not just Sanders and AOC that are finally wrapping their heads around the current moment.

Majority Report host Sam Seder went on Jubilee, that YouTube channel that makes conservatives and liberals argue for internet traffic, and, honestly, knocked it out of the park. The video, in which he debates a bunch of young Republicans, is hard to watch, but I think he did a pretty good job. Definitely better than how Charlie Kirk did on his episode.

Ultimately, though, the Democrats have one real job right now. As TV writer Grace Freud wrote on X last week, “I have a new theory called The Guy theory of politics. No democrat is The Guy right now. They have no The Guy. You need a The Guy to even possibly confront Trump who has insane levels of The Guy-ness not seen since 2008 Obama.” Democrats need to find a Guy (not gender specific). And they need to find one fast.


JD Vance Tries To Get In On The Meme (And Fails)

Vice President JD Vance tried to get in on the rare vances meme. He, of course, shared one where he doesn’t look like a grotesque monster or gross little boy, opting, instead, to share one where he’s been photoshopped onto Leonardo DiCaprio. Elon Musk liked it, I guess. Don’t worry, though, someone fixed it. Either way, the meme’s probably over now.


A Twitch-Streaming AI Keeps Trying To Kill Itself

Claude Plays Pokémon is a Twitch stream where the AI model Claude, developed by Anthropic, tries to play through Pokémon Red and Blue. And while it’s not as fun or exciting as Twitch Plays Pokémon, arguably one of the best things to ever happen online, it has been an interesting experiment.

The AI was instructed to name the Pokémon it caught and immediately started being more protective of them in battle once it had named them. It’s also very bad at navigating the map. It spent three days stuck in Cerulean City because it couldn’t figure out the hedges. And it’s currently stuck again in Mt. Moon. It’s been there a week and is now crashing out hard, implementing what it’s calling “the blackout strategy” to kill itself in-game, possibly as a way to get out of the dungeon.


It’s All Kicking Off On The Balatro Subreddit

The subreddit for the wacky poker game Balatro (which recently consumed my entire life) is in full meltdown mode right now. There are, apparently, a few issues at play. The first, and most important, is that the subreddit is deeply divided over AI art. Some users want to be able to post it, others do not.

The pseudonymous creator of the game, LocalThunk, for the record, is not a fan of AI art. They recently wrote on Bluesky, “I don’t condone AI-generated ‘art’.” One of the users on the subreddit who wanted to allow AI art, however, was a mod named u/DrTankHead, who also runs a Balatro erotica subreddit. Yeah, I know.

The horny AI evangelist mod responded to the controversy over on the Balatro erotica subreddit they’re still modding and wrote a big long thing about why they wanted AI art to be allowed so badly. “It is NOT my goal to make this place AI centric, I’m thinking it'll be a "day of the week" thing,” they wrote. “Where non-NSFW art that is made with AI can be posted. My goal is to be objective and keep the space safe. (Safe-For-Work Sunday's?)” The explanation didn’t really clear anything up and, honestly, makes all of this even weirder, but oh well.

It’s best to let this just sort of wash over you and never think about it again. But while we’re talking about Balatro, LocalThunk published a blog post this week all about how it was created, which is a great read.


Nunchuck Tyler

@nunchucktylor

You just bumped into Nunchuck Tylor!? #fyp #fypシ #fypage #fypシ゚viral #viralvideo #viraltiktok #tiktok





P.S. here’s a good post about pasta.

***Any typos in this email are on purpose actually***


Independent Processor Design: European RISC-V Project Launched

European processor ambitions have so far relied on ARM. A new project aims to develop RISC-V cores for processors and accelerators. (Processors, AI)
tante: I think it’s a good step that the EU is now putting a stronger focus on RISC-V.

Network State Unveils Push for Corporate Dystopia Cities


Quick post this morning to share an important weekend read.


The Network State cult is actively lobbying Congress to legalize new kinds of corporate-controlled cities where normal laws don’t apply.

From Caroline Haskins and Vittoria Elliott at Wired:

Several groups representing “startup nations”—tech hubs exempt from the taxes and regulations that apply to the countries where they are located—are drafting Congressional legislation to create “freedom cities” in the US that would be similarly free from certain federal laws, WIRED has learned.

According to interviews and presentations viewed by WIRED, the goal of these cities would be to have places where anti-aging clinical trials, nuclear reactor startups, and building construction can proceed without having to get prior approval from agencies like the Food and Drug Administration, the Nuclear Regulatory Commission, and the Environmental Protection Agency.

Though the story makes no mention of the Network State movement, this push for legislation aligns fully with the Network State goal of creating “startup nations” or “charter cities” ruled by tech corporations. Trey Goff, chief of staff for Próspera – a Network State city in Honduras backed by Sam Altman, Marc Andreessen and Peter Thiel – is featured prominently in the piece:

According to Goff, Freedom Cities Coalition has briefed White House officials on three options for creating freedom cities. One is through “interstate compacts.” In this scenario, two or more states could set aside territories with shared tax and regulation policies, with some state-specific carve-outs. Under existing law, these compacts can’t be revoked, though they can be dissolved under certain circumstances.

If an interstate compact is approved by Congress, it becomes valid under federal law. Goff says the coalition is considering Congressional legislation that would give “advanced consent” to any freedom city compacts. That way, Congress wouldn’t need to approve each individual city.

Trump’s 2024 campaign proposed something called “freedom cities,” a lightly rebranded version of the Network State. Yet, most news outlets mentioned the idea without providing any real explanation of the concept.

Let’s be clear: These cities will be controlled entirely by tech billionaires and corporations, operating outside of U.S. laws. As this story comes into focus, there is no reason why anyone should accept the Orwellian term “freedom city” to describe zones that will actually be devoid of the laws, rights, freedoms and protections of normal American law. The term is an overt political manipulation that should be rejected by media outlets going forward, as it serves only the interest of propaganda.

Fascist Cities would be more accurate, though I’m sure U.S. newsrooms can find a milder term. The Wired story quotes me pushing back on the false freedom framing:

These are going to be cities without democracy. These are going to be cities without workers' rights. These are going to be cities where the owners of the city, the corporations, the billionaires have all the power and everyone else has no power. That's what's so attractive about these sovereign entities to these people, is that they will actually be anti-freedom cities.

I encourage you to read the entire piece and share it with everyone you know. Subscribe if you must! Wired is the only major outlet covering this story. Your subscription is a vote for more! Click below to read:

‘Startup Nation’ Groups Say They’re Meeting Trump Officials to Push for Deregulated ‘Freedom Cities’
The architects of projects like Próspera are drafting legislation to create US cities that would be free from federal regulations.

More Network State Reading

Last August, I wrote about Trump's plan to build new territories and how it reflected the goals of the Network State cult:

Trump’s weird new ‘cities’ and the Network State cult
Why do Trump, Thiel and Andreessen want to build new cities?

The Network State's dream of creating a world without democracy was also the subject of my five-part series in The New Republic last year.

With the Network State pushing to get a law passed in Washington, how much longer can major outlets like the New York Times and the Washington Post ignore this story?


The A.I. backlash backlash


Greetings from Read Max HQ! In this week’s newsletter, a consideration of the current state of A.I. discourse.

Some housekeeping: This week I appeared on Vox’s “Today, Explained” podcast with Noel King to talk about the Zizians. Also, Jamelle Bouie recently posted video of my appearance talking about Air Force One on the “Unclear and Present Danger” podcast with him and John Ganz.

A reminder: Read Max, and its intermittently thoughtful and intelligent coverage of tech, politics, and culture, depends on paying subscribers to survive. I’m able to make one newsletter free every week thanks to the small percentage of total readers who support the work. If you like this newsletter--if you find it entertaining, educational, informative, or at least “not enervating”--consider becoming a paid subscriber, and helping subsidize the freeloaders who might find some small benefit themselves. At $5/month, it’ll only cost you about a beer every four weeks, or ten beers a year.


This tweet from Times reporter Kevin Roose (quote-tweeting Matt Yglesias, who’s screen-shotting an Ezra Klein column) crossed my desk on Tuesday:

I don’t mean to pick on Roose, but the sentiments expressed here--both in the tweet and the quoted paragraphs--strike me as good examples of a new development in endless and accursed Online A.I. Discourse: the backlash to the A.I. backlash.

Since the release of ChatGPT in 2022, A.I. discourse has gone through at least two distinct cycles, in terms of how it’s been talked about and understood on social media and, to a lesser extent, in the popular press. First came the hype cycle, which lasted through most of 2023, during which the loudest voices were prophesying near-term chaos and global societal transformation in the face of unstoppable artificial intelligence, and Twitter was dominated by LinkedIn-style A.I. hustle-preneur morons claiming that “AI is going to nuke the bottom third of performers in jobs done on computers — even creative ones — in the next 24 months.”

When the much-hyped total economic transformation failed to arrive in the shortest of the promised timeframes--and when too many of the highly visible, actually existing A.I. implementations turned out to be worse-than-useless dogshit--a backlash cycle emerged, and the overwhelming A.I. hype on social media was matched by a strong anti-A.I. sentiment. For many people, A.I. became symbolic of a wayward and over-powerful tech industry, and anyone who admitted to or encouraged the use of A.I., especially in creative fields, was subject to intense criticism.

But that backlash cycle is now facing the early stages of a backlash of its own. Last December, the prominent tech columnist (and co-host, with Roose, of the Hard Fork podcast) Casey Newton wrote a piece called “The phony comforts of AI skepticism,” suggesting that many A.I. critics and skeptics were willfully ignoring the advancing power and importance of A.I. systems:

there is an enormous disconnect between external critics of AI, who post about it on social networks and in their newsletters, and internal critics of AI — people who work on it directly, either for companies like OpenAI or Anthropic or researchers who study it. […]

There is a… rarely stated conclusion… which goes something like: Therefore, superintelligence is unlikely to arrive any time soon, if ever. LLMs are a Silicon Valley folly like so many others, and will soon go the way of NFTs and DAOs. […]

This is the ongoing blind spot of the “AI is fake and sucks” crowd. This is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.

In January, Nate Silver wrote a somewhat similar post, “It's time to come to grips with AI,” which more specifically takes “the left” to task for its A.I. skepticism:

The problem is that the left (as opposed to the technocratic center) isn’t holding up its end of the bargain when it comes to AI. It is totally out to lunch on the issue.

For the real leaders of the left, the issue simply isn’t on the radar. Bernie Sanders has only tweeted about “AI” once in passing, and AOC’s concerns have been limited to one tweet about “deepfakes.”

Meanwhile, the vibe from lefty public intellectuals has been smug dismissiveness.

And, this week, Klein’s interview with Ben Buchanan, Biden’s special adviser for artificial intelligence--which arrives with the headline “The Government Knows A.G.I. Is Coming.” Klein’s not as direct as Newton or Silver, but he’s obviously aiming his introduction to the interview at what Newton calls “the ‘A.I. is fake and sucks’ crowd”:

If you’ve been telling yourself this isn’t coming, I really think you need to question that. It’s not web3. It’s not vaporware. A lot of what we’re talking about is already here, right now.

I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.

And while there is so much else going on in the world to cover, I do think there’s a good chance that, when we look back on this era in human history, A.I. will have been the thing that matters.

The substance of the anti-backlash position at its broadest is something like: Actually, A.I. is quite powerful and useful, and even if you hate that, lots of money and resources are being expended on it, so it’s important to take it seriously rather than dismissing it out of hand.

Who, precisely, these columns are responding to is an open question. The objects of accusation are somewhat vague: Newton mentions Gary Marcus, the cognitive scientist and prolific blogger, but then acknowledges that Marcus “doesn’t say that AI is fake and sucks, exactly.” Silver seems to be responding to two tweets from Noah Kulwin and Ken Klippenstein. Klein doesn’t specify anyone at all. The ripostes are not so much about the many rigorous A.I.-critical voices that have emerged--taxonomized in this Benjamin Riley post, which serves as an excellent guide to some of the sharpest and smartest people currently writing on the subject--and more about an ambient, dismissive anti-A.I. sensibility that’s emerged on social media, and that animates, e.g., the spiteful banter that leads Roose to say he suffers a “social penalty for taking AGI progress seriously.”

But at the same time I also don’t think that this backlash-to-the-backlash is limited to Big Accounts complaining about their Twitter mentions, either. Speaking anecdotally, I see more pushback than I used to against some of the more vehement A.I. critics, and defenses of A.I. usage from people who are otherwise quite critical of the tech industry. I wouldn’t say we’re in a new hype cycle--yet--but it’s clear that the discursive ground has shifted slightly in favor of A.I.

Why is the attitude changing? Some proponents of a new hype cycle broadly if vaguely invoke vibes and rumors and words from sources, as though OpenAI is just a few model-weight tweaks away from releasing HAL-9000 (Klein, in his column this week: “Person after person… has been coming to me saying… We’re about to get to artificial general intelligence”; Roose, a few months ago: “it is hard to impress… how much the vibe has shifted here… twice this month I’ve been asked at a party, ‘are you feeling the AGI?’”). The problem with using “A.I. insiders” as a guide to A.I. progress is that these insiders have been whispering stuff like this for years, and at some point I think we need to admit that even the smartest people in the industry don’t have much credibility when it comes to timelines.

But it’s not all whispers at parties. There are public developments that I think help explain some of the renewed enthusiasm for A.I., and pushback against aggressive skepticism. Take, for example, the new vogue for “Deep Research” and visible chain-of-thought models. OpenAI, Google, and xAI all now have products that create authoritative “reports” based on internet searches, and whose process can be made legible to the user as a step-by-step “chain of thought.” This format can be as confidence-inspiring in its own way as the “human-like chatbot” format of the original ChatGPT was. As Arvind Narayanan puts it: “We're seeing the same form-versus-function confusion with Deep Research now that we saw in the early days of chatbots. Back then people were wowed by chatbots' conversational abilities and mimicry of linguistic fluency, and hence underappreciated the limitations. Now people are wowed by the ‘report’ format of Deep Research outputs and its mimicry of authoritativeness, and underappreciate the difficulty of fact-checking 10-page papers.”

This length and stylistic confidence makes it easy to convince yourself that LLMs are still improving by leaps and bounds, even as evidence gathers that progress on improving capability is slowing. Klein even mentions Deep Research in his interview: “I asked Deep Research to do this report on the tensions between the Madisonian Constitutional system and the highly polarized nationalized parties we now have. And what it produced in a matter of minutes was at least the median of what any of the teams I’ve worked with on this could produce within days.” But since the debut of ChatGPT it’s been clear that the format and “character” in which an LLM generates text has an enormous effect on how users understand and trust the output. As Simon Willison and Benedict Evans--two bloggers who are not kneejerk A.I. critics--have both pointed out, for all of its strengths, Deep Research has the same ineradicable flaws as its LLM-app predecessors, masked by its authoritative tone. “It's absolutely worth spending time exploring,” Willison writes, “but be careful not to fall for its surface-level charm.”

But if Deep Research is providing some hope for forward momentum, I think a broader--if somewhat less specific or sexy--development has softened the ground a bit for a renewed A.I. hype: the straightforward and observable facts that generative A.I. output has gotten much more reliable since 2022, and more people have found ways to incorporate A.I. into their work in ways that seem useful to them. The latest models are more dependable than the GPT 3-era models that were many people’s first interaction with LLM chatbots--and, crucially, many of them are able to provide citations and sources that allow you to double-check the work. I don’t want to overstate the trustworthiness of any text produced by these apps, but it’s no longer necessarily the case that, say, asking an LLM a question of fact is strictly worse than Googling it, and it’s much easier to double-check the answers and understand its sourcing than it was just a year ago.

These fairly obvious improvements go hand in hand with a growing number of people, without any particular ideological or financial commitment to A.I., who’ve found ways to integrate it into their work, coming up with more clearly productive uses for the models than the useless, nefarious, or obviously bogus suggestions posed by the A.I. influencers who annoyingly dominated the first hype wave. I don’t use A.I. much for writing or research (old habits die hard), but I’ve found it extremely useful for creating and cleaning up audio transcriptions, or for finding tip-of-my-tongue words and phrases. (It’s possible that all these people, myself included, are fooling themselves about the amount of time they’re saving, or about the actual quality of the work being produced--but what matters in the question of hype and backlash is whether people feel as though the A.I. is useful.)

None of which is to say, of course, that A.I. is universally useful, harmless, or appropriate. Aggressive A.I. integration into existing products like Google Search and Apple notifications over the last couple years has mostly been a highly public, who-asked-for-this? dud, and probably the most widespread single use for ChatGPT has been cheating on homework. But it’s much harder to make the case that A.I. products are categorically useless and damaging when so many people seem able to use them to adequately supplement tasks like writing code, doing research, or translating or proofing texts, with no apparent harm done.

And even if it undermines more aggressive claims about the systems’ uselessness or fraudulence, I tend to think that more widespread consumer adoption of A.I. tools is, on balance, a good development for A.I. skepticism. In my own capacity as an A.I. skeptic I’m desperate for A.I. to be demystified, and shed of its worrying reputation as a one-size-fits-all solution to problems that range from technical to societal. I think--I hope, at any rate--that widespread use may help accomplish that demystification: The more people use A.I. with some regularity, the more broad familiarity they’ll develop with its specific and consistent shortcomings; the more people understand how LLMs work from practical experience, the more they can recognize A.I. as an impressive but flawed technology, rather than as some inevitable and unchallengeable godhead.

In many ways I’m sympathetic to the backlash-to-the-backlash. I often find myself annoyed when I see smug wholesale dismissiveness of A.I. systems as a whole on Bluesky. As a general rule, I think it serves critics well to be curious and open when it comes to the object of criticism. But where I draw the line on A.I. openness, personally, is “artificial general intelligence.”

I don’t like this phrase, and I wish journalists would stop using it. I’m not sure it’s well understood outside the world of people following this stuff that “A.G.I.” doesn’t mean anything specific. It tends to be casually thrown around as though it refers to a widely understood technical benchmark, but there’s no universal test or widely accepted, non-tautological definition. (“A canonical definition of A.G.I.,” Klein’s interview subject Buchanan says, “is a system capable of doing almost any cognitive task a human can do.” 🆗.) Nor, I think we should be clear, could there be: Unfortunately for the haters, “intelligence”--or, now, “general intelligence”--is not an empirical quality or a threshold that can be achieved but a socio-cultural concept whose definition and capaciousness emerges at any given point in time from complex and overlapping scientific, cultural, and political processes.

None of which, of course, has stopped its wide adoption to mean some--really any--kind of important A.I. achievement. In practice it’s used to refer to dozens of distinct scenarios from “apocalyptic science-fiction singularity” to “particularly powerful new LLM” to “hypothetical future point of wholesale labor-market transformation.” This collapse tends to obfuscate more than it clarifies: When you say “A.G.I. is coming soon,” do you mean we’re about to flip the switch and birth a super-intelligence? Or do you mean that computers are going to do email jobs? Or do you just mean that pretty soon A.I. companies will stop losing money?

I’m not even joking about that, by the way--among the only verifiable definitions of “A.G.I.” out there is a contractual one between Microsoft and OpenAI, currently bound in a close partnership legally breakable upon development of A.G.I. According to The Information, the agreement between the companies declares that OpenAI will have achieved A.G.I.

only when OpenAI has developed systems that have the “capability” to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.

The documents still leave some things open to interpretation. They say that the “declaration of sufficient AGI” is in the “reasonable discretion” of the board of OpenAI. And the companies may still have differing views on whether the existing technology has a massive, profit-generating capability.

There is something very funny and Silicon Valley about A.G.I.’s meaning having shifted from a groovy “rapturous technosingularity” to a lawyerly “$100b in profits,” but it’s also worth noting that at least one person might benefit in quite a direct and literal way from the increasingly broad and casual use of “A.G.I.,” in the press as well as at Silicon Valley parties: Sam Altman.

But I think what really gets to me about the overuse of “A.G.I.” is not so much the vagueness or the fact that Sam Altman might profit from it, but the teleology it imposes--the way its use compresses the already and always ongoing process of A.I. progress, development, and disruption into a single point of inflection. Instead of treating A.I. like a normal technological development whose emergence and effect is conditioned by the systems and structures already in place, we’re left anxiously awaiting a kind of eschatological product announcement--a deadline before which all we can do is urgently and at all costs prepare, a messianic event after which we will no longer be in control. This kind of urgency and anxiety serves no one well, except for the people who’ve found a way to profit from cultivating it.

1. It’s an off-the-cuff interview and I wouldn’t want to make too much of it, but I thought it was interesting that Klein’s immediate illustrative example didn’t involve a way that A.I. might replace him, but a way it might replace people who work for him. Whatever else you can say about this technology, it has a way of making people think like bosses.



tante
3 days ago
"The problem with using “A.I. insiders” as a guide to A.I. progress is that these insiders have been whispering stuff like this for years, and at some point I think we need to admit that even the smartest people in the industry don’t have much credibility when it comes to timelines."
Berlin/Germany

Infrastructure special fund: IT and digital industry demands investment


The CDU/CSU and SPD are planning a €500 billion special infrastructure fund. The German IT and digital industry wants its piece of the pie too.

tante
7 days ago
A “Deutschland Stack” developed with the infrastructure special fund. These are exactly the kinds of narratives you get from affirmatively embracing “digital sovereignty.” Billions will be sunk into nonsense here.
Berlin/Germany

Hamburg election: “It hurts the AfD when migration plays only a subordinate role”

In the federal election the AfD won 20.8 percent, but in Hamburg the far-right party suffered an electoral defeat. Hamburg political scientist Kai-Uwe Schnapp explains why.

tante
9 days ago
“But migration as a problem is the AfD’s central issue, which is why it actually hurts the party when it plays only a subordinate role.”
Berlin/Germany