Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

AI will never solve this


Greetings all — hell of a week here. As always, thanks to everyone who reads, supports, and shares this stuff. Paid subscribers, you are the very best. Gonna try a thing where I put the week’s tech and labor headlines and some additional commentary below a paywall, who knows. So sign up or chip in here if you get value out of this work, and cheers to all.


It was one of those weeks laden with so many compounding crises that you don’t really know where to start, so I guess I’ll start with the hurricane that looked so ominous in the modeling forecasts that it made a career weatherman weep on the air. Hurricane Milton started gathering strength just as the extent of the wreckage of Hurricane Helene—which left over two hundred dead and is now the second deadliest hurricane to hit the United States in the last 50 years, after Katrina—was beginning to be understood.

Both storms stunned meteorologists with their ferocity—Helene with the *40 trillion gallons* of water it dumped, Milton with its rapid growth and intensity. As the Orlando-based meteorologist Noah Bergren wrote in a viral X post, “This is nothing short of astronomical… This hurricane is nearing the mathematical limit of what Earth's atmosphere over this ocean water can produce.”
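For scale, a quick back-of-the-envelope conversion (my arithmetic, not a figure from the article):

$$40 \times 10^{12}\ \text{gal} \times 3.785\ \tfrac{\text{L}}{\text{gal}} \approx 1.5 \times 10^{14}\ \text{L} \approx 151\ \text{km}^3$$

That is roughly the entire volume of Lake Tahoe, dropped out of the sky over a few days.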

Of course, we have climate change to thank for the warmer, storm-friendlier conditions that fueled both monster storms, these deadly juggernauts affirming that we are living in an age of crisis. So it was jarring, if not particularly surprising, to hear former Google CEO Eric Schmidt argue that we should spare no expense in ramping up and running energy-intensive AI systems, since “we’re not going to hit the climate goals anyway.” He said this while thousands of hurricane survivors were mourning the loss of loved ones, millions were still without power, and millions more braced for a potentially even more brutal storm.

Hurricane Milton, NOAA.

Schmidt was speaking at a summit in Washington DC when he was asked about the energy demands of AI, and whether that was a concern. All of the incremental progress we’ve made as a nation to reduce carbon emissions, he said, “will be swamped by the enormous needs of this new technology… we may make mistakes with respect to how it's used, but I can assure you that we're not going to get there through conservation." Schmidt continued: "We're not going to hit the climate goals anyway because we're not organized to do it… Yes, the needs in this area will be a problem, but I’d rather bet on AI solving the problem than constraining it.”

The clip of the talk, which you might have seen floating around, went viral because the sentiment was expressed so bluntly and callously. But it’s a pretty commonly held view among the tech set, and an increasingly popular one outside it, too: Bill Gates shares it, as do droves of AI influencers on social media, and so, to some extent, do the World Economic Forum and even the UN.

But we should be extremely clear about this, because it is an inane and even maybe dangerous notion: AI will never “solve” climate change. Even if OpenAI successfully builds an AGI tomorrow, it will never, under any circumstances, produce any kind of magic bullet that will “fix” the climate crisis.

Look, this is not that hard. Even without AGI, we already know what we have to do. We do not need a complex and all-knowing artificial intelligence to understand that we generate too many carbon emissions with our cars, power plants, buildings, and factories, and that we need to use less fossil fuel and more renewable energy.

The tricky part—the only part that matters in this rather crucial decade for climate action—is implementation. As impressive as GPT technology or the most state-of-the-art diffusion models may be, they will never, god willing, “solve” the problem of generating what is actually necessary to address climate change: political will. Political will to break the corporate power that has a stranglehold on energy production, to reorganize our infrastructure and economies accordingly, to push out oil and gas.

Even if an AGI came up with a flawless blueprint for building cheap nuclear fusion plants—pure science fiction—who among us thinks that oil and gas companies would readily relinquish their wealth and power and control over the current energy infrastructure? Even then it would be a struggle, and AGI is not going to do anything like that anytime soon, if at all. Which is why the “AI will solve climate change” thinking is not merely foolish but dangerous—it’s another means of persuading otherwise smart people that immediate action isn’t necessary, that technological advancements are a trump card, that an all-hands-on-deck effort to slash emissions and transition to proven renewable technologies can wait. It’s techno-utopianism of the worst kind; the kind that saps the will to act.


Now this is pointedly not to say that AI systems cannot be useful in research and in improving clean energy at all—AI has been used for things like identifying the optimal way to place solar panels to maximize the sunlight they receive, or locating and analyzing the glaciers that are shrinking fastest, and so on. And this is not to discount that work—those are genuinely useful applications that would all be great things if they were happening in a vacuum. And yet they are also being used to justify both the ideology outlined above and further investment in the technology itself—which is, ironically, itself an increasingly potent contributor to climate change. The rush to adopt AI, as readers of this newsletter know, has done nothing less than help revitalize the gas industry in the United States.
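To make the solar-placement example above concrete, here is a deliberately crude toy model (mine, not any production system): sweep the fixed tilt angle of a south-facing panel and keep the one that maximizes average direct irradiance over a day, modeling received power as the cosine of the sun-to-panel incidence angle. Real siting tools also model shading, diffuse light, azimuth, and weather; this only gestures at the shape of the problem.

```python
import math

def avg_irradiance(tilt_deg: float, lat_deg: float = 40.0) -> float:
    """Average relative direct irradiance over daylight hours at equinox
    for a south-facing panel, using a crude coplanar-sun approximation."""
    total, samples = 0.0, 0
    for hour in range(6, 19):
        hour_angle = math.radians(15 * (hour - 12))
        # Solar elevation at equinox (declination = 0): sin(e) = cos(lat) * cos(h)
        sin_e = math.cos(math.radians(lat_deg)) * math.cos(hour_angle)
        if sin_e <= 0:
            continue  # sun below the horizon
        elev = math.asin(sin_e)
        # Angle between the sun ray and the panel normal (azimuth ignored)
        incidence = abs(math.pi / 2 - elev - math.radians(tilt_deg))
        total += max(0.0, math.cos(incidence))
        samples += 1
    return total / samples

# Brute-force the best fixed tilt between 0 and 90 degrees.
best_tilt = max(range(91), key=avg_irradiance)
print(f"best fixed tilt in this toy model: {best_tilt} degrees")
```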

The big tech companies, once proudly committed to sustainability—and some really were, this is not to be snide about it; Google and Facebook were huge purchasers of solar power, and for a long time made sure to run their servers with clean energy—now face a choice: either adopt something resembling Schmidt’s attitude, that AI’s steep carbon costs will be worth it because those costs will eventually come down and AI will unleash unimaginable advances in clean tech, or ignore the contradiction altogether. This is especially apparent in the companies, such as Microsoft and Amazon, that are selling AI tools—the same ones touted for their ability to fight climate change—to oil companies to help them locate and extract fossil fuels faster and more efficiently.

The idea that AI can “solve climate change” is what the critic Lewis Mumford would have called a magnificent bribe—a lofty promise or function that encourages us to adopt a tech product despite its otherwise obvious harmful costs. To paraphrase Dan McQuillan, it is one of AI’s greatest predicted benefits being deployed to help us overlook its proven harms. Because right now, on net, it’s clear that AI is only adding to our already significant carbon burden.

That AI will “solve climate change” is nonsense—a quasi-religious mantra repeated by tech executives and AI advocates to help them make the case for their products, which happen to consume tremendous amounts of energy. And I get it. Like so many similarly shaped pitches for AI, it’s easy to see the appeal. We’re all exhausted and anxious here; sure, it’d be nice if some all-powerful sentient mass of data could just fix everything for us. But you might as well be praying for divine intervention.

There’s just something uniquely dark about surveying the state of play, as folks like Schmidt and Bill Gates have surely done, and saying, ah well, let’s just build more data centers and hook them up to more gas plants and hope for the best. It’s another instance of Silicon Valley’s halo era wearing off—where once it was at least easy to believe the tech companies’ stories about building a better future, now they’re not even bothering to tell them. Instead of ‘we’re part of the solution’ it’s now ‘well, it’s complicated’—at best. Schmidt’s vision is even more dire: we’re never going to address climate change anyway, so we might as well set the controls for the heart of the sun, full steam ahead.


Anyway! It was yet another major week in AI news on a number of different fronts, starting with…


tante (Berlin/Germany), 1 day ago:
"But we should be extremely clear about this, because it is an inane and even maybe dangerous notion: AI will never “solve” climate change. Even if OpenAI successfully builds an AGI tomorrow, it will never, under any circumstances, produce any kind of magic bullet that will “fix” the climate crisis."

Interneting Is Hard

tante (Berlin/Germany), 4 days ago:
Really cool tutorials on HTML and CSS for complete beginners

‘The Community Is In Chaos:’ WordPress.org Now Requires You Denounce Affiliation With WP Engine To Log In


WordPress.org users are now required to agree that they are not affiliated with website hosting platform WP Engine before logging in. It’s the latest shot fired by WordPress co-creator Matt Mullenweg in his crusade against the website hosting platform.

The checkbox on the login page for WordPress.org asks users to confirm, “I am not affiliated with WP Engine in any way, financially or otherwise.” Users who don’t check that box can’t log in or register a new account. As of Tuesday, that checkbox didn’t exist. 
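For what it’s worth, the mechanics here are trivial: an unchecked HTML checkbox is simply absent from the submitted form, so the server can refuse any login request that lacks the field. A minimal sketch of that pattern (my illustration in Python/Flask with a made-up field name; WordPress.org’s actual implementation is PHP and not public in this form):

```python
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # A checked checkbox submits "on" by default; an unchecked one sends nothing,
    # so the field's mere presence is the whole "agreement" signal.
    if request.form.get("not_affiliated_with_wpe") != "on":  # hypothetical field name
        abort(403, "You must confirm the affiliation statement to log in.")
    # ... normal credential verification would happen here (omitted) ...
    return "logged in"

if __name__ == "__main__":
    app.run()
```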

Since last month, Mullenweg has been publicly accusing WP Engine of misusing the WordPress brand and not contributing enough to the open-source community. WP Engine sent him a cease and desist, and he and his company, Automattic, sent one back. He’s banned WP Engine from using WordPress’ resources, and as of today, some contributors are reporting being kicked out of the community Slack they use for WordPress open-source projects. 

A screenshot of the WordPress.org login page as it appears on Oct. 9 at 1:50 p.m. EST

For the WordPress community contributors who keep the open-source project running, the checkbox added to the organization’s site marks an inflection point in a legal battle they had been mostly insulated from until today.

“Right now the WordPress community is in chaos. People don’t know where they stand legally, they are being banned from participating for speaking up, and Matt is promising more ‘surprises’ all week,” one WordPress open-source community member who has contributed to the project for more than 10 years told me. They requested to speak anonymously because they fear retribution from Mullenweg. “The saddest part is that while WordPress is a tool we use in our work, for many of us it is much more than a software. It is a true global community, made up of long-time friends and coworkers who share a love for the open-source project and its ideals. We are all having that very abruptly ripped away from us.” 

In a Slack channel for WordPress community contributors, Mullenweg said on Wednesday that the checkbox is part of a ban on WP Engine from using WordPress.org’s resources.

Screenshot via @JavierCasares on X

Mullenweg explained the ban in a blog post published on the WordPress.org site in September, saying it’s "because of their legal claims and litigation against WordPress.org.” (WP Engine named Automattic and Mullenweg as defendants in its lawsuit, which we'll get to in a moment, but not WordPress.org or the WordPress Foundation.)

“WP Engine is free to offer their hacked up, bastardized simulacra of WordPress’s GPL code to their customers, and they can experience WordPress as WP Engine envisions it, with them getting all of the profits and providing all of the services,” Mullenweg wrote in the blog. “If you want to experience WordPress, use any other host in the world besides WP Engine.” 

WP Engine is an independent company and platform that hosts sites built on WordPress. WordPress.org is an open-source project, while WordPress.com is the commercial entity owned by Automattic, which funds development of, and contributes to, the WordPress codebase. Last month, Mullenweg—who also co-founded Automattic—wrote a post on the WordPress.org blog calling WP Engine a “cancer to WordPress” and accusing WP Engine of “strip-mining the WordPress ecosystem, giving our users a crappier experience so they can make more money” because the platform disables revision history tracking.

Mullenweg also criticized WP Engine for not contributing enough to the WordPress open source project, and its use of “WP” in its branding. “Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion,” he wrote. “WP Engine needs a trademark license to continue their business.” He also devoted most of a WordCamp conference talk to his qualms with WP Engine and its investor Silver Lake.

WP Engine sent Automattic and Mullenweg a cease and desist letter demanding that he “stop making and retract false, harmful and disparaging statements against WP Engine,” the platform posted in September. 

The letter claimed that Mullenweg “threatened that if WP Engine did not agree to pay Automattic—his for-profit entity—a very large sum of money before his September 20th keynote address at the WordCamp US Convention, he was going to embark on a self-described ‘scorched earth nuclear approach’ toward WP Engine within the WordPress community and beyond.”

Automattic lobbed its own cease and desist back. “Your unauthorized use of our Client’s trademarks infringes on their rights and dilutes their famous and well-known marks. Negative reviews and comments regarding WP Engine and its offerings are imputed to our Client, thereby tarnishing our Client’s brands, harming their reputation, and damaging the goodwill our Client has established in its marks,” the letter states. “Your unauthorized use of our Client’s intellectual property has enabled WP Engine to compete with our Client unfairly, and has led to unjust enrichment and undue profits.” 

The WordPress Foundation’s Trademark Policy page was also changed in late September to specifically name WP Engine. “The abbreviation ‘WP’ is not covered by the WordPress trademarks, but please don’t use it in a way that confuses people,” it now says. “For example, many people think WP Engine is ‘WordPress Engine’ and officially associated with WordPress, which it’s not. They have never once even donated to the WordPress Foundation, despite making billions of revenue on top of WordPress.”

WP Engine filed a lawsuit against Automattic and Mullenweg earlier this month, accusing them of extortion and abuse of power, TechCrunch reported.

Last week, Mullenweg announced that he’d given Automattic employees a buyout package, and 159 employees, or roughly 8.4 percent of staff, took the offer. “I feel much lighter,” he wrote.

According to screenshots posted by WordPress project contributors, there’s a heated debate happening in the WordPress community Slack at the moment—between contributors and Mullenweg himself—about the checkbox.

One contributor wrote that they have a day job as an agency developer, which involves working on sites that might be hosted by WP Engine. “That's as far as my association goes. However, ‘financially or otherwise’ is quite vague and therefore confusing,” they wrote. “For example, people form relationships at events, are former employees, collaborate on a project, contribute to a plugin, or have some other connection that could be considered impactful to whether that checkbox is checked. What's the base level of interaction/association that would mean not checking that box?” 

Mullenweg replied: “It’s up to you whether to check the box or not. I can’t answer that for you.” 

At least two WordPress open-source project contributors—Javier Casares and Andrew Hutchings—posted on X that they’ve been kicked out of the WordPress community Slack after questioning Mullenweg’s actions.

“A few of us asked Matt questions on Slack about the new checkbox on the .org login,” Hutchings posted. “I guess we shouldn't have done that.”

“In today's case, somebody changed the login and disconnected everybody, so, without explanation on the check, if you need to contribute to WordPress and access the website, you need to activate it,” Casares told me in an email. “In my case, this morning, I had to publish a post about a Hosting Team meeting this afternoon.” He had to check the box, he said, because without it he couldn’t access the platform to post it, but the vagueness of the statement concerned him.

He said the people banned this morning included contributors who have worked on the WordPress project for more than 10 years, as well as people involved in other open-source projects.

“Why? Only Matt knows why he is doing everything he is doing. I really don't know,” Casares said. 

“Matt’s war against WP Engine has been polarizing and upsetting for everyone in WordPress, but most of the WP community has been relatively insulated from any real effects. Putting a loyalty test in the form of a checkmark on the WordPress.org login page has brought the conflict directly to every community member and contributor. Matt is not just forcing everyone to take sides, he is actively telling people to consult attorneys to determine whether or not they should check the box,” the anonymous contributor I spoke to told me. “It is also more than just whether or not you agree to a legally dubious statement to log in. A growing number of active, dedicated community members, many who have no connection with WP Engine, have had their WordPress.org accounts completely disabled with no notice or explanation as to why. No one knows who will be banned next or for what... Whatever Matt’s end goal is, his ‘tactics,’ especially this legally and ethically ambiguous checkbox, are causing a lot of confusion and mental anguish to people around the world.”

Based on entries to his personal blog and social media posts, Mullenweg has been on safari in Africa this week. Mullenweg did not immediately respond to a request for comment. 



tante (Berlin/Germany), 5 days ago:
"At least two WordPress open-source project contributors—Javier Casares and Andrew Hutchings—posted on X that they’ve been kicked out of the WordPress community Slack after questioning Mullenweg’s actions."

Matt Mullenweg is not fit to lead anything WordPress related.

fxer (Bend, Oregon), 4 days ago:
Mullenweg has always been a twat

Report: Roblox Is Somehow Even Worse Than We Thought, And We Already Thought It Was Pretty Fuckin’ Bad


'Moderators described being paid $12 a day to review countless instances of child grooming and bullying'




tante (Berlin/Germany), 6 days ago:
"Anyone [...] probably already has a dim view of Roblox. Whether it's for the child labour stuff [...] or a host of other issues--from customer service to loot boxes to child predators--I think we'd all agree that it's a pretty shitty platform run by a pretty shitty company. But not even I, an avowed hater, was prepared for the depths of Roblox's reported shittiness until I read through a paper released by Hindenburg Research earlier today."

Lilium: Volker Wissing presses for swift state aid for air taxi company

The air taxi startup Lilium is calling for state loans; otherwise it could face insolvency. According to SPIEGEL’s reporting, Transport Minister Volker Wissing is pushing for parliamentary approval. But there are serious reservations in parliament.

tante (Berlin/Germany), 8 days ago:
If you’ve been wondering what Volker Wissing actually does all day: pitching state loans for air taxi companies.

The man apparently has nothing better to do with his time.

Five qualities of A.I. apps


Greetings from Read Max HQ! I was on NPR last week discussing Zyn. And if you haven’t read it yet, let me re-plug my New York magazine feature on A.I. slop.

In today’s newsletter:

  • Assessing Google’s cool new A.I. product NotebookLM;

  • creating A.I.-generated podcasts out of your group chats;

  • the problem with A.I.-generated summaries;

and more!

A reminder: Read Max is 99.5 percent funded by the generosity of paying readers. I treat this newsletter as my full-time job, and spend dozens of hours every week researching, reading, reporting, thinking, procrastinating, and writing, in the hopes of creating something that helps people understand the world, or at least helps them kill 10-15 minutes entertainingly. If you gain something from this newsletter, and if you want to support its weekly public availability, please consider upgrading your subscription for the price of about one-third of a fancy cocktail ($5) a month, or three or four fancy cocktails ($50) a year.


This week’s hot A.I. app--among both the kinds of people who have “hot A.I. apps,” and among college kids on TikTok--is NotebookLM, a Google Labs product to which you can upload “sources”--links, PDFs, MP3s, videos--that then form the basis of a narrowly specific LLM chatbot. For example, if I upload the complete works of a great author of fiction, I can ask questions about characters and themes that span the entire oeuvre, and its answer will come with clear citations:

I can also use it to generate a graduate-level study guide, FAQs, a “briefing,” etc.:

And so on.
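Under the hood, this source-grounded question answering looks, as far as anyone outside Google can tell, like the standard retrieval-augmented generation pattern: chunk the uploaded sources, find the chunks most relevant to a question, and hand only those chunks (with their indices, for citations) to the language model. A toy sketch of the retrieval half, using TF-IDF instead of a real embedding model (my illustration, not Google’s code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[tuple[int, str]]:
    """Return the top-k chunks most similar to the question, with 1-based
    indices usable as citations ([1], [2], ...)."""
    vec = TfidfVectorizer().fit(chunks + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(chunks))[0]
    top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [(i + 1, chunks[i]) for i in top]

chunks = [
    "George is a curious little monkey who lives with the man in the yellow hat.",
    "In one story George takes a job at a chocolate factory.",
    "The man in the yellow hat often rescues George from his scrapes.",
]
for idx, text in retrieve(chunks, "Who rescues George?"):
    print(f"[{idx}] {text}")

# The numbered chunks would then be passed to an LLM with an instruction like
# "answer only from these sources and cite them by number," which is where
# the visible citations in the answer come from.
```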

NotebookLM has been available for a year or so now, but what’s made it suddenly popular over the last week or so is the discovery of its “audio overview” feature, which creates a short, fully A.I.-generated podcast, in which two realistic A.I.-generated “hosts,” speaking in the chipper and casual tones we associate with professional podcasters, cover whatever’s in your notebook. Here, e.g., is the audio overview for my Curious George notebook:

I like NotebookLM, or, at least, I don’t hate it, which is more than I can say for a lot of A.I. apps. It has a fairly clear purpose and relatively limited scope; its interface is straightforward, with a limited emphasis on finicky “prompting,” and you can imagine (if maybe not put into practice) a variety of productive uses for it. But even if it’s a modest and uncomplicated LLM-based app, it’s still an LLM-based app, which means its basic contours, for better and for worse, are familiar.

The five common qualities of generative-A.I. apps

By this I mean that NotebookLM shares what I think of as the five qualities visible in all the generative-A.I. apps of the post-Midjourney era. NotebookLM, for all that it represents a more practical and bounded LLM experience than ChatGPT or Claude, is in a broad sense not particularly different:

  1. Its popular success is as much about novelty and entertainment value as about actual utility (and often more so).

  2. It’s really fun to use and play around with.

  3. Its product is compellingly adequate but noticeably shallow.

  4. It gets a lot of stuff wrong.

  5. It’s almost immediately being used to create slop.

Let me try to explain what I mean, one at a time.

Its popular success is as much about novelty and entertainment value as about actual utility (and often more so).

Generative-A.I. apps are almost always promoted as productivity tools, but they tend to go viral (and gain attention and adopters) thanks to entertaining examples of their products. This is not to say that NotebookLM is useless, but I think it’s telling that the most viral and attention-grabbing example of its use so far was the Redditor who got the “podcast hosts” to “realize” that they’re “A.I.,” especially once it was shared on Twitter by Andreessen Horowitz V.C. Olivia Moore.


My hunch in general is that the entertainment value of generative A.I.--by which I just mean the simple pleasure of using and talking to a computer that can reproduce human-like language--is as underrated as the productivity gains it offers are overrated, and that often uses that are presented as “productive” are not actually more efficient, just more fun:

it seems pretty clear to me that these apps, in their current instantiation, are best thought of, like magic tricks, as a form of entertainment. They produce entertainments, yes--images, audio, video, text, shitposts--but they also are entertainments themselves. Interactions with chatbots like GPT-4o may be incidentally informative or productive, but they are chiefly meant to be entertaining, hence the focus on spookily impressive but useless frippery like emotional affect. OpenAI’s insistence on pursuing A.I. that is, in Altman’s words, “like in the movies” is a smart marketing tactic, but it’s also the company meeting consumer demand. I know early adopters swear by the tinker-y little uses dutifully documented every week by Ethan Mollick and other A.I. influencers, but it seems to me that for OpenAI these are something like legitimizing or world-building supplements to the core product, which is the experience of talking with a computer.

Is “generating and listening to a ten-minute podcast about an academic paper” a more efficient way to learn the material in that paper? I would guess “no,” especially given the limitations of the tech discussed below. But is it more entertaining than actually reading the paper? Absolutely, yes.

It’s really fun to use and play around with.

The first thing I did with NotebookLM was, obviously, upload the text of the group chat for my fantasy football league, in order to synthesize the data and use the power of neural networks to understand how bad my friend Tommy’s team is this year:

I can’t say the podcast hosts fully understood every dynamic, but they were very clear that Tommy’s team is bad:

Fantasy sports are obviously fertile territory for NotebookLM, but if you want to really fuck up some friendships, I highly recommend uploading as much text as possible from a close-friends group chat and unleashing the power of large language models to analyze the relationships, settle disputes, and just generally wreak havoc on the delicate dynamics of a long-term friend group:

Its product is compellingly adequate, but it’s shallow and gets a lot of stuff wrong.

The answers and syntheses that NotebookLM creates are legible and rarely obviously wrong, and the verisimilitude of the podcast is wild, even if you can hear small glitches here and there.

But the actual quality of NotebookLM’s summaries (both audio and text) is--unsurprisingly if you’ve used any other LLM-based app--inconsistent. Shriram Krishnamurthi asked co-authors to grade its summaries of papers they’d written together; the “podcasters” mostly received Cs. “It is just like a novice researcher: it gets a general sense of what's going on, doesn't always know what to focus on, sometimes does a fairly good idea of the gist (especially for ‘shallower’ papers), but routinely misses some or ALL of what makes THIS paper valuable,” he concludes.

Henry Farrell, who was also unimpressed by the content of the “podcasts,” has a theory about where they go wrong:

It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality. A moderately unusual argument about tariffs and sanctions (it got into the FT after all) was replaced by the generic criticism of sanctions that everyone makes. And so on for everything else. The large model had a lot of gaps to fill, and it filled those gaps with maximally unsurprising content.

This reflects a general problem with large models. They are much better at representing patterns that are common than patterns that are rare.

This seems intuitively right to me, and it’s reflected in the podcasts, which not only summarize shallowly, often to the point of inaccuracy, but draw only the most banal conclusions from the sources they’re synthesizing. For me, personally, the possibility that I’m consuming either a shallow or, worse, completely incorrect summary of whatever it is I’ve asked the A.I. to summarize all but cancels out the purported productivity gains.
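Farrell’s point (that large models regress toward the most common pattern) can be seen in miniature with even the dumbest possible language model. In this toy bigram counter (mine, purely illustrative), one “surprising” claim is drowned out by nine generic ones, so the model’s best guess for the next word is always the banality:

```python
from collections import Counter, defaultdict

# Nine generic takes and one rare, surprising one.
corpus = ("sanctions are ineffective . " * 9 + "sanctions are underrated . ").split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

# The most likely continuation of "sanctions are ..." is the common pattern.
print(bigrams["are"].most_common(1))  # -> [('ineffective', 9)]
```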


And yet, for a while now it’s seemed like “automatically generated summaries” will be the first widespread consumer implementation of generative A.I. by established tech companies. The browser I use, Arc, has a feature that offers a short summary of a link when you hover and press the shift key, e.g.:

Gmail, of course, is constantly asking me if I want to “summarize” emails I receive, no matter how long; Apple’s new “Apple Intelligence” is touting a feature through which it “summarizes” your alerts and messages, though screenshots I’ve seen make it seem of … dubious worth, at best:

Setting aside the likelihood that the A.I. will get these summaries wrong (which it almost always will with the kinds of socially complex messages you get from friends), is reading an email or a text or even a whole article really that much of a burden? Is replacing human-generated text with a slightly smaller amount of machine-generated text actually any kind of timesaver? Seeing all these unnecessary machine summaries of communications already smoothed into near-perfect efficiency, it’s hard not to think about this week’s Atlantic article about college kids who have apparently never read an entire book, which suggests we’re mostly training kids to be human versions of LLMs: passable but limited synthesists, unable to handle depth, length, or complexity:

But middle- and high-school kids appear to be encountering fewer and fewer books in the classroom as well. For more than two decades, new educational initiatives such as No Child Left Behind and Common Core emphasized informational texts and standardized tests. Teachers at many schools shifted from books to short informational passages, followed by questions about the author’s main idea—mimicking the format of standardized reading-comprehension tests. Antero Garcia, a Stanford education professor, is completing his term as vice president of the National Council of Teachers of English and previously taught at a public school in Los Angeles. He told me that the new guidelines were intended to help students make clear arguments and synthesize texts. But “in doing so, we’ve sacrificed young people’s ability to grapple with long-form texts in general.”

Mike Szkolka, a teacher and an administrator who has spent almost two decades in Boston and New York schools, told me that excerpts have replaced books across grade levels. “There’s no testing skill that can be related to … Can you sit down and read Tolstoy? ” he said. And if a skill is not easily measured, instructors and district leaders have little incentive to teach it. Carol Jago, a literacy expert who crisscrosses the country helping teachers design curricula, says that educators tell her they’ve stopped teaching the novels they’ve long revered, such as My Ántonia and Great Expectations. The pandemic, which scrambled syllabi and moved coursework online, accelerated the shift away from teaching complete works.

As Krishnamurthi puts it: “I regret to say that for now, you're going to have to actually read papers.”

It’s almost immediately being used to create slop.

Yes, there are fantasies of productivity, and experiments in shitposting. But all LLM apps trend very quickly toward the production of slop. This week, using NotebookLM, OpenAI co-founder Andrej Karpathy “curated a new Podcast of 10 episodes called ‘Histories of Mysteries,’” which he generated out of Wikipedia articles about historical mysteries, and uploaded it to Spotify. Moore, the a16z partner, “uploaded 200 pages of raw court documents to NotebookLM [and] created a true crime podcast that is better than 90% of what's out there now...” Enjoy discovering new podcasts? Not for long!

tante (Berlin/Germany), 9 days ago:
"My hunch in general is that the entertainment value of generative A.I.--by which I just mean the simple pleasure of using and talking to a computer that can reproduce human-like language--is as underrated as the productivity gains it offers are overrated, and that often uses that are presented as “productive” are not actually more efficient, just more fun"