
Put Up Or Shut Up


I feel like the tech industry is currently in the midst of the most bizarre cognitive dissonance I've ever seen — more so than the metaverse, even — as company after company simply lies about their intentions and the power of AI. 

I get it. Everybody wants something to be excited about. Everybody wants something new and interesting and fun and cool that gives everybody something to write about and point at and say that the tech industry is both important and building the future. It’s easier this way — to just accept what we’re being told by Sam Altman and a coterie of other people slinging marketing nonsense — rather than begin to ask whether any of this actually matters, and whether the people hyping it up might be totally full of shit. 

Last week, HR platform Lattice announced that it would be, to quote CEO Sarah Franklin, "the first company to lead in the responsible employment of AI ‘digital workers’ by creating a digital employee record to govern them with transparency and accountability." This buzzword-laden nonsense, further elaborated on in a blog post that added absolutely nothing in the process, suggested that Lattice would be treating digital workers as if they were employees, giving them "official employee records in Lattice," and "securely onboarding, training and assigning them goals," as well as performance metrics, "appropriate systems access, and even a manager, just as any person would be."

Lattice claimed that this would "mark a significant moment in the evolution of AI technology — the moment when the idea of an "AI employee" moves from concept to reality in a system and into an org chart."

After less than a week of people relentlessly dunking on the company, Lattice would add a line to the blog post stating that "this innovation sparked a lot of conversation and questions that have no answers yet" and that it "[looks] forward to continuing to work with our customers on the responsible use of AI, but will not further pursue digital workers in the product."

This idea was (and is), of course, total nonsense. From what I can tell — as Lattice didn't really elaborate beyond a few screenshots and PR-approved gobbledygook — the company planned to create a profile for AI "workers" within the platform, which would then, in turn, allow something else to happen, though what that is doesn't seem obvious, because I'm fairly certain that this entire announcement was the equivalent of allowing you to make a profile in a CRM but with a dropdown box that said "AI." 

CEO Sarah Franklin — who spent sixteen years at Salesforce, a company that has announced it was adding AI every year for the last decade and whose actual business nobody really understands — wanted to turn what was a fairly mediocre and meaningless product addition into a big marketing push, only to find out that doing so resulted in annoying people like me asking annoying questions like "what does this mean?" and "will this involve paying them benefits, as they're employees?"


It's also a nonsensical product idea, one that you'd only conceive if you'd never really used AI or done a real job in quite some time. When you use ChatGPT or any of the other generative AI bots that Franklin listed, you're not using...them, you're accessing a tool that generates stuff. 

Why would you add ChatGPT to your org chart? What does that give you? What does it mean to be ChatGPT's manager, or ChatGPT's direct report, or for ChatGPT to be considered an "employee"? This would be like adding Salesforce, or Gmail, or Asana as an employee, because these are tools that, uh, do stuff in the organization.

It's all so fucking stupid! Putting aside the ethical concerns that never crossed Franklin's mind (they're employees — do they get paid? If there's a unionization vote, who controls them? Given the fears of AI replacing human workers, how will this make existing employees feel?), this entire announcement is nonsense, complete garbage, conjured up by minds disconnected from productivity or production of any kind. What was this meant to do? What was the intent of this announcement, other than to get attention, and what was the product available at the end?

Nothing. The answer is nothing. There was nothing behind this, just like so much of the AI hype — a bunch of specious statements and empty promises in a wrapper of innovation despite creating nothing in the process.

According to Charles Schwab, there's an "AI revolution" happening, with companies like Delta allegedly using it to "deliver efficiency," and Lisa Martin, "CMO Advisor" (?) of hype-fiend analysts The Futurum Group, claiming that there are "hard results," with call center volume "dropping 20% thanks to Delta's AskDelta Chatbot." One might assume that this was a recent addition, as it was the one and only statistic in a rambling screed about "efficiency" and "customer service value," except AskDelta was launched sometime before 2019 according to this PYMNTS article, assuming that this VentureBeat article from 2016 is talking about some other Delta chatbot, potentially made by a company called 247.ai that Delta sued for a data breach that happened in 2017. 

Now OpenAI has announced that it has created a "five-level system" to track development toward Artificial General Intelligence and "human-level problem-solving," a non-story that should be treated with deep suspicion. 

According to Bloomberg, these five levels go from Level 1 ("Chatbots, AI with conversational language") to Level 5 ("Organizations, AI that can do the work of an organization"). Somehow, Bloomberg credulously accepted an OpenAI spokesperson's statement that it is "on Level 1, but on the cusp of Level 2," when GPT can do "human-level problem solving," without any explanation of what that means, how that's measured, or how GPT, a model that probabilistically guesses the right thing to do next, will start "reasoning," a thing that Large Language Models cannot do, according to multiple academics.

It’s kind of like saying you’re on the first step to becoming Spider-Man because you’re a man. 
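To make that "probabilistically guesses the right thing to do next" point concrete, here is a minimal toy sketch (my own illustration, not OpenAI's code, and vastly simpler than a real transformer working over tokens): a bigram model that "writes" by sampling whichever word tended to follow the previous one in its training text. A Large Language Model does the same kind of sampling over tokens, just with a far bigger model and far more data.

# Toy sketch (not OpenAI's code): a bigram "language model" that picks each
# next word by sampling from whatever followed the previous word in its
# training text. Real LLMs do this over tokens with a transformer, but the
# output is still a probabilistic guess at the next thing.
import random
from collections import defaultdict

corpus = "the model guesses the next word and the model guesses again".split()

# Record which words followed which word in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break  # nothing ever followed this word in training
        word = random.choice(options)  # sample in proportion to frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model guesses the next word and the model"

There is no plan and no model of the world in that loop, just a record of what tends to come next, which is the gap between "chatbot" and "reasoner" in plainer terms.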

It’s also important to note that the steps between these stages are huge. To reach “reasoner” level, Large Language Models would have to do things they are currently incapable of doing. To reach Level 3 (whatever “Agents, systems that can take actions” means), you’d need a level of sentience dependent on entirely new forms of mathematics. While a kinder, more patient person would suggest that these were just frameworks created for potential endeavors, the fact that they’re being tactically leaked feels like a marketing exercise far more than anything resembling innovation. 

OpenAI's Five Levels Of Bullshit That Mean Nothing, Source: Bloomberg reporting (screenshot of Alex Kantrowitz's post on X).

OpenAI is likely well aware that Large Language Models are incapable of reasoning, because Reuters reports that it’s got some sort of in-development model called Strawberry, which is designed to have human-like reasoning capabilities. 

However, this story is riddled with strange details. It’s based on a document that Reuters viewed in May, and “Strawberry” is apparently the new name for Q*, a supposed breakthrough from last year that Reuters reported on around the time of Altman’s ouster, which allegedly could “[answer] tricky science and math questions out of reach of today’s commercially-available models.” Reuters also misreports Bloomberg’s story about how “OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills,” a liberal way of describing, to quote Bloomberg, “a research project involving its GPT-4 AI model that OpenAI thinks shows some new skills that rise to human-like reasoning.” 

The document — again, viewed in May but reported on in July, for whatever reason — describes “what Strawberry aims to enable, but not how,” yet also somehow describes using “post-training” of models, which Reuters says means “adapting the base models to hone their performance in specific ways after they’ve already been “trained” on reams of generalized data,” which sounds almost exactly like how models are trained today, making me wonder if this wasn’t so much a “leak” as it was “OpenAI handing a document to Reuters for marketing reasons.” 

I realize I’m being a little bit catty, but come on. What is the story here? That OpenAI is working on something it was already working on, and has yet to achieve anything with?

Take this quote:

The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source.

Who cares! I guarantee you Anthropic, Google and Meta are all working on something that does exactly the same thing, because what they’re describing is a capability that every single Large Language Model lacks due to the very nature of how they’re built, something Reuters even acknowledges by adding that “this is something that has eluded AI models to date.”

And here’s another wrinkle. If this Q*/Strawberry thing was meant to be such a big breakthrough, why hasn’t it broken through yet? Reuters wrote up Q*'s existence in December of last year! Surely if there was some sort of breakthrough - even a small one - these companies would share it, or a source would leak it.

The problem is that these pieces aren’t about breakthroughs — they’re marketing collateral dressed up as technological progress, superficially-exciting yet hollow on the inside, which almost feels a little too on the nose. 

I hate to be that guy, but it’s all beginning to remind me of the nebulous roadmaps that cryptocurrency con artists used to offer. What’s the difference between OpenAI vaguely suggesting that “Strawberry” will “give LLMs reasoning” and NFT project Bored Ape Yacht Club’s roadmap that promises a real-life clubhouse in Miami and a “top secret blockchain game”? I’d argue that the Bored Ape Yacht Club has a better chance of delivering, if only because “a blockchain game” would ostensibly use technology that exists.

No, really, what’s the difference here, exactly? Both are promising things they might never deliver.

Look, I understand that there’s a need to studiously cover these companies and how they’re (allegedly) thinking about the future, but both of these stories demand a level of context that’s sorely lacking.


These stories dropped around the same time a serious-seeming piece from the Washington Post reported that OpenAI had rushed the launch of GPT-4o, its latest model, with the company "planning the launch after-party prior to knowing if it was safe to launch," inviting employees to celebrate the product before GPT-4o had passed OpenAI's internal safety evaluations. 

You may be reading this and thinking "what's the big deal?" and the answer is "this isn't really a big deal," other than the fact that OpenAI said it cared a lot about safety and only sort-of did, if it really cared at all, which I don't believe it does. 

The problem with stories like this is that they suggest that OpenAI is working on Artificial General Intelligence, or something that left unchecked could somehow destroy society, as opposed to what it’s actually working on — increasingly faster iterations of a Large Language Model that's absolutely not going to do that. 

OpenAI should already be treated with suspicion, and we should already assume that it’s rushing safety standards, but its "lack of safety" here has absolutely nothing to do with ethical evaluators or "making sure GPT-4o doesn't do something dangerous." After all, ChatGPT already spreads election misinformation, tells people how to build bombs, gives people dangerous medical information and generates buggy, vulnerable code. And, to boot, former employees have filed a complaint with the SEC that alleges its standard employment contracts are intended to discourage any legally-protected whistleblowing.

The safety problem at OpenAI isn't any bland fan fiction about how "it needs to be more careful as it builds artificial general intelligence," but that a company stole the entirety of the internet to train a model that regularly and authoritatively states incorrect information. We don't need a story telling us it kind-of-sort-of rushed a product to market (even though it didn't actually do that); we need one saying that this company, on a cultural level, operates without regard for the safety of its users, and is deliberately misrepresenting to outlets like Bloomberg that it is somehow on the path to creating AGI.

Where is OpenAI's investment in mitigating or avoiding hallucinations? While there's tons of evidence that they're impossible to eliminate entirely, surely a safety-minded culture would be one that at least sought to mitigate the most obvious and dangerous problem of Large Language Models.

The reality is that Sam Altman and OpenAI don't give a shit, have never given a shit, and will not give a shit, and every time they (and others) are given the opportunity to talk in flowery language about "safety culture" and "levels of AI," they're allowed to dodge the very obvious problem: that Large Language Models are peaking, that they will not solve the kind of complex problems that actually matter, and that OpenAI (and other LLM companies) are being allowed to accumulate money and power in a way that lets them do actual damage in broad daylight.

To quote my friend and Verge reporter Kylie Robison, Sam Altman is more interested in accruing power than he is in developing AGI, and I'd take that a level further and add that I think he's more than aware that OpenAI will likely never do so. 

OpenAI has allowed Altman to gain thousands of breathless headlines whenever he makes some sort of half-assed statement about how his model will "solve physics," with the media helping build an empire for a career bullshit-merchant who has a history of failure, including his specious crypto-backed biometric data hoarder Worldcoin, which has missed its target of signing up 1 billion users by 2023 by 994 million people, largely because it doesn't do anything and there's not a single reason for it to exist.

But it doesn't matter, because any time Sam Altman or any major CEO says something about AI everybody has to write it up and take it seriously. Last week, career con-artist Arianna Huffington announced a partnership between Thrive Global (a company that sells "science-backed" productivity software(?)) and OpenAI that would fund a "customized, hyper-personalized AI health coach" under Thrive AI Health, another company for Arianna Huffington to take a six-figure salary from. It claims it will "be trained on the best peer-reviewed science as well as Thrive's behavior change methodology," a mishmash of buzzwords and pseudoscientific pablum that means nothing because the company has produced no product and likely never will.

It's pretty tough to find out what Thrive actually does, but from a little digging, one of its products appears to be called "Thrive Reset," which is, and I shit you not, a product that makes customer service agents take a 60-second "science-backed" break during the workday. According to Glassdoor, a publicly-available database of honest reviews of companies, Thrive Global has a "toxic culture" and "awful leadership" that is "mostly bad," with a management team "lacking a clear pathway to success," with one review saying that Thrive had "the most toxic work environment they'd ever encountered" with "direct bullying," and another saying you should "stay away" because it's "toxic beyond belief," with a hierarchy "based on who the founder favors the most." 

If I had to pick an actual, real safety story, it'd be investigating the fact that OpenAI, a company with a manipulative, power-hungry charlatan as its CEO, considers it safe to partner up with a company that doesn't appear to do anything other than make its employees miserable (and has done so for years).

Indeed, we should be deeply concerned that Thrive Global hosts — to quote Slate — "an online community whose members frequently publish writings that traffic in myths about 'alternative COVID cures,'" and that this is the partner that OpenAI believes should work with it on a personalized health coach.

And, crucially, we should all be calling this what it really is: bullshit! It's bullshit! 

Arianna Huffington and Sam Altman "co-wrote" an advertisement masquerading as an editorial piece in TIME Magazine to hype up a company that's promised to build something extremely vague on an indeterminate timeline that will allegedly "learn your preferences across five behaviors," with "superhuman long-term memory" and a "fully integrated personal AI coach." When you cut through the bullshit, this sounds — if it's ever launched — like any number of other spurious health apps that nobody really wants to use because they're not useful to anybody other than millionaire and billionaire hucksters trying to get attention.

Generative AI's one real innovation is that it's allowed a certain class of scam artist to use the vague idea of "powerful automation" to hype companies to people that don't really know anything. The way to cover Thrive's AI announcement isn't to say "huh, it said it will do this," or to both-sides the argument with a little cynicism, but to begin asking a very real question: what the fuck is any of this doing? Where is the product? What is any of this stuff doing, and who is it doing it for? Why are we, as a society or as members of the media, blandly saying "AI is changing everything" without doing the work to ask whether it's actually changing anything? I understand why some feel it's necessary to humor the idea that AI could help in healthcare, but I also think they're wrong to do so. Arianna Huffington has a long history of producing nothing, and if we're honest, so does Sam Altman.

No, really, where are the innovations from generative AI? What great product has Sam Altman ushered into the world and what does it do? What evidence do we have that generative AI is actually making meaningful progress toward anything other than exactly what a Large Language Model has already been doing? What evidence do we have that generative AI is actually helping companies, other than Klarna's extremely suspicious stories about saving money? Why does CNBC have an interview with sex-pest enabler and former Activision Blizzard CEO Bobby Kotick talking about how "AI can help personalize education" as the biggest "AI education pioneer" hawks a product that struggles with basic maths?

The media seems nigh-on incapable of accepting that generative AI is a big, stupid, costly and environmentally-destructive bubble, to the point that they'll happily accept marketing slop and vague platitudes about how big something that's already here will be in the future based on a remarkable lack of proof. Arianna Huffington's announcement should've been met with suspicion, with any article — and I'd argue there was no reason to cover this at all — refusing to talk about how "it'd be nice if AI helped with medical stuff" in favor of a brusque statement about how Arianna Huffington has built very little and Sam Altman isn't much better.

When it comes to OpenAI, now is a couple of years too late to suddenly give a shit about safety. Sam Altman has been a seedy character for years and was already fired from OpenAI once for intentionally misleading the board; his company has made billions by training its models on other people's work, and its models are actively damaging the environment, all so that they can authoritatively provide false information to their users and not make any businesses any money.

The "safety" problem with AI isn't about the ethical creation of a superintelligence, but the proliferation of quasi-useful technology at a scale that destroys the environment, and that the cost of said technology involves stealing things written by hundreds of millions of people while occasionally making people like The Atlantic's Nick Thompson money.

And it’s time for the media to start pushing back and asking for real, tangible evidence that anything is actually happening. Empty promises and vacuous documents that say that a company might or might not do something sometime in the future are no longer sufficient proof of the inevitability of artificial intelligence. 

It’s time to treat OpenAI, Anthropic, Google, Meta and any other company pushing generative AI with suspicion, to see what they’re doing as an investor-approved act of deception, theft and destruction. There is no reason to humor their “stages of artificial intelligence” - it’s time to ask them where the actual intelligence is, where the profits are, how we get past the environmental destruction and the fact that they’ve trained on billions of pieces of stolen media, something that every single journalist should consider an insult and a threat. And when they give vague, half-baked answers, the response should be to push harder, to look them in the eye and ask “why can’t you tell me?”

And the answer is fairly simple: there isn’t one. Generative AI models aren’t getting more energy-efficient, nor are they getting more “powerful” in a way that would increase their functionality, nor are they even capable of automating things on their own. They’re not getting “reasoning,” nor are they getting “sentience,” nor are they “part of the path to superintelligence.” 

tante, 9 days ago:
"Generative AI's one real innovation is that it's allowed a certain class of scam artist to use the vague idea of "powerful automation" to hype companies to people that don't really know anything."

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable


Investment giant Goldman Sachs published a research paper about the economic viability of generative AI which notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.” 

The paper, called “Gen AI: too much spend, too little benefit?” is based on a series of interviews with Goldman Sachs economists and researchers, MIT professor Daron Acemoglu, and infrastructure experts. The paper ultimately questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on, but says investors may continue to get rich anyway. “Despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst,” the paper notes. 

Goldman Sachs researchers also say that AI optimism is driving large growth in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but say that the stock price gains we’ve seen are based on the assumption that generative AI is going to lead to higher productivity (which necessarily means automation, layoffs, lower labor costs, and higher efficiency). These stock gains are already baked in, Goldman Sachs argues in the paper: “Although the productivity pick-up that AI promises could benefit equities via higher profit growth, we find that stocks often anticipate higher productivity growth before it materializes, raising the risk of overpaying. And using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.” (Ed Zitron also has a thorough writeup of the Goldman Sachs report over at Where's Your Ed At.)

It adds that “outside of the most bullish AI scenario that includes a material improvement to the structural growth/inflation mix and peak US corporate profitability, we forecast that S&P 500 returns would be below their post-1950 average. AI’s impact on corporate profitability will matter critically.”

"Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks"

What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all. Meanwhile, their stock prices are skyrocketing based on all of this hype and investment, which may not ultimately change much of anything at all.

Acemoglu, the MIT professor, told Goldman that the industry is banking on the idea that scaling up the amount of AI training data—which may not actually be possible given the massive amount of training data already ingested—is going to solve some of generative AI’s growing pains and problems. But there is no evidence that this will actually be the case: “What does a doubling of data really mean, and what can it achieve? Including twice as much data from Reddit into the next version of GPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative’s ability to help a customer troubleshoot problems with their video service,” he said. “The quality of the data also matters, and it’s not clear where more high-quality data will come from and whether it will be easily and cheaply available to AI models.” He also posits that large language models themselves “may have limitations” and that the current architecture of today’s AI products may not get measurably better. 

Jim Covello, who is Goldman Sachs’ head of global equity research, meanwhile, said that he is skeptical about both the cost of generative AI and its “ultimate transformative potential.” 

“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” he said. “People generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks.” He added that Goldman Sachs has tested AI to “update historical data in our company models more quickly than doing so manually, but at six times the cost.” 

Covello then likens the “AI arms race” to “virtual reality, the metaverse, and blockchain,” which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.” 

The Goldman Sachs report comes on the heels of a piece by David Cahn, partner at the venture capital firm Sequoia Capital, which is one of the largest investors in generative AI startups, titled “AI’s $600 Billion Question,” which attempts to analyze how much revenue the AI industry as a whole needs to make in order to simply pay for the processing power and infrastructure costs being spent on AI right now. 

To break even on what they’re spending on AI compute infrastructure, companies need to vastly scale their revenue, which Sequoia argues is not currently happening anywhere near the scale these companies need to break even. OpenAI’s annualized revenue has doubled from $1.6 billion in late 2023 to $3.4 billion, but Sequoia’s Cahn asks in his piece: “Outside of ChatGPT, how many AI products are consumers really using today? Consider how much value you get from Netflix for $15.49/month or Spotify for $11.99. Long term, AI companies will need to deliver significant value for consumers to continue opening their wallets.”

This is all to say that journalists, artists, workers, and even people who use generative AI are not the only ones who are skeptical about the transformative potential of it. The very financial institutions that have funded and invested in the AI frenzy, and are responsible for billions of dollars in investment decisions, are starting to wonder what this is all for.



tante, 14 days ago:
"What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all."

AI is not "democratizing creativity." It's doing the opposite


Greetings, and welcome to another edition of BLOOD IN THE MACHINE, a newsletter about big tech, AI, labor, and power. This newsletter is free to read, so please sign up below, thx, but it’s made possible by those of you who pay to subscribe. It means a great deal, and makes the continuation of this work possible. Thank you. Onwards, and keep those hammers at the ready.

Subscribe now

You’ve probably heard the news: AI is going to “democratize creativity”. It’s also going to democratize medicine. And education, design, innovation, and, somehow, even knowledge itself. Few AI buzzphrases have stoked my anger as much as this one [1], given that AI companies, of course, are in fact doing something closer to the opposite—giving management a tool to attempt the automation of jobs and execs a chance to concentrate wealth while promising benefits for all. And it’s everywhere.

When Arianna Huffington announced the launch of a new startup, Thrive AI Health, backed by her company, Thrive Global, and OpenAI, she wrote that the “company’s mission is to use AI to democratize access to expert-level health coaching to improve health outcomes.”

Screenshot from LinkedIn.

When OpenAI Chief Technology Officer Mira Murati caused a(nother) wave of backlash for saying that her company’s automation software was going to kill some creative jobs but “maybe they shouldn’t have been there in the first place,” she was quick to deploy the “d” word in her mea culpa:

“Moving forward, I believe AI has the potential to democratize creativity on an unprecedented scale. A person’s creative potential should not be limited by their access to resources, education, or industry connections. AI tools could lower the barriers and allow anyone with an idea to create.”

Now, this language long predates the current AI boom; as long as I’ve been covering the industry, tech companies have been including the dubious verb in their pitch decks and press releases. Robinhood pledged to democratize finance. Uber was going to democratize transportation. Maybe most famously, Theranos promised to democratize medical testing, or all of healthcare, depending on the day.

I think it’s mostly been understood, or at least intuited, that “democratize” has typically amounted to ignorable Silicon Valley jargon, ubiquitous but generally meaningless; the “4 out of 5 dentists recommend Colgate” of promoting your startup. Something that a founder or company is expected to say, because all their competitors say it, to assure a prospective investor or partner or tech media outlet that it is aiming for as broad a market as possible—but something that few outside of the industry would assign any real credence or even pay attention to.

“We want to democratize x” essentially translated into “we want to get as many customers to use this product as possible”, which, in its belabored effort to layer an altruistic sheen onto a corporate pitch, may have been eye-roll inducing, but it was noncontroversial enough.

Unfortunately, this has shifted: While AI companies are certainly using the term to pitch their products as having broad appeal, they’re also leaning on the notion of “democratization” as a way of countering growing concerns over the damage those products stand to do to creatives. They’re trying to get the term to do real work. And “AI will democratize creativity” is perhaps the worst offender. Specifically, there’s this frankly absurd argument that much of the AI industry seems to have embraced—best exemplified by the above Murati quote, perhaps, though she’s far, far from alone in advancing it—that AI is somehow leveling the playing field for would-be artists. That AI products will break down the barriers erected by the art world’s gatekeepers, and allow anyone and everyone to finally be creative.

The first time I heard the “democratize creativity” tagline was at a Copilot demo event in LA, where Microsoft was presenting the AI software to influencers. The speaker showed how Copilot, powered by OpenAI’s GPT tech, could create images with the simple inputting of key terms into a text field. “I never thought of myself as creative,” the Microsoft rep said, “but it turns out I can be.” (He just needed some software automation, it turns out.) More recently, Justine Moore, a partner at the VC firm Andreessen Horowitz, which has invested heavily in AI, has been making a similar case as Murati to marshal support for the embattled technology. In a thread that concluded that “the panic around AI art is overblown,” she insisted that “all AI creators are artists.” A few months ago, I had an exchange with a member of Microsoft’s AI team, who countered my assertion that AI could erode working conditions for creative workers by asking, ‘well don’t you think it democratizes creativity, too?’

But the ‘AI democratizes creativity’ line finally became unignorable when Murati, who helps run the most influential consumer AI firm in existence, issued her damage control statement. Now, I think part of the reason they default to this term is that OpenAI and others have been caught flat-footed by all the protests against their products by artists, writers, and creative workers—I do not think that, had they properly anticipated such a widespread backlash against AI, they would have chosen “it uh democratizes creativity” as their primary PR defense against the fact that it threatens people’s livelihoods.

This is because if you spend more than 45 seconds thinking about it, rather than allowing it to glitch past at 2x speed on a business podcast, it becomes so patently ridiculous that it is almost offensive. I know it is offensive to many artists, especially since it’s deployed in service of achieving *precisely the opposite aim* that it purports to. AI will not democratize creativity, it will let corporations squeeze creative labor, capitalize on the works that creatives have already made, and send any profits upstream, to Silicon Valley tech companies where power and influence will concentrate in an ever-smaller number of hands. The artists, of course, get 0 opportunities for meaningful consent or input into any of the above events. Please tell me with a straight face how this can be described as a democratic process.

The other thing that really irks artists and creatives is that making art is already a fundamentally democratic process. Anyone can do it! (Hence the “pick up a pencil” meme.) It just takes time, effort, training, dedication, a development of craft. AI advocates have tried to argue that AI helps disabled people create art—but the already plenty vibrant disabled artist community shut that down extremely quickly. No, it’s making a living practicing art that’s the tricky part, the already deeply precarious part—and it’s that part to which the AI companies are taking a battering ram.

Image via Artstation.

It’s true that, as Murati points out, not everyone has the right industry contacts, but how does AI change the equation there? Besides, that is, making matters worse? With AI giving rise to a flood of samey-looking AI output, if anything, industry connections only matter more; the science fiction magazine Clarkesworld had to close its submissions, as its editors no longer had time to wade through reams of mediocre ChatGPT output, and turned to working only with writers they recognized or already had relationships with. As far as I’ve seen, no one who’s arguing that AI is a harbinger for a new democratized paradigm of creativity has offered an explanation of how the current gatekeepers might be done away with, what that might mean for a society with a functioning creative economy, or how the industries that creatives rely on to pay rent will in any way be made more equitable by its arrival.

Of course they haven’t. To the big AI companies, none of that enters into the equation. The democratization pitch is aimed not at aspiring artists, but at tech enthusiasts who may or may not feel that largely abstracted gatekeepers have been unkind to them or derided their cultural contributions, who feel satisfaction at seeing slick-looking images produced from their prompting and eagerly share and promote the results, and industries who read the ‘democratize’ lingo as code for ‘cheap’, and would like to automate the production of images, text, or video.

The AI companies, of course, do not much care if they take a wrecking ball to the already fragile creative economy. Creatives are already losing work, seeing pay rates decline, and being asked to use AI to improve their productivity. In another of her defenses of AI art, Andreessen’s Moore argues that AI helps artists produce more output—which, hooray! Artists get to crank out more work for the same or, soon, lower price, spending less time on craft and more on jamming the generate button, and so the value of art on the market tumbles down. People often say they wish the tech set would take more humanities classes—I wish they’d study some political economy.

And that’s just the “democratization of creativity.” I’m not even going to wade into the intellectual bankruptcy of the idea that AI will democratize healthcare or education here, in part because those notions are somehow even more insultingly and obviously opportunistic. Education and healthcare face serious, structural problems, and the idea that they could be “solved” or even meaningfully ameliorated with chatbots that make mistakes 20% of the time should immediately strike any reasonably intelligent observer as unserious. (This is not to say that there aren’t use cases for AI in either field, but the idea that it will “democratize” sectors in desperate need of reforms to make them more affordable, or of funding for adequate supplies and pay, is preposterous.)

No, AI is not going to democratize a whole lot, I’m afraid, aside from things like ‘access to second-rate customer service bots for midsized business owners’. It is, by and large, a profoundly anti-democratic force, in fact, given that who decides to adopt it in a workplace is almost always management, that rank and file workers are almost never given any input into whether or how they might want to use it, and freelancers must simply deal with the economic fallout of declining wages and fewer opportunities. [2] Again, this doesn’t mean there are not specific utilities for which AI might be useful; but describing the act of generating automated output as ‘democratizing’ is nonsensical at best, and insulting at worst.

There’s plenty that companies like OpenAI could do if they were earnestly interested in “democratizing” anything. To start, they could compensate writers, artists, coders, and other creative workers for the material they’re already training their automation systems on. They could seek meaningful consent from these workers as to how or whether they’d like their stuff treated in the training corpuses. But they won’t. Because the major AI companies aren’t interested in democratizing creativity—they’re interested in transmuting it into profit.


If you found this interesting or useful, and you’d like to support independent journalism, subscribe below and join Ned Ludd’s army:

[1] There IS another one, and I’ll get to it in due time…

[2] It’s not lost on me that AI companies are making their democratization pitches, which inherently rely on a pointed dumbing down of the very concept of ‘democracy’, just as we have entered a period of unparalleled handwringing over whether the real thing is going to make it through the next presidential administration intact.

tante, 14 days ago:
"AI will not democratize creativity, it will let corporations squeeze creative labor, capitalize on the works that creatives have already made, and send any profits upstream, to Silicon Valley tech companies where power and influence will concentrate in an ever-smaller number of hands. The artists, of course, get 0 opportunities for meaningful consent or input into any of the above events. Please tell me with a straight face how this can be described as a democratic process."

More slop for the void


I’m on the road this week and doing a bunch of running around before this week’s show. So no audio versions, but we’ll be back next week. You can find previous editions on every major podcasting app. If it’s not there, here’s an RSS feed.


The Age Of Slop

(I love content)

You’ve probably seen the phrase AI slop already, the term most people have settled on for the confusing and oftentimes disturbing pictures of Jesus and flight attendants and veterans that are filling up Facebook right now. But the current universe of slop is much more vast than that. There’s Google Slop, YouTube slop, TikTok slop, Marvel slop, Taylor Swift slop, Netflix slop. One could argue that slop has become the defining “genre” of the 2020s. But even though we’ve all come around to this idea, I haven’t seen anyone actually define it. So today I’m going to try.

Content slop has three important characteristics. The first being that, to the user, the viewer, the customer, it feels worthless. This might be because it was clearly generated in bulk by a machine or because of how much of that particular content is being created. The next important feature of slop is that it feels forced upon us, whether by a corporation or an algorithm. It’s in the name. We’re the little piggies and it’s the gruel in the trough. But the last feature is the most crucial. It not only feels worthless and ubiquitous, it also feels optimized to be so. The Charli XCX “Brat summer” meme does not feel like slop, nor does Kendrick Lamar’s extremely long “Not Like Us” rollout. But Taylor Swift’s cascade of alternate versions of her songs does. The jury’s still out on Sabrina Carpenter. Similarly, last summer’s Barbenheimer phenomenon did not, to me, feel like slop. Dune: Part Two didn’t either. But Deadpool & Wolverine, at least in the marketing, definitely does.

Speaking of Ryan Reynolds, the film essayist Patrick Willems has been attacking this idea from a different direction in a string of videos over the last year. In one essay titled, “When Movie Stars Become Brands,” Willems argues that in the mid-2000s, after a string of bombs, Dwayne Johnson and Ryan Reynolds adapted a strategy lifted from George Clooney, where an actor builds brands and side businesses to fund creatively riskier movie projects. Except Reynolds and Johnson never made the creatively riskier movie projects and, instead, locked themselves into streaming conglomerates and allowed their brands to eat their movies. The zenith of this being their 2021 Netflix movie Red Notice, which literally opens with competing scenes advertising their respective liquor brands. A movie that, according to Netflix, is their most popular movie ever.

And Willems’ fascination with when this shift occurred in both actors’ careers is the right impulse because identifying slop is less about describing a static state of being and more about pinpointing a sliding of standards. For instance, most people know that the Netflix of 2024 feels very different from the Netflix of 2014, but articulating exactly when that change happened is difficult. Though, we should try.

Lining up my own memory of using Netflix over the years with IMDB lists of their original TV shows and movies, 2018 seems to be the start of Netflix’s slop era. In June of that year, the streamer canceled Sense8, a show they would never make now, and in December, it released Bird Box, the last time I remember pressing play on a Netflix movie expecting it to be good. This isn’t to say Netflix doesn’t make art anymore, but after 2018, the production of slop clearly outpaced everything else and now stumbling across something I like on there is a fun surprise rather than the default. Locating the slop threshold is easier with other studios, though. Disney’s slop era, and, by extension, Pixar’s, Star Wars’s, and Marvel’s, clearly started with the launch of Disney+ in November 2019. (Star Wars: The Rise of Skywalker was released a month later btw.) And HBO’s began with the rebranded launch of Max last year. Though the cause of all three’s descent into slopdom appears to be the same: algorithmic feedback, a desperation for mass appeal, and a void of content that needs to be filled.

And this content void is the real driver of all of this. When, in the early 2010s, sites like Facebook and YouTube began to morph from simple social networks and user-generated content platforms into genuine competitors of movies, TV, record labels, and news networks, the main problem they had to solve was having stuff people wanted to look at more than traditional media. To solve this problem, each app created its own algorithm, its own set of standards, its own incentives, and its own metrics to determine if users were meeting them successfully. And by the 2020s, not only did they successfully destabilize pop culture, they also offloaded their fear of the content void onto all of us. Now we’re the ones worrying if we’re posting enough. The malls convinced the shoppers to work there for free.

The first sector of culture that fell into this trap was, of course, the news. I knew an editor that, all the way back in 2015, was using the term “dog food” for the viral content we had to produce for the sake of it. But no one cares about journalists and so the first time anyone really started to notice the rise of what we would now call slop was when rappers started gaming Spotify in 2018. There’s that year again. 2018 was also the year TikTok became the most-downloaded app in the US. And it was also the year Drake, the king of slop, released Scorpion, a 25-song album with a runtime comparable to a feature film. Wikipedia tells me it had six singles, only one of which, “God’s Plan,” I even recognize and that’s because the chorus is just “God’s Plan” over and over again.

And six years later, it’s not just music that feels forgettable and disposable. Most popular forms of entertainment and even basic information have degraded into slop simply meant to fill our various feeders. It doesn’t matter that Google’s AI is telling you to put glue on pizza. They needed more data for their language model, so they ingested every Reddit comment ever. This makes sense because from their perspective what your search results are doesn’t matter. All that matters is that you’re searching and getting a response. And now everything has to meet these two contradictory requirements. It must fill the void and also be the most popular thing ever. It must reach the scale of MrBeast or it can’t exist. Ironically enough, though, when something does reach that scale now, it’s so watered down and forgettable it doesn’t actually feel like it exists.

The fix for all of this seems obvious and, unfortunately, impossible, at least right now. It has to come from us, the user, the viewer, the consumer, and there’s a lot of us now. We have to be the ones to demand that we all make less, aim smaller, be more deliberate about what we consume, and find new ways of funding — and distributing — what we do make. Which, sure, could happen. Maybe the next Chappell Roan or Timothée Chalamet shows up without any presence on Spotify or streaming platforms. But if it is going to happen it needs to happen soon because slop is only slop when you remember what real food looks like and the anxiety we’re all feeling right now is that if our slop era lasts any longer we won’t anymore.


Garbage Day Live. Swedish American Hall, San Francisco. This Friday. Tickets here! But also we’ve set aside a few at the door for folks who don’t buy online. See you there! It’s going to be a real fun night.


Think About Supporting Garbage Day!

It’s $5 a month or $45 a year and you get Discord access, the coveted weekend issue, and monthly trend reports. What a bargain! Hit the button below to find out more.

There’s also a new referral program, which is a great way to get Garbage Day for free in exchange for sharing it with your friends. Click here to check it out.


A Good AI Song

@beatsbyaiofficial: Asking Ai To Make A Hit Country Song Day 82 🤠🍺🛻 #aimusic #country #countrymusic #aisong #funnysong #discover #newmusic #comedy #drinking #...

I do not condone drunk driving, which is why I think it’s fine to let the AI write all the songs about it.


The Democrats Are Trying To Weaponize Project 2025

I’ll be honest, I’m impressed with how Biden’s team is campaigning on Project 2025. I, of course, wish any of this was mentioned when Biden was in the room with Trump on, you know, national television. But I’ll take any aggression from the Democrats I can get at this point.

Last week, I wrote a bit about Project 2025, asking exactly how seriously we should be taking it. Based on conversations I’ve had recently with other reporters, it seems like most newsrooms are wondering the same thing right now. If you’re out of the loop on this, Project 2025 is the policy-paper equivalent of a school shooter manifesto dreamed up by right-wing think tank the Heritage Foundation. The mainstream media was slow to cover it — because it is, honestly, kinda silly — but it’s been building buzz on TikTok and got a big bump after Taraji P. Henson mentioned it at the BET Awards. It also caught the attention of gay furries who then hacked the Heritage Foundation.

The Biden campaign, this week, summarized the sprawling document and put it up on a website so everyone gets a glimpse of what these freaks want to do if Trump gets elected in November — and if Trump actually does what they want. Which is a big if.

And like I said, I wish any of this was brought up during the debate, but literally the bar is in hell and, frustratingly, I do think the path to victory for the Democrats is very simply just more of this. The Republican Party is full of weird men that talk like The Joker and all you really have to do is hold a mirror up to them and they fizzle. My most steadfast view of American politics is that it’s not about having coherent political beliefs or clear policy objectives, it’s simply about not being a huge fucking weirdo.


There Was A V-Tuber At A Dodgers Game

@gravityvt

I don’t know what’s happening to sports right now, but I gotta say, I’m real intrigued. Also, if you’re wondering, this V-Tuber is named Gawr Gura. She has 4.5 million subscribers on YouTube.


Here Come The Palworld Adaptations

Remember Palworld? The not-quite-Pokémon-knockoff that dominated Steam charts for a while earlier this year? The studio behind it, Pocketpair, has partnered with Sony Music and Aniplex to expand into “global licensing and merchandising.”

Strangely enough, having played the game somewhat obsessively and then dropping it and never thinking about it again, I actually think adding some lore to it via an anime or something could really help it. And I wasn’t the only one who lost interest in it. It peaked at around two million concurrent players on Steam back in January and is now hovering around 100,000. And I have to guess a lot of that drop off was because once you get far enough into the game you realize there isn’t much there. It has an addicting gameplay loop, but that’s about it.


DOOM Runs On A Fleshlight

There’s a full rundown of how this was built here. Aaron Christophel, the mastermind behind the project, previously got DOOM to run on an electric toothbrush.

I sacrificed my search history to get some details about the fleshlight he used. It’s called a Tifforun Spaceship Automatic Stroker and, uh, yep, that’s about all you need to know I think.

It seems like Christophel mapped the suction buttons to control the game and, most impressively of all, it even plays the music via a USB-C connection. According to a listing for the Tifforun I was looking at, apparently, the fleshlight also plays audio when it’s not running DOOM. I guess the base model has “a mellow real voice” that you can play. Kinda like a Peloton, I guess?


“Not Like Us” by Kendrick Lamar But It’s Goth


Did you know Garbage Day has a merch store?

You can check it out here!



P.S. here’s the Cheesecake Joker.

***Any typos in this email are on purpose actually***



tante, 15 days ago:
"Content slop has three important characteristics. The first being that, to the user, the viewer, the customer, it feels worthless. [...] The next important feature of slop is that it feels forced upon us, whether by a corporation or an algorithm [...] But the last feature is the most crucial. It not only feels worthless and ubiquitous, it also feels optimized to be so"

New Solutions to the Trolley Car Problem!



This cartoon is by me and Becky Hawkins.


TRANSCRIPT OF CARTOON

This cartoon has four panels. Each panel shows a different scene with one character in it. And each panel has a caption, in large letters, at the top. A large caption over the top of the entire strip says NEW SOLUTIONS TO THE TROLLEY CAR PROBLEM.

PANEL ONE

CAPTION: REPUBLICANS

A smiling, well-dressed woman with long hair stands behind a podium, gesturing to indicate a trolley car parked behind her. The trolley car is gory with blood spattered all over the front, and we can see bodies in a pile under the car.

WOMAN: Cleaning blood off a trolley car is expensive! That’s why we’re proposing tax breaks for trolley car companies!

PANEL TWO

CAPTION: LIBERTARIANS

A man with a very thick orange beard, wearing a green knit cap and a plaid shirt, is sitting in his armchair at home and speaking directly to us, with an intense expression. He’s holding a joint in one hand and raising his I’m-making-an-important-point-now-forefinger with the other. Next to him on one side are a bunch of LP records stored in milk crates; on the other side is a side table with a bottle of whisky, a whisky glass, and a thick book.

MAN: Trolley car companies need freedom to choose who to run over without bureaucrats getting in the way! Deregulate now!

PANEL THREE

CAPTION: DEMOCRATS

This is the same scene as panel one, but now a frightened looking old man, wearing huge glasses, a jacket and a bow tie, is behind the podium. He is shaking and sweating a bit as he talks to us. His dialogue is split into three balloons.

MAN: Something must be done! Er, someday. Maybe. If no one disagrees. Gotta stay bipartisan!

PANEL FOUR

CAPTION: TERFs

A woman wearing a blue pantsuit, and with nicely-done short white hair, is sitting on a park bench, looking thoughtful.

WOMAN (thought): One person’s life versus six people’s lives… Hmmm. Which choice hurts more trans people?

CHICKEN FAT WATCH

“Chicken fat” is an old-timey cartoonists’ expression for fun but irrelevant details the cartoonist puts in.

PANEL 1: The seal on the front of the Republican’s podium shows a stern Sam the Eagle from the Muppets, and the words going around the seal say “Resistance is Futile.”

PANEL 2: There’s the classic kitten hanging from a branch poster in the background, but instead of “hang in there” it says “just fall already.” The book on the side table has the title “The Featherhead.”

PANEL 3: The seal on the front of the Democrat’s podium shows a friendly Big Bird from the Muppets, and the words going around the seal say “Pretty Please Re-Elect Us.”

PANEL 4: A takeout container of poutine has spilled on the ground; two pigeons are posing by it and taking a selfie using a tiny phone on a tiny selfie stick.


New Solutions To The Trolley Car Problem! | Patreon

tante, 16 days ago:
"New Solutions to the Trolley Car Problem"

Venture capital will eat itself


The people yelling loudest about the potential of AI to drive humans out of their jobs are venture capitalists. It turns out they were right — tech VCs themselves are replacing their lower-level peons with LLM-generated slop. Matt Krna of Two Meter Capital claims to use generative AI for much of his actual day-to-day portfolio management. [Business Insider, archive]

VCs have long thrown machine learning at their businesses in the hope of gaining just that little bit more edge. Sometimes the models are similar enough that multiple VCs will all call a startup that’s happened to have the right numbers.

You might think that venture capital intrinsically involves human feelings — VCs frequently speak of investing in company founders they think have the talent to go unicorn, and not in the specific businesses.

But this risks anthropomorphizing VCs, who should largely be replaced with paperclips. Matt Levine points out that LLMs can replace VC Twitter and blogging — which already use ghostwriters to properly fake the human touch. [Bloomberg, archive]

VC Balaji Srinivasan has been worrying about LLMs since GPT-3 in 2020, and we concur — a text bot could tweet better than he does.

 

tante, 17 days ago:
"You might think that venture capital intrinsically involves human feelings — VCs frequently speak of investing in company founders they think have the talent to go unicorn, and not in the specific businesses.

But this risks anthropomorphizing VCs, who should largely be replaced with paperclips."

HarlandCorbin, 17 days ago:
But the author doesn't mention that the paperclip that will replace them is clippy.