Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

“AI” as support systems (for diagnostics)

1 Comment

One of the more reasonable use cases for modern “AI” (statistical pattern-matching and generating machines) is to support doctors in diagnostics, especially in the evaluation of complex data sets and documents to detect potentially dangerous abnormalities.

It’s a problem tailor-made for neural networks (it’s actually one of the kinds of cases we were trained on back when I was still studying, in the early 2000s): You have large, more or less clean and tagged data sets of diagnosed X-rays and such that you can feed into your system to train a detector. Now since human beings are not really standardized/normalized, the results are never 100%; one person’s abnormal is another person’s normal. But you can get decent results.
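To make that training setup concrete, here is a minimal sketch of such a detector, assuming PyTorch/torchvision and a hypothetical folder of labeled X-ray images (the path and the “normal”/“abnormal” labels are placeholders; a real medical pipeline needs far more care than this):

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

# Hypothetical folder of X-ray images sorted into "normal"/"abnormal"
# subdirectories; ImageFolder derives the labels from the folder names.
data = datasets.ImageFolder(
    "xrays/",  # placeholder path, not a real dataset
    transform=transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Off-the-shelf backbone with a fresh two-class head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predictions to diagnoses
        loss.backward()
        opt.step()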

Results that “AI” hypesters often celebrate: “Look, AI’s not just hype: studies show that AI systems are better than X% of doctors at diagnosing Y!”

I think that many people don’t fully understand the context though: What we see time and time again is that these systems are better than a significant percentage of doctors at diagnosis. Cool. But which doctors?

More often than not you will find that these systems are better than new or inexperienced doctors but worse than experienced specialists. Which isn’t surprising. It’s like with text: ChatGPT and others generate text that’s decent-ish and better than what first-semester students would write but usually a lot worse than what experienced professionals create. AI is cheap and mediocre.

Now we could say that that’s a positive: We’ve basically shifted the baseline up. No longer do you need to hope that a new doctor diagnoses your condition correctly; you have the AI level as a baseline, and more complex cases get sent to the experts (how many cases would still be flagged as complex once the mediocre machine has already diagnosed them is another very good question).

But there is an issue (one that you can see coming from a mile away if you’ve ever heard one of my talks on “AI”): How do young, inexperienced doctors learn if the machines diagnose all the routine cases? You learn diagnostics by diagnosing, and by talking about cases with your peers, especially more experienced peers. It’s great that we’ve potentially made the baseline better, but how do we ensure that young physicians get enough time to practice their diagnostic skills? Because the super experienced people will die at some point. And experience comes neither cheap nor quick.

Especially in a world where the main promise of “AI” is efficiency and productivity, do we really believe that the “time savings” will be put into training young people who can’t yet do the work? When it takes years to get them to where they need to be to catch the things the AIs didn’t find? In a different system we might, but in neoliberal capitalism? In austerity?

It’s great that we look at reasonable use cases for statistics, especially in critical and highly complex domains like medicine. But the studies presenting successes need to be read and understood correctly and fully.

Better than X% of humans is an empty statement. You need to know what exactly characterizes the X% to understand the actual capabilities and limitations of the systems that are supposedly so great.

Read the whole story
tante
14 hours ago
Are "AI" systems better that doctors at diagnosis? And is that all of the story?
Berlin/Germany

AI really is smoke and mirrors

1 Comment

Hello, and welcome to Blood in the Machine: The Newsletter. (As opposed to, say, Blood in the Machine: The Book.) It’s a one-man publication that covers big tech, labor, power, and AI. It’s free, though I’m in the process of ramping up to be less occasional, so if you’d like to support this brand of independent tech journalism, I’d be thrilled if you’d consider pledging support. I’m considering going paid, and seeing how much interest might be lurking out there would be a big help. But enough about all that; onwards and upwards, and thanks for reading.


This might well be the most fraught moment in generative AI’s young lifespan. Sure, thunderous hype continues to emanate from Silicon Valley and echo across Wall Street, Hollywood, and the Fortune 500, and yes, de facto industry spokesman Sam Altman is pursuing ever more science fictional and GDP-of-a-G7-nation-sized ambitions, heralding the coming of a nascent Artificial General Intelligence all the while, and indeed, the AI bulls blog away, insisting someone using AI is about to take your job — so don’t get left behind.

And yet. We’re over a year into the AI gold rush now, and corporations using top AI services report unremarkable gains, AI salesmen have been asked to rein in their promises for fear of underdelivering on them, an anti-generative AI cultural backlash is growing, the first high-profile piece of AI-centered consumer hardware crashed and burned in its big debut, and a bombshell scientific paper recently cast serious doubt on AI developers’ ability to continue to dramatically improve their models’ performance. On top of all that, the industry says that it can no longer accurately measure how good those models even are. We just have to take the companies at their word when they inform us that they’ve “improved capabilities” of their systems.


So what’s actually going on with AI here? We’ve got a still-pervasive cloud of buzz, aggressive showmanship, and an intriguing if problematic technology, whose shortcomings are hidden, increasingly haphazardly, behind the lumbering hype machine. We’ve heard the term snake oil used to describe the generative AI world’s shadier products and promises — it’s the title of a good newsletter, too — but I think there’s a more apt descriptor for what’s going on in the industry at large right now. We’re smack in the middle of AI’s smoke and mirrors moment, and the question is what will be left when it clears.

Now, look; I don’t mean this entirely derisively. I do recognize that we use the phrase ‘smoke and mirrors’, whose modern coinage apparently comes from a journalist writing about Nixon, to describe an elaborate illusion that ultimately holds no substance at all. What’s happening here is a bit more complex.

We are at a unique juncture in the AI timeline; one in which it’s still remarkably nebulous what generative AI systems actually can and cannot do, or what their actual market propositions really are, and yet one in which they nonetheless enjoy broad cultural and economic interest.

It’s also notably a point where, if you happen to be, say, an executive or a middle manager who’s invested in AI but it’s not making you any money, you don’t want to be caught admitting doubt or asking, now, in 2024, ‘well what is AI actually, and what is it good for, really?’ This combination of widespread uncertainty and dominance of the zeitgeist, for the time being, continues to serve the AI companies, who lean even more heavily on mythologizing — much more so than, say, Microsoft selling Office software suites or Apple hawking the latest iPhone — to push their products. In other words, even now, this far into its reign over the tech sector, “AI” — a highly contested term already — is, largely, what its masters tell us it is, as well as how much we choose to believe them.

And that, it turns out, is an uncanny echo of the original smoke-and-mirrors phenomenon from which that politics journo cribbed the term. The phrase describes the then-high-tech magic lanterns of the 17th and 18th centuries and the illusionists and charlatans who exploited them to convince an excitable and paying public that they could command great powers — including the ability to illuminate demons and monsters or raise the spirits of the dead — while tapping into widespread anxieties about too-fast progress in turbulent times. I didn’t set out to write a whole thing about the origin of smoke and mirrors and its relevance to Our Modern Moment, but, well, sometimes the right rabbit hole finds you at the right time.

’s Gravesande’s illustration of a magic lantern (1721), via Koen Vermeir.

The original smoke and mirrors

In the 1660s, an inventor (probably, according to scholars, one Christiaan Huygens) created the first “magic lantern.” The device used a concave mirror to intensify the light of a candle flame and project an image printed on a slide through a tube with two convex lenses, thus magnifying that image on any nearby flat surface.

The first sketch of a magic lantern, ‘cette lanterne de peur’, in a letter to Huygens (28 November 1662). Vermeir.
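In modern terms the projection is just elementary geometric optics; a minimal sketch under the standard thin-lens approximation (illustrative numbers, not measurements of any historical lantern):

```latex
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i},
\qquad
m = \frac{d_i}{d_o}
```

With a focal length of, say, 5 cm and the slide held 6 cm from the lens, the image lands 30 cm away at five times the slide’s size; push the slide toward the focal point and the projected demon balloons across the wall.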

This was a profound technological advance. For the first time, an intangible image could be made “real.” According to the science historian Koen Vermeir, whose wonderful 2005 paper, “The magic of the magic lantern (1660–1700): on analogical demonstration and the visualization of the invisible”, has consumed many of my afternoon hours,

“The projected image was new to most spectators and was a reason for bewilderment. The shadowy projections on the wall resembled dreams, visions or apparitions summoned up by a necromancer, and the devil was widely regarded as the master of such delusions. The effect of strange apparitions was further enhanced by the depicted subject; the prominent theme which leaps to the eye is the monstrous, and monsters, demons and devils were the highlights of the show. Indeed, the typical illusionist capacity of this new apparatus was best accentuated by projecting the ‘unreal’. It was the first time that a fantastic and fictional image could be materialized, without becoming as solid as a picture.”

You might already see where I’m going with this. Last year, when OpenAI released ChatGPT, the reaction among the media and users alike often transcended mere excitement over a new tech product — the New York Times’ tech columnist Kevin Roose said he was “deeply unsettled” that a chatbot had tried to break up his marriage, and users, who fast multiplied into the tens of millions, reported being “freaked out” by the bots; one user reportedly took his own life upon the bots’ recommendation.

But the devil isn’t behind these delusions — that would be the concept of AGI, the all-powerful sentient machine intelligence that top AI industry advocates insist is taking shape before our eyes. Silicon Valley leaders’ constant invocation of AGI, paired with years of more generalized and deterministic insistence that AI Is The Future, lends a gravity to the technical systems, known as large language models (LLMs), that really have gotten pretty proficient at predicting which pixel or word they should fill in next, given a particular prompt.
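That “predict the next word” mechanic is worth seeing stripped of its mythology. A minimal sketch of next-token sampling, with a toy vocabulary and made-up scores (not any real model’s API, just the core softmax-then-sample loop):

```python
import math
import random

# Toy next-token scores (logits). In a real LLM these come from a neural
# network conditioned on everything generated so far.
logits = {"dragon": 2.1, "castle": 1.3, "spreadsheet": -0.5}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(scores, temperature=1.0):
    # Lower temperature sharpens the distribution (safer, blander text);
    # higher temperature flattens it (stranger text).
    probs = softmax({tok: s / temperature for tok, s in scores.items()})
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens])[0]

print(sample_next_token(logits))
```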

The LLM is like the magic lantern that gave us the first smoke and mirrors in a few other ways, too. Here’s Vermeir again, noting that the magic lantern

… embodies the intersection of mathematical, physical and technical ‘sciences’. It mediated between educated, popular and courtly cultures, and it had a place in collections, demonstration lectures and texts. In the secondary literature, the magical qualities of the lantern are often unmarked or taken for granted. The magic lantern is taken as an ancestor of cinema, as an instrument in a demonstration lecture, or as a curiosum provoking wonder.

Yet the lantern probably became most famous for giving rise to phantasmagoria — demonstrations, seances, and horror shows that deployed one or more of the devices, along with that titular smoke, to create scenes where the images appeared to be floating in thin air; often accompanied by sound effects and dramatic narration. The technology was embraced by illusionists and magicians, and, naturally, by grifters who took the tech from town to town claiming to be able to conjure the spirits of the underworld, for a fee.

Illustration of hidden magic lantern projection on smoke in Guyot's "Nouvelles récréations physiques et mathématiques" (1770).

Importantly, audiences were not simply open to believing in the illusion out of the sheer novelty of the experience, or even its rather capable powers of confirmation bias, but thanks to the mounting social instability and political upheaval going on around them. As Vermeir notes, “social uncertainty and anxiety were expressed in a cultural fascination for illusion.”

In fact, some of the most famous phantasmagoria shows in London were held in the first decade of the 19th Century, and by then they included elaborate automata and other mechanical instruments — just as the conditions that would give rise to the fury of the Luddites were ripening. (Sometimes you stumble onto something so ripe with resonance you wish you could go back and add it retroactively to your book — would love to be able to add in a bit about phantasmagoria and magic lanterns to Blood in the Machine, but alas).

Vermeir describes what the magic lantern allowed its operators to do as “analogical demonstration.” The new technology allowed its operators to create a more convincing vessel for demonstrating an abstract, even unprovable, concept or force. Not all of the public believed they were seeing emanations from the beyond, or that they were seeing proof of a divine world, but the power of the technological demonstration helped articulate and underline those beliefs nonetheless.

You don’t want to put too fine a point on historical parallels, and the contexts in which generative AI and the magic lantern were developed and deployed were clearly quite different — but there’s plenty to chew on here. The science historian Vermeir, in his conclusion, notes that the lanterns “could shift from magical contexts to natural philosophy, and sometimes the borderlines are far from clear… They were analogical demonstrations of undemonstrable philosophical principles.”

I love that phrase. Incidentally, it’s a pretty good way to describe how chatbots and image generators function for AI executives and true believers in AGI: as “analogical demonstrations of undemonstrable philosophical principles.” After all, there is no scientifically determined point or threshold at which we will have “reached” or “achieved” AGI — it’s an ambiguous conceit rooted largely in the ideas of technologists, futurists and Silicon Valley operators. These AI-produced images and videos, these interactions with chatbots and text generators, are analogical demonstrations of the future those parties believe, or want to believe, AGI renders inevitable.

Screenshot of a demo for Sora, via OpenAI.

AI’s smoke and mirrors moment

Because of course, AI is not inevitable. Not as a roundly successful product, and much less as a sentient, world-beating computer program. It may even be closer to the opposite: Report after report indicates that generative AI services are underperforming in the corporate world. The relentlessly hyped Humane AI Pin is a laughingstock. As I write this, the stock of Nvidia, whose chips undergird the AI boom, is tanking. OpenAI’s much-ballyhooed GPT store has so far come up short; developers and consumers alike find it inert and unimpressive. And that’s not even mentioning the copyright woes that plague the store and the industry at large.

So, AI company valuations are “coming down to earth,” as The Information put it, amid adjusted projections as to how much revenue the AI companies might actually be able to make. Some AI companies aren’t so much “coming down” to earth as crashing: Stability AI, once a frontrunner in AI image generation, saw an exodus of top staff as a litany of setbacks and scandals roiled the company, leading the embattled CEO, Emad Mostaque, to resign. Inflection, the high-profile startup founded by Mustafa Suleyman, the former head of applied AI at DeepMind, was more or less poached piece by piece by Microsoft after struggling to gain market traction.

And yet. Sam Altman, who just debuted on the Forbes billionaire list, pushes ever onward, pronouncing visions of an AGI that will transform the world, seeking trillions of dollars in investment for chips and, most recently, $100 billion to build a supercomputer called Stargate with Microsoft.

It’s this story that propels the generative AI industrial complex onward, amid so many shortcomings and uncertainties. (That, and the multibillion-dollar support from the tech giants and capital flows from the VC sector.) It’s the driving force behind why investors and corporate clients are still buying in — why financial services firm Klarna, for one, says it has replaced the equivalent of 700 customer service workers with OpenAI products even when other companies’ recent attempts to do the same have backfired spectacularly. And why a large percentage of Fortune 500 companies are reportedly using generative AI. As a recent Times headline put it: “Will A.I. Boost Productivity? Companies Sure Hope So.”

All this is quite fortunate for Altman. And this is an element of the rise of AI that I don’t see discussed enough: His omnipotent AI is struggling to be born at an extremely convenient moment. There’s a tight labor market, high employment, and companies are very eager to embrace technological tools to either replace human workers or wield as leverage against them. Read through that Times piece, and you hear company after company hungry to slash labor costs with AI — if only they could! That’s the vision corporate America sees cast on the walls, the product of generative AI’s smoke and mirrors: Artificial systems that can save them lots of money by making workers disappear. Once it was the implied presence of the devil that underwrote the delusion that a charlatan could bring back the dead, today, it’s the specter of AGI that animates the idea that AI will finally unleash mass job automation.

Any threat to that show, however, is a threat to the generative AI enterprise at large. Last month, I wrote about how the tide was turning for OpenAI: between mounting legal woes, plateauing user growth, a disastrous Wall Street Journal interview, and being booed at SXSW, the backlash, it seemed, had become at least as prominent as the mythos the world’s top AI company had worked so hard to generate for itself. And that’s a particularly pernicious problem for OpenAI and co.; generative AI desperately needs that mythos. Once the narrative blows over, once the public, or at least the middle managers, get tired of waiting for real labor savings, aka more than analogical demonstrations of an incipient AGI — once the limits of the demonstration become too clear — the facade may begin to fall away from the entire phenomenon. In which case we’ll be left with text generators churning out reams of variously usable content, a pile of variously interesting chatbots, and automated JPEG producers that may or may not be infringing on copyright law thousands of times a day.

Unlike trends of the very recent past, generative AI has real gravitational pull — companies desperately do want the promised service to work here, unlike, say, the metaverse or web3 or crypto, where most companies had no idea what they were really supposed to do with the vaporous trend at hand. And there is real tech behind those smoke and mirrors. Even critics admit there are some good uses for generative AI — even if it’s not nearly good enough to justify the AI industry’s costs, harms, and messianic promises.

And so, with generative AI, we’re once again witnessing a core problem with entrusting technological development to a handful of self-mythologizing executives and founders in Silicon Valley. Instead of systems that are democratically and ethically constructed, built to serve humans and not just managers, whole constituencies and not just consultants; systems that could be very useful in some less-than earth-shattering ways, we get the smoke and mirrors. Again. And we can only hope that the magic lanterns of the 21st century haven’t cost us too much in the short term — in lost or damaged jobs, corrupted digital infrastructure, in the cheapening of culture — because by so many counts, that smoke is already beginning to waft away.



Read the whole story
tante
1 day ago
"All this is quite fortunate for Altman. And this is an element of the rise of AI that I don’t see discussed enough: His omnipotent AI is struggling to be born at an extremely convenient moment. There’s a tight labor market, high employment, and companies are very eager to embrace technological tools to either replace human workers or wield as leverage against them. Read through that Times piece, and you hear company after company hungry to slash labor costs with AI — if only they could! That’s the vision corporate America sees cast on the walls, the product of generative AI’s smoke and mirrors: Artificial systems that can save them lots of money by making workers disappear. Once it was the implied presence of the devil that underwrote the delusion that a charlatan could bring back the dead, today, it’s the specter of AGI that animates the idea that AI will finally unleash mass job automation. "
Berlin/Germany

On giant piles of cash, and their origins

1 Comment
source: Unsplash https://unsplash.com/photos/100-us-dollar-bill-BRl69uNXr7g

Technological innovation requires capital. A lot of capital. A giant pile of cash.

There are, to a first approximation, only three places you can find a giant pile of cash. There’s government money. There’s venture capital. And there’s big corporate R&D.


Of the three, I would argue that government is clearly the best. The reason is simple: government funding doesn’t come attached to some rich asshole who inevitably screws things up later.

Venture capital, for this same reason, is quite clearly the worst. Andy Baio’s “The Quiet Death of Ello’s Big Dream” is worth reading on this point. VC funding comes attached to VC revenue expectations. It makes unreasonable demands. Sometimes they work out for the investors. They very rarely work out for everyone else.

Big corporate money is somewhere in the middle. I’d classify it as eh, fine I guess, so long as the marketplace is otherwise well-regulated and those large corporations are constrained. (Which, heh, of course we don’t have right now.)

The examples of big corporate money that spring immediately to mind are Bell Labs and Xerox PARC. There was indeed a time when big companies invested a ton in basic science and R&D, with no immediate plans to turn the results into products. And, as Ian Betteridge pointed out last month, this was because the big companies were rightly concerned about the antitrust cops. A nervous monopolist is a (relatively) well-behaved monopolist.

Still, there’s a bit of elegance to the big-pile-of-government-money approach. I first took notice of this with the passage of the Inflation Reduction Act (IRA). Fifteen years ago, everyone kind of thought that the only way to jumpstart the clean energy transition was to institute a carbon tax or cap-and-trade/cap-and-dividend system. (Basically, do some fancy tricks that the economists suggested, then let the market work its magic.) Cap-and-trade failed for a whole lot of reasons, but one of the biggest was that the proposal was complex and boring in a way that appealed to the economists but was impossible to message. Most people were left thinking “this is probably gonna turn into weird corporate bullshit, right?”

The IRA, by contrast, was just a big pile of money attached to industrial policy — directional instructions for how to spend the pile. That’s effectively all it could be, because of some vagaries in the Senate rules that let you avoid the filibuster and pass budget bills with a 51-vote majority. And while that big pile of money isn’t perfect, it is funding a lot of good things.

And of course we can scan back in history for plenty of other examples. The semiconductor industry and the internet were both built out of government grants and defense contracts. We have Sputnik and the space race to thank for the computer age. The Human Genome Project was government funding. So was the National Nanotechnology Initiative. Some of these were big hits, some were glancing misses. But the general model of promising avenue → government sets up a big pile of money → research and development flourishes in that area is pretty well established. It works.

There are, as far as I can tell, only three downsides to the government-pile-of-money approach:

  1. You’ll need to make the money back through taxes on the industries that develop as a result. And they won’t want to pay those taxes. And these industries will be popular, with a lot of capital to spend on pressuring government to give them special deals and tax breaks. So the equilibrium of establish a big pile of government money → industries flourish → they pay taxes, which lets you set up the next big pile of government money, is going to take constant effort to maintain. (This is an abbreviated version of Mariana Mazzucato’s The Entrepreneurial State. Her book is excellent, btw.)

  2. The big pile of government money is tied up in bureaucracy, which means it isn’t especially nimble or responsive. This can be fixed through public-private partnerships and other administrative design choices. But it’s bound to be frustrating and sometimes wasteful. (Imagine if, circa 2022, the Biden administration had set up a $30 billion fund for, like, metaverse research.) And that waste and frustration will be fodder for ideological opponents to the whole endeavor.

  3. The big pile of government money doesn’t glorify entrepreneurs and innovators as the very-special-boys who are heroically building the future. And that doesn’t feel very gratifying. They would much prefer an equilibrium where government provides the big pile of money, but then no one talks about it. Or gives the government any credit. Or taxes the windfall profits. It is not, in other words, a story that plays well at Davos. If those entrepreneurs get rich enough, they’ll develop elaborate philosophical justifications for why the most important cause in the world is that everyone clap louder for them.

But those three downsides just mean that the system will require political maintenance. It works quite well, but it isn’t self-sustaining. And this observation extends beyond science and technology. The big-pile-of-government-money is a return to how we approached public problems through much of the 20th century, before it was rejected as “tax-and-spend” liberalism and replaced by neoliberalism’s faith-based market fundamentalism.

The basic proposition is as follows: we should provide public funding for public goods. Those public goods will make us a better, more productive society. And then we refill the public coffers through a system of taxation. The same approach can be applied to other public goods:

  • There is no magic-unicorn funding model that will save journalism. It’s going to require public subsidy.

  • The crises facing the U.S. education system can mostly be reduced to the simple fact that we no longer fund public education like we used to. The idea that we are going to somehow innovate our way into a system that is both higher-quality and cheaper is as fantastical as it is flawed.

  • Ditto for higher education. If you think higher education is a public good, then you should demand public subsidy of higher ed. (Including, but not limited to, student debt cancellation.) If you do not think higher education is a public good, then that’s fine too. But you ought to go ahead and say as much.


The VC approach is the prevailing model today. And I have become increasingly convinced that many of the problems with the current state of Silicon Valley are rooted in the venture-capital class simply acquiring too much capital. VC is fine (good, even!) when it is relatively small. But when VC is the primary source of funding for new science and technology breakthroughs, the whole system gets skewed.

As it stands today, the primary direction-setter for new advances in science and technology is “what do folks like Marc Andreessen and Peter Thiel and Gary Tan and Sam Altman find exciting?”

The problem isn’t just that these guys are ideological fellow-travelers, hell bent on a shared political project that socializes all risk while privatizing all gain. (though, I mean… that would kinda be enough of a problem on its own.)

It’s also that their eye for science, technology, and even consumer products just kinda sucks. Building toward their vision of the future means we’re going to end up with a lot of silly investments in pretend-cities and eugenics vaporware. It means we’ll have abundant funding for hail-mary attempts at developing cold fusion and geo-engineering, and practically nothing for helping cities remain habitable under extreme weather conditions.

(Read Karen Hao, I’m begging you: We’re sapping all the remaining water from the desert to cool AI data centers. It’s good for Microsoft’s stock performance. It is counterproductive to the climate emergency.)

This kind of VC-thinking works fine under limited conditions, and pretty terribly everywhere else.

Take Sam Altman. Altman is not an engineer. He is not a scientist. He has no real training in either. He is an entrepreneur, with a flair for optimistic tech futurism. He places bets on where technology is headed, and then attempts to bend society in a direction that makes those bets pay off.

The previous generation of venture capitalists were, for the most part, actual engineers and scientists. They had spent time in the lab. They had experience being “close to the metal.” They made some real money early, then started to point that money in the direction of funding audacious high-risk/high-reward projects that didn’t fit anywhere else. The sector was small, compared to government money and corporate R&D money.

Folks like Altman approach the future much like bad sci-fi. They begin from a prediction of where they’d like to end up, and then work backwards. Altman has faith that we’re going to end up at world-changing Artificial General Intelligence. That means we’ll need transformative breakthroughs in chip production and energy production. Ergo, he invests in cold fusion companies and sets out to raise $7 trillion to build his own manufacturing empire.

That’s actually a fine approach, as private investment strategies go. (Hey, you’ve got a prediction for where the world is heading, and you want to make a bunch of correlated bets that support the prediction? Go ahead and spend your own money that way. God bless.) But when it becomes the primary source of funding for basic science and/or applied technologies, then the whole system gets warped in the direction of a few billionaires’ speculative fantasies.

Venture Capital is not inherently better than government funding. That’s a myth that was popularized during the neoliberal era. It has now become impossible to sustain. (Just look at all the stupid bullshit they fund and all the nice things that they ruin!)

Big Tech only got so big because we stopped enforcing antitrust 20+ years ago. If the current reemergence of antitrust enforcement has staying power, that will be good for society and also good for Big Tech funding of basic research.

But, most of all, we should get back to enthusiastically celebrating large piles of government money, paired with substantial taxes on the companies that flourish as a result. Government money isn’t perfect. It can be slow and inefficient. But nothing is perfect. And of all the potential big-pile-of-money sources, it is the one that does the most good, while putting power in the hands of people who are at least supposed to have the best interests of the public in mind.


Read the whole story
tante
1 day ago
"As it stands today, the primary direction-setter for new advances in science and technology is “what do folks like Marc Andreessen and Peter Thiel and Gary Tan and Sam Altman find exciting?”

The problem isn’t just that these guys are ideological fellow-travelers, hell bent on a shared political project that socializes all risk while privatizing all gain. (though, I mean… that would kinda be enough of a problem on its own.)

It’s also that their eye for science, technology, and even consumer products just kinda sucks. Building toward their vision of the future means we’re going to end up with a lot of silly investments in pretend-cities and eugenics vaporware. It means we’ll have abundant funding for hail-mary attempts at developing cold fusion and geo-engineering, and practically nothing for helping cities remain habitable under extreme weather conditions. "
Berlin/Germany

On AI agents: how are these digital butlers supposed to get paid?

2 Comments

I’ve been hearing and reading a lot about AI agents lately.

Ezra Klein has been discussing them all month on his podcast, in a pretty excellent interview series. WIRED’s Will Knight wrote a newsletter last month with the headline “The Age of AI Agents Is Fast Approaching.” The general consensus seems to be that this is where we’re headed.

I have my doubts.


As background, an AI agent is a piece of software that can complete tasks on your behalf. Ezra provides a clarifying example in his most recent interview:

“The example I always use in my head is, when can I tell an A.I., my son is turning five. He loves dragons. We live in Brooklyn. Give me some options for planning his birthday party. And then, when I choose between them, can you just do it all for me? Order the cake, reserve the room, send out the invitations, whatever it might be.”

That’s a tight and evocative description. You can immediately see the appeal, right? It is, broadly speaking, rich-people-shit. One of the (many) advantages that the wealthy have over the rest of us is that they can afford a personal staff that takes care of everyday-life time sinks. Planning a kid’s birthday, figuring out travel logistics, submitting paperwork, etc. Our normal daily lives include an inordinate number of tasks that consume time and mental energy. Rich people can hire someone to handle all that stuff. The rest of us just grin and bear it.
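Mechanically, what’s being promised is mostly a loop: a language model reads the task, picks a tool, software executes it, and the result is appended to the transcript until the model declares itself done. A minimal sketch of that loop, where `call_llm` and both tools are hypothetical stand-ins (canned logic below, where a real agent would query a hosted model):

```python
# Hypothetical tools the agent is allowed to invoke on the user's behalf.
def order_cake(flavor: str) -> str:
    return f"cake ordered ({flavor})"

def send_invitations(theme: str, count: int) -> str:
    return f"{count} {theme} invitations sent"

TOOLS = {"order_cake": order_cake, "send_invitations": send_invitations}

def call_llm(history: list[str]) -> dict:
    # Stand-in for a real model call: canned decisions that walk through
    # the birthday-party example deterministically.
    if not any("cake" in h for h in history[1:]):
        return {"type": "tool", "tool": "order_cake", "args": {"flavor": "dragon"}}
    if not any("invitations" in h for h in history[1:]):
        return {"type": "tool", "tool": "send_invitations",
                "args": {"theme": "dragon", "count": 12}}
    return {"type": "final", "text": "Party planned: cake and invitations done."}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision["type"] == "final":  # the model says it's finished
            return decision["text"]
        result = TOOLS[decision["tool"]](**decision["args"])  # act for the user
        history.append(f"{decision['tool']} -> {result}")
    return "gave up after too many steps"

print(run_agent("plan a dragon-themed fifth birthday party in Brooklyn"))
```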

The promise of software agents is that sometime, in the not-too-distant future, the trappings of rich-people-shit could become available to the rest of us.

I’d love to believe in that promise. I am, amongst other things, a perpetually-overwhelmed parent. If technology could reliably help me manage the day-to-day life churn, I would be thrilled.

And Klein’s reasoning is facially quite strong: A whole lot of very well-funded businesses are working quite hard to build software agents right now. The technical hurdles are comparatively small. They have (much of) the necessary technology. They have the funding. They (likely) have (some) market demand. This is not an absurd belief for Ezra to have arrived at.

But I keep being troubled by the ghosts of digital futures’ past. These promises are not new. Nicholas Negroponte and the MIT Media Lab folks were insisting that the age of software agents was imminent in the early ‘90s. Douglas Adams wrote and performed Hyperland, a “documentary of the future,” for the BBC in 1990. It featured Tom Baker as the personified software agent, dressed up as a literal butler.

Instead of software agents acting as personalized digital butlers, we ended up with algorithmic feeds and the infinite scroll.

Facebook’s algorithm is personalized, sure, but it is designed to maximize value for Facebook by keeping you within the company’s walled garden. Amazon’s algorithm is optimized to sell you the most products.

These are not digital butlers. They are digital sales associates.
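The distinction is concrete: it comes down to which objective the ranking code maximizes. A toy sketch with made-up scores (not any real platform’s algorithm):

```python
# Toy feed items scored by value to the user vs. value to the platform.
items = [
    {"title": "Friend's wedding photos", "user_value": 0.9, "platform_value": 0.2},
    {"title": "Sponsored gadget ad",     "user_value": 0.1, "platform_value": 0.9},
    {"title": "Local news story",        "user_value": 0.7, "platform_value": 0.3},
]

# A "digital butler" would rank by what helps you.
butler_feed = sorted(items, key=lambda i: i["user_value"], reverse=True)

# A "digital sales associate" ranks by what pays the platform.
sales_feed = sorted(items, key=lambda i: i["platform_value"], reverse=True)

print([i["title"] for i in butler_feed])
print([i["title"] for i in sales_feed])
```

Same items, same “personalization,” opposite orderings; the only question is who the sort key serves.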

And, with the benefit of hindsight, we can generalize this phenomenon: the trajectory of any new technology bends toward money.

We could have developed software agents 10, 20, 30 years ago. Software engineers were working quite hard on it. They started companies and obtained funding. The technical hurdles were comparatively small. But there was little money in it. And, in a VC-dominated marketplace, we do not get products that would be useful to the end-user unless they hold the promise of phenomenal financial returns to the investors.

We didn’t get free (or cheap) digital-butlers-for-everyone, because there was no money in it.

That’s why the current wave of enthusiasm seems like a hype bubble to me. I am seeing a lot of very smart, normally insightful people being taken in by the idea of “AI personal assistants for the masses,” without asking what the revenue model is meant to resemble.

And let’s be clear: Dario Amodei is casually dropping numbers like “$5 or $10 billion” to train the next-generation models that are supposed to make these AI agents possible. That’s the financial hole these AI agents are somehow meant to fill. And that’s just for starters.

The promise of personalization in internet-futures-past went unfulfilled, because the money wasn’t in personalizing to your interests. The money was in keeping you on-site, seeing targeted ads. And, again, the trajectory of the future bends toward money.

Sam Altman says we’ll have agents. Sam Altman says a lot of things. Most of what he says is tuned to what he senses people want to hear at any given juncture. But what is the revenue model for personalized agents? In particular, what is the revenue model that might convince investors over the longer term that it could go to infinity?

(Side note: a number of tech critics have been arguing that AI hype is nearly over, because it’s becoming obvious that the tech companies are spending billions to make millions. I want to gently suggest that these critics are not yet jaundiced enough. The companies are spending billions on a loss-leading product that juices their stock price and makes them worth paper-trillions. The OpenAI investment doesn’t need to generate more sales than MSFT spends on it. It just needs to keep the share price absurdly high. It’s ridiculous, and further evidence that our entire economy is just derivative financial products at this point. But that’s the absurd state of things right now.)
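To see why “spending billions to make millions” can still be rational from the inside, run the arithmetic with purely illustrative, invented numbers (not actual figures for any company):

```latex
\underbrace{\$10\,\mathrm{B}}_{\text{AI spend}} - \underbrace{\$1\,\mathrm{B}}_{\text{AI revenue}} = \$9\,\mathrm{B}\ \text{direct loss}
\qquad \text{vs.} \qquad
5\% \times \underbrace{\$3\,\mathrm{T}}_{\text{market cap}} = \$150\,\mathrm{B}\ \text{paper gain}
```

If the AI story lifts the share price by even a few percent, the paper gain dwarfs the operating loss, which is the whole trick.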

Agents, if they are developed at all, are going to be a bespoke, luxury good. They’ll be for discerning customers with money to spend on personalization. The business class lounge set. The rest of us will get info-sludge and degraded public services. That’s the status quo ante, at least. It’s what will happen if we don’t resist, and collectively demand a better future.


I’d like it to be otherwise. And I plan to keep a close eye on this, since it sort of represents a hard test of my broader thesis about how technologies develop.

  • A lot of companies are trying to build AI agents right now. They are well funded. There is supply.

  • The appeal of AI agents, if a smooth and trustworthy product can be brought to market, is undeniable. …Holy hell would it be nice if AI could make the trappings of rich-people-shit available to the rest of us, just this once.

  • But we are still living in the free trial period of these technologies. The trajectory of the future bends toward money.

  • So, either a market is going to develop for subsidizing these tools (packaging and reselling all of our behavioral and personal data, for instance), or the products will be rendered unaffordable to the mass public.

If you want to know where social technologies are headed, don’t focus on what the technology might be used for under ideal conditions.

Focus on the direction that currently-existing market forces will channel it.

And if that direction looks bad, exert pressure on public officials accordingly.


Maybe I’ll be wrong. Maybe I’ll revisit this post in 2029 and, with the benefit of all the time made available by my digital personal assistant, compose a thoughtful mea culpa.

But, for the time being, I would urge you to be skeptical of the promise of AI agents.

Until someone can explain how we’re going to pay these digital butlers, I’m going to assume they aren’t ever going to be available to the masses. That’s not their purpose in this story. Their purpose is to get us excited about the promise of AI, to place our faith in these tech firms (old and new) under the assumption that the benefits will be broadly distributed sometime later.


Read the whole story
tante
7 days ago
"Until someone can explain how we’re going to pay these digital butlers, I’m going to assume they aren’t ever going to be available to the masses. That’s not their purpose in this story. Their purpose is to get us excited about the promise of AI, to place our faith in these tech firms (old and new) under the assumption that the benefits will be broadly distributed sometime later."
Berlin/Germany
1 public comment
jepler
7 days ago
"If you want to know where social technologies are headed, don’t focus on what the technology might be used for under ideal conditions.

Focus on the direction that currently-existing market forces will channel it.

And if that direction looks bad, exert pressure on public officials accordingly."
Earth, Sol system, Western spiral arm

The Measure of Intelligence

1 Comment


This cartoon is by me and Nadine Scholtes.


TRANSCRIPT OF CARTOON

This cartoon has four panels.

PANEL 1

A man wearing a brown jacket over jeans and a v-neck t-shirt is sitting on a park bench, staring at something in his hands with great concentration. Let’s call him JACKET.

A red-headed man in a red smiley face t-shirt is on the path in front of the bench, looking at the first man with a dubious expression. Let’s call him REDHEAD.

REDHEAD: Er… Excuse me. What are you doing?

JACKET: A lot of my genius ideas get lost when I lose focus.

PANEL 2

A close-up on Jacket shows that his hands are filled with a sticky, lumpy, gooey, dripping mess of green-gray ooze. He continues to stare at it with great concentration.

JACKET: So I invented “the idea net” by smooshing rubber cement, peanut butter, and used chewing gum. This way I’ll catch ideas before they escape.

PANEL 3

Redhead is responding, with a rather grumpy expression. Jacket doesn’t even glance at Redhead, continuing to study the mess in his hands.

REDHEAD: That’s gotta be the stupidest idea I’ve ever–

JACKET: I’m a billionaire.

PANEL 4

The scene has changed to an apartment. Redhead is seated on a sofa, mixing up some sticky goo in his hands. On the coffee table in front of him we can see an open peanut butter jar, an open bottle of rubber cement, and a bunch of little crumpled pieces of paper (presumably gum wrappers). He is staring at the mess in his hands and smiling.

Behind him, a blonde woman is watching what he’s doing with a very doubtful expression on her face.

REDHEAD: I know it looks stupid, but he’s a billionaire! His ideas must be good!

CHICKEN FAT WATCH

Chicken fat is an old cartoonists’ expression for meaningless but fun details in a cartoon.

In panel one, hidden from the humans by a bush, a squirrel in a slouch hat and trenchcoat is standing next to a magpie with a bag of nuts. The magpie and the squirrel have their backs to each other and are studiously ignoring each other.

In panel three, we can see that the squirrel and magpie are looking at each other. The squirrel has opened his trenchcoat to reveal a small bag labeled “catnip.” The magpie is holding out the bag of nuts to the squirrel.

In panel four, in the background, there is an open window. The magpie has landed on the windowsill, holding the bag of catnip. Below the windowsill, a gray housecat is making the “shh” gesture with one paw, and with the other paw is offering the magpie a shiny necklace.

Also in panel four, there are a couple of framed pictures on the wall. One of them is of the blonde woman; the other one is of the cat.



Read the whole story
tante
15 days ago
This comic (is|is not) about Sam Altman.
Berlin/Germany

AI revolution will be boon for natural gas, say fossil fuel bosses

1 Comment and 2 Shares
Read the whole story
tante
15 days ago
AI push will lead to more carbon emissions, it's that simple.
Berlin/Germany
sarcozona
16 days ago
Epiphyte City