Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

“Can Europe Chart Its Own Path on Tech?” – being a guest on the “Tech Won’t Save Us” podcast


Tech Won’t Save Us is probably one of the most influential podcasts in the space of critical, deep analysis of technologies and their social and political effects. That’s why it has been such an honour and pleasure to be a guest on it.

In this week’s episode we talk about Europe, tech, and innovation, and about how the current “AI” race (like a lot of other hyped technologies) amounts to little more than empty innovation.

You can also listen to it on YouTube:

tante (Berlin/Germany), 10 hours ago:
On "Tech Won't Save Us" I got to talk about tech, Europe and innovation.

AI can't fix what automation already broke


Hello, and welcome to Blood in the Machine: The Newsletter; not to be confused with Blood in the Machine: The Book. It’s a newsletter about Silicon Valley, AI, labor, and power, written by me, journalist and former LA Times tech columnist Brian Merchant. It’s free, but if you find this sort of independent tech journalism and criticism valuable, and you’re able, I’d be thrilled if you’d help back the project. Enough about all that, though; grab your hammers, onwards, and thanks for reading.



Yes, there’s a constant influx of ‘snapshot of our ever-exacerbating dystopia’ type stories to endure these days, but this one, from the American Banker trade magazine, manages to stand out. The piece reveals a new, cutting-edge use case for enterprise AI: trying to prevent call center workers from “losing it” by showing them video montages of their family set to their favorite pop music after they have been barraged with angry callers and the system has assessed they are on the brink.

Pretty bleak! But it’s a telling example of how AI and automation get used in the workplace—in more ways than initially meets the eye.

First, the details:[1]

The AI bringing zen to First Horizon's call centers

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research's 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music. 

First Horizon is using artificial intelligence and such video "resets" to bring a state of calm and well-being to the people who talk to customers on the phone all day.
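To make concrete what is being criticized here, the pipeline being sold presumably looks something like the following minimal sketch: score each call's sentiment, accumulate a rolling stress estimate for the agent, and interrupt them with a "reset" video once a threshold is crossed. All names, signals, and thresholds below are my own invention for illustration; the actual vendor system is not public.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    # Assumed output of some upstream sentiment model:
    # -1.0 (furious caller) .. 1.0 (friendly caller)
    caller_sentiment: float
    duration_minutes: float

def stress_score(recent_calls: list[CallRecord]) -> float:
    """Naive rolling stress estimate: angrier and longer calls add more stress."""
    score = 0.0
    for call in recent_calls:
        anger = max(0.0, -call.caller_sentiment)  # only negative sentiment contributes
        score += anger * call.duration_minutes
    return score

def should_trigger_reset(recent_calls: list[CallRecord], threshold: float = 5.0) -> bool:
    """Decide whether to interrupt the agent with a calming video 'reset'."""
    return stress_score(recent_calls) >= threshold

# An afternoon of mostly angry calls crosses the (arbitrary) threshold.
calls = [CallRecord(-0.9, 4.0), CallRecord(-0.5, 3.0), CallRecord(0.2, 2.0)]
print(should_trigger_reset(calls))  # → True
```

Note that nothing in a system like this addresses why the calls are angry in the first place; it only decides when the agent is close enough to breaking to warrant an interruption.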

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

Testing output from call center operator headset using KEMAR artificial head fixture. Image by Chuck Kardous for NIOSH, 2007.

Consider: Why, exactly, are these workers so stressed out? Why are they dealing with so many “angry” and “perplexed” customers—a consequential number of whom yell at them every single day—that they are, according to their own employers, on the brink of breaking down?

Later in the article, a clue emerges: “Today, about 85% to 95% of customer calls that First Horizon fields are handled in a self-service manner within the interactive voice response,” according to one of the bank’s executives. Aha! This tells us that however many years ago, the management at First Horizon Bank was sold on another automation technology, interactive voice response (IVR), which is now used to field the vast majority of its incoming calls. So, like most of its peers, First Horizon Bank was able to replace many of its call center workers with an IVR system, or to allow its call volume to balloon without hiring more of them, until most calls were not being answered by people at all, but by IVR. The problem is, everybody absolutely hates IVR.

It’s one of the most reviled forms of automation in existence, and this is why so many customers are livid by the time they reach the beleaguered call center workers who remain on hand. These callers have spent many minutes or even hours navigating an automated system designed to save a bank some money on labor costs (and to encourage exasperated callers to hang up and drop their issue, which would likely require more resources to address), and they are now understandably angry that so much of their time has been wasted.

To better illustrate: Let’s say you’re a First Horizon customer; the bank has put a hold on your credit card as a fraud prevention measure (the result of an automated warning system) and you’re standing in the grocery store after finally managing to get the self-checkout kiosk to work (the result of labor-saving automation technology) and your kids are screaming and the card is declined. You call the bank, and you’re routed to an interactive voice menu, and you can’t get the thing to go to the right option and now your kid has spilled one of the grocery bags and you’re scrambling as you wait and punch the number again on your phone and the people behind you are shaking their heads and nope, wrong department and the prerecorded voice drones on so you try again, and it’s ringing now, and you FINALLY get through to someone. You don’t mean to sound mad, you know that it’s not this poor worker’s fault for installing such an impenetrable and spirit-crushing system, but this worker is there, on the other end of the phone, and you can’t help but be aggravated when it’s time to explain your issue.

Any of this sound familiar? Maybe you have the saintly patience necessary to endure a world overstuffed with the broken promises of automation without “losing it”. Not everyone does! The point is, these are the kinds of situations the First Horizon call center workers are picking the phone up to every day: callers ranging from those evincing polite but thinly concealed agitation to those thrown into a full-blown rage. Anxious customers, yellers, insult-hurlers, the gamut. I’ve worked in a call center, I get it.

That’s why this story broke me a little bit. First Horizon and the company that sold them on this AI solution are telling low-wage workers—whose jobs are to absorb customers’ wrath at the fact that First Horizon has messed something up, and then used automation to make it all but impossible to answer for it—that “We get that you’re stressed thanks to decades of our cost cutting and bad automation, but you can now listen to a pop song and look at AI-curated vacation slides as a treat, before returning to the call mines.” It’s deeply insulting.

I used the grocery store incident as a way to illustrate how pervasive the effects of all these little examples of automation are. Taken alone, none of them are the end of the world—except, perhaps, for workers who once relied on a job that was automated away—and some are well-intentioned (fraud prevention). But collectively they’re a corrosive force that erodes social bonds and spoils personal interactions and generally makes it less pleasant to go about our days as human beings.

Next to no one’s lives are improved, except maybe the company that saves on labor costs—and even then not always. I’ve termed this shitty automation in the past, because no one wins. (An art historian once wrote an academic paper about it, even!) Customers dislike it, workers hate it, and all around, it causes our experience to suffer. The world would be a better place if most IVR systems simply vanished, and we once again could speak with real humans about our problems, and real humans could try to solve them—just imagine it!

But instead of, say, making room in the budget for more intelligent human staff members, a bank like First Horizon decides to shell out for the latest technological trend that promises ever-improved efficiency—this time, the AI ‘reset’ button that impresses management but condescends to workers. Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one.

This is my great worry with generative AI. I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call center companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed out institutions, and all kinds of poor simulacra of what used to stand in their stead—all so a handful of Silicon Valley giants and their client companies might one day profit from the saved labor costs. The result will be the latest wave of shitty automation, spread all over the internet, our phones, and our lives.



On that note, I might mention that I had a piece in the Atlantic last week about the growing trend of companies, creators, and brands promising “AI-free” products and services—a reaction to ethical issues and safety concerns, and the reputation AI-produced content has for being cheap and lower-quality.

I dug the (human-made) art and the piece has been spurring a lot of good discussion online. Also some bad discussion: The CEO of Medium showed up in the Atlantic’s mentions on Threads saying the platform should get credit for being the first to be “pro-human.” I was one of 70 journalists that Medium laid off after deciding it was cheaper to run the platform on poorly compensated user-generated content, so I shot back that that wasn’t very pro-human. A debate ensued, and it’s still going on as I write this…

One reader reached out and asked how I felt about the Atlantic’s licensing deal with OpenAI, which a) surprised many Atlantic staffers themselves, and b) was announced after this piece had been commissioned. The answer is: I vehemently oppose such licensing deals! I think they’re bad for the industry, and are the latest capitulation to tech companies, who have spent the last 20 years steamrolling journalism in myriad ways. Will I write for the Atlantic again as long as they have this policy in place, if I or other freelancers can’t opt out? Probably not! I’ll have to learn more—writers can usually carve out exceptions in the contracts, which are typically awful in all sorts of ways, but that can be onerous and time consuming. At the very least, contributors should be asked for consent to add their work to a training data set, and compensated for it.

I also headed up to South Lake Tahoe last week to speak to the Teamsters about luddites, AI, and organizing around big tech—it was a great chat, and I was very glad to see Blood in the Machine added to a number of locals’ libraries. Good people, good times; I’m always happy to do this stuff. I’d love to do more of it, in fact—and your support can help there. I recently turned payments on, and plan to be doing more of this writing—look for a wider announcement in coming weeks—and I would appreciate your support so I can keep doing this work. A million thanks to all of you who’ve already pledged support and signed up with next to no prompting; you’re the very best, and this General Ludd salutes you. Honestly it means so much.

Until next time — hammers up.

bcm



[1] Hat tip to the great Alex Press for flagging today’s tale of woe.

tante (Berlin/Germany), 3 days ago:
"I used that example [...] to illustrate how pervasive the effects of all these little examples of automation are. [...] But collectively they’re a corrosive force that erodes social bonds and spoils personal interactions and generally makes it less pleasant to go about our days as human beings."

How to Leave Twitter

Several interrupted Midjourney generations were used as a source image and “blended” over multiple generations.

I have been busy this week with writing and revisions for part two of The Ghost Stays in the Picture — part one is over at the Flickr Foundation blog. Because of that, there’s no typical newsletter content this week.

However, I thought I’d flag that I was also a guest on WXXI’s “Connections with Evan Dawson,” joining a panel conversation about AI and the music industry that veered into AI and creativity more generally. It’s meant for a general listening audience and it was a pretty fun chat.

I also joined some awesome folks for a FAccT2024 panel launching “Bits in the Machine: A Time Capsule of Worker's Stories in the Age of Generative AI,” a zine from folks at the DAIR Institute, Collective Action School and Collective Action in Tech. As the title implies, the zine contains interviews from various perspectives on the way workers in the creative industry are thinking about generative AI. I’m one of a few interviewees. You can check out the zine here.

On June 27th I’ll be a keynote speaker, alongside Beth Coleman from the University of Toronto, at the Design Research Society conference (DRS2024) at Northeastern University. Details here.

The summer, plus the workload of editing the ARRG! Zine (which is slowly coming together) and wrapping up my Flickr fellowship means a little less content from the newsletter — apologies in advance if it goes biweekly for a bit! I’m still here and still writing.

What follows is a short rant on leaving Twitter, which may or may not be interesting. But I quit the site, and I had some thoughts. Consider this a rant in lieu of a proper post.

How to Leave Twitter

I have been trying to leave Twitter (X) for a while now and finally deleted the account. If you want the messy details, I promise you they are extremely dull. There was Twitter drama, and I was invested in it, and that was disorienting. An artist was sharing AI-generated work; another artist realized it was AI-generated and felt betrayed, so they started copying the AI-generated artwork by hand and posting it as their own (titled, literally, “Spite”). I’ll remind you that the person who did this was upset because a work they enjoyed had been generated by AI. They then felt entitled to claim ownership of it. A large contingent of anti-AI people celebrated this, because they didn’t believe the original artist was an artist at all.

I found this celebration confusing — AI-generated art, drawn by hand, was still generated by AI, and still contains traces of style features learned from the artists whose work was taken without permission. If that was your concern, then this use of AI is a weird thing to celebrate.

I have a nuanced view on this, which meant I was about to have a horrible experience on Twitter. Sure enough, I was quickly called an AI bro. But several comments into a thread with someone I disagreed with (but whom I was learning from), another user stepped in to confront me about my use of “y’all” (“you all,” more specifically). “Y’all” is a thing I say in online conversations, sometimes, and I was accused of using it to “generalize the positions of all anti-AI people”.

It was a small annoyance, but in the context of all that was happening, I finally collapsed under the weight of the bad-faith arguments that have come to define what posting on Twitter means in 2024.

It is helpful that the X icon is a literal reminder not to use it. A few months ago I took it off my phone, but then I was traveling for conferences and trying to talk to journalists so I had to add it back. Journalists still love X, because Twitter used to be where important people said things. Nobody is saying anything important there now.

The journalists didn’t quote me anyway.

The friction of X was always more motivating than the conversations on any of those other platforms, because I see more extreme examples of bad-faith, decontextualized techno-utopian gaslighting on X than anywhere else. It was fodder for rebuttal. Much of this newsletter has been fueled by other people being wrong on the internet. As someone dedicated to understanding and critiquing techno-solutionism, it’s a source for uncovering that ideology. LinkedIn has it too, but it’s more carefully controlled by your selected network.

But it’s too tempting to mistake all that for something relevant. A lot of the conversation on X about AI is simply wrong: people who don’t read papers telling us what the papers say, and people who learned how LLMs work in online forums dedicated to the singularity assuring me that I don’t understand them.

For a long time I believed that there was some remarkable change in OpenAI’s infrastructure that made my understanding of LLMs obsolete. Then I did the research and read the papers, etc. There isn’t. People just misunderstand how they work, and wanted to assure me that I didn’t know how they worked.

Unfortunately, a great number of smart AI people still seem to be relying on X to share their research, critiques and ideas. I suspect there is a timing issue here: Generative AI was the last big cultural shift to take place before Twitter died and X rose from its ashes, like the stench from a putrid corpse. Many of these AI people found themselves with tons of engagement and followers that they’re simply unwilling to give up, or find that they have less leverage on other sites.

But it was telling that I was on the website because I wanted access to that knowledge. In other words: even the best, most thoughtful writers about AI were causing me to use X in order to read what they had to say. It seemed perverse. I understand why they are there, but I need everyone to understand that they are why we are there, and by all counts, we should all be somewhere else.

Me, I had a paltry 4k followers. Without paying for a blue check, it was unlikely to go much higher than that. Friends in the AI space range from 14k to 40k. That’s astounding to me. The fact is, audience numbers really have nothing to do with how many people see the free stuff you give away. X is designed around the concept that what I wrote had no value, and that I had to pay Elon Musk to share my free content that other users enjoyed engaging with. That’s ridiculous.

So in short, I’ve deleted my Twitter account, and I’m available on LinkedIn, Instagram and Mastodon. Come find me, my accounts need the help. :)

tante (Berlin/Germany), 8 days ago:
"It is helpful that the X icon is a literal reminder not to use it. A few months ago I took it off my phone, but then I was traveling for conferences and trying to talk to journalists so I had to add it back. Journalists still love X, because Twitter used to be where important people said things. Nobody is saying anything important there now."

What's in a name? "AI" versus "Machine Learning"

“An Android programming a desktop computer” image generated using Stability AI.

Today I want to fuss over language for a bit. I’ve begun to suspect that the term “Artificial Intelligence” manages to obscure more than it reveals.


I recently read Ethan Mollick’s new book, Co-Intelligence: Living and Working with AI. The book makes a pragmatic case for technological optimism. Mollick is convinced that we are at the dawn of a new era. He urges his readers to dive right in and get an early start mapping the “jagged frontier” of the technology.

As regular readers of this newsletter certainly know, I am much more skeptical of generative A.I. My hunch is that once the hype bubble fades, these large language models are mostly going to be understood as a substantial-but-incremental advance over existing machine learning tools. (As John Herrman puts it in his column this week, “Siri, but this time it works.”)

Reading Mollick, I was struck by the conceptual pivot where he sets out the claim that generative A.I. is a “general purpose technology.” Many techno-optimists use this same terminology. It places large language models in the same rarified category as the steam engine, electricity, and the internet — major, distinct innovations that have multiple uses, and broad spillover effects.

If we grant that LLMs are a general purpose technology, then it follows that the social impacts will be far-reaching, even if we aren’t on the path to artificial general intelligence. Radical new capabilities have just been made available to society. The first networked computers and the first electric lights might not have impressed the skeptics either, but the smart, entrepreneurial move was to fixate on how the world was about to change.

But should we accept the premise? Are LLMs a genuinely new phenomenon, like the steam engine, or are they a significant incremental advance like, say, broadband-speed internet? The internet with broadband can do a lot of things that the pre-broadband internet could not. (Netflix started out in the mail-order DVD business; streaming video on demand was not yet possible.) But broadband isn’t considered a distinct “general purpose technology.” It just gets lumped in as part of the internet’s developmental trajectory.

Think about ChatGPT’s actual use cases. It’s a better Siri. A better Clippy. A better PowerPoint and Adobe. A better Eliza. A better Khan Academy and WebMD. None of these are new. They all exist. They all make use of machine learning. They are all hazy shadows of their initial promises. Many had new features unveiled during the “Big Data” hype bubble, not too long ago. Those features have proven clunky. We’ve spent over a decade working with frustrating beta test versions of most of this functionality. The promise of Generative A.I. is “what if Big Data, but this time it works.”

(Andohbytheway, it isn’t so clear that it actually works this time either.)

There’s an old saying. I’m having trouble tracking it down, because Google search is garbage now. It goes something like this: “Machine learning is what people call everything that computers already do. Artificial intelligence is what they call everything we hope computers can do someday.”

I used to hear that point often from people insisting that the goalposts for A.I. keep getting moved. No matter what benchmarks the field reaches, it’s never really A.I. And that means the field doesn’t get enough credit for how far it has come!

But in the midst of the current hype cycle, the reasoning has been turned on its head. We’re now calling every new product “A.I.-powered,” even when all it actually delivers is a mild upgrade on existing machine learning practices.

Those upgrades and incremental advances will indeed sometimes matter, just as the switch from mass dial-up to mass broadband mattered. But we don’t have to invoke the steam engine and electricity as points of comparison. The advances are narrower and more manageable in scope and function.


Mollick’s strongest evidence for the stupendous power of actually-existing A.I. is (paraphrasing) “look how much better it makes my students at brainstorming business ideas and slogans! Look at how useful it is to consultants working for the Boston Consulting Group!” And if we believe these are difficult/high-level tasks, then generative A.I. looks like a general purpose technology that has already arrived.

But, to reference an older piece from this substack (“Bullet Points: Oh-just-shut-up edition”):

Mollick and his coauthors find that GPT-4 improves consultant productivity and work quality on all these tasks. The gains were strongest for the low-performers. But, also, he writes that AI is a “jagged frontier” — the technology excels at some tasks, is terrible at others, and it requires significant expertise to differentiate between the two. To Mollick, this means that (1) the business opportunities are phenomenal and (2) the people who get rich will be the first-movers who really develop their skills in this grand new landscape.

And, I mean… sure? One could read the findings that way.

But an alternate reading would be something like “hey! I hear you think A.I. is a bullshit generator. Well, we gave a whole profession of bullshit generators access to A.I., and you’ll never believe how much more productive they became at generating bullshit! This is such a big deal for the Future of Work!”

Of course ChatGPT is useful to underwhelming consultants in generating ideas for their slidedecks and reports that no one bothers to read. Of course it’s useful for entrepreneurship students brainstorming 25 new business ideas. Those activities are bullshit to begin with!

ChatGPT doesn’t look like a disruptive, revolutionary tool to me. It looks like an incremental advance over the status quo ante. Students can use ChatGPT to cheat on writing assignments. That’s catastrophic for the extant cheating-on-writing-assignments cottage industry. But it doesn’t have much bearing on my pedagogy or syllabus.

Notably, all of Mollick’s big ideas for how to adapt higher education — stuff like flipped classrooms and experiential activities — are perfectly nice ideas that he and others were already pursuing. This isn’t a brave new world situation. It’s a now-more-than-ever scenario.


The reason this all matters is that if we think of LLMs as just better machine learning, then it stands to reason that we should be intensely concerned about whether the new machine learning still suffers from all the well-established problems with older machine learning (Garbage In Garbage Out, encoding biases from the training data, etc).

(Spoiler alert: they haven’t fixed any of those existing problems. That’s why the rebrand was needed.)

It also provides further reason not to take folks like Leopold Aschenbrenner seriously. Aschenbrenner is the guy who became briefly internet-famous last week for tweeting that we’d have AGI by 2027 if we “believe in straight lines on a graph.” Machine learning didn’t start from scratch in 2018. The whole artifice crumbles if we refuse to act as though the field of machine learning was born five years ago.

We ought to approach LLMs not as an imminently world-altering technology, but as an incremental advance on existing technologies. There are areas where that incremental advance will matter a lot. LLMs aren’t going to be the next metaverse, gone and basically forgotten two years later. But they also aren’t such a dramatic break from the recent past.

So when the AI folks insist that this time edtech is going to revolutionize learning for everyone, we should pay attention to how that worked last time. When they insist it will revolutionize medicine and art and government and science, we should reflect on why those same claims haven’t panned out over the past couple decades.

Generative Artificial Intelligence is machine learning. Any time Sam Altman and his pals talk about the wonders of Generative Artificial Intelligence, just substitute the words “machine learning” in your head, and ask yourself whether the claims, stripped of futurity, make any sense at all.

These LLMs are a significant advance on existing technology. It would be a mistake, I think, to pretend otherwise. But the only reason I can see for treating them like a distinct new general purpose technology is to shield these tools from the track record of dashed expectations and abject failures that recently preceded them.

Maybe it’ll really work this time. Maybe we’re on the verge of experiencing transformative versions of Siri and Clippy that work great.

But we should start by asking whether the new models in fact succeed where the previous ones failed, not by declaring that we stand at the dawn of a new industrial revolution.


tante (Berlin/Germany), 8 days ago:
"We ought to approach LLMs not as an imminently world-altering technology, but as an incremental advance on existing technologies. There are areas where that incremental advance will matter a lot. LLMs aren’t going to be the next metaverse, gone and basically forgotten two years later. But they also aren’t such a dramatic break from the recent past."

Part-time rate rises to record level

Almost four in ten people in Germany work part-time: that is what current figures from the Institut für Arbeitsmarkt- und Berufsforschung (Institute for Employment Research) show. One researcher says: the individual worker has never worked as little as they do today.

tante (Berlin/Germany), 16 days ago:
The part-time rate is going up, which is often framed as a problem. But it is much more a sign that people don't want to do the 40-hours-a-week thing and are looking for other models.
My partner and I also both work only 32 hours a week; the extra time goes into time with our son.

The Three Little Pigs

tante (Berlin/Germany), 17 days ago:
"Three little pigs"