
The Media's Pivot to AI Is Not Real and Not Going to Work


On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.

From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT. 

This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every visitor it sends to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.
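Blocking, for what it’s worth, mostly happens via robots.txt, which is purely advisory; crawlers are free to ignore it, which is part of why it does not always work. A minimal sketch that asks OpenAI’s documented crawlers to stay away looks like this:

User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /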

This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse”; it has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.

Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object they can tell investors and their staffs they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward-thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.

But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is neither smart nor sustainable, and therefore it is not a strategy either. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists take great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.

“AI-first” has become a buzzword that execs can point at to claim that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.

In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.” 

Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told employees in an all-hands meeting, audio of which was obtained by 404 Media, that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, an email summarization tool called Dispatch, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”

The Washington Post and the Los Angeles Times are doing all sorts of fucked up shit that definitely no one wants but that is being imposed upon their newsrooms because they are owned by tech billionaires who are tired of losing money. The Washington Post has an AI chatbot and plans to create a Forbes contributor-esque opinion section with an AI writing tool that will assist outside writers. The Los Angeles Times introduced an AI bot that argues with its own writers and has written that the KKK was not so bad, actually. Both outlets have had massive layoffs in recent months.

The New York Times, which is actually doing well, says it is using AI to “create initial drafts of headlines, summaries of Times articles and other text that helps us produce and distribute the news.” Wirecutter is hiring a product director for AI and recently instructed its employees to consider how they can use AI to make their journalism better, New York magazine reported. Kevin Roose, an, uhh, complicated figure in the AI space, said “AI has essentially replaced Google for me for basic questions,” and said that he uses it for “brainstorming.” His Hard Fork colleague Casey Newton said he uses it for “research” and “fact-checking.” 

Over at Columbia Journalism Review, a host of journalists and news execs, myself included, wrote about how AI is used in their newsrooms. The responses were all over the place, occasionally horrifying, and ranged from people using AI as a personal assistant to a brainstorming partner to an article drafter.

In his largely incoherent screed that shows how terrible he was at managing G/O Media, which took over Deadspin, Kotaku, Jezebel, Gizmodo, and other beloved websites and ran them into the ground at varying speeds, Jim Spanfeller nods at the “both good and perhaps bad” impacts of AI on news. In a truly astounding passage of a notably poorly written letter that manages to say less than nothing, he wrote: “AI is a prime example. It is here to a degree but there are so many more shoes to drop [...] Clearly this technology is already having a profound impact. But so much more is yet to come, both good and perhaps bad depending on where you sit and how well monitored and controlled it is. But one thing to keep in mind, consumers seek out content for many reasons. Certainly, for specific knowledge, which search and search like models satisfy in very effective ways. But also, for insights, enjoyment, entertainment and inspiration.” 

At the MediaPost Publishing Insider Conference, a media industry business conference I just went to in New Orleans, there was much chatter about AI. Alice Ting, an executive for the Daily Mail, gave a pretty interesting talk about how the Daily Mail is protecting its journalism from AI scrapers in order to eventually strike deals with AI companies to license their content.

“What many of you have seen is a surge in scraping of our content, a decline in traffic referrals, and an increase in hallucinated outputs that often misrepresent our brands,” Ting said. “Publishers can provide decades of vetted and timestamped content, verified, fact checked, semantically organized, editorially curated. And in addition offer fresh content on an almost daily basis.” 

Ting is correct that several publishers have struck lucrative deals with AI companies, but she also suggested that AI licensing would be a recurring revenue stream for publishers, which would require a series of competing LLMs to want to come in and license the same content over and over again. Many LLMs have already scraped almost everything there is to scrape; it’s not clear that there will consistently be new LLMs from companies wanting to pay to train on data that other LLMs have already trained on; and it’s not clear how much money the Daily Mail’s blogs of the day are going to be worth to an AI company on an ongoing basis. Betting that, this time, hinging the future of our industry on massive, monopolistic tech giants will work out is the most Lucy-with-the-football thing I can imagine.

There is not much evidence that licensing content to LLM companies will work out as a recurring revenue stream for any publisher, outside of the very largest, like, perhaps, the New York Times. Even at the conference, panel moderator Upneet Grover, founder of LH2 Holdings, which owns several smaller blogs, suggested that “a lot of these licensing revenues are not moving the needle, at least from the deals we’ve seen, but there’s this larger threat of more referral traffic being taken away from news publishers [by AI].”

In my own panel at the conference I made the general argument that I am making in this article, which is that none of this is going to work.

“We’re not just competing against large-scale publications and AI slop, we are competing against the entire rest of the internet. We were publishing articles and AI was scraping and republishing them within five minutes of us publishing them,” I said. “So many publications are leaning into ‘how can we use AI to be more efficient to publish more,’ and it’s not going to work. It’s not going to work because you’re competing against a child in Romania, a child in Bangladesh who is publishing 9,000 articles a day and they don’t care about facts, they don’t care about accuracy, but in an SEO algorithm it’s going to perform and that’s what you’re competing against. You have to compete on quality at this point and you have to find a real human being audience and you need to speak to them directly and treat them as though they are intelligent and not as though you are trying to feed them as much slop as possible.”

It makes sense that journalists and media execs are talking about AI because everyone is talking about AI, and because AI presents a particularly grave threat to the business models of so many media companies. It’s fine to continue to talk about AI. But the point of this article is that “we’re going to lean into AI” is not a business model, and it’s not even a business strategy, any more than pivoting to “video” was a strategy or chasing Facebook Live views was a strategy. 

In a harrowing discussion with Axios, in which he excoriates many of the deals publishers have signed with OpenAI and other AI companies, Matthew Prince, the CEO of Cloudflare, said that the AI-driven traffic apocalypse is a nightmare for people who make content online: “If we don’t figure out how to fix this, the internet is going to die,” he said.

So AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allowing random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news. And yet media executives have decided that the only way to compete with this is to make their workers use AI to make content in a slightly more efficient way than they were already doing journalism.

This is not going to work, because “using AI” is not a reporting strategy or a writing strategy, and it’s definitely not a business strategy.

AI is a tool (sorry!) that people who are bad at their jobs will use badly and that people who are good at their jobs will maybe, possibly find some uses for. People who are terrible at their jobs (many executives) will tell their employees that they “need” to use AI, that their jobs depend on it, that they must become more productive, and that becoming an AI-first company is the strategy that will save them from the old failed strategy, which itself was the new strategy after other failed business models.

The only journalism business strategy that works, and that will ever work in a sustainable way, is if you create something of value that people (human beings, not bots) want to read or watch or listen to, and that they cannot find anywhere else. This can mean you’re breaking news, or it can mean that you have a particularly notable voice or personality. It can mean that you’re funny or irreverent or deeply serious or useful. It can mean that you confirm people’s priors in a way that makes them feel good. And you have to be trustworthy, to your audience at least. But basically, to make money doing journalism, you have to publish “content,” relatively often, that people want to consume. 

This is not rocket science, and I am of course not the only person to point this out. There have been many, many features about the success of Feed Me, Emily Sundberg’s newsletter about New York, culture, and a bunch of other stuff. As she has pointed out in many interviews, she has been successful because she writes about interesting things and treats her audience like human beings. The places that are succeeding right now are individual writers who have a perspective, news outlets like WIRED that are fearless, publications that have invested in good reporters like The Atlantic, publications that tell you something that AI can’t, and worker-owned, journalist-run outlets like us, Defector, Aftermath, Hellgate, Remap, Hearing Things, etc. There are also a host of personality-forward, journalism-adjacent YouTubers, TikTok influencers, and podcasters who have massive, loyal audiences, yet most of the traditional media is utterly allergic to learning anything from them.

There was a short period of time when it was possible to make money by paying human writers—some of them journalists, perhaps—to spam blog posts onto the internet that hit specific keywords, trending topics, or things that would perform well on social media. These were the early days of Gawker, BuzzFeed, VICE, and Vox. But the days of media companies tricking people into reading their articles using SEO or hitting a trending algorithm are over.

They are over because other people are doing it better than them now, and by “better” I mean more shamelessly and with reckless abandon. As we have written many times, news outlets are no longer just competing with each other, but with everyone on social media, and Netflix, and YouTube, and TikTok, and all the other people who post things on the internet. They are not just up against the total fracturing of social media, the degrading and enshittification of the internet’s discovery mechanisms, algorithms that artificially ding links to articles, AI snippets and summaries, etc. They are also competing with sophisticated AI slop and spam factories, often run by people on the other side of the world, publishing things that look like “news” at a scale that even the most “efficient” journalist, leveraging AI to save some perhaps negligible amount of time, cannot ever hope to match.

Every day, I get emails from AI spam influencers who are selling tools that allow slop peddlers to clone any website with one click, automatically generate newsletters about any topic, or generate plausible-seeming articles that are engineered to perform well in a search algorithm. Examples: “Clone any website in 9 seconds with Clonely AI,” “The future of video creation is here—and it’s faceless, seamless & limitless,” “just a straightforward path to earning 6-figures with an AI-powered newsletter that’s working right now.” These people do not care at all about truth or accuracy or our information ecosystem or anything else that a media company or a journalist would theoretically care about. If you want an example of what this looks like, consider the series of “Good Day” newsletters, which are AI-generated and published in 355 small towns across America, many of which no longer have newspapers. These businesses are economically viable because each is run by one person (or a very small team of people), disproportionately living in low cost of living areas, with essentially zero overhead.

And so becoming more “efficient” with AI is the wrong thing to do, and it’s the wrong thing to ask any journalist to do. The only thing that media companies can do in order to survive is to lean into their humanity, to teach their journalists how to do stories that cannot be done by AI, and to help young journalists learn the skills needed to do articles that weave together complicated concepts and, again, that focus on our shared human experience, in a way that AI cannot and will never be able to.

AI as buzzword and shiny object has been here for a long time. And I actually do not think AI is fake and sucks (I also don’t really believe that anyone thinks AI is “fake,” because we can see the internet collapsing around us). We report every day on the ways that AI is changing the web, in part because it is being shoved down our throats by big tech companies, spammers, etc. But I think that Princeton’s Arvind Narayanan and Sayash Kapoor are basically correct when they say that AI is “normal technology” that will not change everything but that will, over time, lead to modest improvements in people’s workflows as it gets integrated into existing products or helps around the edges. We—yes, even you—are already using some version of AI, or some tools that have LLMs or machine learning in them in some way, shape, or form, even if you hate such tools.

In early 2023, when I was the editor-in-chief of Motherboard, I was asked to put together a presentation for VICE executives about AI, and how I thought it would change both our journalism and the business of journalism. The reason I was asked to do this was because our team was writing a lot about AI, and there was a sense that the company could do something with AI to make money, or do better journalism, or some combination of those things. There was no sense or thought at the time, at least from what I was told, that VICE was planning to use AI as a pretext for replacing human journalists or cutting costs—it had already entered a cycle where it was constantly laying off journalists—but there was a sense that this was going to be the big new opportunity/threat, a new potential savior for a company that had already created a “virtual office” in Decentraland, a crypto-powered metaverse that last year had 42 daily active users.

I never got to give the presentation, because the executive who asked me to put it together left the company, and the new people either didn’t care or didn’t have time for me to give it. The company went bankrupt almost immediately after this change, and I left VICE soon after to make 404 Media with my co-founders, who also left VICE. 

But my message at the time, and my message now two years later, is that AI has already changed our world, and that we have the opportunity to report on the technology as it already exists and is already being used—to justify layoffs, to dehumanize people, to spam the internet, etc. At the time, we had already written 840 articles that were tagged “AI,” which included articles about biased sentencing algorithms, predictive policing, facial recognition, deepfakes, AI romantic relationships, AI-powered spam and scams, etc. 

The business opportunity then, as now, was to be an indispensable, very human guide to a technology that people—human beings—are making tons of money off of, are using as an excuse to lay off workers, and are doing wild shit with. There was no magic strategy in which we could use AI to quadruple our output, replace workers, rise to the top of Google rankings, etc. There was, however, great risk in attempting to do this: “PR NIGHTMARE,” one of the slides I wrote about the risks of using AI said: “CNET plagiarism scandal. Big backlash from artists and writers to generative AI. Copyright issues. Race to the bottom.”

My other thought was that any efficiencies that could be squeezed out of AI in our day-to-day jobs were already being realized by good reporters and video producers at the company. There could be no top-down forced pivot to AI, because research and time-saving uses of AI were already being naturally integrated into our work by smart people, in ways that were totally reasonable and mostly helpful, if not groundbreaking. The AI-as-force-multiplier was already happening, and while, yes, this probably helped the business in some way, it helped in ways that were not then and were never going to be actually perceptible to a company’s bottom line. AI was not a savior then, and it is not a savior now. For journalists and for media companies, there is no real “pivot to AI” that is possible unless that pivot means firing all of the employees and putting out a shittier product (which some companies have called a strategy). This is because the pivot has already occurred, and the business prospects for media companies have gotten worse, not better. If Kevin Roose is using AI so much, in such a new and groundbreaking way, why aren’t his articles noticeably different than they were before, and why aren’t there way more of them? Where are the journalists who were formerly middling who are now pumping out incredible articles thanks to efficiencies granted by AI?

To be concrete: Many journalists, including me, at least sometimes use some sort of AI transcription tool for some of their less sensitive interviews. This saves me many hours, and the tools have gotten better (but they are still not perfect, absolutely require double-checking, and should not be used for sensitive sources or sensitive stories). YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would never have been possible even a few years ago. YouTube’s built-in translations and subtitles, and its transcript tool, are some of the only reasons I was able to do this investigation into Indian AI slop creators; they allowed me to get the gist of what was happening in a given video before we handed the videos to human translators for exact translations. Most podcasts I know of now use Descript, Riverside, or a similar tool to record and edit their episodes; these have built-in AI transcription tools, built-in AI camera switching, and built-in text-to-video editing tools. Most media outlets use the captioning built into Adobe Premiere or CapCut for their vertical videos and their YouTube videos (and then double-check the captions). If you want to get extremely annoying about it, various machine learning algorithms are in Pro Tools, Audition, CapCut, Premiere, Canva, etc. for things like photo editing, sound leveling, noise reduction, and so on.
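To be even more concrete, the transcription step in tools like these is something a reporter can script directly. Here is a minimal sketch using the open-source Whisper library, which is one option among many and not necessarily what any given newsroom uses; the filename is hypothetical:

import whisper

# Load a small pretrained model; larger ones ("medium", "large") are more
# accurate but slower.
model = whisper.load_model("base")

# Transcribe a non-sensitive interview recording; ffmpeg handles the decoding.
result = model.transcribe("interview.mp3")

# The output still needs a human double-check before anything is quoted.
print(result["text"])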

There are other journalists who feel very comfortable coding and doing data analysis and analyzing huge sets of documents. There are journalists out there who are already using AI to do some of these tasks and some of the resulting articles are surely good and could not have been done without AI. 

But the people doing this well are doing so in a way where they are catching and fixing AI hallucinations, because the stakes for fucking up are so incredibly high. If you are one of the people who is doing this, then, great. I have little interest in policing other people’s writing processes so long as they are not publishing AI fever dreams or plagiarizing, and there are writers I respect who say they have their little chats with ChatGPT to help them organize their thoughts before they do a draft or who have vibecoded their own productivity tools or data analysis tools. But again, that’s not a business model. It’s a tool that has enabled some reporters to do their jobs, and, using their expertise, they have produced good and valuable work. This does not mean that every news outlet or every reporter needs to learn to shove the JFK documents into ChatGPT and have it shit out an investigation.

I also know that our credibility and the trust of our audience are the only things that separate us from anyone else. They are the only “business model” we have, and the only one I am certain works: We trade good, accurate, interesting, human articles for money and attention. Offloading that trust to an AI in a careless way is the biggest possible risk factor we could have as a business. Having an article go out where someone goes “Actually, a robot wrote this” is one of the worst possible things that could ever happen to us, and so we have made the brave decision to not do that.

This is part of what is so baffling about the Chicago Sun-Times’ response to its somewhat complicated AI-generated summer reading list fiasco. Under its new owner, Chicago Public Media, the Sun-Times has in recent years spent an incredible amount of time and effort rebuilding the image and goodwill that its previous private equity owners destroyed. And yet in its apology note, Melissa Bell, the CEO of Chicago Public Media, said that more AI is coming: “Chicago Public Media will not back away from experimenting and learning how to properly use AI,” she wrote, adding that the team was working with a fellow paid for by the Lenfest Institute, a nonprofit funded by OpenAI and Microsoft.

Bell does realize what makes the paper stand apart, though: “We must own our humanity,” Bell wrote. “Our humanity makes our work valuable.”

This is something that the New York Times’s Roose recently brought up, which I thought was quite smart, and which he does not seem to have internalized when he talks about how AI is going to change everything and how its widespread adoption is inevitable and the only path forward: “I wonder if [AI is] going to catalyze some counterreaction,” he said. “I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it—it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries—a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.”

This has ALREAAAAADDDDYYYYYY HAPPPENEEEEEDDDDDD, and it is quite literally the only path forward for all but perhaps the most gigantic of media companies. There is no reason for an individual journalist or an individual media company to make the fast food of the internet. It’s already being made, by spammers and the AI companies themselves. It is impossible to make it cheaper or better than them, because it is what they exist to do. The actual pivot that is needed is one to humanity. Media companies need to let their journalists be human. And they need to prove why they’re worth reading with every article they do.




Transport transition with streetcars: In many cities, the tram is coming back

Worldwide, this old mode of transport is being reintroduced, or introduced for the first time, in more and more cities. It offers many advantages – not only for the climate.
tante
8 hours ago
Streetcars are more than just an alternative mode of transport: they save space and create livable urbanity.
Berlin/Germany

Standing Aside Athwart History



This cartoon is by me and Jenn Manley Lee.


TRANSCRIPT OF CARTOON

This cartoon has four panels. Each panel shows the same two characters, wealthy men, as they relax inside an exclusive country club. Servants wearing butler tuxes wait on them.

PANEL 1

One of them – let’s call him RACQUET – is waving a racquetball racquet and ranting, while his friend – let’s call him FRIEND – listens patiently.

RACQUET: William F. Buckley wrote “A conservative is a fellow who is standing athwart history yelling Stop!”

PANEL 2

The two are now playing darts.

RACQUET: “Prudence” is the conservative watchword! But today’s Republican Party is the opposite of prudential. January sixth, reinterpreting the Constitution, destroying old alliances… and the tariffs! Dear God!

PANEL 3

They’ve moved to the club’s fancy dining area. Racquet pounds his fist on the table while Friend is looking at his phone.

RACQUET: It’s obscene! It’s what conservatives have always opposed! What’s become of our principles?!

PANEL 4

Now in what appears to be a demonic sacrifice room, they talk while Racquet prepares to plunge a dagger into one of the butler-like guys.

FRIEND: So you’ve stopped voting for Republicans?

RACQUET: I would, but I want the tax cuts.

CHICKEN FAT WATCH

“Chicken fat” is a long-obscure cartoonists’ term for unimportant but hopefully fun details in the art.

Panel 2: The mounted heads of Rocky and Bullwinkle are on the wall. The dartboard is being held up by one of the butler dudes; there is a dart sticking out of his head.

Panel 3: Both the silverware and the pheasants they’re eating are sparkling as if they’ve been plated with gold.

Panel 4: The two of them are now wearing red and black robes and are preparing to sacrifice a butler, who is tied to a stone table. The butler seems surprisingly calm about this. Displayed on a shelf in the background are the decapitated heads of George Washington, Batman, Underdog, Sherlock Holmes, Dick Tracy, Garfield the cat, and the Monopoly Man.

tante
11 hours ago
Modern US Republicans.
Berlin/Germany

systemd has been a complete, utter, unmitigated success



tante
5 days ago
Systemd is one of the best things to move Linux forward in a long time.
Berlin/Germany
fxer
4 days ago
I remember the systemd hate being so widespread that I thought I must be a terrible admin: making unit files seemed great to me, but I was just too dense to understand why it was terrible.
Bend, Oregon
kazriko
4 days ago
I still avoid it where possible. Runit is so much faster.
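For anyone who never wrote one, the unit files fxer mentions really are simple. A minimal sketch of a service unit (hypothetical service name and binary path) looks like this:

[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/example.service, it can be started with systemctl start example.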

“Fascist AI” talk at LOOPS


The New Practice space at TU Berlin runs a series of talks called LOOPS. Together with my friend Malte, I was invited to talk a bit about “Fascist AI”: how capitalism, fascism, and AI narratives are very closely aligned and (re)produce one another.

I enjoyed giving the talk (I rarely do those together with others) and the Q&A and conversations afterwards immensely. You can check out a recording of the event here:

You can get Malte’s slides from his website and mine from my cloud.

tante
7 days ago
Last week I had the great honor of giving a talk with my friend Malte about capitalism, (tech-)fascism, and how they connect to "AI".
Berlin/Germany

So what are you saying?


I had a conversation with a friend who described sitting in a workshop where clients and the people they hired were working on problems. As you do. Happens in every consultingy kind of job.

But what has become totally normalized is for the people hired, the “experts,” to use ChatGPT and similar services to come up with solutions. Think about it: the conversation leads to a problem, and the people hired to solve that problem now openly turn to OpenAI’s incompetence machine to have solutions generated.

I was a bit shocked, TBH. Not for the ethical or other reasons that keep me from using those services in my own work, but because I do not understand the strategy.

If you work as a consultant or in a consulting/expert capacity, your client’s trust is everything. You get hired and get your (sometimes quite significant) rates paid because you have specific skills, specific expertise, that make you that expensive and legitimize what they are paying. This reputation is massively important because it is what might get you a contract even if you are not the cheapest offer.

So what are you saying when you pull out your phone in front of a client and “prompt” ChatGPT? That you are super on top of the technical state of the art? Or are you saying that you are worth 23 bucks a month (the cost of a ChatGPT subscription in EUR at the time of writing)?

I get people wanting to cut corners. That is consistent. But openly showing that not even you think your work is worth anything is just … confusing. What’s the endgame here?

tante
7 days ago
People hired for their skills are openly using ChatGPT, and I wonder what the long-term strategy is.
Berlin/Germany