I dunno… what on EARTH could I be talking about here?*
(*I always jokingly say this and then am surprised when people come up with other things a strip could mean, unaware that ALL my metaphors/subtext/moaning satire are always about A.I.)
Thank you for reading folks! If you know someone you think also might enjoy my comics, please share this with them and help spread the word!
For paying subscribers, today’s post also comes with extra stuff! Fridays are where I share behind-the-scenes stuff, WIP versions of strips and other projects I’m currently working on, and also stuff from my cartooning past. You also get to claim your own DT! avatar. Why not upgrade and join us?
Little painting I did a couple of weeks ago. There’s a stick so it looks like a protest sign, but I cropped it out.
Enjoying the newsletter? Gimme $2 for sea-monkeys.
This week’s question comes to us from Dana Chisnell:
How do I get through a day without hearing or seeing anything about AI?
Let’s talk about basketball. My very favorite day of the year is the Saturday the NBA Playoffs kick off, which this year fell on April 19th. (Holy shit, the playoffs last forever.) On Playoff Saturday I get to watch four back-to-back-to-back-to-back basketball games starting at 10am. Same thing on Sunday. Then they start scattering them throughout the week. But that first weekend is pure joy and chaos. You’ve got teams pacing themselves because they believe they’re making a deep run, you’ve got teams that know their only chance to make it to the next round is to do very weird things that the other team isn’t expecting, and you’ve got guys on those latter teams playing for new contracts so they’re doing even weirder things to get noticed. All of this makes for very entertaining basketball. Which is not what you asked me about. We’re getting there.
This also means that for the last two months I’ve been watching a lot of TV ads. I am now an expert on three things: online gambling, GLP-1 drugs, and AI.
Here’s the thing about AI ads: they’re amazing. According to some of the ads I’ve seen, AI will help you write a paper, grade your students’ papers, sheetrock your wall, raise your children, send out invoices, design your website, write your résumé, schedule your aunt’s funeral, give you a good recipe for chicken wings, help you put together an outfit that slays, help you build a LEGO, write a sales report, summarize a sales report, tell you what kind of music you like, put together a good dating profile, teach you how to fire a gun, tell you who to vote for, and recite interesting facts about Hitler.
Here’s the thing about actual AI: aside from the Hitler part, it does precious fucking little of that. But it demos really really well. It’s really easy to make a good AI ad. You come up with a thing that a lot of people hate doing, you decide that AI can do that thing, then you shoot the ad that backs it up. You show it 17 times during Game 6 of the Knicks/Pacers series and 8 million people see it. Subtract a percentage of people who naturally distrust whatever is being advertised on TV and even if that number is 50%, you’ve still convinced 4 million people that Google AI can tell you how to repair a giant hole in your living room wall, which it cannot.
When I was a kid I was mesmerized by the comic book ads for Sea-Monkeys. “Enter the wonderful world of amazing live Sea-Monkeys! Own a bowlful of happiness! Instant pets!” This was the headline next to an amazing illustration of a family of Sea-Monkeys posing in front of their Sea-Monkey castle. With their big smiles, three-antennae heads, Sea-Monkey dad’s tail strategically covering his Sea-Monkey dick, and Sea-Monkey mom looking like she could get it, with her little Mary Tyler Moore flip-do. There was another illustration of a human family, straight out of the John Birch Society playbook, overseeing their Sea-Monkeys living in a fishbowl. Reader, I wanted these Sea-Monkeys. They were going to be my new family. So I waited for my dad to be in a good mood (about to leave for the evening) and I asked him for a dollar. “Only $1.00!” just like the ad told me to say. I filled out the coupon in the ad very very carefully, cut it out very carefully with my mom’s good scissors, put it in an envelope, which I also addressed very carefully, asked my mom for a stamp, and the next day I deposited the envelope in a mailbox on my way to school. Then I waited. And waited. And waited some more.
About 6–8 weeks later I received an envelope back. I ran to my room, where I had a goldfish bowl ready and waiting, opened the envelope, which contained a second smaller envelope, and dumped the contents of that envelope into the fishbowl. I watched as maybe three tiny-as-fuck brine shrimp made their way slowly through the water in the fishbowl, and landed on the bottom with the sound of a deflating dream. From the future, I saw Nelson Muntz point at me and say “ha ha!” And still, I checked that fishbowl on the hour for days. Maybe it just took a little time for my Sea-Monkey family to wake up. It never happened.
AI is Sea-Monkeys.
The promise is there and it’s exciting. The hope of new friends that will live, laugh, love in a little fishbowl next to my bed. Keeping me company. Laughing at my jokes. Saying things like “We wish you could come down here and play with us in our super cool Sea-Monkey castle.” The reality is three dead brine shrimp at the bottom of a fishbowl that your mother eventually flushes down the toilet after calling you an idiot. At least dead brine shrimp don’t tell you that Hitler had some good ideas, actually.
Can AI be useful in certain circumstances? Sure. So are brine shrimp. (They’ve been to space!) Is it all that? It’s not. AI is a Sea-Monkey ad being peddled as a promise of something that it is not.
Sea-Monkeys were my first experience with hype cycles. In retrospect, $1 was a good price for that lesson.
But again, this was not your question. And you already know that, which is why you’re asking the question. Still, it’s worth exploring why we’re seeing and hearing so much about AI right now. Ironically, this will add one more essay to the pile of essays you’re wishing you could get away from. But having asked the question, you kind of did that to yourself.
Why are we seeing 17 AI ads during a Knicks/Pacers game? One answer is that the tech companies running the ads can afford it. (Ads run around $300k during the later rounds. That number may be wrong; it came from a Google AI summary.) But that speaks more to the how than the why. The why is simple, though. These companies need a hit, and they need a hit bad. Silicon Valley is in a slump. After striking out with blockchain, crypto, NFTs, web3, the metaverse, stupid shit you wear on your face (or more honestly, don’t wear on your face), and pretending (but not really able to actually fool anyone) that they gave a shit about minorities for a brief moment in 2020, the tech industry was losing the room. Mind you, they were still making money hand over fist, but the bloom was coming off the rose. They were coming off an insane couple of decades of innovating at a furious pace, being seen as gods, being invited to all the good parties, and settling into a mature age of “running things while making incremental improvements, with the occasional breakthrough” which, frankly, doesn’t make the cocaine flow. At the same time they were being asked hard questions about weird little things like “was your platform instrumental in a genocide in Myanmar” and “what’s with all the Nazis?” Which, to be fair, are bummer questions. Especially when you’re trying to enjoy cocaine.
So when AI got to the point of almost-kinda-sorta-semi maturity (but not really) they ran with it. Then they tossed in anything else that kinda-sorta felt like AI and tossed that onto the pile as well. Suddenly everything is AI and AI is in everything. Stuff that’s been around forever, like auto-complete and speech-to-text, is now “AI.” It’s not. Suddenly, Google Drive is asking me if I want to let AI write these newsletters. I don’t. Suddenly, Google search—the backbone of the internet—is a piece of shit. (OMG, were those em dashes?! Is this AI? Butterfly meme!) Suddenly, students and professors are arguing about who’s writing and grading papers. Suddenly, we’re firing up Three Mile Island so incels can generate six-fingered girlfriends that don’t give them shit for being useless. Suddenly, designers who were previously tasked with making things “user-centered” (This was never a thing, by the way, but that’s beyond the purview of today’s newsletter.) are being tasked with creating good prompts, and then staying up all night manually fixing the slop that was generated while also fearing for their livelihood because… suddenly everyone is unemployed. (Oh, did you think Silicon Valley was trying to artificially extend the bubble for your benefit? My sweet summer child.)
And the Nazi problem was solved by just becoming Nazis themselves. (What if the bug was the feature? S-M-R-T!)
I have yet to answer your question, which is how you avoid all this shit. Well, it’s hard. Seeing as I can’t even watch a basketball game without being inundated with it. The honest answer is that you’re not going to be able to completely. At least for a little while. The enshittification of everything that previously worked just fine is still speedrunning to the lower circles of hell where venture capitalists count their money. I’m currently hanging on to an old laptop, which mostly works just fine, because I know that a new one will be swarming with AI crap. I’m currently hanging on to an old phone for the same reason. Oddly, the shit they thought would re-energize our interest in these things, back to a time when people would line up for the latest model of both, has seemingly had the opposite effect. And the folks getting suckered in because the ads are amazing, and because AI demos very well, will eventually realize that what they were sold and what arrived at the door are two very different things.
It is a tool designed to render the populace helpless, to make people doubt their innate intelligence, and to foster overreliance on technology.
AI is Sea-Monkeys. AI is hope in exchange for something that’s dead.
Fun fact I just discovered! The company that sells Sea-Monkeys still exists! They just celebrated their 65th anniversary. Good for them. They’ve rebranded to some vague environmental toy you’d find at an aquarium gift shop. But now you get the whole thing at once. No more advertising in comics. No more sending in an envelope with a dollar bill and waiting. Some things never change though. From their FAQ: “I HAD A BUNCH OF SEA-MONKEYS BUT THEN THE NUMBER DWINDLED. WHY?” “Sadly, that’s common in nature. Many babies will hatch knowing that only the strongest will make it to adulthood.”
Nature is fucking brutal.
Will AI still be with us in 65 years? Well, as much as I’m confident that anything might still be here in 65 years, sure. The hype cycle will eventually crash, and the parts of AI that actually make people’s lives easier will possibly live on, having safely extracted themselves from the hype cycle. And before you write your “well, actually” response… you can’t get mad at people for conflating all the different types of AI, when you purposely threw them all together to build your hype cycle. You did this to yourselves.
I don’t know which parts of AI will survive, but they’re most likely not in generative AI. Turns out people like making things.
Silicon Valley’s era of innovation is over. This is their villain era. The era of the con. Having bled themselves dry of ideas, and all sense of moral decency, they’re now attempting to bleed us dry of our own humanity. And lest you think I’m being cynical, my cynicism towards technology comes from a belief in people. I believe that people are capable of good things. I believe people are even capable of great things. I believe that people make great art. I believe that people enjoy making all types of art. I believe that people write amazing things. (People don’t save each other’s love letters because they’re great literature.) I believe that people, at their best, want to communicate, not just with each other in the here and now, but also with those that will hopefully come after us. We want our descendants to know we were here, we want them to know we made things, we want them to know that we talked funny (our descendants will think we talked funny.)
I know this because I’ve seen us do this. I’ve seen us examine the past. I’ve seen us look for evidence of our ancestors. (I’ve also seen us hide evidence of our ancestors.) I’ve seen us gather in museums to see the art our ancestors made. I’ve seen us gather in movie houses to see the movies our ancestors made. Every Nina Simone song. Every Velvet Underground album. Every Ibsen play. Every Cindy Sherman photo. Every Greek myth. Every letter written from a Birmingham jail cell. Every note from Coltrane’s saxophone. It’s the indestructible beat of humankind. Calling from the past to let us know that we love to make ourselves heard, seen, felt and touched.
It’s what we do.
🙋 Got a question? Ask it! I might answer it. Or more likely, pretend to answer it while writing about what’s already swimming in my head.
📣 There’s a few slots left in next week’s Presenting w/Confidence workshop. You should sign up.
iNaturalist is a website that crowdsources pictures of plants and animals to help identify species. Its tagline is “A Community for Naturalists.”
iNaturalist is administered by its own small charity, but the work is done by a huge number of volunteer contributors — a bit like Wikipedia.
Sometimes a charity where volunteers do all the work forgets who does all the work, and that these are volunteers, not minions. Especially if someone waves a bit of money at them.
Every year, Google tries to launder its reputation by sending a bit of blood money to charities to greenwash its AI. This year’s round included $1.5 million to iNaturalist, who excitedly announced this on Twitter (and nowhere else). [Google; Twitter]
The volunteers — the ones who do all the work — were less than delighted. [iNat forum]
Two days later, iNaturalist explained the grant: [blog post]
By using generative AI (GenAI), we hope to synthesize information about how to distinguish different species and accurately convey that to iNaturalist users.
iNaturalist plans to use a Google chatbot to make up some hallucinations about data that had been uploaded by the volunteers. So how was this AI slop going to be fact-checked?
We will incorporate a feedback process for the AI-generated identification tips so that we can maintain high standards of accuracy.
That is, the volunteers would work for free to improve Google’s bot. This plan didn’t go down so well.
It turns out people do free work for knowledge because they hold principles and stuff. Many deleted their accounts — which also deletes their observations from iNaturalist — because they didn’t volunteer to feed a lying slop machine that’s an environmental disaster. And they no longer wanted anything to do with a charity so lost it didn’t see why this was not a good idea. [Scientific American]
iNaturalist has tried very hard to backpedal without backpedaling. Executive director Scott Loarie posted on the forum: [iNat forum]
I can assure you that I and the entire iNat team hates the AI slop that’s taking over the internet as much as you do.
… there’s no way we’re going to unleash AI generated slop onto the site.
Those are nice words, but AI-generated slop is still explicitly the plan. iNaturalist’s grant deliverable is “to have an initial demo available for select user testing by the end of 2025.”
You can tell what happened — Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash.
The iNaturalist charity is currently “working on a response that should answer most of the major questions people have and provide more clarity.” [Twitter]
They’re sure the people who do the work for free hate this whole plan only because there’s not enough “clarity” — and not because it’s a terrible idea.
As anyone who has ever seen anything I have created knows: I am not a designer. So take my reasoning and thinking here with a grain of salt. I do have many designers amongst my friends and I do think that a lot of software engineering falls within the domain of design to a certain degree. Let me explain.
To me, design is the process of exploring and potentially formalizing a problem space and possible solutions for that problem, culminating in a solution based on that exploration. Design (to me) is about creating artefacts that do certain specific things for specific target audiences/users. The specific thing can be evoking an emotion or an abstract thought process or just doing a certain task really well. Beauty is a quality of good design because things being pleasant can help with their use and acceptance, but anyone who has ever had to sit on a chair designed mostly to look good knows that beauty only gets you so far.
So using that understanding of design I keep thinking about the current trend of everything being turned into chatbots or “conversational systems”. And I can’t shake the feeling that these paradigms are – for most cases – just bad design.
Broken promises
A chat/conversational interface implies a whole lot of things: We chat with other people, a chat implies a level of social experience, of a shared space for people to meet. Which is great if you want to tell your users “this part of the app is where you talk to people”. But what happens when that “promise” the design made is broken?
We’ve all been forced to interact with chatbots for support. There’s an issue with your phone contract, the underpants they sent you don’t fit or some other thing and you need to talk to someone to sort things out. Enter the chatbot who keeps answering your questions with useless links or other strategies to keep you away from the “expensive” humans to talk to. The company wants to save on support so they try to distract you with a bot which either accidentally helps you find a solution or makes you think “it’s not worth it, I’ll just have to live with my crotch being strangled”. It’s a distraction to avoid investing in you and the relationship. How does that make you feel?
This is not just a support question: Onlyfans recently had issues where people realized that they were not actually chatting to the adult performers they paid for but someone or something else. Now the other side of the chat might have been underpaid staff but the dynamic is the same: Chat interfaces make a promise of social experience and trust that an LLM chatbot can never fulfill. It’s a deception. And good design should not deceive.
Guideless
A good interface guides you to solving the problem you have; good design makes it easy for you to do what the thing is supposed to do. Think of a program you really like to use: It probably shows you the steps you need to take in a structured way, asks you the few things it needs from you, and then does the thing you want it to do. Because that is what good design does.
What does a chatbot guide you to do? Maybe it asks you a question but for many chatbots it’s just “how can I help you?” or “Ask me anything”. Does that help you use it? Does that structure your path towards solving your problem? The huge number of people whose whole identity has become telling others how to write prompts begs to differ.
I have already argued before that “AI” systems are not tools. Because they don’t contain clear and specific descriptions of problems and corresponding strategies for solving said problem. But let’s pretend that we have a system that can solve a certain problem really well and efficiently: Is a chatbot a better interface than a structured form or UI that lets you just go through the required steps and then get the result? Chatbots don’t narrow down the path towards a solution, they leave everything open. Which might be great for engagement and keeping people hooked but is that an efficient use of your time?
Outsourcing of work
Let’s talk about your time a bit. I am very protective of mine, I hate it when objects or processes mindlessly waste mine. I do of course waste my time on weird shit, thankyouverymuch, but I want to make that call.
A bad design that forces me to waste time or do a lot of unnecessary busywork is bad because it didn’t do its job: It didn’t make the process easier and more structured for me, it leaves me to do that labor. And I have a similar feeling towards this as I have towards self-checkout terminals at stores: Why do I have to do unpaid work for you and still pay the same? Why should I make it easier for you to employ fewer people getting a worse service while paying the same? That feels dumb. And wrong.
Chatbots don’t make my work easier. Instead of getting a predictable, understandable result based on my needs in a specific situation I get extra work assigned: I need to phrase my query the right way in order to get the machine to lie maybe a bit less. Need to add magic words to the input to stop it from going off the rails. That is my labor I have to put in to make a bad design work. Feels like I am not just doing my job but also the work the operator of the service or product I am having to use through chat should have paid professionals to do. And I’m not getting paid for it.
Like, why should companies get away with refusing to do the work of designing their products in a meaningful way and still get paid?
I want solutions
I do not need the one magic machine that claims to solve all my issues and then makes me jump through conversational hoops to get a mediocre result. That is actually the opposite of what I need.
I want people who know their shit to externalize all they know into tools I can use to benefit off of all that embodied knowledge. And chatbots do not help me with that at all (regardless of the capabilities or lack thereof of LLMs).
I want simple tools that do specific things, built by people who were paid fairly and go home on time.
AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline, according to a new survey published today. While the impact of AI bots on open collections has been reported anecdotally, the survey is the first attempt at measuring the problem, which in the worst cases can make valuable, public resources unavailable to humans because the servers they’re hosted on are being swamped by bots scraping the internet for AI training data.
“I'm confident in saying that this problem is widespread, and there are a lot of people and institutions who are worried about it and trying to think about what it means for the sustainability of these resources,” the author of the report, Michael Weinberg, told me. “A lot of people have invested a lot of time not only in making these resources available online, but building the community around institutions that do it. And this is a moment where that community feels collectively under threat and isn't sure what the process is for solving the problem.”
The report, titled “Are AI Bots Knocking Cultural Heritage Offline?” was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.
Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase.
“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”
One respondent estimated that their collection experienced one DDoS-style incident every day that lasted about three minutes, saying this was highly disruptive but not fatal for the collection.
“The impact of bots on the collections can also be uneven. Sometimes, bot traffic knocks entire collections offline,” the report said. “Other times, it impacts smaller portions of the collection. For example, one respondent’s online collection included a semi-private archive that normally received a handful of visitors per day. That archive was discovered by bots and immediately overwhelmed by the traffic, even though other parts of the system were able to handle similar volumes of traffic.”
Thirty-two respondents said they are taking active measures to prevent bots. Seven indicated that they are not taking measures at this time, and four were either unsure or currently reviewing potential options.
The report makes clear that it can’t provide a comprehensive picture of the AI scraping bot issue, but the problem is clearly widespread, though not universal. The report notes that one inherent issue in measuring the problem is that organizations are often unaware bots are scraping their collections until they are flooded with enough traffic to degrade the performance of their site.
“In practice, this meant that many respondents woke up one morning to an unexpected stream of emails from users that the collection was suddenly, fully offline, or alerts that their servers had been overloaded,” the report said. “For many respondents, especially those that started experiencing bot traffic earlier, this system failure was their first indication that something had changed about the online environment.”
Just last week, the University of North Carolina at Chapel Hill (UNC) published a blog that described how it handled this exact scenario, which it attributed to AI bot scrapers. On December 2, 2024, the University Libraries’ online catalog “was receiving so much traffic that it was periodically shutting out students, faculty and staff, including the head of User Experience,” according to the school. “It took a team of seven people and more working almost a full week to figure out how to stop this stuff in the first instance,” Tim Shearer, an associate University librarian for Digital Strategies & Information Technology, said. “There are lots of institutions that do not have the dedicated and brilliant staff that we have, and a lot of them are much more vulnerable.”
According to the report, one major problem is that AI scraping bots ignore robots.txt, a voluntary compliance protocol which sites can use to tell automated tools, like these bots, to not scrape the site.
“The protocol has not proven to be as effective in the context of bots building AI training datasets,” the report said. “Respondents reported that robots.txt is being ignored by many (although not necessarily all) AI scraping bots. This was widely viewed as breaking the norms of the internet, and not playing fair online.”
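For readers unfamiliar with it, robots.txt is just a plain-text file served at a site's root that politely asks crawlers to stay away; compliance is entirely voluntary. A minimal sketch of the kind of rules collections are now publishing (the user-agent names shown — GPTBot, CCBot, Google-Extended — are ones publicly documented by OpenAI, Common Crawl, and Google respectively; the `/search/` path is a hypothetical example):

```
# /robots.txt — voluntary: only compliant crawlers will honor this

# Opt out of AI training crawlers entirely
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else: crawl, but skip expensive dynamic search pages
User-agent: *
Disallow: /search/
```

The report's point is precisely that rules like these only work against bots that choose to read and honor them — a non-compliant scraper never fetches the file, or fetches it and ignores it.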
We’ve previously reported that robots.txt is not a perfect method for stopping bots, despite more sites than ever using the tool because of AI scraping. UNC, for example, said it deployed a new, “AI-based” firewall to handle the scrapers.
Making this problem worse is that many of the organizations being swamped by bot traffic are reluctant to require users to log in, or to complete CAPTCHA tests to prove they’re human before accessing resources, because that added friction will make people less likely to access the materials. In other cases, even if an institution did want to implement some kind of friction, it might not have the resources to do so.
“I don't think that people appreciate how few people are working to keep these collections online, even at huge institutions,” Weinberg told me. “It's usually an incredibly small team, one person, half a person, half a person, plus, like their web person who is sympathetic to what's going on. GLAM-E Lab's mission is to work with small and medium sized institutions to get this stuff online, but as people start raising concerns about scraping on the infrastructure, it's another reason that an institution can say no to this.”