Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY
2476 stories · 137 followers

The AI Slop Presidency

1 Comment

Trump has found an aesthetic to define his second term: grotesque AI slop.

Over the weekend, the Trump administration posted at least seven different pieces of AI-generated or AI-altered media, ranging from Trump imagining himself as a pope and a Star Wars Jedi (or Sith?) to Obama-esque “Hope” posters featuring people the administration has deported.

This has become the Slop Presidency, and AI-generated images are the perfect artistic medium for the Trump presidency. They’re impulsively created, grotesque, and low-effort. Trumpworld’s fascination with slop is the logical next step for a President who, in his first term, regularly retweeted random memes created by his army of supporters on Discord or The Donald, a subreddit that ultimately became a Reddit-clone website after it was banned. AI allows his team to create media that would never exist otherwise, a particularly useful tool for a President and administration that have a hostile relationship with reality.

Trump’s original fascination with AI slop began last summer, after he said legal Haitian immigrants in Springfield, Ohio, were “eating the cats…they’re eating the pets” in his debate with Kamala Harris. The internet’s AI slop factories began spinning up images of Trump as cat-and-dog savior. Since then, Trump and the administration have occasionally shared or reposted AI slop. In his first week in office, Trump shared an AI-generated “GM” car image that was promoting $TRUMP coin. “What a beautiful car. Congrats to GM!” he posted.

At the end of February, Trump shared a video on his Truth Social account that imagined a world where Gaza was turned into a Trump Casino.

But this weekend, Trump began sharing AI slop on a level we’ve not seen before.

Trump’s AI-tinged weekend began on Friday night with a photo-realistic picture of himself as the Pope on his Truth Social account. The White House reposted a screenshot of the image on X, which pissed off the Catholic Church.

“This is deeply offensive to Catholics especially during this sacred time that we are still mourning the death of Pope Francis and praying for the guidance of the Holy Spirit for the election of our new Pope. He owes an apology,” Thomas Paprocki, an American Bishop in Illinois, said on X.

During a press conference on Monday, Trump dismissed the accusation that the Trump Pope was offensive and then said he didn’t post it. “The Catholics loved it. I had nothing to do with it,” Trump said. “Somebody made up a picture of me dressed like the Pope and they put it out on the internet. That’s not me that did it, I have no idea where it came from. Maybe it was AI. But I know nothing about it. I just saw it last evening.”

All political movements are accompanied by artists who translate the politics into pictures, writing, and music. Adolf Ziegler captured the Nazi ideal in paintings. Stalin’s Soviet Union churned out mass-produced, striking propaganda posters that instructed citizens on how to live. The MAGA movement’s artistic aesthetic is AI slop, and Donald Trump is its king. It is not concerned with convincing anyone or using art to inform people about its movement. It seeks only to upset people who aren’t on board and to excite the faithful precisely because it upsets everyone else.

Not content to just aggravate Catholics, the Trump administration then used AI to offend adherents of another of America’s major religions: Star Wars fans. On May the 4th, the official White House X account posted an AI-generated image of a muscle-bound Trump wielding a red lightsaber, flanked by two bald eagles.

“Happy May the 4th to all, including the Radical Left Lunatics who are fighting so hard to to [sic] bring Sith Lords, Murderers, Drug Lords, Dangerous Prisoners, & well known MS-13 Gang Members, back into our Galaxy. You’re not the Rebellion—you’re the Empire,” it said in the post. “May the 4th be with you.” As the replies pointed out, red lightsabers are typically used by villains.

This was just one in a series of Star Wars-related pieces of AI-generated cringe that went out from official Trump admin accounts over the weekend. DOD Rapid Response on X (an account that publishes propaganda on behalf of Secretary of Defense Pete Hegseth) posted a five-minute video that contained a Star Wars-style intro scroll of Trump’s “accomplishments” before treating viewers to a pic of Trump and Hegseth as Jedi. The account for the U.S. Army’s Pacific Command sent out an “AI-enhanced” image of soldiers doing a training exercise. Both soldiers’ weapons were replaced with lightsabers.

Trump and the people who created AI image generators do not respect artists. There is no style that either will not exploit or sully. OpenAI reduced Hayao Miyazaki’s life’s work to a gross meme, and the White House played along. For years the Lofi Girl has sat in windows on screens across the planet while people studied, read, and worked. Over the weekend the White House YouTube channel ran “Lo-Fi MAGA Video to Relax/Study To” while an animated President Trump sat at a desk, mimicking the Lofi Girl.

Maybe you don’t like Star Wars, are unmoved by Studio Ghibli films, or have never chilled to lofi beats. It doesn’t matter; the message is clear: if you love something, Trump will pervert it. Nothing will be untouched. Sacred objects and beloved art exist only to be desecrated. AI has made that as easy as pushing a button.

AI-generated slop content is part of a brute-force attack on the algorithms that control reality, and the Trump administration’s constant use of AI art reflects its own brute-force attack on American democracy. It’s not just that its aesthetics are useful for Trump; its entire mode of being is useful for how his administration has governed so far, brute-forcing the Presidency with a slew of executive orders, budget cuts, attacks on institutions, and sloppily executed deportations. The strategy is to overwhelm the American bureaucracy and the legal system, and to exhaust his enemies with an endless stream of bullshit; by the time we shake out what’s legal and what’s not, much of the damage has already been done.

One of the wonderful things about making art is the process. A lot happens between conception and execution. An idea pops into an artist's head and it changes dramatically while they attempt to render that idea into reality. That doesn’t happen with AI-generated images. There is no creation process, there is only instant gratification. Whatever impulsive and grotesque thought pops into the mind of the creator can immediately be realized.

And so every revenge fantasy Trump and his followers ever had can be made real at a moment’s notice. On March 27, the White House X account posted a Ghibli-style AI image of a crying woman being arrested by ICE.

Here is a real woman who has been accused of a crime, her image appropriated by the state and rendered into a cartoon. America has total power over this woman. She was arrested for drug trafficking; her image has been plastered all over the internet. She’ll be deported. Not content with total control over her body and future, the administration has made her into a caricature and invited its followers to mock her online.



Read the whole story
tante
1 day ago
"It’s not just that [AI slop's] aesthetics are useful for Trump, its entire mode of being is useful for how his administration has governed so far"
Berlin/Germany

AfD confirmed right-wing extremist: Three words: AfD, ban, now

1 Comment
The Verfassungsschutz (Germany’s domestic intelligence agency) has classified the entire AfD as confirmed right-wing extremist. A procedure to ban the party should now be pursued.
Read the whole story
tante
3 days ago
"Die AfD hat sich selbst entschieden, rechtsextrem zu werden. Die Hochstufung ist folgerichtig. Herzlichen Glückwunsch, der Preis dafür muss lauten: Verbotsverfahren!"
Berlin/Germany

The Myth of Plastic Recycling

1 Comment


A cartoon by me and Becky Hawkins.


TRANSCRIPT OF CARTOON

This cartoon has four panels. There’s also a tiny “kicker” panel under the strip.

PANEL 1

A researcher wearing a white lab coat and carrying a thick bound report walks into an executive’s office. The executive is sitting with his feet on a big desk.

RESEARCHER: Here’s my report on plastic recycling… I’m afraid it’s bad news. Recycling plastic just won’t work.

PANEL 2

A close up of the researcher, who looks very nervous.

RESEARCHER: Recycling plastic costs so much that recycled plastic will never compete with new plastic. The only thing it might do is deceive the public into thinking there’s no problem.

PANEL 3

The executive is now holding the report. Behind the researcher, two toughs are creeping up, one raising a bludgeoning tool to hit the researcher, the other holding out a sack big enough to hide a body.

RESEARCHER: To avoid an ecological crisis, we have to stop making so much plastic.

EXECUTIVE: I see. By the way, is this the only copy of the report?

RESEARCHER: Yes, why?

PANEL 4

CAPTION: And so, for the next fifty years…

A spokesmodel stands in front of cameras, next to a table overflowing with plastic products.

SPOKESMODEL: Use all the plastic you want! We’ll recycle!

TINY KICKER PANEL

The spokesmodel yells at Barry.

SPOKESMODEL: Use somewhat less plastic? You want us to live like cavemen?

CHICKEN FAT WATCH

“Chicken fat” is obsolete cartoonists’ jargon for unimportant but fun details.

PANEL 1 – A framed graph on the wall seems to show profits moving up. The caption under the graph says “Sales of profit/loss charts up 47%”

PANEL 2 – One of the pens in the researcher’s breast pocket is actually a little test tube containing bubbling green liquid.

PANEL 4 – The backdrop says “Plastic: It’s what’s for dinner.” A little toy plastic car is being driven by a plastic kitten and unicorn. A label on a large bottle says “5 GAL background details.”


The Myth of Plastic Recycling | Patreon

Read the whole story
tante
4 days ago
"Recycling plastic costs so much that recycled plastic will never compete with new plastic. The only thing it might do is deceive the public into thinking there’s no problem."
Berlin/Germany

Are “AI” systems really tools?

1 Comment

I was on a panel on “AI” yesterday (it was in German so I don’t link it in this post; the specifics don’t matter too much) and a phrase came up that stuck with me on my way home (riding a bike is just the best thing for thinking). That phrase was

AI systems are just tools and we need to learn how to use them productively.

And – spoiler alert – I do not think that is true for most of the “AI” systems we see sold these days.

When you ask people to define what a “tool” is they might say something like “a tool is an object that enables or enhances your ability to solve a specific problem”. We think of tools as something augmenting our ability to do stuff. Now that isn’t false, but I think it hides or ignores some of the aspects that make a tool an actual tool. Let me give you an example.

I grew up in a rural area in the north of Germany. Which means there really wasn’t a lot to do, TBH. This led to me being able to open a beer bottle with a huge number of objects: another bottle, a folding ruler, cutlery, a hammer, a piece of wood, etc. But is the piece of wood a tool, or is it more of a makeshift kind of thing that I use in a tool-like way?

Because an actual tool is designed for a certain way of solving a set of problems. Tools materialize not just intent but also knowledge and opinion on how to solve a specific problem, ideas about the people using the tools and their abilities as well as a model of the problem itself and the objects related to it. In that regard you can read a tool like a text.

A screwdriver for example assumes many things: about the structural integrity of the things you want to connect to each other, and about whether you are allowed to create an alteration to the object that will never go away (the hole that the screw creates). It also assumes that you have hands to grab the screwdriver and the strength to create the necessary torque.

I think there is a difference between fully formed tools (like a screwdriver or a program or whatever) and objects that get tool-like usage in a specific case. Sometimes these objects are still proto-tools, tools on their way to solidifying, experiments that try to settle on a model of and a solution to the problem. Think of a screwdriver whose handle is too narrow, so you can’t grab it properly. Other objects are “makeshifts”, objects that can sometimes be used for something, but that usage is neither intended nor obvious. That’s me using a folding ruler to open a beer bottle (or another drink with a similar cap, but I learned it with beer).

Tools are not just “things you can use in a way”; they are objects that have been designed with great intent for a set of specific problems, objects that through their design make their intended usage obvious and clear (specialized tools might require you to have a set of domain knowledge to have that clarity). In a way, tools are a way to transfer knowledge: knowledge about the problem and its solutions is embedded in the tool through its design. Sure, I could tell you that you can easily tighten a screw by applying the right torque to it, but that leaves you figuring out how to get that done. The tool contains that knowledge. Tools also often explicitly exclude other solutions. They are opinionated (more or less, of course).

In the Python community there is a saying: “There should be one – and preferably only one – obvious way to do it.” This is what I mean. The better the tool, the more clearly it guides you toward a best-practice solution. Which leads me to thinking about “AI”.
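Python itself demonstrates that guidance. As a minimal sketch (the file name is just a placeholder), compare the makeshift way of reading a file with the one obvious way: the `with` statement bakes the hard-won lesson “always close the file, even when an error occurs” directly into the tool.

```python
# The makeshift way: it works, but the cleanup knowledge lives in the
# programmer's head, not in the tool.
f = open("notes.txt")  # placeholder file name
try:
    data = f.read()
finally:
    f.close()

# The one obvious way: the accumulated experience ("always release the
# file handle, even on error") is embedded in the language construct itself.
with open("notes.txt") as f:
    data = f.read()
```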

When I say “AI” here I am not talking about specialized machine learning models that are intended for a very specific case. Think a visual model that only detects faces in a video feed. I am thinking about “AI” as it is pushed into the market by OpenAI, Anthropic etc.: “AI” is this one solution to everything (eventually).

And here the tool idea falls apart: ChatGPT isn’t designed for anything. Or as Stephen Farrugia argues in this video: AI is presented as a Swiss army knife, “something tech loves to compare its products to”, something that might be useful in some situations.

This is not a tool. This is not a well-designed artifact that tries to communicate clear solutions to your actual problems and how to implement them. It’s a playground, a junk shop where you might eventually find something interesting. It’s way less a way to solve problems than a way to keep busy feeling like you are working on a problem while doing something else.

Again, there are neural networks and models that clearly fit into my definition of a tool. But here we are at the distinction between machine learning and “AI” again: Machine learning is written in Python, AI is written in LinkedIn posts and PowerPoint presentations.

Tool making is a social activity. Tools often do not emerge fully formed but go through iterations within a community, taking their final shape through use by a community of practitioners and their feedback. All tools we use today are deeply social, historical objects that have embedded the knowledge and experiences of hundreds or thousands of people in order to create “progress”, to formalize certain solutions so we can spend our brain capacity on figuring out the next thing, or to just create something beautiful or fun. Our predecessors have suffered through proto-tools and all the hurt that comes from using them so we wouldn’t have to. And this social, temporal context is all part of a tool.

And the big “AI” systems that are supposedly “just tools” do not have any of that. They are a new thing, and for most problems their makers simply hope that you will find ways of using them. In a way they take away hundreds of years of social learning and experience and leave you alone in front of an empty prompt field.

So no, I do not think that the “AI” systems that big tech wants us to use (and rent from them) are tools. They are makeshifts at best.

Read the whole story
tante
10 days ago
"All tools we use today are deeply social, historical objects that have embedded the knowledge and experiences of hundreds or thousands of people in order to create “progress”[…]And the big “AI” systems that supposedly are “just tools” now do not have any of that."
Berlin/Germany

Thermal imaging shows xAI lied about supercomputer pollution, group says

1 Comment

Elon Musk raced to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. He bragged that construction only took 122 days and expected that his biggest AI rivals would struggle to catch up.

To leap ahead, his firm xAI "removed whatever was unnecessary" to complete the build, questioning "everything" that might delay operations and taking the timeline "into our own hands," xAI's website said.

Now, xAI is facing calls to shut down gas turbines that power the supercomputer, as Memphis residents in historically Black communities—which have long suffered from industrial pollution causing poor air quality and decreasing life expectancy—allege that xAI has been secretly running more turbines than the local government knows, without permits.




Read the whole story
tante
11 days ago
Musk's xAI seems to be run in one of the dirtiest data centers. Power comes from unlicensed gas turbines.
Berlin/Germany

Forcing the world into machines

1 Comment

When people talk about “AI” these days – which, depending on how healthy and well-adjusted your social environment is, can be very little or, if you are on LinkedIn, all the fucking time – the main focus is on what is called “generative AI”, sometimes shortened to “genAI”.

Generative AI (in the way most people understand that term today) refers to stochastic models that are able to produce something that would traditionally have been a human work product: a new text, for example, a summary of an existing text, an illustration, a piece of program code, a video, whatever.

A few days ago I wrote about the infrastructures that the current AI boom will leave behind, and this text is a bit of a follow-up. Because with all the narratives about all the magical things that “AI” will supposedly do – it will (quoting Sam Altman) “solve all of physics”, for example [EDIT: I had a reference to Eric Schmidt claiming that AI will use 99% of energy in the future, but that seems to have been a bit of a misunderstanding by the reporting, so I cut that reference] – calmer minds might wonder what those systems are actually for.

Many people will know Stafford Beer’s famous heuristic for thinking about systems abbreviated POSIWID: “The purpose of a system is what it does”. Beer kinda shifted system interpretation from the usual future tense (“this system will do X”) to the present tense (“this system has this effect on the real world now”).

For generative AI this purpose is quite clear: It’s about putting pressure on human labour. If you can generate somewhat passable prose or images or software code using an “AI” you can either try to run your business with fewer people (who tend to want to be paid) or (more realistically) you can push people’s wages down by always pointing at “the AI” when someone wants a raise or does not want to do unpaid overtime. The actual problem generative AI is trying to solve is having to pay people for their work. And that is the effect we see: Working conditions getting worse, our media landscape getting gunked up with generated slop. But as I wrote in “These are not the same“:

“So there is a structural reason why those companies probably can’t be economically valid: Digital spaces are built around winner-takes all scaling but for genAI providers that is economical suicide.”

But while generative AI has been sucking all the air out of the room for a while now, there is a different kind of AI whose situation is a bit different but whose power can be that much bigger: Discriminative AI.

Discriminative AI is not any AI that discriminates against groups of people (though most AI systems of course do that); discriminative AI refers to neural networks that you can apply to big or messy data and get some kind of classification from. Give a discriminative AI system a picture and it might tell you which objects it estimates are in the picture; give it a text and it might try to guess the level of education of the person who wrote it.
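To make that concrete: here is a minimal sketch of such a picture-to-label classifier, using torchvision’s pretrained ResNet-18 (my choice of model and the image path are illustrative assumptions, not anything the post specifies).

```python
# A discriminative model in the sense described above: image in, label out.
# Assumes torchvision >= 0.13; "cat.jpg" is a placeholder path.
import torch
from torchvision.io import read_image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.eval()  # inference only: we apply learned patterns, we don't train

preprocess = weights.transforms()     # the input format the model expects
img = read_image("cat.jpg")           # placeholder image path
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    scores = model(batch).squeeze(0).softmax(0)

class_id = int(scores.argmax())
print(weights.meta["categories"][class_id])  # e.g. "tabby": the world reduced to a label
```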

Technically, generative and discriminative AI systems are not that different: both are basically lossy compression of the data used in the training process, the most dominant patterns in the training data crystallized. GenAI reproduces those patterns, discriminative AI applies them. And here the actual purpose emerges.

If you run a somewhat complex operation you care about, say a business, you probably will not just have a statistical black box run it the way LinkedIn influencers describe the future. We digitized many processes before the current AI fad happened, built ERP systems and workflow engines and all that, all in order to “optimize” and “rationalize” existing social and business processes, and those things are very deliberate and controlled.

Those digital automations or workflows are built based on domain knowledge about your company, your machines and organizational structure, the market you are in, and the requirements your clients have. They are mostly just very complex rules – often expressed in somewhat byzantine source code in some enterprise software system – that operate on data. That is why all business software looks like a hot mess: it’s not written for you as a person to work with, it’s written for the machine to get the data in the way it needs it.
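To caricature what those “complex rules operating on data” look like in practice (a sketch; every name and threshold here is invented):

```python
# A toy version of an enterprise routing rule: deliberate, transparent,
# and entirely dependent on the data arriving in exactly the expected shape.
def route_order(order: dict) -> str:
    if order["customer_tier"] == "A" and order["value_eur"] > 50_000:
        return "manual_review"     # key accounts get a human
    if order["region"] in ("EU", "EFTA") and order["hazmat"]:
        return "compliance_queue"  # regulation encoded as a branch
    return "auto_fulfillment"

print(route_order({"customer_tier": "A", "value_eur": 80_000,
                   "region": "EU", "hazmat": False}))  # -> manual_review
```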

But the world has this problem of being very messy, chaotic, and very badly structured. While software is usually built based on a somewhat well-defined domain model (not an “AI” model but just an abstracted description of the objects and structure the organization deploying the software considers relevant), those models are often still too simple, too abstract. You feel that every time reality presents you with a situation the software developers did not see coming. But now we have discriminative AI.

Computers cannot see. They cannot perceive the world. They can only operate on data that might describe the world if interpreted in a certain way. And creating this data often used to be hard. Sure, adding a thermometer or other simple sensor is easy, but how could you add more complex interpretations of the world? Say you wanted to determine how “lazy” your workers are: how would you formalize that? For some jobs you could define an average amount of production per hour and compare people’s performance, but that only works in very specific cases. Solving this is the promise of discriminative AI.

As a serious person you do not want your organization to be run by a stochastic parrot: what if you hit a fabrication while processing a big order for your most important customer? They won’t let you off the hook by pointing at the “AI made a whoopsie” sign. You still want defined, clearly modelled, transparent processes that you can iterate on, that you can adapt based on the changing landscape you operate in.

But you used to need expertise for that. You needed not only expensive outside consultants to mold your ERP system into something bordering on useful, but also the domain experience of your workers to tell you what characterizes a successful production run and how to detect a flawed run early. Now the hope is that one can extract those patterns (and more!) from all the data being constantly collected: you don’t need your workers, you have all the sensor and machine data and know which production runs failed, so you can create a “soft sensor” that just gives you a binary “OK”/“Not OK”. A simple data type that your computer systems can operate on, that you can build your business logic around.
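A minimal sketch of what such a “soft sensor” amounts to (the feature names, numbers, and the choice of a random forest are illustrative assumptions):

```python
# A "soft sensor": a classifier trained on logged sensor data from past
# production runs, collapsing messy readings into the single OK / NOT OK
# bit that the surrounding business logic wants.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per past run: [mean temperature, vibration peak, pressure drift].
X = np.array([
    [182.1, 0.31, 0.02],
    [185.4, 0.29, 0.01],
    [201.7, 0.88, 0.15],  # a run that failed
    [183.0, 0.35, 0.03],
    [199.2, 0.91, 0.12],  # another failure
])
y = np.array([1, 1, 0, 1, 0])  # 1 = OK, 0 = not OK, from recorded outcomes

soft_sensor = RandomForestClassifier(n_estimators=100, random_state=0)
soft_sensor.fit(X, y)

# Live readings from the current run collapse into one simple value
# that a workflow engine can branch on.
current_run = np.array([[198.5, 0.85, 0.11]])
print("OK" if soft_sensor.predict(current_run)[0] == 1 else "NOT OK")
```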

Discriminative AI turns the complex and contradictory, the wild and weird world of physics and chemistry and people into simple data. It is the tool to build tools. The mold that you can use to shape the world into computer-processable bits of information of higher abstractions.

Sure, computers can somehow process all data. Obviously your computer can handle a live camera feed; we see that every day in video conferences. But a video stream is not what you want if you want your digital process to handle the world for you: you need to know which relevant objects are in the feed. The video information is of a lower abstraction, often just recorded in the absence of better data, checked by people in case things go out of line.

With its totally agnostic approach, discriminative AI promises to take any form of lower-abstraction data and refine it into higher-abstraction data computers can use. It is the foundation that Marc Andreessen’s “Software is eating the world” screed will actually be materialized on. Discriminative AI is the ultimate adapter, no, the ultimate mold to shape the world into simple data. To process the world to feed it to software systems.

And that is why my outlook on the lasting impact of the data centers and infrastructures that the generative AI hype brought with it is a bit bleak (yes, quoting myself yet again, being the pretentious bastard that I am):

“The AI crash won’t leave us with infrastructures that are useful to democratic and humane societies, with useful tools to do something productive with, but with infrastructures tailor-made to suppress it.”

A data center that Microsoft had built to generate slop can easily be repurposed to datafy more of the world in order to automate even more, to pull even more decision making out of social systems and into centralized, often opaque machines run by god knows who. Because the tech works the same.

All the things I described above are the things companies like Palantir and others are pitching to any government and big organization willing to listen. And those listening do not always have democratic or humane values as priorities.

There is another problem though: By datafying more of the world we are narrowing down our modes of interacting with the world, of coming to decisions – not only as individuals but as communities and societies. With more and more of the world being fed to the machine we lose many ways of engaging with the world that are deeply human but don’t fit in with that mode of abstraction and rationalization.

Human beings are not rational. And while a certain level of rationality is helpful in order to be able to communicate with one another and find consensus, many other aspects of our lives are just as important: feelings and spiritual beliefs are not easy to abstract and put into well-defined models based on logic. Those are ways of being in the world, being with ourselves and others, that are core to the human experience, core to how we make decisions, core to how we think about, hope for, and strive for better futures. And that gets sanded off by applying neural networks to the world to turn it into small standardized data bricks to build workflows and processes on.

That is maybe the most lasting effect I see the current AI fad having. A further solidification of the cultural logic of computation at the expense of other modes of being. And if one thing has become crystal clear in the last decade or so it is that the (neo-)liberal belief that everything just needs to be based on better data and more rationality will not keep Fascism at bay. It will become its tool and catalyst.

Read the whole story
tante
14 days ago
"Discriminative AI turns the complex and contradictory, the wild and weird world of physics and chemistry and people into simple data. It is the tool to build tools. The mold that you can use to shape the world into computer-processable bits of information of higher abstractions."
Berlin/Germany