Palantir CEO Alex Karp is a man in charge of one of the most important and frightening companies in the world. Karp's new book, cowritten with Nicholas Zamiska, is called The Technological Republic. Claiming it was "because we get asked a lot," Palantir posted a 22-point summary of the book that reads like a corporate manifesto. It evokes both weird reactionary shit and trilby-wearing Reddit comments from the early 2010s.
Palantir's summary of the book is ominous. But even the company's name is unironically ominous. The palantíri are crystal balls in The Lord of the Rings that let Middle-earth's worst tyrants spy on the heroes of the story.
"We’ve attempted to translate these 22 points from Alex Karp’s alien words into something more reasonable, like human words from someone who might play him in the biopic."
“AI” exists to disenfranchise labor. That’s what it’s for. Regardless of how good these stochastic systems are or what flaws they have, just being able to point at the non-unionized robot whenever the employees ask for raises (or anything, really) is incredibly valuable for business. The existence of “AI” and the supporting narrative mean that defending your value, and therefore your price on the market, will be way harder.
But that’s the impact for people at the top. That’s why the C-Suite wants those tools to exist and to be deployed. I think “AI” also has a strong effect on the social relationships between peers and coworkers.
Work is not really about finding friends. It happens, but it’s not necessary. But there is – ideally – a level of respect for one another: the acknowledgement of each other’s expertise, skills and contributions, as well as one another’s weaknesses. All of this allows the forming of communities based on solidarity: if the people touching computers realize that the workers in the warehouse are part of the same struggle, all those groups together can actually develop some power to make their needs heard and acted upon. If, for example, a company manages to separate the different groups and departments and pits them against each other, this weakens everyone but management.
And I think “AI” has this massive effect on feeling connected with each other, in seeing each other as peers.
Let me tell you a story. A few weeks ago a colleague was talking about an issue he had with a specific piece of software that we are – as we usually do – using a bit outside of what it is intended for. He outlined the problem and then went through the things he had tried to solve it, culminating in an explanation of what would probably work. Another colleague I respect a lot then responded with “did you ask the AI?” and it felt like a scene from a movie where the protagonist is just sitting there, hearing something, and suddenly a big dude comes in and slaps him in the face with a fish. I was irritated, and it really took me a while to understand why.
It was not the absurdity of the statement (what even is “the AI”?) or the weird dynamic of responding to a colleague who had just presented his solution by suggesting he use an unreliable search engine instead. It was the feeling of drifting away from that person. Because that statement made it so obvious how, for that person, every “here’s a problem” is now connected to “let’s ask the spicy autocomplete.”
In the weeks after that, whenever there was an issue with the stuff coming from that team – when something wasn’t up to par or had a weird structure – I realized how I kept attributing it to them leaning heavily on slop machines. Which isn’t necessarily true: shit happens, software and hardware are complicated, and sometimes things end up weird or hackish or broken, regardless of what tech you use.
But I realized that this event (and all the little similar events before it) ate away at a trust relationship that had been developed over years. I was having a sort of perceived “workslop” experience.
“Workslop” is a term for bad, “AI”-generated work product that someone produces to fulfill their work duties on a surface level and that their coworkers then have to clean up: I generate a bunch of code that kinda works, and someone else realizes it’s a hot mess when trying to run it and has to clean up after me. In that example I would have produced workslop (but might have been very efficient!).
The experience of workslop (whether it’s real or mostly perceived as I showed in my short story) directly erodes social connection, erodes trust in one another and in the end erodes solidarity. Because why would you stand with a person who does not do their job and offloads their work on you?
Pride in one’s work under capitalism is a bit of a weird thing: you can be proud of what you did, but that great work will, more often than not, not be especially rewarded. What it does do is signal something to your coworkers: doing good work means that you respect the people working with you and working off of what you did. It means that you try to make their lives as easy as possible, because you’re all in the same boat.
“AI” pushes everyone towards slop. “Just do it. It’s easy. You look innovative. It’s fast. The others can also just use the slop machine to fix the mess if it occurs.” But the dissolving effects on the social fabric of the workplace should not be underestimated: we are already working in conditions that make building the foundations of solidarity harder. Teams get pitted against each other based on KPIs; everyone is working from home, never having to meet their coworkers. As usual, “AI” is just gasoline on the fire. But maybe we should start putting out some fires?
"The experience of workslop directly erodes social connection, erodes trust in one another and in the end erodes solidarity. Because why would you stand with a person who does not do their job and offloads their work on you?"
Shaken by the digital violence directed at women, the Left in the Bundestag wants to cap its members' incomes and donate collectively. One point of contention remains.
"The Left parliamentary group wants to collectively forgo a large portion of their parliamentary allowances" and invest the money in a fund for women and queer people affected by violence.
The ups and downs of the tech corporations on the Nasdaq have come to an end – now it's a steady decline. To keep their promises, OpenAI and co. are acting increasingly desperate. An analysis by Erich Moechel (Business, Oracle)
"The ups and downs of the tech corporations on the Nasdaq have come to an end – now it's a steady decline. To keep their promises, OpenAI and co. are acting increasingly desperate."
Well, yes. This is, in fact, how "workload" works under capitalism: labor is perpetually squeezed to do more, to generate more surplus value, to create more profit for the boss. Technological advancements -- that's what "AI" purports to be -- enable more to be done during the work day (which certainly extends well beyond some 40-hour week as everyone checks their email, their texts, their messages after hours and on weekends). Computing has not made us more productive, even though we feel as though we're doing more, and doing it more quickly, more intensely.
I am reminded, no surprise, of the children's book Danny Dunn and the Homework Machine (which I talk about in Teaching Machines), published in 1958 -- which is to say, we’ve known this exploitation has been happening for a very long time now. In it, the titular character Danny and his friends Irene and Joe program his next door neighbor’s mainframe computer -- remarkably for the era, housed in Professor Bullfinch’s laboratory at the back of his house -- to do their homework for them. The trio believe they’ve discovered a great time-saving device, but when their teacher ascertains what they’ve done, she assigns them even more homework to do.
Another story from Teaching Machines: when I was researching the book, I pored through hundreds and hundreds of letters sent to and from Sidney Pressey and B. F. Skinner. It’s easy to imagine their world of letters -- pre-computer, pre-Internet, pre-email -- as slow: slow to be written, slow to be delivered; their wording careful, their responses deliberate. But as both psychologists struggled with the commercialization of the machines they’d designed, the tone and frequency of their correspondence became more frenetic. Sometimes they would send two, three, four letters a day to the same person, dashing off angry, half-baked responses before stewing for a couple of hours and dashing off another one.
They were manic. But they were scientists; they were entrepreneurs.
So maybe it’s a side note, and maybe my main point: I think the media has been focused on only a small sliver of “‘AI’ psychosis,” the stories that are the most violent and tragic. The delusions and mania are much more widespread, but most of these are tolerated, even encouraged, as long as people continue to perform “productively” at their jobs.
Much like the furious quest for “personalization” in the digital classroom, one side effect of “AI” will be the further loss of community. Everyone works in isolation, clicking away endlessly with their chatbot of choice, which sycophantically assures them that they don’t need anyone else. No longer will people turn to their colleagues for collaboration, for support, for advice, for mentorship. With “AI,” solidarity and trust are deliberately undermined -- the classic labor-busting tactic. “I can do it myself” (or rather, “Claude tells me that it can do it for me, but I can put my name on the project”), people tell themselves, while everyone else second-guesses whether Claude actually has.
It’s the sad sociopathy of the tech elite, the sad paranoia of the conspiracy theorist, “democratized.”
This week, venture capitalist and techno-authoritarian Marc Andreessen triumphantly pronounced that he has “zero” levels of introspection — “as little as possible.” This is the Randian ideal, something every entrepreneur should aspire to, he tells the podcast audience, adding “and you know, if you go back 400 years ago, it never would have occurred to anybody to be introspective.”
According to Andreessen, civilization had none of that until that “guilt based whammy showed up from Europe, a lot of it from Vienna” -- a remarkably stupid reading of history, religion, culture, literature, so much so you might wonder if the man has ever opened a book, let alone his mind, in his life.
It is notable that Andreessen – one of the biggest proponents of (and, he certainly hopes, profiteers from) “AI” – would dismiss introspection, arguably a core facet of “intelligence” that computers do not, cannot, and will not ever possess. “AI” does not “know” anything really, but even more, it does not “know” about its “knowing.” It has no introspection; no meta-cognition; no embodied awareness of how it feels when it learns and when it knows; no meta-contextual awareness of where and when and why and with and from whom it knows; no reflexivity; no self-efficacy. It serves Andreessen’s interests, then, to deride and dismiss other ways of knowing; to limit “intelligence” to the cognitive flexes of what his “AI” machinery can quickly spew; and to imply, in turn, that humans are inferior, irrelevant.
But mostly, I'd argue, when Andreessen proudly states that he rejects introspection, what he really means to say is that he eschews accountability. He will take no responsibility for his actions. He is a billionaire; he doesn’t believe he has to.
This is a moral problem, of course – a grossly immoral one at that. But it is also a policy problem, and one we can rectify, I’m certain.
Today’s bird is the red-throated loon, the smallest and lightest of the loon species. Its feet are located quite far back on its body, making it incredibly clumsy on land. And yet it is the only loon that can take off into flight from land. The bird is associated with weather prediction -- its cries supposedly indicate whether or not it will rain.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as your financial support makes this work possible.
"This is, in fact, how "workload" works under capitalism: labor is perpetually squeezed to do more, to generate more surplus value, to create more profit for the boss. Technological advancements -- that's what "AI" purports to be -- enable more to be done during the work day"