Thinking loudly about networked beings. Commonist. Projektionsfläche ("projection surface"). License: CC-BY
2549 stories · 139 followers

Desire to Pop


Abstractions are powerful tools. Given enough abstraction everything gets somewhat simple. Somewhat clear. Also somewhat wrong. Abstraction turns everything real, material, consequential into mostly nothing. The abstraction of “a relationship” hides all the love and care and desire it might entail. The abstraction of “the border” hides the violence its defense entails.

Given enough abstraction every monstrosity becomes just a kind of mental gymnastics. This is the domain of devil’s-advocate kinds of people, for whom the real, often political, meaning of certain statements or movements has been fully replaced by a game of debate. Who cares about the position, let’s win!

I keep thinking about abstractions when looking at the state of the current “AI” bubble and with it the state of a lot of the global economy.

As most people reading this know, I am not a fan of “AI”. If I never have to hear or read or see the term “agentic” ever again, it will still be too soon. Not only do I despise the political project that is “AI”, the mental damage these systems do to us, the harm to the environment they create, etc., I also despise the way the narrative has turned a narrowly useful technology (stochastic pattern generation and recognition using neural networks) into the hammer to demolish all established codes and practices that ensure some base level of quality. But hey, who wants their software developed by a team of professionals who can actually understand, model and solve or mitigate security issues when you can also just use a slot machine?

I’ve found myself saying how much I’d love the “AI” bubble to pop if just to shut “AI” influencers up, if just to have some space to talk about real solutions to real problems again. But that only works in abstraction. Only works when nothing means too much, really.

For worse (there’s no better in this sentence) we have made “AI” the foundation of many core parts of our economy. Not “AI” systems – those don’t really work, or meaningfully increase productivity – but the belief in the (future) value of the handful of tech companies building these systems. The US would be in a recession without the data center buildout that tech is throwing all its savings at. A buildout that is not connected to any form of successful business yet. “AI” does not scale the way digital services usually scale but all we see currently is still the old “increase user numbers and hope a business plan will manifest itself” scheme. Maybe ChatGPT can give Sam Altman some form of strategy besides lying.

This has material consequences. When (not if) the bubble pops we will see a few things: The stock market will take a dive, which will affect many people living in countries without pension systems who rely on the money they have invested in ETFs or stocks. I am not talking about some VC dude or millionaire losing a few millions or billions, just normal people who wanted to retire at some point. Companies that have bet on “AI” can no longer claim that one “needs to get on board” and ride the hype but will have to fix their budgets – quickly. This will lead to squeezing employees even more and firing people to make the next quarter’s numbers look better. In the current political landscape the instability that right-leaning tech oligarchs and their fans will have created will probably benefit the right. We’ll see a lot of blame going around (remember: “AI” can never fail us, we can only fail “AI”. If this thing crashes, we did not believe enough!) and cuts to social services and anything that gives people the feeling of living in a functioning society. Which only helps right-wingers, but, well, that’s neoliberalism for you.

I love the abstraction of the “AI” bubble popping. But the very probable effects haunt me.

This shouldn’t be read as a “well, ‘AI’ is here to stay so let’s make the best of it” kind of thing. I neither think “AI” will automatically stay (again: see this) nor am I sedated enough to believe that political values and action have no meaning. While I think there are a few narrow use cases for machine learning, whatever is called “AI” today has very few redeeming qualities, built as it is on extraction, violence, domination, colonialism and right-wing, anti-labor politics. This is not a moment of capitulation but of reflection.

It’s always easy to cheer for a revolution. Shit is fucked up and bullshit and our institutions and structures don’t seem to be willing, capable or motivated to meaningfully move towards a better world so let’s fuck shit up! Burn down data centers. Get out some guillotines. The thing is: Revolutions mean that people get hurt. That ultimately people die. That doesn’t mean that revolutions are always wrong, but it means that the abstraction is again doing a lot of work hiding harm to people who just want to get through their workday to be home with their family.

My criticism of “AI” is about limiting injustice and suffering. The suffering of the communities who get data centers put in their midst, drinking up all the water and taking the electricity while producing metric fucktons of emissions, e-waste and noise. The injustice done to kids who get to chat with an LLM instead of building meaningful relationships with teachers and mentors, who don’t get to figure out what they are good at because everything they do looks worse than “AI” output, so they use that instead of getting to be so much better than slop machines. The way that non-working stochastic parrots undermine labor power, thereby putting the livelihoods of thousands of families at risk. My criticism comes from love and care for people, communities and societies. So my actions can’t abstract the effects on those away.

But what can be done? In a better world we’d see governments segmenting the toxic “AI” part of the economy off and insulating the actual economy against it. Slowly moving big public investments out of those companies (not the best word, they seem more like cults these days, but we still call them companies). Preparing for that bubble to deflate without taking thousands of lives with it. But we see the opposite: Europe, where I live, keeps wanting to throw all the money it can find at more “AI”. Just do more “AI”, it will be so good, bro. Trust us, bro. Just 10 more billion. It’ll be super cool, bro. Governments treat hyped tech like a stoner treats hits of their bong.

So what can we do? That is the question. Move our savings out of the stock market? Maybe – if you have some. Stop criticizing “AI” systems and the narratives that push them? Never: We mustn’t prop up or protect those dangerous and harmful systems through inaction.

I don’t have all the answers (I rarely have any TBH) but I do think that we should at least be a bit careful with glorifying the bubble popping. Sure, it’s fun to predict when it will go down and how much money Softbank is gonna set on fire, but I think that it is also our job as critics to make sure that the public understands that “the AI bubble popping” has material consequences for them. That joining a union might be a good idea right now. That getting smart and knowledgeable people on works councils is more important than ever. That tech companies are not your friends or benevolent, and that they’d sell your kids if it made the stock perform well.

The “AI” bubble will deflate. But as cathartic as a “POP” might feel right now, we need to build the structures that ensure that this event doesn’t heap harm on people who had nothing to do with it. Let Marc Andreessen or Satya Nadella or Sundar Pichai and all those tech bros lose their money, and make sure that after the storm nobody forgets that these people gambled with our lives and societies to make number go up. But we can’t just focus on holding those men accountable – as righteous and good as that might feel (and holy cow do I want to see many of those people put on trial). We need to start building lifeboats and barriers protecting our peers, neighbors, families and communities.

We are all we have. Only solidarity will get us through this.

tante
7 hours ago
I love the abstraction of the "AI" bubble popping. But the very probable effects haunt me.
Berlin/Germany

Google Has Chosen a Side in Trump's Mass Deportation Effort


Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants and tell local cops whether to contact ICE about a person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media that the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice about which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.

After a user scans someone’s face with Mobile Identify, the app tells users to contact ICE and provides a reference number, or to not detain the person depending on the result, a source with knowledge of the app previously told 404 Media. 404 Media also examined the app’s code and found multiple references to face scanning.

A Google spokesperson told 404 Media in an email “This app is only usable with an official government login and does not publicly broadcast specific user data or location. Play has robust policies and when we find a violation, we take action.”

A screenshot of Mobile Identify's Google Play Store page.

Last month, Google removed an app called Red Dot. That app, in much the same vein as the more well-known ICEBlock, lets ordinary people report sightings of ICE officials on a map interface. People could then receive alerts of nearby ICE activity. “Anonymous community-driven tool for reporting and receiving ICE activity alerts,” Red Dot’s website reads.

Red Dot’s removal came after a cascading series of events starting in September. That month, 29-year-old Joshua Jahn opened fire at an ICE facility in Dallas, killing two detainees and wounding another. Authorities say Jahn used his phone to search for ICE-spotting apps, including ICEBlock, before the shooting, Fox reported. A short while after, the Department of Justice contacted Apple and demanded it remove ICEBlock, which Apple did, despite such an app being First Amendment-protected speech.

Both Apple and Google then removed Red Dot, which works similarly, from their respective app stores. Google previously told 404 Media it did not receive any outreach from the Department of Justice about the issue at the time. The company said it removed apps that share the location of what it describes as a vulnerable group: a veiled reference to ICE officials.

A representative for Red Dot told 404 Media in an email they “see 100% dissonance” in Google’s position. Google removed the app claiming it harms ICE agents “while continuing to host a CBP app that uses facial recognition to identify immigrants for detention and deportation.”

“This is unequivocally morally and ethically wrong. We are deeply concerned about the number of violations that must be occurring to deploy AI facial recognition on people for the purpose of making arrests. It is a clear and unacceptable case of selective application of their policies,” they added. The representative did not provide their name.

Google’s decision to host CBP’s immigrant-hunting app while removing one designed to warn people about the presence of ICE has concerned free speech experts.

“Providing tech services to supercharge ICE operations while blocking tools that support accountability of ICE officers is entirely backwards,” Kate Ruane, director of the Center for Democracy & Technology’s Free Expression Project, told 404 Media. “ICE is currently deploying armed, masked agents to take people from daycares, street corners, parking lots, and even their own homes, often based on paper thin suspicion and frequently with unjustifiable use of force. It is the mothers, fathers, children, friends, neighbors and coworkers being targeted by ICE who are most vulnerable in this situation.”

“ICE agents don’t want to face accountability for their actions, but documenting ICE and other police activities is essential to guard against abuse of power and improper conduct. Courts have recognized for decades that tracking and reporting on law enforcement activities is an important and time honored public accountability mechanism,” she continued. 

Ruane said apps like this are an exercise of First Amendment protected rights. “As with any other app, if someone misuses it to engage in unlawful activity, they can be held accountable. Google should restore these services immediately,” she added.

Joshua Aaron, the creator of ICEBlock, told 404 Media “Big tech continues to put profit and power over people, under the guise of keeping us safe. Right now we are at a turning point in our nation’s history. It is time to choose sides; fascism or morality? Big tech has made their choice.”



tante
9 days ago
"Don't be evil" is so far in the past, it's not even a memory anymore
Berlin/Germany

65daysofstatic’s new No Man’s Sky album searches for humanity in an AI-filled world


It's not often that a band returns to soundtrack the same game nine years after its release - then again, most games aren't No Man's Sky. Once demoed on The Tonight Show with Jimmy Fallon and at splashy E3 press conferences, No Man's Sky was heralded in 2016 as gaming's future. And it was all made possible by the procedural generation that spawned its vast sci-fi universe.

Nearly a decade later, as post-rock band 65daysofstatic returns to re-score the ever-evolving game, generated content is no longer the exciting futurism it once seemed. With AI slop flooding social media and AI-generated bands sneaking their way onto Spotify, the tech t …

Read the full story at The Verge.

tante
14 days ago
“Who cares if computers can make music? That’s not what music is,” says Wolinski. “The moving of the speakers to generate sound waves is such a tiny part of what gives music meaning. It’s all about the social relations around [it], the human dialogue between one person and another, even if they never meet. This is what art is — and it’s why generative AI completely misses the point.”
Berlin/Germany

Jordan Petridis: DHH and Omarchy: Midlife crisis


A couple of weeks ago Cloudflare announced it would be sponsoring some open source projects. Throwing money at the pet projects of random techbros would hardly be news, but there was a certain vibe behind them and the people leading them.

In an unexpected turn of events, the millionaire receiving money from the billion-dollar company thought it would be important to devote a whole blog post to a random brokeboy from Athens who had an opinion on the Internet.

I was astonished to find the blog post. Now that I’ve moved from normal stalkers to millionaire stalkers, is it a sign that I’ve made it? Have I become such a menace? But more importantly: Who the hell even is this guy?

D-H-Who?

When I was painting with crayons in a deteriorating kindergarten somewhere in Greece, DHH, David Heinemeier Hansson, was busy dumping Ruby on Rails on the world and becoming a niche tech celebrity. His street cred for releasing Ruby on Rails would later be replaced by his writing on remote work, famously authoring “Remote: Office Not Required”, a book based on his own company, 37signals.

That cultural cachet would go out the window in 2022, when he got in hot water with his own employees after an internal review process concluded that 37signals had been less than stellar when it came to handling race and diversity. Said review process culminated in a clash: the employees were interested in further exploration of the topic, to which DHH responded with “You are the person you are complaining about” (meaning: you, by pointing out a problem, are the problem).

No politics at work

This incident led the two founders of 37signals to the executive decision to forbid any kind of “societal and political discussions” inside the company, which, predictably, led to a third of the company resigning in protest. This was a massive blow to 37signals. The company was famous for being extremely selective when hiring, as well as affording employees great benefits. Suddenly having a third of the workforce resign over disagreement with management sent a far more powerful message than anything they could have imagined.

It would become the starting point for the downward, radicalizing spiral, along with the extended and very public crashout, that DHH would go through in the coming years.

Starting your own conference so you can never be banned from it

Subsequently, DHH was uninvited from keynoting RailsConf, on account of everyone being grossed out by the handling of the matter and in solidarity with the community members and the employees who quit in protest.

That, in turn, would lead to the Rails Foundation starting Rails World. A new conference about Rails that 100%-swear-to-god was not just about DHH having his own conference where he can keynote and would never be banned.

In the following years DHH would go on to explore and express the full spectrum of “down the alt-right pipeline” opinions.

Omarchy

You either log off a hero, or you see yourself create another Linux distribution. Having failed the first part, DHH has been pouring his energy into creating a new project, while letting everyone know how much he prefers that to going to therapy. Thus Omarchy was born: a set of copy-pasted window manager and Vim configs turned distro, and one of the two projects that Cloudflare will be proudly funding shortly. The only possible option for the compositor would be Hyprland, and even though it’s Wayland (bad!), it’s one of the good non-woke ones. In a similar tone, the project website would feature the tight integration of Omarchy with SuperGrok.

Rubygems

On a parallel track, the entire Ruby community more or less collapsed in the last two months. Long story short: one of the major Ruby Central sponsors, Sidekiq, pulled its funding after DHH was invited to speak at RailsConf 2025. Shopify, where DHH sits on the board of directors, was quick to save the day and match the lost funding. Coincidentally, an (alleged) takeover of key parts of the Ruby infrastructure was carried out by Ruby Central in the following weeks, placing them under the control of Shopify.

This story is ridiculous, and the entire Ruby community is imploding as a result. There’s an excellent write-up of the story so far here.

On a similar note, and at the same time, we also find DHH drooling over an off-brand Peter Thiel and calling for an Anduril takeover of the Nix community in order to purge all the wokes.

On Framework

At the same time, Framework had been promoting Omarchy on their social media accounts for a good while. And DHH in turn has been posting about how great Framework hardware is and how the Framework CEO is contributing to his Arch Linux reskin. On October 8th, Framework announced its sponsorship of the Hyprland project, following 37signals doing the same thing a couple of weeks earlier. On the same day they made another post promoting Omarchy yet again. This caused a huge backlash and an overall PR nightmare, with the apex being a forum thread with over 1700 comments so far.

The first reply in the forum post comes from Nirav, Framework’s CEO, with a very questionable choice of words:

We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individual’s or organization’s beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software.

I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

Mentioning a “big tent” twice as the official policy and response to complaints about supporting fascist and racist shitheads is nothing short of digging a hole for yourself so deep that it reemerges on another continent.

Later on, Nirav would mention that they were finalizing sponsorships of the GNOME Foundation (12k/year) and KDE e.V. (10k/year). On the linked page you can also find a listing for Rails World (DHH’s personal conference) at a one-time payment of 24k dollars.

There has not been an update since, and at no point have they addressed their support of and collaboration with DHH. Can’t lose the cash cow and free Twitter clout, I guess.

While I personally would like to see the donation rejected, I am not involved with the ongoing discussion on the GNOME Foundation side, nor with the Foundation itself. What I can say is that I and others from the GNOME OS team were involved in initial discussions with Framework about future collaborations and hardware support. GNOME OS, much like the GNOME Flatpak runtime, is very useful as a reference point for identifying whether a bug, in hardware or software, is distro-specific or not.

It’s been a month since the initial debacle with Framework. Regardless of what the GNOME Foundation plans to do, the GNOME OS team certainly does not feel comfortable with further collaboration given how Framework has handled the situation so far. It’s sad, because the people working there understand the issue, but this does not seem to be a trait shared by the management.

A software midlife crisis

During all this, DHH decided that his attention must be devoted to getting into a mouth-off with a Greek kid who called him a Nazi. Since this is not violence (see his “Words are not violence” essay), he decided to respond in kind by calling for violence against me (see his “Words are violence” essay).

To anyone who knows a nerd or two over the age of 35, all of the above is unsurprising. This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun.

Here’s a dude who barely had any time to confront the world before falling into an infinite money glitch in the form of Ruby on Rails, Jeff Bezos throwing him crazy money, Apple bundling his software as a highlighted feature, and becoming a “new work” celebrity and Silicon Valley “guru”. Is it any surprise that such a person would later perceive the most minuscule kind of opposition as an all-out attack on his self-image?

DHH has never had the “best” opinions on a range of things, and they have been dutifully documented by others, but neither have many other developers who are also ignorant of topics outside of software. Being insecure about your hairline and masculine aesthetic to the point of adopting the Charles Manson haircut to cover your balding is one thing. It is entirely different, however, to become a drop-shipped version of Elon, tweeting all day and stopping only to write opinion pieces that come off as attempts to prove others wrong rather than as original thoughts.

Case in point: DHH recently wrote about “men who’d prefer to feel useful over being listened to”. The piece is unironically titled “Building competency is better than therapy”. It is an insane read, and I’ll speculate that it reads as if someone DHH can’t outright dismiss suggested he go to therapy. It’s a very “I’ll show you up in front of my audience” kind of text.

Add to that a three-year speedrun of decrying the “theocracy of DEI” and the seemingly authoritarian powers of “the wokes”, all coincidentally starting after he could not get over his employees disagreeing with him on racial sensitivities.

How can someone suggest his workers read Ta-Nehisi Coates’s “Between the World and Me” and Michelle Alexander’s “The New Jim Crow” in the aftermath of George Floyd’s killing and the BLM protests, while a couple of months later writing salivating blog posts about the EDL eugenics rally in England and giving the highest possible praise to Tommy Robinson?

Can these people be redeemed?

It is certainly not going to help that niche celebrities, like DHH, still hold clout and financial power and are able to spout the worst possible takes without any backlash because of their position.

A bunch of Ruby developers recently started a petition to get DHH distanced from the community, and it didn’t get far before being brigaded by the worst people you didn’t need to know existed. This of course was amplified to oblivion by DHH and a bunch of sycophants chasing the clout provided by being retweeted by him. It would shortly be followed by yet another “I’m never wrong” piece.

Is there any chance for these people, shielded as they are by their well-paying jobs and an exclusively occupational media diet, whose every stimulus happens to reinforce their default worldview?

I think there is hope, but it demands more voices in tech spaces to speak up about how having empathy for others, or valuing diversity is not some grand conspiracy but rather enrichment to our lives and spaces. This comes hand in hand with firmly shutting down concern trolling and ridiculous “extreme centrist” takes where someone is expected to find common ground with others advocating for their extermination.

One could argue that the true spirit of FLOSS, which attracted many of the current midlife-crisis developers in the first place, is about diversity and empathy for the varied circumstances and opinions that enrich our space.

Conclusion

I do not know if his heart is filled with hate or if he is incredibly lost, but it makes little difference since this is his output in the world.

David, when you read this I hope it will be a wake-up call. It’s not too late; you only need to go offline and let people help you. Stop the pathetic TemuElon speedrun and go take care of your kids. Drop the anti-woke culture wars and pick up a Ta-Nehisi Coates book again.

To everyone else: Push back against their vile and misanthropic rhetoric at every turn. Don’t let their poisonous roots fester into the ground. There is no place for their hate here. Don’t let them find comfort and spew their vomit in any public space.

Crush Fascism. Free Palestine.

samuel
18 days ago
I had no idea this was a brouhaha until now and after reading this contradictory screed, I'm on whatever side is against this type of thinking.

It's poisonous and detrimental to the environment of building software, and that includes ensuring diversity of customers is matched by diversity of the team. How can somebody claim the higher ground when their argument is full of ad hominem, strawman, and poor faith arguments?

I'm particularly attuned to people being called nazis, considering my family is 1/16th the size it should have been as of 1945. Fascism? Nazism? I don't agree with everything (or even much) of what DHH writes, but words have power and throwing slings this gross and inappropriate does little to advance their cause.

I hope for the author of this piece to come to terms with how they are coming across.
Cambridge, Massachusetts
tante
18 days ago
"This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun."
Berlin/Germany

Platform Temperance


Greetings from Read Max HQ! This week, a collection of thoughts about a new trend in tech criticism masquerading as a lumpy and overstuffed essay.

A reminder: This piece, and all the pieces you read on the Read Max newsletter, is funded almost entirely by paying subscribers of this newsletter. I am able to write these columns and record the podcasts and videos thanks to the support of nearly 4,000 people who value what I do, for whatever strange reason. Unfortunately, because of the reality of subscription businesses, I need to keep growing in order to not shrink, which means every week I need to beg more people to sign up. Do you like Read Max? Do you find it entertaining, educational, distracting, fascinating, or otherwise valuable, such that you would buy me a cheap-ish beer at a bar every month? If so, consider signing up for the low price of $5/month or $50/year.


A new wave of techlash

The new moderate-liberal Substack publication The Argument ran a fascinating piece by civil rights attorney (and Tottenham blogger) Joel Wertheimer last week arguing that policymakers should “Treat Big Tech like Big Tobacco”:

The problem with Big Tobacco was not that it could charge excess prices because of its market power. The problem with Big Tobacco was that cigarettes were too cheap. Cigarettes caused both externalities to society and also internalities between the higher-level self that wanted to quit smoking and the primary self that could not quit an addictive substance. So, we taxed and regulated their use.

The fight regarding social media platforms has centered around antitrust and the sheer size of Big Tech companies. But these platforms are not so much a problem because they are big; they are big because they are a problem. Policy solutions need to actually address the main problem with the brain-cooking internet.

Wertheimer argues that the famous Section 230 of the Communications Decency Act – which protects companies from liability for content posted by users to their websites – needs to be reinterpreted to exclude “platforms that actively promote content using reinforcement learning-based recommendation algorithms.” I’m not exactly qualified to weigh in on the legal questions, but I find the logic of the argument persuasive in its broad strokes: The idea is that while message boards and blog comment sections – which host third-party speech but do nothing active to promote it – deserve Section 230 protection, platforms that use algorithmic recommendations (i.e. Facebook, Instagram, TikTok, and X.com) are not simply “passively hosting content but actively recommending” it, an act that should be considered “first-person speech” and therefore subject to liability claims.

But what really strikes me about Wertheimer’s piece is the public-health metaphor he uses to explain the particular harms of social-media platforms (and that, in turn, justify his remedy). The contemporary web is bad for us, the argument goes, in the way cigarettes are bad for us: Cheap, readily available, highly addictive, and making us incredibly sick at unbelievably high cost.

In this, Wertheimer is following a line of argument increasingly prominent among both pundits and politicians. In April, David Grimes made a less policy-focused version of the same argument in Scientific American; just last month, speaking with Ezra Klein on his podcast, Utah Governor Spencer Cox drew on both Big Tobacco and the opioid industry:

The social graphs that they use, which know us better than we know ourselves, that allow us, as you so eloquently stated and better than I could, to understand what makes us emotional and what keeps our eyeballs on there — so that when a kid is somehow, even if they don’t want to be, on TikTok at 3 a.m., just going from video to video, and they’ve given up their free will — that is unbelievably dangerous.

When tobacco companies addicted us, we figured out a way out of that. When opioid companies did that to us — we’re figuring our way out of that. And I’m just here to say that I believe these tech companies, with trillion-dollar market caps combined, are doing the same thing — the same thing that tobacco companies did, the same thing that the opioid companies did. And I think we have a moral responsibility to stand up, to hold them accountable and to take back our free will.

A few days after Wertheimer’s piece, Abundance author Derek Thompson posted a podcast interview of Massachusetts Representative Jake Auchincloss, who has proposed a digital value-added tax designed, like Wertheimer’s proposal around Section 230, to internalize the costs of social media. In his introduction, Thompson directly compared the digital V.A.T. to “sugar taxes and cigarette taxes”:

Massachusetts Congressman Jake Auchincloss has a proposal that he calls a digital sin tax, a way to push back on the business model of social media platforms that profit from hijacking our attention, especially our kids’ attention.

You’ve heard of sugar taxes and cigarette taxes. Well, this would be an attempt to price the harms of the attention economy and route the proceeds to public goods. I think it’s an interesting idea.

Comparing Big Social to Big Tobacco (or Big Opioid or Big Sugar) is in some sense a no-brainer, and certainly such analogies have been drawn many times over the last few decades. But the increasing popularity of this conceit is less a coincidence, I’d argue, than a function of the gathering power of a new wave of the now-decade-old “techlash.”


This burgeoning movement seeks to root criticism of (and response to) Big Tech in ideas of health (public, social, intellectual, and spiritual) and morality rather than size and power, positioning the rise of social media and the platform giants as something between a public-health scare and a spiritual threat, rather than (solely) a problem of political economy or market design. I see versions of this school of thought not just in speeches and op-eds from Auchincloss and Cox or blog posts from Thompson, but in Chris Hayes’ book The Siren’s Call and in the inescapable work of Jonathan Haidt. (You might broadly think of Hayes and Haidt as representing “left” and “right” tendencies of the broader movement.) Notably, all of the above-mentioned have found platforms on Klein’s podcast. Back in a January interview with Hayes, Klein offered up a kind of political vision or prediction rooted in this tendency:

I think that the next really successful Democrat, although it could be a Republican, is going to be oppositional to [the tech industry]. In the way that when Barack Obama ran in ’08 — and I really think people forget this part of his appeal — he ran against cable news, against 24-hour news cycles, against political consultants.

People didn’t like the structure and feeling of political attention then. And I don’t think there was anywhere near the level of disgust and concern and feeling that we were being corroded in our souls as there is now.

And I think that, at some point, you are going to see a candidate come up who is going to weaponize this feeling. They are going to run not against Facebook or Meta as a big company that needs to be broken up. They’re going to run against all of it — that society and modernity and politics shouldn’t feel like this.

And some of that will be banning phones in schools. It’ll have a dimension that is policy. But some of it is going to be absolutely radiating a disgust for what it is doing to us and to ourselves. I mean, your book has a lot of this in it. I think that political space is weirdly open, but it seems very clear to me somebody is going to grab it.

Thompson dubs this loose movement, or at least the version touted by Auchincloss, “touch-grass populism,” but I think this is wrong: The framework in question is distinctly not “populist” (unlike, say, the neo-Brandeisian “new antitrust” movement that has been a major focus of the “techlash” to date) so much as progressive in the original sense, a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change. At its broadest you could maybe call this budding program of restriction, restraint, and regulation “Platform Temperance,” and regard the scattered campaign to ban smartphones in schools as its first stirrings as a movement.

Why Platform Temperance now?

One way of thinking about the past half-decade or so of life on the internet is that we’ve all become test subjects in a grand experiment to see just how bad “good enough” can be. Since Elon Musk’s purchase of Twitter in 2022, and the subsequent industry-wide cutbacks to “trust and safety” teams meant to moderate content, most of the major social platforms have been flooded with fraud, bait, and spam--a process exponentially accelerated by the arrival of ChatGPT and its generative-A.I. peers. Take, e.g., these YouTube ads discovered by Bluesky user Ken Plume:

As I tweeted at the time, I’ve been covering tech companies for years and I still find myself taken aback at how completely they’ve abdicated any kind of oversight or moderation. Setting aside the increasingly toxic and directly corrosive “politics” now inescapable on social platforms, Facebook and Instagram and YouTube are utterly awash in depressing low-rent non-political slop, and no one who owns, runs, or even works at these platforms seems even to be embarrassed, let alone appalled.

But why would they be? People working at tech giants are watching the metrics and seeing that the depressing low-rent slop is getting engagement--probably even to a greater extent than whatever expensive, substantive, wholesome content it’s being placed next to on the feed. Their sense, backed up by unprecedentedly large data sets, is that slop of various kinds is what people want, because it’s what they click on, watch, and engage with. (I would even go so far as to suggest that some portion of Silicon Valley’s broad reactionary turn since 2020 can be chalked up to what I think of as “black-pilling via metrics”: The industry’s longtime condescension toward its users finally curdling into outright contempt.)

For much of the past decade, this revealed preference for fake news, engagement bait, sexualized content, and other types of feedslop has been blamed on “platform manipulation”: Bad, possibly foreign actors were “manipulating” the platforms, or, worse, the platforms themselves were “manipulating” their users, deploying “dopamine feedback loops” and “exploiting a vulnerability in human psychology,” as Sean Parker said back in 2017.

But these accounts have never been wholly satisfying: Too technical, too determinist, too reliant on the idea that there is “authentic” or “innocent” desire being “manipulated.” In some sense they don’t blame us, the users, enough, or assign us the kind of agency we know we have. Most of us are aware from everyday experience that even absent “manipulation” we desire all kinds of things that are bad for us, and that we give in to or refrain from temptation based on any number of factors.


This, I think, is the basic dynamic from which Platform Temperance evolves: a general, non-partisan, somewhat moralistic disgust at even the non-political outcomes of unregulated and unmoderated platforms; a dissatisfaction with both the “revealed preference” framework and the more rigidly behavioralist explanations of platform activity; and a sense of an accelerating downward spiral with the advent of generative A.I., which both further debases the platforms and provides a possibly even more tractable and dangerous user experience itself. In response, Platform Temperance offers a focus on health, social welfare, and the idea of discipline and restraint in the face of unmoderated consumption--that is, temperance.

The politics of platform temperance

There’s another aspect to mention here. Platform Temperance as it has evolved recently seems to be largely a school of thought from the (broad) center--adjacent, in its membership and institutional affiliations, to the “Abundance” faction of elite politics.

There’s an obvious cynical reading here: As Klein says, this “political space is weirdly open,” and “somebody is going to grab it.” Thompson’s podcast interview with Auchincloss is framed around the idea of Platform Temperance (or “Touch-Grass Populism”) as a “big idea” around which moderates can rally:

Winning elections has to be more about articulating a mission than arriving at the midpoint of every possible debate… The center has not been nearly as successful [as the Trumpian right] at pouring itself into truly bold visions of a better future. So today we’re going to talk about one very big idea from a politician who doesn’t maybe neatly fit into any particular Twitter tribe. […]

I think [the digital V.A.T. is] a somewhat problematic idea. But more than interesting and problematic, I think it’s a big idea. And I’m excited that someone in the political center is offering it.

It’s easy, given this kind of positioning, to read Platform Temperance as a new front in an ongoing factional war within the Democratic party--and, indeed, the tendency is often pointedly positioned against the more populist and anti-establishment New Antitrust movement that has been among the most prominent strains of the Techlash thus far.


But while the political valence is important for context, I don’t think Platform Temperance is wholly (or even mostly) a cynical Trojan Horse for intra-Democrat political battles. Versions of the ideas, remedies, and rhetoric emerging from the big tent that I’m calling Platform Temperance have been circulating in the Techlash for many years, usually from left-wing critics and academics. I’m thinking here of, e.g., James Bridle’s New Dark Age, Jenny Odell’s How to Do Nothing, and Richard Seymour’s The Twittering Machine, three excellent books that take psychosocial and psychoanalytic approaches to the problems posed by platforms. In the early days of this newsletter I myself made a kind of proto-Platform Temperance argument under the headline “Maybe we need a moral panic about Facebook”:

I’ve written a lot about Facebook and its peers over the last half decade or so, and I can say pretty conclusively that people are, in general, much less interested in structural accounts, no matter how rigorous and explanatory, than they are in writing about the affective dimension of tech power. “Structure” is abstract and difficult to touch; feelings are immediate and intuitive. Readers want help articulating how life on the platforms makes them feel, and why.

I think the change-vs.-continuity debate, focused as it is on Facebook’s role in the political and media spheres, can miss entirely that the affective critique of Facebook — Facebook is bad because it makes people feel bad — is the most powerful. That’s not to say it’s empirically rigorous, or that it’s wholly new, or that it’s not a way to blame the fundamental dynamic of capitalist alienation on a new technology. But if the point is to transform Facebook from something that works on us to something that works for us — and, barring that, to shut it down — it’s useful to remember what people hate about it.

I still believe--as Klein does--that there is a lot of political power in harnessing people’s deep ambivalence about (or outright disgust with) a platform-mediated social, cultural, and political life. And I often find myself eager to reach for public-health metaphors when discussing the experience of life under the thumb of the software industry, if not outright spiritual ones. (I don’t believe in the “soul,” but I am hard-pressed to think of a better way to succinctly describe the effects of, say, TikTok than to say it’s bad for your soul.)

But I also want to be conscious of how easy it is for this kind of rhetoric to slip into reactionary moral panics. As David Sessions recently wrote:

My working hypothesis is that neo-atomization discourses have formed a knot of moral panic about technology that is being used as a Trojan horse for social conservatism. In some quarters, like the drafters of Project 2025 or Michigan’s outrageous proposed ban on porn and VPNs—that revived social conservatism is simply the usual Christian-right suspects doing their thing. But even among Christian social conservatives, explicit appeals to God and natural order have been replaced by a pseudo-public health language of “epidemics,” “addiction,” and social breakdown. What’s perhaps even worse is the extent to which essentially the same ideas have conquered liberal discourse in places like the New York Times op-ed page; a liminal figure like Christine Emba stands as something of a bridge between the two.

Emba’s book Rethinking Sex is a perfect example of this: you don’t even have to say that porn is immoral and casual sex is damaging (though she does that, too); you can say technology is alienating; apps are disrupting our natural, healthy forms of relation and association, harming women and children; our sex and relationships are dehumanizing and socially corrosive because we basically treat them like ordering DoorDash. You can even throw in some superficial anti-capitalism to give it a progressive spin. I’m not sure people even realize how dominant these views have become among liberals and how, in the absence of true ethnographic curiosity about the forms of life the social web has created, authoritarian policy responses are starting to sound like bipartisan common sense. […]

Here are just a few reasons contemporary tech panic needs to be pressed further than it often is: social scientific research about the “loneliness epidemic” is in fact highly contested, as are linkages of depression and poor mental health to phones and social media. “Porn addiction” is a made-up, faux-medical rebrand of classic right-wing evangelical ideology, and the evidence that porn is all somehow “increasingly” violent and misogynistic is flimsy enough to basically constitute a folk myth, no matter how many times it’s repeated in the New York Times.

I think Platform Temperance as an affective and political framework has a lot to offer tech skeptics, critics, and “the left” more broadly. But like the progressive movements that emerged in the late 19th century, it can produce both grounded, persuasive, important liberal-technocratic visions, and paternalistic, pseudoscientific moral panics. We should be careful about which we’re pursuing.




Chimps Are Capable of Human-Like Rational Thought, Breakthrough Study Finds

1 Comment

Chimpanzees revise their beliefs if they encounter new information, a hallmark of rationality that was once assumed to be unique to humans, according to a study published on Thursday in Science.

Researchers working with chimpanzees at the Ngamba Island Chimpanzee Sanctuary in Uganda probed how the primates judged evidence using treats inside boxes, such as a “weak” clue—for example, the sound of a treat inside a shaken box—and a “strong” clue, such as a direct line of sight to the treat.

The chimpanzees were able to rationally evaluate forms of evidence and to change their existing beliefs if presented with more compelling clues. The results reveal that non-human animals can exhibit key aspects of rationality, some of which had never been directly tested before, shedding new light on the evolution of rational thought and critical thinking in humans and other intelligent animals.

“Rationality has been linked to this ability to think about evidence and revise your beliefs in light of evidence,” said co-author Jan Engelmann, associate professor at the department of psychology at the University of California, Berkeley, in a call with 404 Media. “That’s the real big picture perspective of this study.”

While it’s impossible to directly experience the perspective of a chimpanzee, Engelmann and his colleagues designed five controlled experiments for groups of anywhere from 15 to 23 chimpanzee participants. 

In the first and second experiments, the chimps received a weak clue and a strong clue for a food reward in a box. The chimpanzees consistently made their choices based on the stronger evidence, regardless of the sequence in which the clues were presented. In the third experiment, the chimps were shown an empty box in addition to the strong and weak clues. After this presentation, the box with the strong evidence was removed. In this experiment, the chimpanzees still largely chose the weak clue over the empty box. 

In the fourth experiment, chimpanzees were given a second “redundant” weak clue—for instance, the experimenter would shake a box twice. Then, they were given a new type of clue, like a second piece of food being dropped into a box in front of them. They were significantly more likely to change their beliefs if the clue provided fresh information, demonstrating an ability to distinguish between redundant and genuinely new evidence.

Finally, in the fifth experiment, the chimpanzees were presented with a so-called “defeater” that undermined the strong clue, such as a direct line of sight to a picture of food inside the box, or a shaken box containing a stone, not a real treat. The chimps were significantly more likely to revise their choice about the location of the food in the defeater experiments than in experiments with no defeater. This experiment showcased an ability to judge that evidence that initially seems strong can be weakened with new information.

“The most surprising result was, for sure, experiment five,” Engelmann said. “No one really believed that they would do it, for many different reasons.”

For one thing, he said, the methodology of the fifth experiment demanded a lot of attention and cognitive work from the chimpanzees, which they successfully performed. The result also challenges the assumption that complex language is required to update beliefs with new information. Despite lacking this linguistic ability, chimpanzees are somehow able to flexibly assign strength to different pieces of evidence.

Speaking from the perspective of the chimps, Engelmann outlined the responses to experiment five as: “I used to believe food was in there because I heard it in there, but now you showed me that there was a stone in there, so this defeats my evidence. Now I have to give up that belief.” 

“Even using language, it takes me ten seconds to explain it,” he continued. “The question is, how do they do it? It’s one of the trickiest questions, but also one of the most interesting ones. To put it succinctly, how to think without words?”

To home in on that mystery, Engelmann and his colleagues are currently repeating the experiment with different primates, including capuchins, baboons, rhesus macaques, and human toddlers and children. Eventually, similar experiments could be applied to other intelligent species, such as corvids or octopuses, which may yield new insights about the abundance and variability of rationality in non-human species.

“I think the really interesting ramification for human rationality is that so many people often think that only humans can reflect on evidence,” Engelmann said. “But our results obviously show that this is not necessarily the case. So the question is, what's special about human rationality then?”

Engelmann and his colleagues hypothesize that humans differ in the social dimensions of our rational thought; we are able to collectively evaluate evidence not only with our contemporaries, but by consulting the work of thinkers who may have lived thousands of years ago. Of course, humans also often refuse to update beliefs in light of new evidence, which is known as “belief entrenchment” or “belief perseveration” (many such cases). These complicated nuances add to the challenge of unraveling the evolutionary underpinnings of rationality.

That said, one thing is clear: many non-human animals exist somewhere on the gradient of rational thought. In light of the recent passing of Jane Goodall, the famed primatologist who popularized the incredible capacities of chimpanzees, the new study carries on a tradition of showing that these primates, our closest living relatives, share some degree of our ability to think and act in rational ways.

Goodall “was the first Western scientist to observe tool use in chimpanzees and really change our beliefs about what makes humans unique,” Engelmann said. “We're definitely adding to this puzzle by showing that rationality, which has so long been considered unique to humans, is at least in some forms present in non-human animals.”


