Back in the 1990s, in the first flowering of the World Wide Web, the Silicon Valley guys were way into their manifestos. “A Declaration of the Independence of Cyberspace”! The “Cluetrain Manifesto!” [EFF; Cluetrain Manifesto]
The manifestos were feel-good and positive — until you realised the guys writing them were Silicon Valley libertarians. The “Declaration of the Independence of Cyberspace” was written at the World Economic Forum in Davos.
They demanded true freedom … for their money. The manifestos were marketing pitches.
If you want to get a feel for the times, read “The Californian Ideology” by Richard Barbrook and Andy Cameron. It’s a great essay from 1995 that nails these guys precisely, and it still nails the same guys in the 2020s. [essay, PDF]
The Resonant Computing Manifesto is another ’90s Internet manifesto, thirty years after the fact. It sets out the problem — huge monopolist rentier platforms that damage society as they nickel-and-dime us just trying to live our lives. [Resonant Computing]
Resonant Computing presents itself as a manifesto for a human-centred internet. By the fifth paragraph, we see what this document is actually for:
With the emergence of artificial intelligence, we stand at a crossroads.
Oh dear.
So. What can AI do about our terrible situation?
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human — at scale … Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.
Yep, magic AI coding that understands “JUST DO WHAT I MEAN” sure would be great! But it doesn’t exist. The AI marketers lie that it does exist, and it just doesn’t.
And “adaptively shapes itself” isn’t a thing. Chatbots aren’t adaptive! That’s why they keep having to retrain and release new ones. Maybe “maladaptively shapes itself.”
Anyone signing or even talking up the Resonant Computing Manifesto: ask them to hand you the software that “can now” do these things. No, it won’t arrive with next quarter’s model either.
Also, if you wanted to just run the chatbot at home — cranking out a few tokens a second on an expensive computer — the AI vendors themselves just sent the price of the hardware through the roof. Not every household is going to have one of these.
There’s a pile of nice principles in the manifesto — “private, dedicated, plural, adaptable, prosocial” — and AI chatbots satisfy none of these.
If a core part of your plan requires actual magic, it’s not a plan yet.
The Manifesto promises to fix everything that’s wrong on the internet right now. But look at the authors and the signers, and you’ll see the same guys who caused the present problems. These guys got rich on the Torment Nexus and they’re now claiming they can fix it.
I’m not sanguine that the venture capitalists signing this manifesto, who were pushing Web3 and NFTs and are now pushing AI, are going to be helpful here. Or that they’re joining in for any reason that doesn’t continue to centralise the money upward and into their pockets.
None of this manifesto addresses power or accountability. Their whole pitch is: the right computer program will fix the social structural problems! They absolutely do not want to change any of how the world works.
So I’ve got an absolute minimum plan. And it isn’t “write a new computer program.” My plan’s not sufficient, but you will need all of these:
Actual regulations on the companies’ behaviour. With teeth.
Actual consumer protections. With teeth.
Antitrust enforcement, and breaking up the monstrously huge companies that these guys just said were the problem.
Much bigger fines for violations. Percentage of global revenue, European Union style!
That’s the extremely minimal solution, and it doesn’t even feature any guillotines. But your fundamental problem is politics. Not the lack of a particular magical computer program.
Building a bigger chatbot is not the answer to the problem. The guys who built the chatbots and wrote this manifesto are the problem. It’s you guys! It’s you!
The Resonant Computing Manifesto is the usual AI marketing from the usual AI suspects. These guys got ChatGPT to summarise one 1990s issue of Wired, and it oneshotted them.
Spend enough time in the world of corporate sustainability, and you’ll hear a reproachful complaint: it’s unfair to criticise companies that are saying the right things on climate. If you let the perfect be the enemy of the good and punish the few trying to make a difference they’ll get spooked and not bother at all.
I don’t really buy that. Faking climate action can often be more harmful than doing nothing. For-profit companies gain financially by lying about the harm they’re doing. But even beyond that, fabricated cleanliness instils complacency and dissuades us from asking deeper systemic questions. The integrity of claiming to be doing good should be protected, and doing that involves critiquing those making the loudest, most strident claims of goodness.
I wrote about Microsoft as a key recent example: a company with rapidly growing emissions but a persistent claim to climate hero status. I found that the harm they do far outweighs the good. And I think even non-profit organisations create permission space for the broader trend of generative digital bloat currently burning through my digital world like a raging bushfire. I want to tell you about a recent example of this problem.
Why do you need to kill your search engine?
Ecosia is an alternative sustainability-themed search engine that operates as a not-for-profit. The idea is that for every search you perform, the company plants a tree (it has planted millions). It has been around for a while, and always felt pretty harmless. There is nothing at all to dislike about a search engine that effectively functions as a fundraising tool for tree-planting and clean energy projects, and it seems to have actively funded a pretty stunning number of good, environmentally and socially-aware climate projects.
Their latest change is a step away from that good track record. That is because they’ve followed the big-tech trend by installing a layer of automatic plagiarised text generation on top of search results.
Instead of providing a link to a blog post I have written, Ecosia will call on a software service that has ingested all of my written work, and that service will re-word and re-publish my work, often botching and mutating it thanks to its clumsy probabilistic word-guessing design. This isn’t “search” anymore: it’s a roadblock that stops you searching and funnels you away from engaging with human-created content.
As a person-who-writes-internet-content, you can guess how deep this burns. It isn’t just traditional plagiarism: it is unfeeling, automated, non-consensual and, worst of all, wrong enough to matter but right-sounding enough to be convincing. This isn’t an evolution of search: it is the active murder of search.
To demonstrate this point, I entered a question a normal human could easily answer with traditional methods: which books has the climate writer Ketan Joshi written? Ecosia immediately blended me with another brown guy with the same name:
Even if you use ‘traditional’ search, you still get an AI overview at the top of the page that blends me with travel-writer Ketan Joshi. You can turn it off, but it will always appear by default. A query I tried last week was worse: it got the title of my book right but manufactured the exact opposite meaning in the subtitle (in addition to attributing a bunch of books I didn’t write to me):
And when you ask it for quotes from the climate analyst Ketan Joshi, it either fabricates sentences and puts them in quotation marks, or pulls real quotes written as comments, or it even attributes lines I was criticising to me:
Quote 1: it’s a COMMENT on my @techwontsave.us chat posted on YouTube!
Quote 2: from my IEA AI report thread; not said in the post or the quoted images *at all*
Quote 3: a quote from a video FROM A DC DEVELOPER, not from me!! I posted the vid of him!!
Quote 4: a headline, not a quote?
Quote 5: also a headline…
I mention this because it’s the important half of the sustainability question: why do it in the first place? “Everyone else is doing it” is not only not a justification, it should in fact be the thing that triggers more hesitation and scrutiny of what’s being implemented.
I assume the actual reason is the same as for most other search engines: users get the empty-calorie false satisfaction of a more easily obtained answer. By definition, people searching for an answer don’t know the answer. That means they can’t detect how egregiously false the result is (particularly when the text engine is tuned to sound plausible and authoritative, and never to output “I don’t know”).
It’s worth mentioning here that iFixit, an organisation that has been fighting against device waste through advocating for repair rights, also recently implemented a “chatbot” function on top of their many user-created repair guides. The verdict: “Having tried it, I would definitely not trust iFixit’s FixBot to guide amateurs like me through a pricey or dangerous repair”. On top of that: they too do not disclose any information about energy use compared to the old way of doing things. What is the point?
Although Ecosia is a not-for-profit, they seem to be re-enacting the for-profit erosion of human knowledge and the active murder of the open web currently being orchestrated by Google and the other tech giants. But, it’s green, right?
Green slop > Grey slop
As an extension of their green image, they’ve also presented this text-generating tool as the “world’s greenest AI”:
As I bet you already know, the mass generation of text, images and videos using machine learning tools trained on repositories of all human-created digital information consumes a lot of energy, and in doing so materially boosts the burning of fossil fuels, doing damage to our atmospheric safety.
Ecosia isn’t doing this for profit, but the underlying idea of their claim – that “AI” can be green – is important for the profit-seeking tech industry, striking at a fast-growing tension between the goals of companies and the biophysical vulnerabilities of living things.
So, let’s check some of their claims.
"Reducing AI’s footprint isn’t enough — we’re here to make a positive impact. That’s why we generate more renewable energy than our AI features use, from 100% clean sources like solar and wind"
You need two numbers to check this: how much renewable energy they generate, and how much energy their “AI” features use. I asked Ecosia CEO Christian Kroll about both, and he referred me to this post, which very much does not contain either.
Honestly, this claim doesn’t even sound that implausible, which makes it even more odd that the numbers aren’t being shared. Sharing these types of figures openly would actually be a genuinely useful exercise, and give us an open, transparent insight into their energy consumption both pre and post text-generation. Despite that: nothing.
More problematic is the idea that Ecosia is “carbon negative“1 thanks to causing emissions while also funding renewable energy. This is the logic of carbon offsetting, and it fundamentally doesn’t make real, physical sense: no matter how much renewable energy you fund, emissions are emissions and they still cause climate change. We solve this problem when we stop emitting.
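The offsetting arithmetic can be made concrete. A minimal sketch with made-up illustrative figures (not Ecosia’s numbers, which aren’t published):

```python
# Offset accounting vs. what the atmosphere sees.
# All figures are illustrative assumptions, not disclosed Ecosia data.
emitted = 1_000            # tonnes of CO2 actually released by operations
avoided_elsewhere = 1_500  # tonnes "avoided" by funding renewable projects

# The "carbon negative" claim subtracts avoided emissions from caused ones...
ledger_balance = emitted - avoided_elsewhere

# ...but the atmosphere only registers what was actually released.
atmospheric_addition = emitted

print(ledger_balance)        # -500: "carbon negative" on paper
print(atmospheric_addition)  # 1000: the emissions still happened
```

The ledger can go as negative as you like; the physical addition to the atmosphere never drops below what you actually emitted.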
"We use OpenAI’s GPT-4.1 mini for the best balance of performance and efficiency. It uses far less energy than larger models, while still delivering the answers you’re looking for"
Not entirely sure this needs saying, but OpenAI is comfortably an industry leader both in lacking transparency and in actively, directly incentivising the burning of fossil fuels to power their energy-hungry data centres. I cannot think of a worse choice when it comes to trying to make a chatbot “green”.
Close to 1 full goddamn gigawatt of gas (open-cycle, so the most inefficient kind) for the "Project Jupiter" site in New Mexico. This whole article is a stunning illustration of how data centres are incentivising new fossil infrastructure: eastdaley.com/daley-note/p…
Notably, they also seem to exclude the training costs, and any analysis of which servers are doing the training or serving the results (“inference”), and what grids those servers are operating on.
"We use tools like the AI Energy Score and Ecologits to select efficient models and track their energy use — keeping our process transparent, and ourselves accountable"
"As a not-for-profit company, we can afford to do things differently. AI Search uses smaller, more efficient models"
As I mentioned, Ecosia don’t disclose energy use: not for a single query, and not for their entire organisation (particularly before and after the implementation of incorrect text generation replacing links to the open web). It would be exceedingly easy for them to implement a live energy usage and estimated emissions score for interactions with text generation and compare them to estimates of traditional search. A not-for-profit is better-placed than most to do something like that. But it really seems like they want the accolades of “green AI” without having to disclose a single actual number.
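For a sense of how easy that disclosure would be, the whole exercise is per-query arithmetic. A minimal sketch where every number is an assumption on my part (rough third-party estimates for a traditional search and a small-model LLM response; the real figures are exactly what Ecosia hasn’t shared):

```python
# Rough per-query energy comparison: classic search vs. LLM-assisted search.
# ALL figures below are illustrative assumptions, not Ecosia disclosures.
GRID_KG_CO2_PER_KWH = 0.35   # assumed grid emissions intensity

def query_footprint(energy_wh_per_query: float, queries: int) -> dict:
    """Return total energy (kWh) and estimated emissions (kg CO2)."""
    kwh = energy_wh_per_query * queries / 1000
    return {"kwh": kwh, "kg_co2": kwh * GRID_KG_CO2_PER_KWH}

# Assumed: ~0.3 Wh for a classic search, ~2.9 Wh extra for an LLM overview.
classic = query_footprint(0.3, queries=1_000_000)
with_llm = query_footprint(0.3 + 2.9, queries=1_000_000)

print(f"classic search:    {classic['kwh']:.0f} kWh, {classic['kg_co2']:.0f} kg CO2")
print(f"with LLM overview: {with_llm['kwh']:.0f} kWh, {with_llm['kg_co2']:.0f} kg CO2")
print(f"multiplier: {with_llm['kwh'] / classic['kwh']:.1f}x")
```

A live version of this, with real measured inputs, is the disclosure being asked for.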
"Prefer the classic experience? You can turn Overviews off with a single click"
"We avoid energy-heavy features like video generation altogether"
Nine months ago, Ecosia posted a clip of prominent AI and sustainability expert Dr Sasha Luccioni highlighting that one of the reasons she uses Ecosia is that they don’t force the use of generative systems on search users. Unlike Google and Microsoft, Ecosia does allow users to ‘turn off’ generative summaries.
But I think this is important: while they do offer the option to ‘turn off’ the activation of text generation, it is on by default. The site’s design uses ‘dark patterns’ to usher you into replacing web search with inaccurate text plagiarism by default – hitting ‘enter’ takes you to a search, but the only button on the bar is to activate the chatbot, resulting in a confusing interface at best:
As I wrote in my post about Google, there has been a very noticeable trend towards actively pressuring and mandating chatbot use. Ecosia seem to sit in the middle: even offering an option to turn it off is radical, but having it on and prominent by default will do plenty of harm that could be avoided through much more honest communication.
Hedonistic sustainability
A few months ago, I visited my brother in Denmark. He works in an oddly tall office building in Copenhagen, and from it, you can see the gorgeous near-shore collectively-owned wind turbines. Directly in the foreground, obscuring the view of at least one of them, was this thing:
That is a giant facility that generates electricity by burning household waste.
When you burn a plant that sucked carbon from the air as it grew you’re hypothetically just returning that carbon to the atmosphere: ‘carbon neutral’. But when you burn the plastic packaging it came in, that plastic is made from carbon extracted from deep underground, which you’re then transferring to the sky. That is carbon pollution, and ‘waste to energy’ (WTE) plants burning plastic are becoming a shocking contributor to rising global temperatures.
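The biogenic/fossil split described above is simple accounting: only the fossil-derived fraction of the waste adds net carbon to the sky. A minimal sketch with illustrative tonnages and an assumed emission factor (not Copenhill data):

```python
# Net-additional CO2 from burning mixed household waste.
# Biogenic carbon (food, paper, wood) is conventionally counted as neutral:
# the plant recently pulled it from the air. Fossil carbon (plastics) is
# new carbon moved from underground to the sky.
# The emission factor and tonnages are illustrative assumptions, not plant data.
CO2_PER_TONNE_PLASTIC = 2.5  # tonnes CO2 per tonne of plastic burned (assumed)

def fossil_co2(waste_tonnes: float, plastic_fraction: float) -> float:
    """Net-additional CO2 (tonnes) from the fossil fraction of the waste."""
    return waste_tonnes * plastic_fraction * CO2_PER_TONNE_PLASTIC

local = fossil_co2(400_000, plastic_fraction=0.10)
imported = fossil_co2(400_000, plastic_fraction=0.20)  # imported mix: more plastic

print(f"local-mix waste:    {local:,.0f} t CO2")
print(f"imported-mix waste: {imported:,.0f} t CO2")
```

Double the plastic share, double the net-additional CO2 — which is the problem with importing plastic-heavy waste to keep an oversized plant fed.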
“Copenhill” was meant to be different. It was meant to puff out its smoke in ‘smoke rings’ that would remind Copenhagen residents to reduce waste (“those released at night will be illuminated by lasers connected to a heat-sensitive tracking system”). That never happened, and importantly, neither did the planned carbon capture and storage project. That failure in particular directly contributed to Copenhagen missing its 2025 net zero pledges, despite the city making massive progress thanks to wind power, reduced consumption and electrification.
The rate at which locals would recycle plastic instead of discarding it was underestimated, meaning the oversized plant now has to import waste from other regions to burn in order to make financial sense. That imported waste has a higher share of plastic and therefore higher emissions2.
You can still ski down Copenhill on a plastic surface. When I visited, the elevator still told me of the impending CCS facility. The bar at the top is sponsored by Coca-Cola, one of the world’s worst sources of plastic pollution.
Copenhill was developed by Bjarke Ingels, of architecture firm BIG. Ingels coined the phrase “hedonistic sustainability”, “to demonstrate how the ‘seemingly contradictory’ ideas of sustainable development and the pursuit of pleasure can, and indeed should, co-exist”. In one 2016 video, Ingels specifically cites the Tesla Model S as an example of where the clean alternative can be better, thanks specifically to its greater acceleration.
"It would be great for the owner of the power plant because he wouldn't have to or she wouldn't have to make expensive ad campaigns or print pamphlets where people could read that her technology was clean because when you go and see it it's like 'wow what is this? This is like a completely different kind of power plant'"
Honestly: this isn’t ‘hedonistic sustainability’. It’s not even really hedonism: the ski slope is pretty mid. I didn’t get the impression from my conversations with locals that it’s a much-loved or heavily-used feature of Copenhagen’s cityscape. The guy working in the bar at the top looked beyond bored.
Being able to hop on a freely available share e-bike outside my brother’s office and cycle on absurdly safe roads en-route to Copenhill? That really was ‘pure pleasure’. The city-choking and pedestrian-killing over-acceleration of heavy Tesla EVs gets cited as the prime case of ‘hedonistic sustainability’ because I think these folks don’t have a good idea of what either word really means.
Ecosia is embarking on a similar process: offering up a vision of unethical, bloated, oversized and financially unstable overconsumption as compatible with sustainability. If they had simply said “we’re not replacing search with an interface that spits out inaccurate estimations of human content because it’s not worth the climate cost”, and instead invested more time and money into making search as useful and effective as it used to be, it would’ve been genuinely worth celebrating. And outside the capitalist corporate sustainability world, there is a huge, hungry audience for acts of resistance against the corrosive, life-worsening trends being enforced by big tech without any of us asking for it.
You can’t be ‘sustainable’ without asking whether something even needs to be done. This is true for replacing search gateways to the human-created internet with energy-hogging content plagiarism roadblocks. It’s true for massive waste burning facilities, or oversized road tank electric vehicles. These things are not really “hedonistic” or “pleasurable”. They’re anti-social, soul-crushing and demoralising, and they can only be presented as “green” through the lack of any real disclosures.
Ecosia also claims that their actions are ‘removing’ carbon from the air, which is not the case. Their impact is an addition of greenhouse gases to the atmosphere, no matter how many renewable projects they fund. You cannot undo the damage of pollution.
I’ve written far more than I should have had to on the performative inclusion of “stakeholders” in post-2016 tech policymaking. This meant being invited into meetings with government and decision-makers, being told to “assume positive intent” as all manipulative types like to insist, only to find that your presence was strictly performative. You were either there so that they could tick the box of saying they had engaged with you, before proceeding to do what they were going to do anyway, or you were there so that they could spin your presence as an endorsement of what they were going to do anyway.
Turns out it wasn’t just me – the behaviour was so widespread that some academics have now done a study into how UKGov wields “stakeholder” engagement.
They conclude:
These findings show that the use of stakeholder tends to performatively entrench the existing power of “industry stakeholders” or nameless but clearly already engaged and empowered “key stakeholders”. Meanwhile, they also construct a false sense of inclusion through the non-performative use of generic or “other stakeholders”. This creates significant risk of a veil of accountability, and raises significant questions over established processes such as consultation. When it is unclear who is influencing policy, whose voices and interests are being represented, then the indicators from specific uses suggest that the stakeholder becomes a foil for amplifying historical power and privilege, often on political and/or economic lines, and in doing so excludes the needs of those most affected by technologies who already suffer a lack of agency in how data, AI, platforms and other areas are used to shape their lives.