Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY
2,593 stories · 140 followers

NEW REPORT – The AI climate hoax


Read the full report (PDF)

Read the press release (PDF)

Access the raw data, chart files and other documents


I guarantee you’ve heard the harms of data centre expansion justified on the grounds that “AI” will ‘solve climate change’. These range from sci-fi claims of superintelligence through to detailed reports stacked with hundreds of examples of ‘AI for good’ helping energy, transport and industry cut emissions.

In partnership with the good people at the organisations shown below, I’ve created a new report that, for the first time, interrogates both the logic and the evidence for this claim.

We found that most of the ‘benefit’ tends to relate to older, smaller and leaner forms of machine learning, what has been called ‘traditional AI’, while we also know that most of the new harm likely stems from the over-deployment of consumer generative AI.

This distracts from the decisions made by companies that result in their own fossil fuel use rising at an unprecedented rate.

We also found that the evidence presented for examples of climate benefit, regardless of AI type, tends to be weak, whether it comes from companies or from organisations like the IEA. The potential benefits are overstated, in surprising and significant ways.

What we see is companies veering wildly away from their climate targets. In most cases, this is true whether or not you use their ‘adjusted’ metrics that incorporate renewable energy offsets and deals.

That is a choice, and this focus on ‘AI for climate’ is a distraction from the decision to worsen the pollution of data centres through an unprecedented explosion of digital bloat.

Video and social

3 minute overview: AI vague-washing (watch on YouTube)

3 minute overview: Weak evidence for benefit, strong evidence of harm (watch on YouTube)

Supported by

Very genuine and warm thanks to the good people at the following organisations, who supported this work and continue to push for accountability from polluters:

About Beyond Fossil Fuels
Beyond Fossil Fuels is a civil society network committed to ensuring a just and rapid transition to a fossil-free, renewables-based future. Building upon the Europe Beyond Coal campaign, its goal is for Europe to be coal-free by 2030 and phase out fossil gas from the power sector by 2035. A clean and flexible energy system will deliver lasting benefits for people, the climate and the broader economy. Beyond Fossil Fuels is a non-profit organisation with an office in Berlin, with staff spread across Europe.

http://www.beyondfossilfuels.org

About Stand.earth
Stand.earth is a global advocacy organization delivering large-scale change for our planet and its people by interrupting the systems that create environmental and climate crises. Its mission is to challenge corporations and governments to treat people and the environment with respect. Stand’s worldwide community of more than one million members advocates for a climate-safe, equitable future, where environmental and climate justice policies uphold the dignity of people everywhere – at the scale our world requires.
https://stand.earth

About Climate Action Against Disinformation
Climate Action Against Disinformation is a global coalition of over 120 leading climate and anti-disinformation organisations demanding robust, coordinated and proactive strategies to deal with the scale of the threat of climate misinformation and disinformation.
https://caad.info

About Friends of the Earth U.S.
Friends of the Earth U.S. works to reduce the spread of disinformation that potentially affects all of our campaigns. As technology and media companies consolidate their power, our fundamental ability to campaign on any issue is threatened, as corporate polluters gain more control over the basic communications systems that are needed for social change and democracy itself.
https://foe.org/projects/disinformation/

About Green Screen Coalition
The Green Screen Climate Justice and Digital Rights Coalition is a group of funders and practitioners looking to build bridges across the digital rights and climate justice movements. The aim of the coalition is to be a catalyst in making visible the climate implications of technology by supporting emerging on-the-ground work, building networks, and embedding the issue as an area within philanthropy.
https://greenscreen.network

About Green Web Foundation
Green Web Foundation is a non-profit organisation working towards a fossil-free internet by 2030 by reducing absolute emissions and phasing out fossil fuels in data centers – fast, fairly and forever. The foundation maintains the world’s largest open dataset of websites that run on green energy and builds open source tools for measuring and mitigating emissions from digital services.
https://greenweb.org/





tante (Berlin/Germany), 5 hours ago:
“We also found that the evidence presented for examples of climate benefit, regardless of AI type, tends to be weak, whether it comes from companies or from organisations like the IEA. The potential benefits are overstated, in surprising and significant ways.”

Story About AI (Being Mean) Gets Pulled Because Journalist Used AI (That Made Mistakes)


On Friday the website Ars Technica published a story about Scott Shambaugh, a coder who made headlines in the tech world last week with his story about AI agents, and in particular one that he claims had written what he called a 'hit piece' on him and published it for the world to see.

Shambaugh’s story is as interesting as it is horrifying. Agents are just the latest front in tech’s war on our collective sanity: a type of AI that’s essentially a glorified autocorrect which, in this case, has been given a uniform and sent onto the internet to try to do human things like propose code changes and then, when humans like Shambaugh decline them, write pissy blogs complaining about it.

Among other sites, Ars Technica covered this last week with a news story that remarkably appeared to have some AI-created filler of its own, citing quotes from Shambaugh that never appeared in the very blog post Ars was linking to.

Not long after readers--including Ars' own community--began noticing this, the story was pulled (though you can still read the archived original here). It's general journalistic practice that published stories which contain inaccuracies are edited and updated, not deleted entirely.

Ars has since published an editorial statement, bylined by EiC Ken Fisher, addressing the story, its deletion and the outlet's policies on AI:

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

By citing Ars' clear rules, Fisher's statement points the finger for this disaster at the two authors bylined in the piece, with one (Benj Edwards) having since posted a statement of his own, assuming full responsibility for the incident and saying the other (Kyle Orland) had 'no role in this error':

I have been sick with COVID all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.

Here’s what happened: I was incorporating information from Shambaugh’s new blog post into an existing draft from Thursday.

During the process, I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline.

When the tool refused to process the post due to content policy restrictions (Shambaugh’s post described harassment), I pasted the text into ChatGPT to understand why.

I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words.

Being sick and rushing to finish, I failed to verify the quotes in my outline notes against the original blog source before including them in my draft.

Kyle Orland had no role in this error. He trusted me to provide accurate quotes, and I failed him.

The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.

I sincerely apologize to Scott Shambaugh for misrepresenting his words. I take full responsibility. The irony of an AI reporter being tripped up by AI hallucination is not lost on me. I take accuracy in my work very seriously and this is a painful failure on my part.

When I realized what had happened, I asked my boss to pull the piece because I was too sick to fix it on Friday. There was nothing nefarious at work, just a terrible judgement call which was no one’s fault but my own.

Look, I understand mistakes can happen when you're sick, but Edwards--who it should be noted is Ars' 'Senior AI Reporter'--has used AI not once but twice here, and in doing so has caused a huge amount of reputational damage for himself and his employer in the process. And it's not like he used it to comb through 800 pages of impenetrable legal documents, either; Shambaugh's original blog was only a couple of pages long (he's since written a follow-up), and written in plain English, making the AI's hallucinations (and Edwards' use of it) even more damning.

It's disappointing someone working in this space felt the need to have to use this garbage, particularly when it violates their employer's own policies. As this whole mess has shown, the tech simply cannot do the most basic things the people selling it keep claiming it can. Citing quotes from a blog for your own story is bread-and-butter stuff for a journalist, it's what the job is, and seeing this busted tech worming its way into a profession that should be its sworn enemy– and fucking the whole thing up in the process--is just a huge bummer.



tante (Berlin/Germany), 1 day ago:
“It’s disappointing someone working in this space felt the need to use this garbage, particularly when it violates their employer’s own policies. As this whole mess has shown, the tech simply cannot do the most basic things the people selling it keep claiming it can.”

Diffusion of Responsibility


One of the features of “AI” is the diffusion of responsibility: “AI” systems are being put into all kinds of processes, and when they fuck up (and they always fuck up) it was just the “AI”, or “someone should have checked things”. “AI” companies want to sell machines that solve every issue but give no warranties and take no responsibility, and the same dynamic often extends to organizations using “AI”: you get the support chatbot to promise you a full refund, and when you claim it you get a half-assed “oh, but that was just the bot, those spout bullshit all the time”. That’s where human-in-the-loop setups come into play: what if the company could just hire one sucker to “check” all the “AI” slop, so that when things fall apart that one person has to take the blame? Fun!

(Sidenote: It should be the law that when you offer or run an “AI” you are 100% liable for everything it does. Sure, that would kill the whole industry but who gives a shit?)

But let’s get to the actual topic here. ClawdBot, then Moltbot, now OpenClaw is all the rage these days. It promises to be (quoting the website):

“The AI that actually does things.

Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.”

It has its own “social network” called Moltbook that “AI” influencers treat as proof of actual intelligence emerging in LLM-based systems, proof that we should take them seriously and whatnot. Sure, it looks like it’s mostly humans posting or directly triggering posts, but that does not change anything, right?

OpenClaw is still very popular among a group of men[1] who want to use it to run their life, and sure. As long as you know very little about IT, security or risk, that surely is a good idea. But everybody needs a dumb hobby.

OpenClaw was vibecoded by Peter Steinberger, an Austrian software developer. He’s very proud of the vibecoding part, repeatedly posting about how he happily releases code he has never seen or checked.[2]

At the end of January, Steinberger posted something on the other fascist social network besides truth.social:

The amount of crap I get for putting out a hobby project for free is quite something.

People treat this like a multi-million dollar business. Security researchers demanding a bounty.
Heck, I can barely buy a Mac Mini from the Sponsors.

It's supposed to inspire people. And I'm glad it does.

And yes, most non-techies should not install this.
It's not finished, I know about the sharp edges.
Heck, it's not even 3 months old.
And despite rumors otherwise, I sometimes sleep.

Because people had been criticizing him for releasing OpenClaw (back then still called Moltbot): for releasing unchecked code and giving it to people to run. For allowing that code to interface with all kinds of relevant external services, making purchases for people, posting as them, deleting their files and whatever. You know. Basic responsibility shit.

But OpenClaw is just a small beans hobby pwoject. Peter just had some fun wif da computer. You cannot criticize him, because he was just trying to inspire. For free! How dare people expect even the baseline of responsibility. HOW DARE THEY!

So I had a quick look at the OpenClaw website. You know, to look at this hobby project and be inspired.

[Screenshot of the OpenClaw website: it looks very professional, claims that OpenClaw is the “AI” that can actually do stuff, and leads directly with a “how to run” code snippet without any warnings or anything.]

Hobby project just to inspire people. Sure thing.

OpenClaw presents itself like a mature and usable product, with testimonials and a convenient “curl | bash” install command: that’s how you know it is quality software. (For the non-software people: curl $URL | bash just downloads some code from the internet and runs it. No checks, no rollbacks. It can fuck up your whole home directory for shits and giggles. Upload all your private keys and files to a server somewhere. Anything you could do, it can do.)
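If you absolutely must run something distributed this way, there is at least a more careful pattern. A minimal sketch (the URL is a placeholder, not OpenClaw’s actual installer):

# Download the installer to a file instead of piping it straight into bash.
curl -fsSL https://example.com/install.sh -o install.sh

# Actually read what it is about to do to your machine.
less install.sh

# Only then run it, ideally as an unprivileged user.
bash install.sh

None of this makes unreviewed, machine-generated code safe to run; it just means you saw the trap before stepping into it.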

And here we see another kind of diffusion of responsibility that the “AI” wave is creating: people just releasing whatever software they generated into the wild for others to run. Often with huge promises: OpenClaw “ACTUALLY DOES THINGS”, as the website says. No “this is experimental”, no “this is potentially dangerous”, no “this code has not been checked by anyone and running it is the software equivalent of digging a half-eaten kebab out of the trash can and eating it”.

Steinberger did not just generate some shit code for himself to do whatever. No: he needed to release it. And not for “inspiration” but for people to run it. He’s currently doing the “right-wing tech podcast tour”, going on Lex Fridman’s horror show and talking to startups and whatnot. He wants something, and it’s not really to inspire: it’s to be “the inventor” of OpenClaw. He wants the reputation boost you get from running a popular open source project whose name people might actually know. He wants to be important.

What he does not want is the work. The work that made “having a well-known open source project” mean something. The reputation that people got from being good stewards of responsible projects, projects that made sure people’s digital existence was as safe as possible and that their software had as few security issues as possible.

I was wondering why this made me so fucking furious, and then I remembered that I had actually talked about this before: in my FluCon talk last year. Because while formally one could argue that Steinberger did create something open source (you can download whatever code his chatbots generated, and it has some open license [which might be irrelevant, because LLM-generated code is not under copyright]), that cannot, no, must not, be enough. It just shows how “having some code and an open license” is not a sufficient set of requirements for building a sustainable, resilient digital landscape for everyone.

In this case, the “uwu little open source pwoject” framing is just used by Steinberger to absolve himself from any responsibility for the thing he explicitly put out into the world for people to use. And we have been accepting this kind of behavior for way too long.

I don’t want to focus on Steinberger too much. He’s a random tech bro who wants to impress his other Elon Musk wannabe friends. Fine. But this is a pattern that the whole “AI” acceptance movement is establishing, one that preys on our experience of being able to rely on open source projects that take their product, their work and their users’ safety seriously, and that invalidates decades of hard work establishing that kind of trust.

Because up to now, trusting open source was – heuristically – not a bad idea. Bigger, more mature projects especially are very professional and take great care of their users, while smaller, younger projects talk explicitly about being early-stage software with flaws and warn against certain use cases.

But now we have “AI” and everyone can generate some code. That might work. Or it might mine some crypto or give your laptop an STI. Decades of collective work proving that “open source” is not less but at least as secure as commercial offerings are now slowly going down the drain. Because a bunch of men – and it is always all men – just don’t want to be responsible for their actions. Which is fine if you are 5. But after 18 it gets old really fucking fast.

We deserve good software in a world where participation is often connected to having access to a computer, to software, etc. We should push towards more reliable software, more secure software, software that is accessible, that protects people against misuse and allows them to be as safe as possible in doing what they want to do.

What do we get? Slop. Slop generated by guys who – when called out for their irresponsible behavior – just start crying about how they only wanted to “share” or “inspire” or “educate” while handing out running chainsaws to kids.

And that is what makes me fucking furious. Not just these dudes being spineless, but the disrespect to those who have run serious projects for decades to build a more humane stack.

And it reminds me that “Open Source” is not enough. Open source code can still be harmful to you and your digital existence, can put you in danger without you realizing it. We need something better. Something more.

We need to be willing to take responsibility for and care of one another. “AI” generated software is the opposite of that.

Coda: Never forget. Nothing only men like is cool.

  1. It’s 99% men. Look at any picture from the OpenClaw meetups. It’s more dudes than in an incel forum. Well. Let’s not speculate about the overlap here.
  2. There is a weird parallel between CrossFit people and vibecoders: both cannot just do their thing but need to make it their personality and tell you about it constantly. As if anyone cares.
tante (Berlin/Germany), 3 days ago:
“Decades of collective work proving that “open source” is not less but at least as secure as commercial offerings are now slowly going down the drain. Because a bunch of men – and it is always all men – just don’t want to be responsible for their actions. Which is fine if you are 5. But after 18 it gets old really fucking fast.”

How to raise children

[Image: a painting with a stick attached, so it looks like a protest sign. The background is pink, and in lighter pink it says FUCK ICE.]
A wee little painting (11×14”, sans stick) I made last week. This one got auctioned off on Bluesky to help folks in Minnesota. I’m making more.

This week’s question comes to us from April Piluso:

My daughter turns 3 this month. I want to help her have fewer troubles than I did by teaching her about boundaries, values, independent thinking etc. I think if more kids learned this stuff, we’d have more good humans and fewer jerks. What do YOU think every kid should grow up knowing?

Every kid should grow up knowing they are loved.

Everything else is pretty close to a rounding error. Ok, maybe not a rounding error. I’m exaggerating to make a point. But honestly, there is nothing a child needs more in life than knowing they are loved. Love can make up for a lack of a lot, but a lack of love is very hard to make up for.

Regular readers of this newsletter will know by now that I didn’t grow up in the best household. I grew up in an abusive household. I also grew up poor. And when I look back on my childhood, growing up poor wasn’t really a big deal. It was just a fact of life. And to be clear, poor is very subjective. We always had a roof over our heads. We didn’t miss meals. I knew we were poor because every Sunday my parents would pile us in the car and go for a drive around the rich neighborhoods in town, getting progressively more upset about our own circumstances, and blaming each other—and their kids—for not being able to live in one of those fancy houses. Meanwhile, my brothers and I sat in the back seat, being as quiet as possible so as not to draw my father’s growing anger. We didn’t know we were poor until my father started hitting us for being poor.

I’ll tell you a story, but first—some cultural background: in Portugal, where my parents grew up, if you had a house for rent you’d make a paper cutout and tape it to the windows. (This was pre-internet, obviously.) The cutout could be any of a number of things, probably made by whichever kid the landlord deemed to be “the artistic one.” No, I don’t know how this started, and it’s not the point of our story so I’m not looking it up.

One Sunday afternoon, we’re driving around doing our routine wealth tourism on the Main Line, and my dad stops the car. He pulls over.

“Go see if that house is for rent.”

I turn towards the house he’s pointing at. This thing was an old-school two-story mansion. Very old-Philadelphia money. Whoever built it probably has their name on a hospital now. Anyway, I ask him why he thinks the house (that we obviously cannot afford) is for rent.

“You see the cut-outs on the window?”

“Yeah, it’s Christmas. Those are snowflakes.”

The slap came before I finished the sentence. Followed by the scream to get the fuck out of the car and do what I was told. So off I went, crying. I rang the doorbell. Some unsuspecting stranger opened the door, wondering why some crying kid was standing there and asking if the house was for rent, even though I knew it was not. He seemed understandably confused, but politely told me it was not, then closed the door. Retreating, I’m sure, to a nearby curtain that he could peek out of. (Or possibly straight to the phone to call the police about immigrants in the neighborhood.) I walked back to the car, knowing what was coming. And when I told him the house wasn’t for rent, sure enough—it came. Right across the face. We drove home in silence, where he dropped us all off and went off to do something else with people who were not his family, who he hated.

So yeah, when I think back on growing up, it’s not the lack of anything—except the lack of love—that I think about. Love and safety. Made all the worse because every once in a while I’d get a glimpse of what those things were like. Sometimes he’d come home in a good mood. Sometimes he’d muss my hair on the way in. Those times were rare, but the fact that they existed at all let me know that they were possible, which made it that much crueler.

Fast forward decades to a therapist’s office where my therapist—who I’m sure isn’t reading this—is telling me that my own relationships are falling apart because how am I supposed to love anyone else when I never learned what love was like growing up. (Yes, my therapist is RuPaul.) If you were raised in a similar environment, please believe me when I tell you that it is never too late to learn how to love. You don’t have to carry your parents’ sins into your relationship with your own children.

Every kid should grow up knowing they are loved.

Telling a child you love them is free.

Also, while I am by no means an expert in the field, and my opinions should be treated with much salt, I tend to believe that children are born good. They’re born full of love. They’re born full of confidence. (How fucking confident do you have to be to take that first step?!) They’re born curious. They’re born wanting to be part of a community. It’s not so much that we need to teach them these things, as much as we need to encourage them to keep believing these things. And protect them from people who would work to destroy those things.

Yes, this is about AI. The AI industry can only succeed if it separates people from their joy and their confidence. An industry run by people who were not raised with love, attempting to steal it from others.

I’ve written about this before, but every child is born loving to draw. They draw on everything. They demand crayons in restaurants. They draw on your walls. You should let them do so. Fuck your walls. It’s easier to eventually paint over a wall than to rebuild a child’s confidence.

It’s wild to me that we parent our children to fit into society, then get together with our friends and talk about how broken society is. I’ve seen people rail against our broken educational system, then demand their children get straight As in school. I’ve seen people complain about not having any time to themselves and then schedule every minute of their kid’s life.

There is more we can learn from children than they can learn from us.

Mostly we need to support children and let them know that they are loved. Children are so ready to love you back. For every cruel thing my father did to me, anytime he walked through the door and mussed my hair I was ready to give him another chance. I was so ready to love him.

Congratulations on your daughter turning three. The fact that you’re worried about this stuff is usually a sign that you’re on the right path. The funny thing about parenting is that the people who are most worried about messing it up are the ones most likely to get it right. I’m old enough that I’ve seen a lot of my friends have kids, and those kids are now adults in their own right. And one of the first things I noticed was that the folks who were the most chaotic, the most fly-by-the-seat-of-their-pants, the most worried about fucking things up… they were the ones who ended up incorporating their kids into their messy lives, encouraging them to be themselves, giving them the space to be curious, to climb trees, to draw on the walls, to ask their neighbors for help. And, ultimately, holding everything together with love. While the friends who made plans, and spreadsheets, and made lists of goals, and fretted about their kids not being able to tie their shoes yet, or read at a certain level yet—and by the way, I totally understand wanting to do these things, and worrying about these things—they were so concerned with how things were supposed to be going that they totally missed how things were actually going. Which is that this new amazing human was unfolding before your eyes, and while it might not be the human you were expecting… aren’t they amazing?!? And if you don’t understand them, well, child, what happened to your curiosity?!

Your kid is going to be alright. With enough love, your kid is going to be alright.

Don’t judge your children, love them. Because they will, in turn, love you back. And when they do—holy fucking shit, it’s just amazing.

My daughter’s coming over for dinner tonight. I can’t wait to hug her and tell her I love her.

I love you for asking this question.


🙋 Got a question for me? Ask it!

📕 My new book, How to die (and other stories), is now available for pre-order! It’s stories from this newsletter. It’s very handsome. Yes, you want it!

📆 Related, but secret… if you’re in the Bay Area, please circle May 21 on your calendar. All will be revealed in time.

📣 There’s a couple spots left in next week’s Presenting w/Confidence workshop. Sign up, we’ll have fun hanging out, we’ll make fun of AI slop, then I’ll help you get a job.

💰 If you’re enjoying this newsletter please consider joining the $2 Lunch Club! Writing is labor and labor gets paid, right?

🍉 Please donate to the Palestinian Children’s Relief Fund. The ceasefire is a lie.

🏳️‍⚧️ Please donate to Trans Lifeline, and for fuck sake if there is a trans child in your life PLEASE tell them you love them, they are SO ready to love you back.

tante (Berlin/Germany), 4 days ago:
“Yes, this is about AI. The AI industry can only succeed if it separates people from their joy and their confidence. An industry run by people who were not raised with love, attempting to steal it from others.”

Sixteen Claude AI agents working together created a new C compiler


Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.
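The article describes the setup only at a high level. As a purely hypothetical sketch of the pattern being reported (parallel, minimally supervised coding agents sharing one git repository), it might look roughly like the loop below; the repo name, prompt and session counts are invented here, and it assumes the Claude Code CLI’s non-interactive -p (print) mode, not Anthropic’s actual harness:

# Hypothetical sketch only. Each "agent" works in its own clone of a shared
# repository and loops through non-interactive Claude Code sessions.
for agent in $(seq 1 16); do
  (
    git clone shared-compiler-repo.git "agent-$agent" && cd "agent-$agent"
    for session in $(seq 1 125); do    # 16 x 125 = ~2,000 sessions overall
      git pull --rebase                # pick up the other agents' commits
      claude -p "Pick one failing compiler test, fix it, and commit."
      git pull --rebase && git push    # merge concurrent work, then publish
    done
  ) &                                  # run all 16 agents in parallel
done
wait                                   # block until every agent has finished

Keeping 16 agents from stepping on each other’s commits is exactly where a naive loop like this would struggle, which is presumably why the real experiment needed a more deliberate harness.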




tante (Berlin/Germany), 10 days ago:
Today in bad tech journalism: "AI agents" generating a C compiler.

They didn’t, though. They created a piece of one that didn’t work properly (because LLMs are trained on a lot of C compiler code).

agwego, 9 days ago:
That it can’t link the generated kernel ASM code is a fail. No optimization, resulting in brutally slow code, is a fail.

The World That Was


It’s easy to be overwhelmed by the world. I mean, look at … everything. Massive ongoing wars everywhere, fascism on the rise, exploding inequality. Shit is fucked up, and more fucked up on a global scale than it ever was in my lifetime (I was born in 1979). And with the media landscape and notifications and 24-hour news, it’s hard not to feel overwhelmed. Every morning, waking up is basically:

And it is important to be informed. To at least try to see what is going on in order to decide where one can make a difference or maybe at least help? Someone? Anyone?

But this is also no way to live, for a bunch of different reasons. I think, given the state of the world, it’s fair to let certain crises go into the background (without going fully ignorant): you just mentally cannot dive into every crisis all the time. Not just because you don’t have the hours in the day, but also because it will destroy your mind.

I have this tendency to believe that if I just dig for more information and understand, that if I can make sense of something, I will feel better and it will create some form of path towards resolution. That it would allow me to send a letter to a politician, or support an organization, or write or do something that can help turn things around. I believe that knowledge and understanding create agency. Which isn’t 100% false, but the way I apply it is basically delusional.

And I do that because I am scared. I am scared by the consequences of the chaos. I’ve learned enough about history to understand that when shit hits the fan it’s rarely the powerful and wealthy who suffer the most. That it starts hurting at the bottom and then quickly moves up. And that scares me. Not in the abstract but in my bones. Even more now that I have a son who I just want to be able to live a life full of joy and love.

But being scared is not all I feel (even though it is a big part of it). I am grieving.

I realized that a few days ago when I took some time off of the news and all that. I was exhausted and burned out and took a walk. And understood that I was literally grieving. I was sad for the structure of the world that I see crashing down.

And don’t get me wrong. The structure wasn’t perfect. Or even great. We built a world order based on exploitation of the planet and each other. With some good things bolted to it here or there, some remnants of socialist and human rights thinking. Certain safety nets, certain conventions. It wasn’t much, but it was something. And now that they are being dismantled in record time I am grieving for those tiny things.

Because while that system was in place it did – at least to me, and maybe that was naive – feel as if we could use it as a platform to build something better on. Drive back the inequality and exploitation through collective action. The road to “fully automated luxury space communism” was still very long, but it felt like there might be a floor to it all. And that floor was still too low and did not include everyone – probably only a minority, even. But from my privileged position as someone living in Germany it felt like a foundation to build on. A consensus.

And I miss it. It hurts to see it being killed. To see that in fact there is no consensus that includes any commitment – even a surface level one – to human rights and the will to build something better than “billionaires can get even richer while the world is burning”.

This is not a feeling I am planning to dwell on for too long. But I think it’s important that, during the storm of news and notifications and whatever, we sometimes take the time to understand how it all makes us feel, and why.

I am grieving because I had felt like there was some sort of “emergency brake” that would ensure things wouldn’t go too bad. And coming from a family where I inherited my parents’ fear of the threat of downward social mobility, that gave me a lot of emotional support. It was more about a feeling than it was about facts.

It’s important to understand how the world makes you feel. And share it. Otherwise your emotions are gonna catch up with you at some point.

Now is the time to get back to it. Even if the rules-based order that I grew up in and relied on all my life is crumbling, maybe we can redirect that momentum towards something better. Or at least stop some fascists. “Pessimism of the intellect, optimism of the will” and all that.

tante (Berlin/Germany), 12 days ago:
I wrote a bit about grief. Not for a person but for the world that was.