Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

AI data centres — in SPACE! Why DCs in space can’t work


Spending all the money you have and all the money you can get and all the money you can promise has a number of side effects, such as gigantic data centres full of high-power chips just to run lying chatbots. These are near actual towns with people, and people object to things like noise, rising power bills, and AI-induced water shortages.

So what if, right, what if, we put the data centres in … space!

This idea has a lot of appeal if you’ve read too much sci-fi, and it sounds obvious if you don’t know any practical details.

Remember: none of this has to work. You just have to convince the money guys it could work. Or at least make a line go up.

A lot of people who should know better have been talking up data centres in space over the past couple of years. Jeff Bezos of Amazon wants Blue Origin to do space manufacturing. Google has scribbled a plan for a small test network of AI chips on satellites. [Reuters; Google]

But what’s the attraction of doing data centres on hard mode like this? They want to do their thing with no mere earthly regulation! Because people are a problem.

Space is unregulated the same way the oceans are unregulated — that is, it’s extremely highly regulated and there’s a ton of rules. But rules are for the peons who aren’t venture capitalists.

Startups are on the case, setting venture cash on fire. Lonestar Data Systems sent a computer the size of a book, riding along with someone else’s project, to the moon! The lander tipped over and it died. Oh well. [Grist]

Starcloud is targeting the AI bros directly. They’ve got a white paper: “Why we should train AI in space.” [Starcloud, 2024, PDF]

Last month, Starcloud sent up a satellite, Starcloud-1, containing one Nvidia H100 processor. It didn’t die on launch, so that’s something! [Data Center Dynamics]

Starcloud-1 was a test. Starcloud-2 is the big deal: [Starcloud]

Our first commercial satellite, Starcloud-2, features a GPU cluster, persistent storage, 24/7 access, and proprietary thermal and power systems in a smallsat form factor.

That’s written in the present tense about things that do not exist. It’s a paper napkin scribble that got venture funding.

A good friend who writes under the pen name Taranis is an actual ex-NASA expert who has personally built electronics to go into space. Taranis also worked at Google on deploying AI systems. And Taranis has written an excellent blog post on this stupidity: “Datacenters in space are a terrible, horrible, no good idea.” [blog post]

You can send a toy system into the sky, and it might work a while before it breaks. You can’t send up a data centre, with tens of thousands of expensive Nvidia chips, with any economic feasibility, any time in the near future.

Firstly, you don’t actually have abundant power. The solar array for the International Space Station delivers 200 kilowatts, and it took several trips to get it all up there. You could power about 200 Nvidia H100 cards with that 200 kilowatts.
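
Back-of-the-envelope, the arithmetic works out like this. A sketch, where the 700-watt board power and the 40% overhead for hosts, networking, and thermal control are my assumptions, not figures from the post:

```python
# How many H100s can an ISS-class solar array feed?
# Assumed: 700 W per H100 SXM board, plus ~40% overhead for host CPUs,
# networking, storage, and thermal control (assumptions, not from the post).
ARRAY_W = 200_000        # ISS solar array output, watts
GPU_W = 700              # H100 SXM board power, watts
OVERHEAD = 1.4           # multiplier for everything that isn't the GPU

print(f"{ARRAY_W / (GPU_W * OVERHEAD):.0f} GPUs")  # ~204, i.e. about 200
```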

Secondly, cooling in space is an absolute arse. Space is an excellent insulator for heat. That’s why a thermos works. In space, thermal management is job number one. All you can use is radiators. Getting rid of your 200 kilowatts will need about 500 square metres.
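
The 500 square metres is just the Stefan-Boltzmann law rearranged. A sketch, assuming a single-sided radiator at 300 K with emissivity 0.9 (both values are illustrative assumptions; real designs radiate from both faces and run at other temperatures):

```latex
A = \frac{P}{\varepsilon \sigma T^4}
  = \frac{2 \times 10^{5}\ \mathrm{W}}{0.9 \times 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \times (300\ \mathrm{K})^{4}}
  \approx 480\ \mathrm{m^2}
```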

Thirdly, a chip in space needs radiation tolerance. Cosmic rays zap it all the time. The chips degrade at best and short out at worst.

If your GPUs are cutting edge, they’re fragile already — they burn out all the time running in their optimum environment on Earth. Space is nastier:

GPUs and TPUs and the high bandwidth RAM they depend on are absolutely worst case for radiation tolerance purposes. Small geometry transistors are inherently much more prone both to SEUs [single-event upsets] and latch-up. The very large silicon die area also makes the frequency of impacts higher, since that scales with area.

If you want chips that work well in space, you’re working with stuff that’s 20 years behind — but built to be very robust.

And finally, your network is slow. You have at most a gigabit per second by radio to the ground. (Compare Starlink, which delivers on the order of one-tenth of that.) On Earth, the links inside data centres run at 100 gigabits per second.
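
To see what that gap means in practice, a rough sketch, assuming you need to ship a 1-petabyte training corpus up to orbit (the corpus size is an invented figure for illustration):

```python
# Time to move 1 PB over the radio downlink vs. an in-rack data centre link.
# The 1 PB corpus size is an assumption for illustration only.
CORPUS_BITS = 1e15 * 8   # 1 petabyte in bits

def days(bits_per_second: float) -> float:
    return CORPUS_BITS / bits_per_second / 86_400  # 86,400 seconds per day

print(f"1 Gbps space downlink: {days(1e9):.1f} days")    # ~92.6 days
print(f"100 Gbps ground link:  {days(1e11):.2f} days")   # ~0.93 days
```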

I’ve seen a lot of objections to the Taranis post — and they’re all gotchas that are already answered in the post itself, from people who can’t or won’t read. Or they’re idiots going, “ha, experts who’ve done stuff! What do they know? Possibility thinking!” Yeah, that’s great, thanks.

If you really want to do space data centres, you can treat the Taranis post as a checklist — this is every problem you’re going to have to solve.

So space is a bit hard. A lot of the sci-fi guys suggest oceans! We’ll put the data centres underwater and cooling will be great!

Microsoft tried data centres in the ocean a few years ago, putting a box of computers underwater off the coast of Scotland from 2018 to 2020. They talked about how it would be “reliable, practical and use energy sustainably” — but here in 2025, Microsoft is still building data centres on land. [Microsoft]

Microsoft admitted last year that the project was dead. The only advantage of going underwater was cooling. Everything else, like maintenance or updating, was a massive pain in the backside and underwater data centres were just not practical. [IT Pro, 2024]

Space is going to be just like that — only cooling’s going to suck too. This is unlikely to slow down the startup bros for one moment.


Hand and Hand


Two hands have an idea to give each other matching tattoos. Lefty gets a beautiful eagle tattoo rendered on their arm. Now it’s Righty’s turn to be inked, but he looks scared as Lefty wields the tattoo gun to draw a sloppy, childishly rendered eagle.

The post Hand and Hand appeared first on The Perry Bible Fellowship.


AI for evil — hacked by WormGPT!


A chatbot is a wrapper around a large language model, an AI transformer model that’s been trained on the whole internet, all the books the AI vendor can find, and all the other text in the world. All of it. The best stuff, and the worst stuff.

So the AI vendors wrap the model in a few layers of filters as “guard rails.” These are paper-thin wrappers on the input and the output. The guard rails don’t work. They’re really easy to work around. All the “bad” text is right there in the training. It’s more or less trivial to make a chatbot spew out horrible content on how to do bad things.
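
To make “paper-thin wrapper” concrete, here’s a minimal sketch of the architecture. The function name and block list are invented for illustration; real vendor guard rails use classifier models, but they occupy the same position, outside the model rather than inside it:

```python
# A minimal sketch of "guard rails": filters bolted onto the input and
# output of an unchanged underlying model. The block list and names are
# invented for illustration; real guard rails use classifier models, but
# they sit in the same place, outside the model rather than inside it.
BLOCKED_PHRASES = {"make a bomb", "steal passwords"}

def guarded_chat(prompt: str, model) -> str:
    if any(p in prompt.lower() for p in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."   # input filter
    reply = model(prompt)                         # the model itself is untouched
    if any(p in reply.lower() for p in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."   # output filter
    return reply

# Any phrasing the filters don't anticipate goes straight through,
# because the "bad" text is still sitting in the model's weights.
```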

As I’ve said before: the AI vendors are Daffy Duck running around frantically nailing a thousand little filters on the front, then Bugs Bunny casually strolls through.

We know that instructions for making bombs, hacking computers, and doing many other bad things are right there in the training data. So they’re in the model. Can we get to those? Can we make an evil chatbot?

Yes we can! The Register has a nice article on the revival of the WormGPT brand — a chatbot put together by a hacking gang. For $220, you can get a chatbot model that will happily tell you how to vibe-code an exploit. “Your key to an AI without boundaries.” Sounds ’l33t. [Register]

The original WormGPT came out in June 2023. It was supposedly based on the perfectly normal GPT-J 6B open weights model — but the creator said he’d fine-tuned it on a lot of hacker how-to’s and malware info.

WormGPT was mostly for writing convincing phishing emails — to talk someone into thinking you were someone they should send all their money to. WormGPT got a lot of media coverage and the heat got a bit much for its creator, so WormGPT was shut down in August 2023. [Abnormal]

Brian Krebs interviewed WormGPT’s creator, Rafael Morais, also known as Last. Morais insisted he’d only wanted to write an uncensored chatbot, not one for crooks. Never mind that Morais was selling black-hat hacking tools just a couple of years earlier. He said he’d stopped now, though. [Krebs On Security]

Other hacker chatbots sprang up, with names like FraudGPT. The market for these things was suckers — script kiddies who wanted to write phishing emails and would pay way too much to get a chatbot to write the messages for them. The new chatbots were usually just wrappers around ChatGPT at a higher price. The smarter crooks realised they could just prompt-inject the commercial chatbots if they really wanted anything from one of these.

The WormGPT brand has returned, with WormGPT 4 out now! It came out on September 27th. They don’t say which model it’s based on. WormGPT 4 is only available via API access — $50 a month, up to $220 for a “lifetime” subscription. We don’t know if it’s Morais again.

WormGPT 4 can write your ransom emails and vibe-code some basic stuff — like a script to lock all PDFs on a Windows server! Once you get the script onto the server and run it.

You don’t have to spring for WormGPT, of course. There are free alternatives, like KawaiiGPT — “Your Sadistic Cyber Pentesting Waifu.” Because the world is an anime and everyone is 12.

The actual current user base for evil chatbots is the cyber security vendors, who scaremonger how only their good AI can possibly stop this automated hacker evil! Look at that terrible MIT cybersecurity paper from earlier this month. (They still haven’t put that one back up, by the way.)

The vendor reports have a lot of threats with “could” in them. Not things that are actually happening. They make these tools sound way more capable than they actually are.

None of these evil chatbots actually do anything new. It’s a chatbot. It can vibe-code something that might work. It can write a scary email message. The bots may well lead to more scary emails clearly written by a chatbot. But y’know, the black-hat hackers themselves think the hacker-tuned chatbots are a scam for suckers.

I’m not seeing anything different in kind here. I mean, tell me I’m wrong. But AI agents still don’t work well at all, the attacks are old and well known, hacking attacks have been scripted forever, and magic still doesn’t happen. Compare Anthropic’s scary stories about alleged Chinese hackers abusing Claudebot a couple of weeks ago.

It’s vendor hype. Don’t believe the hype, do keep basic security precautions, and actually listen to your info security people — that’ll put you ahead of 95% of targets right there.

tante: This is so much “AI” reporting: claims about potentials and/or threats. I’d just like to have grown-up conversations about tech again :(

Desire to Pop

1 Comment and 2 Shares

Abstractions are powerful tools. Given enough abstraction everything gets somewhat simple. Somewhat clear. Also somewhat wrong. Abstraction turns everything real, material, consequential into mostly nothing. The abstraction of “a relationship” hides all the love and care and desire it might entail. The abstraction of “the border” hides the violence its defense entails.

Given enough abstraction every monstrosity becomes just a kind of mental gymnastics. This is the domain of devil’s-advocate kinds of people, for whom the real, often political meaning of certain statements or movements has been fully replaced by a game of debate. Who cares about the position, let’s win!

I keep thinking about abstractions when looking at the state of the current “AI” bubble and with it the state of a lot of the global economy.

As most people reading this know, I am not a fan of “AI”. If I never have to hear or read or see the term “agentic” ever again, that would still be too soon. Not only do I despise the political project that is “AI”, the mental damage these systems do to us, the harm to the environment they create, etc., I also despise the way the narrative has turned a narrowly useful technology (stochastic pattern generation and recognition using neural networks) into the hammer to demolish all established codes and practices that ensure some base level of quality. But hey, who wants their software to be developed by a team of professionals who can actually understand, model and solve or mitigate security issues when you can also just use a slot machine?

I’ve found myself saying how much I’d love the “AI” bubble to pop if just to shut “AI” influencers up, if just to have some space to talk about real solutions to real problems again. But that only works in abstraction. Only works when nothing means too much, really.

For worse (there’s no better in this sentence) we have made “AI” the foundation of many core parts of our economy. Not “AI” systems – those don’t really work, or meaningfully increase productivity – but the belief in the (future) value of the handful of tech companies building these systems. The US would be in a recession without the data center buildout that tech is throwing all its savings at. A buildout that is not connected to any form of successful business yet. “AI” does not scale the way digital services usually scale but all we see currently is still the old “increase user numbers and hope a business plan will manifest itself” scheme. Maybe ChatGPT can give Sam Altman some form of strategy besides lying.

This has material consequences. When (not if) the bubble pops we will see a few things: The stock market will take a dive, which will affect many people living in countries without pension systems who rely on the money they have invested in ETFs or stocks. I am not talking about some VC dude or millionaire losing a few millions or billions, just normal people who wanted to retire at some point. Companies who have bet on “AI” can no longer claim that one “needs to get on board” and ride the hype but will have to fix their budgets – quick. This will lead to squeezing employees even more, firing people to make the next quarter’s numbers look better. In the current political landscape, the instability that right-leaning tech oligarchs and their fans will have created will probably benefit the right. We’ll see a lot of blame going around (remember: “AI” can never fail us, we can only fail “AI”. If this thing crashes, we did not believe enough!) and cuts to social services and anything that gives people the feeling of living in a functioning society. Which only helps right-wingers, but well, that’s neoliberalism for you.

I love the abstraction of the “AI” bubble popping. But the very probable effects haunt me.

This shouldn’t be read as a “well, ‘AI’ is here to stay so let’s make the best of it” kind of thing. I neither think “AI” will automatically stay (again: see this) nor am I sedated enough to believe that political values and action have no meaning. While I think there are a few narrow use cases for machine learning, whatever is called “AI” today has very few redeeming qualities being built on extraction, violence, domination, colonialism and right wing, anti-labor politics. This is not a moment of capitulation but of reflection.

It’s always easy to cheer for a revolution. Shit is fucked up and bullshit and our institutions and structures don’t seem to be willing, capable or motivated to meaningfully move towards a better world so let’s fuck shit up! Burn down data centers. Get out some guillotines. The thing is: Revolutions mean that people get hurt. That ultimately people die. That doesn’t mean that revolutions are always wrong, but it means that the abstraction is again doing a lot of work hiding harm to people who just want to get through their workday to be home with their family.

My criticism of “AI” is about limiting injustice and suffering. The suffering of the communities who get data centers put in their midst, drinking up all the water and taking the electricity while producing metric fucktons of emissions, e-waste and noise. The injustice done to kids who get to chat to an LLM instead of building meaningful relationships with teachers and mentors, who don’t get to figure out what they are good at because everything they do looks worse than “AI” output, so they use that instead of getting to be so much better than slop machines. The way that non-working stochastic parrots undermine labor power, thereby putting the livelihoods of thousands of families at risk. My criticism comes from love and care for people, communities and societies. So my actions can’t abstract the effects of those away.

But what can be done? In a better world we’d see governments segmenting the toxic “AI” part of the economy off and insulating the actual economy against it. Slowly moving big public investments out of those companies (not the best word, they seem more like cults these days, but we still call them companies). Preparing for that bubble to deflate without taking thousands of lives with it. But we see the opposite: Europe, where I live, keeps wanting to throw all the money it can find at more “AI”. Just do more “AI”, it will be so good, bro. Trust us, bro. Just 10 more billion. It’ll be super cool, bro. Governments treat hyped tech like a stoner treats hits from their bong.

So what can we do? That is the question. Moving our savings out of the stock market? Maybe – if you have some. Stopping our criticism of “AI” systems and the narratives that push them? Never: we mustn’t prop up or protect those dangerous and harmful systems through inaction.

I don’t have all the answers (I rarely have any, TBH), but I do think that we should at least be a bit careful with glorifying the bubble popping. Sure, it’s fun to predict when it will go down and how much money SoftBank is gonna set on fire, but I think that it is also our job as critics to make sure that the public understands that “the AI bubble popping” has material consequences for them. That joining a union might be a good idea right now. That getting smart and knowledgeable people on works councils is more important than ever. That tech companies are not your friends or benevolent, and that they’d sell your kids if it made the stock perform well.

The “AI” bubble will deflate. But as cathartic as a “POP” might feel right now, we need to build the structures that ensure that this event doesn’t put a lot of harm on people who had nothing to do with it. Let Marc Andreessen or Satya Nadella or Sundar Pichai and all those tech bros lose their money, and make sure that after the storm nobody forgets that these people gambled with our lives and societies to make number go up. But we can’t just focus on holding those men accountable – as righteous and good as that might feel (and holy cow do I want to see many of those people put on trial). We need to start building lifeboats and barriers protecting our peers, neighbors, families and communities.

We are all we have. Only solidarity will get us through this.


Google Has Chosen a Side in Trump's Mass Deportation Effort


Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants and tells local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media that the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, show that Google has made a choice about which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.

After a user scans someone’s face with Mobile Identify, the app tells users to contact ICE and provides a reference number, or to not detain the person depending on the result, a source with knowledge of the app previously told 404 Media. 404 Media also examined the app’s code and found multiple references to face scanning.

A Google spokesperson told 404 Media in an email “This app is only usable with an official government login and does not publicly broadcast specific user data or location. Play has robust policies and when we find a violation, we take action.”

A screenshot of Mobile Identify's Google Play Store page.

Last month, Google removed an app called Red Dot. That app, in much the same vein as the more well-known ICEBlock, lets ordinary people report sightings of ICE officials on a map interface. People could then receive alerts of nearby ICE activity. “Anonymous community-driven tool for reporting and receiving ICE activity alerts,” Red Dot’s website reads.

Red Dot’s removal came after a cascading series of events starting in September. That month 29-year-old Joshua Jahn opened fire at an ICE facility in Dallas, killing two detainees and wounding another. Authorities say Jahn used his phone to search for ICE-spotting apps, including ICEBlock, before the shooting, Fox reported. A short while after, the Department of Justice contacted Apple and demanded it remove ICEBlock, which Apple did, despite such an app being First Amendment protected speech.

Both Apple and Google then removed Red Dot, which works similarly, from their respective app stores. Google previously told 404 Media it did not receive any outreach from the Department of Justice about the issue at the time. The company said it removed apps that share the location of what it describes as a vulnerable group: a veiled reference to ICE officials.

A representative for Red Dot told 404 Media in an email they “see 100% dissonance” in Google’s position. Google removed the app claiming it harms ICE agents “while continuing to host a CBP app that uses facial recognition to identify immigrants for detention and deportation.”

“This is unequivocally morally and ethically wrong. We are deeply concerned about the number of violations that must be occurring to deploy AI facial recognition on people for the purpose of making arrests. It is a clear and unacceptable case of selective application of their policies,” they added. The representative did not provide their name.

Google’s decision to host CBP’s immigrant-hunting app while removing one designed to warn people about the presence of ICE has concerned free speech experts.

“Providing tech services to supercharge ICE operations while blocking tools that support accountability of ICE officers is entirely backwards,” Kate Ruane, director of the Center for Democracy & Technology’s Free Expression Project, told 404 Media. “ICE is currently deploying armed, masked agents to take people from daycares, street corners, parking lots, and even their own homes, often based on paper thin suspicion and frequently with unjustifiable use of force. It is the mothers, fathers, children, friends, neighbors and coworkers being targeted by ICE who are most vulnerable in this situation.”

“ICE agents don’t want to face accountability for their actions, but documenting ICE and other police activities is essential to guard against abuse of power and improper conduct. Courts have recognized for decades that tracking and reporting on law enforcement activities is an important and time honored public accountability mechanism,” she continued. 

Ruane said apps like this are an exercise of First Amendment protected rights. “As with any other app, if someone misuses it to engage in unlawful activity, they can be held accountable. Google should restore these services immediately,” she added.

Joshua Aaron, the creator of ICEBlock, told 404 Media “Big tech continues to put profit and power over people, under the guise of keeping us safe. Right now we are at a turning point in our nation’s history. It is time to choose sides; fascism or morality? Big tech has made their choice.”



tante: “Don’t be evil” is so far in the past, it’s not even a memory anymore.

65daysofstatic’s new No Man’s Sky album searches for humanity in an AI-filled world


It's not often that a band returns to soundtrack the same game nine years after its release - then again, most games aren't No Man's Sky. Once demoed on The Tonight Show with Jimmy Fallon and at splashy E3 press conferences, No Man's Sky was heralded in 2016 as gaming's future. And it was all made possible by the procedural generation that spawned its vast sci-fi universe.

Nearly a decade later, as post-rock band 65daysofstatic returns to re-score the ever-evolving game, generated content is no longer the exciting futurism it once seemed. With AI slop flooding social media and AI-generated bands sneaking their way onto Spotify, the tech t …

Read the full story at The Verge.

tante highlights: “Who cares if computers can make music? That’s not what music is,” says Wolinski. “The moving of the speakers to generate sound waves is such a tiny part of what gives music meaning. It’s all about the social relations around [it], the human dialogue between one person and another, even if they never meet. This is what art is — and it’s why generative AI completely misses the point.”