Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY

How to grow strawberries

Wall of new paintings. Three green paintings that say GO BIRDS, FUCK ICE, and FREE PALESTINE. Three purple paintings that say FUCK ICE. All the paintings have sticks attached so they look like protest signs.
Feels so good to have a studio again.

This week’s question comes to us from Anthea Tawia:

I came to San Francisco to visit the company I work for—sadly I was let go the first day I visited the office. I’m early in my career, worked there about 5ish months and really felt this was my big break. It’s disheartening to find yourself searching again after thinking you found what you’re looking for. How do you face a future that feels so uncertain?

First of all, I’m incredibly sorry this happened to you. It’s unforgivably brutal.

Secondly, I fear you’re about to be deeply disappointed in my answer because there is no easy answer to what you—and so many other people—are currently going through. I also refuse to engage in the type of toxic positivity that would tell you this is a good learning experience, or that it will all be alright. Because the former is a straight-up lie and the latter is questionable at best.

I’m sure other people have told you this, but it bears repeating not just for you but for anyone else reading this who might’ve recently lost their job: It’s not your fault, and there is nothing you could’ve done differently to change the outcome.

I’ll say it again: It’s not your fault, and there is nothing you could’ve done differently to change the outcome.

I’m assuming, just playing the odds here, that you work in tech. Probably in some design or design-adjacent capacity.

My own design and tech journey started under very different circumstances, at a very different time. I accidentally joined up almost at the very beginning, when things were more driven by curiosity than profit motives. And by “accidentally,” I mean that I had no clue this could be a career. None of us did. We had no idea what it was that we were making or how it would slot into the world, but either through naivete, hubris, prescience, or a combination of the three we felt this was something important, and overall positive. Words like “democratization,” “netizens,” and “new economy” were being thrown around with wild abandon. We were right about some things. It did change the world, but that was a massive monkey’s paw. We were wrong about other things. It was not overall positive. And we were ignorant about more things than we were right or wrong about combined. But by and large, there was the sense that the industry was attempting, if not always succeeding and certainly mired in the biases of white men, to make the world a better place.

It did feel, for at least a while, that we were doing a fair bit of solving problems, and making things that people enjoyed. Online banking and bill paying are nice. The iPod was cool. The CueCat? Very cool. Social media could’ve worked, maybe with different people in charge. 3D printers are cool. (I’ve made about 500 anti-ICE whistles this week!) Video cams were cool when we were pointing them at coffee pots and not our neighbors. Borrowing books from the library on an e-reader? Amazing.

Eventually, the industry did well enough to make a lot of people accidentally rich, which attracted people who were now expecting to get rich, and as with every new industry that eventually matures and goes into maintenance mode, a group of people who were pissed off they weren’t getting rich as quickly as the last group. And that’s when the problems that we were trying to solve switched from “what do people need” to “I’m not rich yet.” And eventually to “Ok, I’m richer than I ever need to be, but there is still some money, land, and water, over there that I want.”

But when I think back to myself at the start of my career, and the things that pulled me towards this industry, I don’t see myself making that same decision if I were making it now. This is not a place of honor.

As Pavel Samsonov so clearly and succinctly said recently on Bluesky: “Once, technology solved problems. People liked having problems solved, so they liked technology. Tech execs started to think of that sentiment as their due. So when they stopped solving problems, and people stopped liking them, they became outraged. ‘How dare you not love whatever we give you?’”

We do not love the surveillance. We do not love the slop. We do like paying rent and going to the doctor though. So we keep trying to do the work.

I think we need to open our eyes to the fact that the current industry many of us work in, not only doesn’t care about their workers, it actively resents them. In their eyes, we have gone from being the people who made things possible, to an unnecessary burden on the bottom line.

They hate that we charge money for our labor, and see that money as something we are stealing from their pockets. Pockets which are very large, and already fuller than they will ever be able to empty in a lifetime.

They don’t want you to improve, they don’t want you to be more productive, they don’t even want you to be more pliable. They just want you gone. As I was writing this sentence I got an alert from a friend that Block, Jack Dorsey’s latest chewtoy, has laid off 4,000 people. Which made its stock rise 24%. When 4,000 people lose their livelihood, their ability to pay their rent, their ability to go to the doctor, their ability to look out for their children, and the system that we live under cheers that on… That system needs to be destroyed. Capitalism sees workers as a bug, not a feature. They are dying to eliminate us. Capitalism is not a system of honor.

If you are looking for a sliver of positivity in those last few paragraphs, it might be that five months into your career you still have time to walk back out and try another door. I realize that this is neither easy to do, nor uplifting, nor the answer you were looking for. But I feel duty-bound to tell you this industry is not a place of honor.

CEOs are handing out lucite to fascists. Companies are building massive surveillance networks to make kidnapping our neighbors easier. The world’s richest idiot has turned Twitter into a child pornography factory. VC firms are hiring partners whose only job qualification is that they’ve murdered Black people on the subway. And an industry that senses its own decline is now stuffing every product and service we’ve adopted with slop like slop is the only lifesaver left on the Titanic.

To make it even more depressing, as if that were necessary, when I look out at the evil bastards doing these things, celebrating the murder of my neighbors, I see many of the same faces who were talking about tech as a force for good when I first started. I see many of the same faces who were posting letters about the importance of diversity to their corporate sites in 2020. I see many of the same faces who promised to “do better,” after the brutal murder of George Floyd now calling for a bigger ICE presence on our streets. These are not people of honor.

Anthea, I am so far away from answering your question. So let me now attempt to do so, though I fear you will not like my answer. However, I have too much respect for you to lie. Your last sentence was about facing a future that feels so uncertain. But I feel a career in tech at this point is far from uncertain. In fact, some things are more certain than not. You would most likely be working for someone who doesn’t value you. You’d most likely be building something that is not improving the world. You’d most likely be forced to either spend your time babysitting the slop machine (and secretly fixing its mistakes) or actively working on the slop machine (and secretly pretending it was not full of mistakes). You’d be surrounded by over-eager 996 techbros who see themselves as temporarily embarrassed millionaires, and who don’t want the boot off workers’ necks so much as they envision their own foot inside it.

I know none of this feels good to hear. But I don’t think this industry deserves you. You deserve to work at a place that treats you with respect. A place that gives you the room to learn and grow, while cherishing the expertise you’re already walking through the door with. A place that pays you well and provides medical benefits for you and your family. A place with set hours, so you can flourish outside of work as well. A place where your opinion is not only welcome, but encouraged. A place where you feel safe. A place where you and the other workers can collectively bargain with management, because they understand that you are where the value comes from. I say these things not because you are special—although I’m sure you are!—but because all workers deserve these things.

Sadly, I fear those things are close to impossible to find in the tech industry at the moment. As you found out from direct experience. So I’d suggest asking yourself what you would be doing right now, if not for tech. Not because you don’t deserve to be here, but because it doesn’t deserve you.

If you decide to persevere in this industry, I’d walk into every interview with your head held high and remember that you are interviewing them, as much as they are interviewing you. You are deciding whether this is a company you want to sell your labor to. And I wouldn’t hesitate to walk out of any interview where you didn’t feel safe or valued. I’d walk the floor. I’d talk to other workers. (Outside work if possible.) I’d read up about the company. And I’d walk into that interview like I was doing a deposition. And of course, remember that none of this is a guarantee that they wouldn’t pull the same shit as your last company.

There is an uncertain future in front of you. And for that I am very sorry. Again, none of this was your fault, and there is nothing you could have done to keep it from happening. But in that uncertain future you may start finding things that you didn’t expect to find, maybe behind doors that you’d previously closed off. And maybe, hopefully, one of those will lead to a life full of joy and love. Keep in mind that a job will take up a huge percentage of your life, and this is not a practice life. It deserves to be amazing, not spent working for assholes. Life is wild, it doesn’t always go where you’re expecting it to. Shit sucks. Shit is unfair. Shit is brutal. Shit also makes strawberries. And whatever else we may do with our lives, we are all, at heart, strawberry farmers.

And for any tech leaders who are reading this and believe themselves the exception? Great. Show me. Instead of sending me the “not all tech leaders” email you’re already crafting in your head… do something to prove to us that it’s “not all tech leaders.” Take a stand. Call out your brethren. Refuse to work with Nazis. Stop licensing your software to Nazis. Start treating your workers with the respect they deserve. Every dollar in your very large pockets came from their labor. Share it.

And if you are running a company that respects its workers, and treats them honestly, and fairly? Hire Anthea. She seems awesome. You’d be lucky to have her.


📓 Some book news: If you’ve pre-ordered the new book, How to die (and other stories), thank you! I originally planned on getting all the pre-orders shipped out in February, but my printer has been sloooooooooow in getting books to me. Some orders have gone out. And there is a large shipment headed my way. I am sending books out in the order I received them. Fun fact: I asked a friend who works at a bookstore why things were slow and he told me every press in America is swamped right now, printing Heated Rivalry. Which honestly? I can’t get mad about that. I appreciate your patience. I love you.


🙋 Got a question? Ask it. I might meander myself into a useful answer.

💰 Join the $2 Lunch Club and help me pay my rent.

📣 The next Presenting w/Confidence workshop is scheduled for March 19 & 20. It’s a good workshop for folks interviewing.

🏳️‍⚧️ This week we are funneling all our help to the Trans Continental Pipeline. They’re busy getting trans people the fuck out of Kansas, and into Colorado. Because, yes, it’s come to this. Fuck these fascists.

tante
9 hours ago
"we need to open our eyes to the fact that the current industry many of us work in, not only doesn’t care about their workers, it actively resents them. In their eyes, we have gone from being the people who made things possible, to an unnecessary burden on the bottom line.

They hate that we charge money for our labor, and see that money as something we are stealing from their pockets."
Berlin/Germany

The Alpha Wolf



This cartoon is by me and Nadine Scholtes.


TRANSCRIPT OF CARTOON

This cartoon has four panels.

PANEL 1

A man in a yellow shirt is at a bus stop, cheerfully lecturing the other two people at the stop.

MAN: “Feminization” has warped society. If we lived as nature intended I’d be the alpha wolf!

PANEL 2

The man with a huge thought balloon, showing him imagining walking with one hand holding a bloody axe and the other around a woman’s waist. A second woman, in a maid outfit, is carrying a tray of cake and steak. A third woman looks at him adoringly.

MAN: And the alpha wolf gets the first pick of everything! The best food, the best mates!

PANEL 3

MAN: That’s how men should live. I wish I was a wolf in the wild!

PANEL 4

Inside a wolf den, two adult wolves are talking. There are four kids (three small puppies, one medium sized) and a dead rabbit.

CAPTION: Wolves in the Wild

DAD WOLF: First the little ones eat, then the rest of us will.

MOM WOLF: And then — cuddle pile!

PUPPY: Yay!

CHICKEN FAT WATCH

“Chicken fat” is an archaic cartoonists’ term for unimportant little details in the art.

PANEL 1 – The tattoo is of a German cartoon mouse named Diddl, holding a heart.

A poster says “HEY YOU! READ THIS! Wow, I can’t believe you’re reading this just because I said to.”

Another poster shows a cool woman in sunglasses holding a guitar. Text says “YET ANOTHER BAND… you’re not cool enough to know.”

A pigeon standing on the sidewalk is wearing sunglasses and smoking a cigarette.

PANEL 3 – A poster has a picture of the panel 1 pigeon, with the caption “BEWARE Bad Pigeon.”

The guy waiting at the bus stop is miming shooting himself in the head so he doesn’t have to listen to this alpha wolf prattle any more.

The woman’s tattoo now shows the character Superjhemp (a parody of Superman and other superheroes). He’s very popular in Luxembourg – “he has appeared in over 29 graphic novels that have the highest sales rate for Luxembourgish publications.”


The Alpha Wolf | Patreon

tante
11 hours ago
Wolves have been represented very unfairly.
Berlin/Germany

Marx is Unfrozen in the Future

tante
6 days ago
Marx unfrozen in the Future
Berlin/Germany

Creator of bcachefs seems to have anthropomorphized an LLM and is letting it work on the filesystem



tante
8 days ago
Glad I never put bcachefs anywhere in my infrastructure. The main dev seems to be vibecoding on a filesystem.

No thanks.
Berlin/Germany

IU Internationale Hochschule: The questionable Silicon Valley methods of Germany’s largest university

The IU promises its 130,000 students a revolution in education. But employees paint a different picture: they report harsh profit pressure and a boss who acts like Elon Musk.

tante
8 days ago
A few years ago I gave a talk for professors at the IU.
These days I definitely wouldn’t do that again; the IU seems to be more of an exploitation machine, especially for Indian students.
Berlin/Germany

Acting ethically in an imperfect world


Life is complicated. Regardless of what your beliefs or politics or ethics are, the way we have set up our society and economy will often force you to act against them: you might not want to fly somewhere, but your employer will not accept another mode of transportation; you want to eat vegan, but at some point find yourself in a situation where the best you can do is a vegetarian option.

Sometimes it’s not even our hand being forced but us not having the mental strength or the right priorities: I could simply not use WhatsApp because it is owned by Meta, but my son’s daycare organizes everything through WhatsApp. Do I really want to force my beliefs on all those very busy parents and caretakers, or should I just bite the bullet and use the tool that seems to work for everyone, even though it’s not perfect?

There are no one-size-fits-all solutions for this: Sometimes a belief or ethic you hold is so integral to you that you will not move. Sometimes they are held loosely enough to let go under certain conditions. There’s a multitude of factors and thoughts that go into those kinds of decisions and at some point you just gotta make a call based on what’s in front of you and your priorities.

What I am saying is: we are all doing things we know are not ideal, things that are morally questionable or do not align with our values. That’s life. I, for example, know that consuming meat is problematic for ethical and ecological reasons, yet I still do it sometimes. I reduce as much as I can, but I am far from perfect. Just one of many examples of my actions not being perfectly and purely aligned with my beliefs.

And I am 100% sure each and every reader has similar experiences. We are imperfect and messy beings. In the end all you can do is actually try to make good decisions based on your values, try to learn from your actions and ideally do better – or at least understand the forces that got you to act against your values.


Cory Doctorow, probably one of the most influential writers on digital technology and culture, celebrated the 6th anniversary of his personal blog Pluralistic – congratulations! Cory is quite the phenomenon; I know nobody with his amount of output and his consistency of publication. It is scary just how consistently he writes and publishes while also churning out books. I admire his work ethic tremendously.

But one thing in his celebratory post rubbed me the wrong way and I think it’s worth pointing out. Not for the one specific case but because it highlights a problematic way of thinking that I see a lot in current tech discourse that stands in the way of us actually improving the world.

So Cory outlines his process of how he publishes to his blog (and then pushes the same writing out to other places). He describes how one QA step in his process is piping his writing through an LLM (using the Ollama software, it’s unclear which open weight LLM he uses) to check for typos and small grammar mistakes. He then points out how some readers might find that problematic:

“Doubtless some of you are affronted by my modest use of an LLM. You think that LLMs are “fruits of the poisoned tree” and must be eschewed because they are saturated with the sin of their origins. I think this is a very bad take, the kind of rathole that purity culture always ends up in.”

Using LLMs isn’t always popular with the cool crowd, and Cory knows that. He wants to defend his (quite modest) use, which I understand: nobody likes having their problematic behavior pointed out to them. But as outlined: life’s complicated. Cory could simply have said, “I know there are many critiques of LLMs, but right now this is the best way to enable my work; I try to limit the problematic aspects by using a small open-weight model and checking the results in detail,” and moved on. But he needed to make a stand. And that stand led him into the problematic train of thought I want to point out here. Because many, many people listen to him and basically take his word as gospel. Great power, great responsibility and such.

The whole argument is based on a strawman. Let’s look at Cory’s words:

“Let’s start with some context. If you don’t want to use technology that was created under immoral circumstances or that sprang from an immoral mind, then you are totally fucked. I mean, all the way down to the silicon chips in your device, which can never be fully disentangled from the odious, paranoid racist William Shockley, who won the Nobel Prize for co-inventing the silicon transistor.”

Cory is right to point out that almost any technology we have has been touched by problematic figures. Racists, fascists, sexists, rapists. You name it. Anything you touch will have some research or engineering or product work by a person you despise in it.

The strawman is his claim that people who criticize LLM usage do so for some form of absolutist reason – that they have a fully binary view of the world as separated into “acceptable, pure things” and “garbage.” Which is of course false: these same critics use computers, and warm water that’s probably heated by burning fossil fuels, and so on. They obviously hold no such purity-absolutist view.

He attacks a ridiculous made-up figure to deflect from specific criticism of LLM use (criticism that many probably wouldn’t even apply that strongly to his use case). But that’s not where criticism of LLMs comes from: it is mostly specific, focused on the material properties of these systems, their production and their use.

Cory continues:

“Refusing to use a technology because the people who developed it were indefensible creeps is a self-owning dead-end. You know what’s better than refusing to use a technology because you hate its creators? Seizing that technology and making it your own. Don’t like the fact that a convicted monopolist has a death-grip on networking? Steal its protocol, release a free software version of it, and leave it in your dust:”

Here again Cory misrepresents the LLM critics’ argument: Sam Altman is a scam artist and habitual liar, but that’s not one of the first 10 to 20 reasons people criticize OpenAI’s products. Sure, basically every leading figure in the “AI” space seems to be unpleasant at best, but that’s true for most of tech, TBH. People criticize LLMs for their structural properties and their material impacts; for the way they make it harder to learn and grow; for the way they make products worse while creating massive negative externalities in the form of emissions, water use and e-waste; for the way these systems can only be built by taking every piece of data, regardless of whether the authors consent or even explicitly refuse; for how the training requires ungodly amounts of harmful, exploitative labor done mostly by people in global-majority countries; and for how they materially harm the commons.

Even if OpenAI were run by decent, ethical, friendly, trustworthy people (which would of course mean they wouldn’t work on the products OpenAI has, but it’s just a thought experiment), their products would need to be criticized for what they are and what they do. It’s really not about the few dudes running the companies.

Cory misrepresents the arguments (well, basically hides them) in order not to have to face any material criticism, and turns them into “you just don’t like these people,” which frames the criticism as emotional rather than rational. As if it were about not liking a bunch of rich men.

He then goes into how the path forward is to “steal the protocol”. His following paragraph goes into detail:

“That’s how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it”

And here we come to the core of the problematic argument. Because Cory implicitly argues that technology is neutral and that one can change its meaning and effect simply through usage. But as Langdon Winner argues in his famous essay “Do Artifacts Have Politics?”, artifacts have built-in politics deriving from their structure. A famous example is the nuclear power plant: because of the danger these plants pose, and their needs with regard to resources and security, they imply a certain form of political arrangement – one based on having a strong security force/army and a way to force these facilities (and the facilities to store their waste) upon communities, potentially against their will.

Artifacts and technologies have certain logics built into their structure that do require certain arrangements around them or that bring forward certain arrangements. The second aspect is often illustrated by how ships are organized: Because ships are sometimes in dangerous situations and sometimes critical decisions need to be made, the existence of ships implies the existence of a hierarchy of power relationships with a captain having the final say. Because democracy would be too slow at times. These politics are built into the artifact.

Understanding this, you cannot take just any technology and “make it good.” Is a torture device “good” if the plans for building it are Creative Commons? Do we need to answer the existence of the digital torment nexus by building an open source torment nexus? I’d argue we need to destroy it – regardless of what license it is released under.

That does not mean it is impossible to take certain technologies or artifacts and try to reframe them, to change their meaning. In some ways computers are one such example: they were first used by governments, banks and other corporations to reach their goals, but were then taken and reframed as devices intended to support personal liberation. It’s a bit more complicated than that (for the why, dive into the late David Golumbia’s “The Cultural Logic of Computation”), but let’s give that one to Cory. Sure, sometimes it is possible to take something originally built for nefarious purposes and find better uses for it. But is that true for everything? Very obviously not.

Let’s just look at the embedded politics of LLMs: in order to train a capable system you need data. Lots of it. AI companies keep buying books to scan them; they download everything from every legal or illegal source, claiming “fair use” (a doctrine that only applies in the US, by the way) or that “scraping is always okay.” Capable LLMs require a logic of dominance, of disregarding the consent of the people producing the artifacts that are the raw material for the system. LLMs are based on extraction, exploitation and subjugation. Their politics is violence. How does one “liberate” that? What’s the case for open source violence?

He uses a so-called “open source LLM,” and that’s very much how he presents his values, but open-source LLMs do not really exist. You can download some weights, but you cannot understand what went into them, or really change or reproduce them. “Open source AI” is just marketing and openwashing.

Cory shows his libertarian leanings here: if everything is somehow “free and open,” then we have won. But “free and open” in this context usually means that certain privileged groups have easy access to something and are not limited in what they can do with it. That’s one of the core problems with the whole “open source” movement: it reduces all struggle to whether one can get one’s hands on the tools and whether there are any restrictions on using them.

This also shines through in Cory’s argument that we need to “liberate” technology. What a strange idea: technology doesn’t need liberation, people do. Technologies are tools, not what we actually care about. Sure, sometimes technologies can play a role in liberating people, but just as often “freeing” a technology does quite the opposite: ask the women who have massive amounts of sexualized images and videos nonconsensually created of them whether they think the “liberation” of stochastic image generators is liberating them. Technology doesn’t need to be free. It cannot be free, because freedom as a concept applies to people.

And freedom is not the only value we care about. Making everything “free” sounds cool, but who pays for that freedom? Who pays for us having, for example, access to the freedom an open-weight LLM brings? Our freedom as users rests on the exploitation of, and violence against, the people suffering the data centers, the people labeling the data for the training, the folks gathering the resources for NVIDIA to build chips. Freedom is not a zero-sum game, but a lot of the freedoms that wealthy people in the rich world (of which I am one) enjoy stem from other people’s lack thereof.

“Purity culture is such an obvious trap, an artifact of the neoliberal ideology that insists that the solution to all our problems is to shop very carefully, thus reducing all politics to personal consumption choices:”

Cory labels people’s values and their prioritization as “purity politics” (referring back to the black-and-white strawman that started this part of his post) and then pulls a really interesting spin: many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background. Cory intentionally frames those leftist positions, which ground politics in values, as “neoliberal ideology” that reduces “all politics to personal consumption choices.” This is narratively clever: tell those stupid leftists that they are just neoliberals, the thing they hate! Awesome.

But the argument against using LLMs is not about shopping and markets at all. My not using LLMs does not influence anything in that regard; Microsoft will just keep making the data center go BRRRRRRR.

In a way this framing reveals more about Cory’s thinking than about that of the people he criticizes: Cory is focused on markets and market dynamics, and in that world it’s all about purchasing. But moral choices only sometimes relate to markets. They do when I, for example, choose only to buy fairly produced garments. But when I refused conscription (when I was young, Germany still forced every young man to learn how to kill), that was not a shopping decision. That was politics, as well as leading an ethical life.

People do not believe that “not using LLMs” will make OpenAI et al. stop existing. They do not want to build on, or use, products with such clearly defined harms and negative externalities, because they believe it to be wrong. Sure, there might be a utilitarian argument – “the thing exists anyway, and if it saves you time, that’s good, right?” – but many people are not utilitarians. They want to lead a life where they feel their actions align with their values. In a way that is a path to freedom: having the freedom to make the decisions one feels are right and in alignment with one’s values.

Which, by the way, Cory also believes and acts upon when it comes to his own values: he has refused to create a Bluesky account, in spite of wanting to be where his friends are, for (good!) ideological reasons – because Bluesky was back then, and honestly still is today, mostly centralized, with the Bluesky corporation holding a central chokepoint to control the network. Cory believes that one sometimes needs to make decisions based on one’s values. He just does not think that your values, as someone not wanting to use LLMs, matter.

“I mean, it was extraordinarily stupid for the Nazis to refuse Einstein’s work because it was ‘Jewish science,’ but not merely because antisemitism is stupid.”

Everybody hates Nazis, and implying that someone is in any way like the Nazis is a killer argument. But let’s talk about Nazis for a second. The Nazis did a lot of psychological and medical research – on people they interned and later killed in concentration camps. There was in fact a massive debate, especially within psychology, about whether using the results of that kind of research is ethically possible. Utilitarianism of course argues that if it’s there, one should use it. But when your whole discipline is focused on understanding how our psyche works in order to, for example, help people with trauma, just taking research created through unthinkable violence and torture feels wrong. It feels contrary to what your whole discipline is there for. This reminds me of Ursula K. Le Guin’s story “The Ones Who Walk Away From Omelas”: Omelas is an almost perfect city. Rich, democratic, pleasant. But it only works by keeping one small child in perpetual torment. Okay, but if that kid is already suffering because those other people chose this, should you walk away? Or just reap the fruits of that suffering?

Sometimes you need to walk away.

Cory then repeats the strawmen we already talked about and lands here:

“It’s not “unethical” to scrape the web in order to create and analyze data-sets. That’s just “a search engine””

Again, this twists the argument the way the AI corporations like to as well: search engines scour the web, so AI companies should be allowed to do the same. It’s the same technology! But what’s the purpose?

A search engine scrapes pages to build an “index” in order to let people find those pages. The scraping has value for the page and its owner as well because it leads to more people finding it and therefore connecting to the writer, journalist, musician, artist, etc. Search engines create connection.
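To make that concrete: the whole point of a search index is that every entry points back out to its source. A minimal, hypothetical sketch of such an inverted index (the URLs and page texts are made up, not from any real engine):

```python
# Minimal sketch of a search-engine-style inverted index: every term
# maps back to the URLs of the pages it came from, so every result
# points outward to the original maker's work.
from collections import defaultdict

pages = {
    "https://example.org/strawberries": "how to grow strawberries in pots",
    "https://example.org/tomatoes": "how to grow tomatoes from seed",
}

index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)  # the index stores sources, not answers

def search(query):
    """Return the source URLs matching every query term."""
    results = [index[t] for t in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("grow strawberries"))
# → {'https://example.org/strawberries'}
```

The output of a query is a set of links to visit, which is exactly the “connection” the paragraph above describes.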

AI scrapers do not guide people towards the original maker’s work. They extract it and reproduce it (often wrongly). “AIs” don’t point out to the web for you to find others’ work to relate to; they keep you in their loop and give you the answer, cutting off any connection to the original sources.

While the technology of scraping is the same, the purposes and material effects of those two systems are massively different. Again, Cory misrepresents the critique and tries to make it look unreasonable by reducing it to a conversation about tech, without considering how that technology affects the world and the people in it.
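Site owners see this difference too: many now distinguish crawlers by purpose in their robots.txt. A hypothetical sketch (the user-agent tokens are the real, published ones; the policy shown is just an illustration, not any particular site’s):

```
# Same scraping technology, treated differently by purpose.
User-agent: Googlebot   # search crawler: indexes and links back
Allow: /

User-agent: GPTBot      # OpenAI's training crawler: extracts
Disallow: /
```

That sites increasingly write rules like this is itself evidence that “it’s just a search engine” does not describe how the people being scraped experience it.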


I appreciate a lot of the work Cory Doctorow has done in the last decades. But the arguments he presents here to defend his usage of LLMs for this rather trivial task (which, TBH, could probably be done reasonably well with traditional means) are part of why the Internet – and therefore the world – looks like it does right now. It’s a set of arguments, rooted in libertarian and utilitarian thinking, that wants to delegitimize political and moral action.

Technologies are embedded not only in their deployment but also in their creation and conceptualization. They carry the understanding of the world that their makers believe in, and they reproduce it. A bit like an LLM reproduces the texts it learned from: it might not always be a 100% identical replica, but it’s structurally so similar that the differences are surface level.

In order to build an Internet, and a world, that is more inclusive, fairer, and freer, we need to move past the dogma of unchecked innovation and technology. We need to re-politicize our conversations about technologies, their effects, and their goals in order to build the structures (technological, political, social) we want. The structures that lead to a conviviality in harmony with the planet we all live on and will live on till the end of our days.

That path is paved with discussions about political and moral values. Discussions about whether certain technological artifacts align with those values or not.

I do agree with Cory that demanding perfect purity leads nowhere. We are imperfect people in an imperfect world. I just do not think that this means going full accelerationist. Just turning the “open source” dial up to 11 does not stop the apocalypse. It’s a lot harder than that.

tante (Berlin/Germany):
"Capable LLMs require a logic of dominance and of disregarding consent of the people producing the artifacts that are the raw material for the system. LLMs are based on extraction, exploitation and subjugation. Their politics is violence. How does one “liberate” that? What’s the case for open source violence?"