Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY

But she never said “no”!


This strip was created with Becky Hawkins, who is also my collaborator on SuperButch! Becky drew this comic, while I wrote, lettered, and added the colors.

If you enjoy these cartoons, and can spare it, please support them on Patreon! A $1 pledge really matters to me.


The title image of this cartoon has the words “But she never said ‘NO’” in large white letters that fade into the background. Below the title is a drawing of telephone wires, with two birds sitting on a wire.
FIRST BIRD: Does this comic strip need a content warning?
SECOND BIRD: I think the title covers it.


A woman and a man are on a sofa. The man is leaning towards her, putting his lips near hers, while she pulls back and puts a protective hand, in a “stop” gesture, in front of her mouth.

WOMAN: I’m not sure I want to do this right now…
MAN (thought): That’s not literally saying “no.”

A closer shot of him from over the woman’s shoulder. He is smiling. She’s still holding up a “stop” hand. His thought balloon partly obscures her speech balloon, but not so much that we can’t read what she’s saying.

WOMAN: Hey c’mon, this isn’t a good idea.
MAN (thought): That’s not a literal “no.” So it’s okay to grab her boobs.

A closer shot of him leaning in to kiss her as she pulls away. She’s saying something, but we can’t read it because his thought balloon gets in the way.

MAN (thought): She still hasn’t literally said “no.” I’m good!

A close-up of his face. The woman’s not in the panel, but her word balloon – still mostly obscured by his head and his thought balloon – indicates that she’s positioned below him. He looks like he’s concentrating.

MAN (thought): Pulling away while I’m trying to pull her pants down isn’t literally saying “no.”

In silhouette, we see that she’s lying on her back, with him on top of her. She isn’t saying anything.

MAN (thought): Now she’s just being silent and unresponsive. No talking means she’s not saying “no!”

This is the final panel. The setting has changed; the man is now holding his arms up and looking frustrated. A few people in silhouette are looking at him; their posture makes it seem like they’re angry at him.

MAN: How was I supposed to know? I’m not a mind-reader!


A small panel below the bottom of the strip shows the man, now looking full of himself, talking to a different couple of people.

MAN: I do consider myself a feminist!


Faulty Logic


For the past few months, a single advertisement has been relentlessly popping up in my Twitter feed. “Tired of the internet shouting factory?” it asks. “Welcome to Kialo.” The name is Esperanto for “reason,” and the site is a collaborative debating platform where you can host or join discussions and contribute to arguments on both sides. The promise is of a certain kind of orderly hush, a philosophers’ glade where — through quiet, structured dialogue — initiates can cleanse themselves of intellectual impurities and dress their thoughts in the plainest, most honest garments. I decided to start a debate on a topic that had become a pressing concern in my world, and which I felt genuinely conflicted about: Should art made by artists accused of abuse be removed from cultural institutions? The site is built so that each argument branches into a tree, with each statement being broken down into further pro and con discussion. As the debate’s administrator, I had the primary responsibility for assessing where other contributors’ statements should fit, and for helping them break down their initial entries into concise propositions.
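
The tree-of-claims structure described above can be sketched in a few lines. The class and method names below are invented for illustration; they are not Kialo's actual data model or API:

```python
# A minimal sketch of a pro/con argument tree like the one described above.
# Class and method names are invented for illustration -- they are not
# Kialo's actual data model or API.

class Claim:
    """One concise statement; children are grouped into pros and cons."""

    def __init__(self, text):
        self.text = text
        self.pros = []  # claims supporting this one
        self.cons = []  # claims disputing this one

    def add_pro(self, text):
        child = Claim(text)
        self.pros.append(child)
        return child

    def add_con(self, text):
        child = Claim(text)
        self.cons.append(child)
        return child

    def size(self):
        """Total number of claims in this subtree, root included."""
        return 1 + sum(c.size() for c in self.pros + self.cons)


# The debate described in this essay, as a two-level tree:
thesis = Claim("Art made by artists accused of abuse should be "
               "removed from cultural institutions.")
pro = thesis.add_pro("Displaying the work can retraumatize victims.")
pro.add_con("Removal punishes audiences rather than the artist.")
```

The administrator's job, in these terms, was deciding where in such a tree each new entry belonged, and splitting compound entries into single claims.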

Within a few days of beginning the discussion, I noticed some dirty footprints starting to muddy up my glade. Even though everyone had the freedom to argue on both sides of the debate, vanishingly few of my fellow symposiasts were interested in building the argument for excluding the work of abusive artists. While the site’s users are anonymous, most of the usernames were male, and I was fielding a lot of entries like, “Are you suggesting abusers should be psychologically abused by being told what they produce is worthless?” The other noticeable, and related, problem was just how much trouble people had in following the structural rule meant to guide us to building coherent arguments: that each entry should be a concise claim. As a microphone at a Q&A after a film screening seems to have magical properties that make audience members forget what questions are, here was the opposite effect — would-be debaters seemed to forget what statements were.


Laying out coherent arguments is harder than it looks. In the course of a debate against Stephen Douglas in 1858 (the famous series of debates on slavery, for which the Lincoln-Douglas debate style was named), Lincoln accused his opponent of using “a specious and fantastic arrangement of words, by which a man can prove a horse-chestnut to be a chestnut horse.” As audience members, we are easily fooled by this kind of semantic juggling. When we try to formulate arguments of our own, we are likely to mix up the chestnut and the horse purely by accident.

In effect, it turned out that having a civilized, carefully managed, and logically coherent debate online did something I wouldn’t have expected — it made arguing boring and unsticky. I started neglecting my administrative duties and receiving notes from the site saying they understood that responding to suggested claims could be time-consuming, but that contributors “really make an effort” and it wasn’t nice to ignore them. Neither had the debate done much to advance my thinking on the topic I’d proposed. I started to wonder: Was logic an inappropriate tool with which to approach this question? When it was suggested that at issue might be how victims of abuse felt rather than what rights artists had, this avenue of discussion seemed to be a dead end — if this was about feelings, what was there to talk about?

Debate sites tend to advertise themselves as civilized upgrades to the fractiousness of online discourse. “Idiots argue. Intellectuals debate,” reads a banner on QallOut, where users can videoconference each other to debate topics like “The biggest problem facing the world is the federal reserve,” “The narrative of Christianity is unproven,” or “It is always wrong to deliberately kill a toddler.” These sites espouse the hope that online debate will demolish echo chambers, embolden truth-seekers, and shame the purveyors of bullshit, broadly defined. But the perceived value of debate online relies on a somewhat regressive notion: that logic has a purity that cuts across cultural and other identities. The fixation on logic as an ideal vehicle for human progress is less a reflection of the practicality of this means of resolving our shared issues than it is a longing for a moral framework beyond human perceptions.

Perhaps it’s that users aren’t being rigorous enough in their application of logic to contemporary questions. But more likely, the answer isn’t in a stricter adherence to the rules of formal debating, either in dedicated spaces or on social media. The utopic vision of human perfectibility through reason obscures what online spaces can actually offer: a broadening of our conception of what it is to be human.

Moderation of debate sites differs widely, as does the quality of discussion. I saw a debate on QallOut with the topic “A person’s clothing is not a cause of rape”; on another site I saw “Vote yes if you want to kill feminist as a sport,” and on a third, “The average Jew would kill you over a penny.” Of their position on hate speech, QallOut’s founders write, “If someone says something awful on QallOut, they need to step up and defend that” since “True hate speech can’t stand up to this kind of scrutiny, leaving the speaker looking foolish and discredited.” People talking through their disagreements one-on-one is seen as a grand project in which clashing viewpoints can be subdued by logical argumentation — and not just subdued, but actually resolved. Debate is presented as a good in and of itself, regardless of what exactly is being debated.

The Western consideration of rhetoric as an art begins with the Sophists, a philosophical movement that arose in Athens in the fifth century BCE. The Sophists believed there was no “truth,” only perception. Everyone lives inside their own all-enveloping universe in which the physical properties of reality, to say nothing of its moral qualities, are entirely individual and can in no way be measured against a common yardstick. “Man is the measure of all things,” Protagoras wrote. If Protagoras thinks the water at the gymnasium is cold and Hippocrates thinks it is warm, then it is cold for Protagoras and warm for Hippocrates.

In private spheres of our lives, this relativism is (relatively) easy to work around — Protagoras can choose not to swim. But in the political sphere, when we are required to act together, how can we bridge the distance between our separate realities? The Sophists say that the best we can do under the circumstances is concede to the orator who is able to convince the largest number of people. Divorced from any objectively true vision of reality, the art of persuasion is all there is. If the majority decided to put Socrates to death, then Socrates’ death was, for all intents and purposes, the right thing.

The problem of relativism is one of Western philosophy’s Weebles; it tends to be knocked down only to pop back up again. Is it possible for us to know the truth about anything, and if so, how would we achieve this knowledge — also, how would we know if we had achieved it? Aristotle believed the evidence of our senses could help us describe and classify what was true about an octopus; relativism, in which one opinion was as good as another, he easily dismissed. He also set out a system of syllogisms by which we could judge whether an argument was consistent. In his seminal work, Rhetoric, he introduced the rhetorical terminology of ethos, logos, and pathos — the personal trustworthiness of the speaker, the logical coherence of the speech, and the appeal to the audience’s sensibilities.


For Aristotle, rhetoric wasn’t simply batting arguments about without expecting to advance shared knowledge. This is where we get the idea of “sophistry” as a pejorative, meaning to disguise a bad argument as a good argument — systematizing logical deduction as a form of reasoning was meant to eliminate the possibility of being deceived by verbal tricks. Aristotle saw the possibility of misuse in laying out his theory of rhetorical tactics; in the wrong hands, persuasion could be used for ill. But he generally agreed with the site administrators of QallOut — that it would be easier to convince people of things that were just and good than of things that were not. So “the average Jew would kill you over a penny” should be easy to argue against, and your audience should find arguments against this thesis more persuasive.

Contemporary debate culture seems to be a cross-breed of Sophist and Aristotelian beliefs. Ethos, logos, and pathos, or related terms, sometimes appear on judges’ scoring sheets in contemporary high school or university debates — in Australia, debaters are judged on manner, matter, and method. Debates are a gamification of thinking in which the winner is the debater or debate team that manages to convince the judges — a good debater should be able to argue either side of the same question and win. This suggests that truth is relative and persuasion is all. However, debate is also lauded as a pro-social act, one in which people can improve their thinking and perhaps build greater consensus.

Today, the fundamental orientation of online debate culture is toward universals, which are more likely to spark a reaction. There is a heavy reliance on words like “always” and “never,” as well as a tendency towards extreme responses to perceived social ills: “That music glorifying violence against women should be banned,” “Schools should block YouTube,” “Affirmative action should be abolished.” It’s an indulgence in a fantasy of control — if I ran the world, I would make all prospective parents attend parenting classes, or abolish progressive taxation, or fund a space mission to Mars, and the rest of you, with your individual needs and experiences, would be subsumed under the wisdom of my one rule. The fact that the high schoolers in crookedly knotted ties or Redditors killing time are not in any position to see their proposals enacted differentiates this kind of academic debate from, for instance, parliamentary debate, in which there is a risk of actual consequences. Most of us engaging in academic debates have the luxury of taking ourselves very seriously, while also being protected from urgently needing to determine where truth or justice might lie.

Winning a debate is like winning a game of tennis in the sense that afterwards, tennis is essentially unchanged. You can’t solve tennis’ underlying tensions by playing it, and you do not lay a question to rest by debating it. The conventionally hopeful formulation that begins the exercise — “be it resolved” — is the first misdirection, as the chance of coming to a final answer, such that no one will ever need to discuss the question again, hovers around zero.

Sites that teach debating know this, offering students what would, in another context, seem like an invitation to plagiarism: lists of popular debate topics along with a rundown of common arguments on both sides, pithy quotes from experts, and summaries of the history behind the pro and con sides. In the same way that you might study the French Defence or Alekhine’s gun in chess, there are recognizable gambits that lead to well-worn counter-moves. The game is to trap your opponent in a logical corner, and the first to contradict themselves loses. It’s a game that teaches us to pit the white and black positions against each other; at the same time, a utopic hope persists that at the end of the game, black and white could find themselves on the same side — the side of truth. They would get there, presumably, by way of logic. Underlying much of the enthusiasm for debate is faith in a universal mode of reasoning which could not only cut through differences in experience and vantage point, but render them irrelevant — if everyone could get onside of logic, they would reach a consensus.

The ostensible divorce of reasoning from identity becomes a meta-argument for universal truths and solutions. It works to shore up the idea that a logical truth will stand on its own no matter who is delivering it. Some users defend logic as if it were a personal friend under attack: Reddit hosts a subreddit called “a place for bad logic,” where users post examples of logical gaffes they’ve spotted on other subreddits — it’s fashionable for Redditors to perceive themselves as lone philosophers in a sea of undeveloped minds. A subreddit for “open debate” starts with a question about where to find a debate about gun control in which people use “actual arguments” rather than acting “like five-year-olds.” On Twitter, a search for the hashtag #logic is full of posts that extol, with cult-like fervor, the power of “objectivity” and “intellectual honesty” rather than feelings or experiences as the true tools of cognition. These calls are used to elevate status by aligning oneself with the purity of reason, which, if only those with false beliefs would listen, would bring them into an apprehension of truth. Declaring oneself on the side of logic sets up an implicit divide between the rational self and the irrational others — it requires at least a notional opponent.

The spirit of sites like Kialo and QallOut is one of reformist zeal, like the temperance movement: where most online arguing is rude and undisciplined, and easily veers into abuse and hate speech, the sites offering debate rather than argument promise to advance the human race through etiquette and rigorous logic, which will eliminate wrong or harmful beliefs through informed dialogue. But logical argumentation rarely makes people change their minds; neither does exposure to facts. In a 2016 article, researchers at Cornell analyzed data from Reddit’s ChangeMyView community, where users propose a thesis and invite others to debate. While the results showed that some tactics are better than others — using different words from those used by the original poster in order to shift the frame of the discussion; using specific examples; using more tentative phrasing rather than speaking with a show of certainty — the instances of the original poster actually changing their view were discouragingly few.


Because Twitter is a public space, there is a perception that any statement made there should be open to challenge. Not being “open to debate” is an accusation that can exhaust members of marginalized groups, who are disproportionately called upon to defend statements about their experience. I live in Canada, where we are still struggling with the “truth” step in efforts to bring truth and reconciliation into relations between settlers and Indigenous peoples. Debates between Indigenous activists and settlers reluctant to revise the status quo haven’t felt like avenues to truth, because they tend to waste time on the premise that we live in a post-racial society — racism has been fixed, so boil water advisories on reserves, substandard housing and health care, crushing suicide rates, and an ongoing epidemic of apprehension of Indigenous children by the foster care system either aren’t real, or aren’t the consequence of racism. “Logic” is often invoked as an argument for discounting differences in experience in favor of an abstract notion of equality.

Just as anonymity allows users to try out opinions they’re not comfortable voicing in their offline worlds — ideas can be more extreme because repercussions for socially unacceptable opinions are limited — the invocation of open debate in service of truth lets users attempt to cover their prejudices and bad faith with a veneer of dispassion. Further, we know that logic has little to do with how people actually process information, especially when it comes to the kinds of beliefs we would describe as “debatable.” The topic I raised on Kialo, for instance — about whether the work of artists accused of abuse should be removed from cultural institutions — would have made much more sense with emotional context from the #MeToo movement.

Under the right circumstances, debate is fun. The very word calls up an image of a hazy dorm room at three in the morning, when every high undergraduate thinks they’re on the cusp of finally solving the big questions. It’s exciting to encounter ideas you’ve never heard before, and to imagine what it feels like to be from another family or another part of the country, where the things you take for granted seem outlandish. The quest for self-definition requires some trial and error, and other people can help us test our beliefs by pushing us to formalize the arguments that underwrite them. Or we might be persuaded into a new camp, adopting beliefs on topics we hadn’t even thought about before.

For all the talk of universality, it’s the poddish nature of these discussions that makes them feel vital — making some progress towards elucidating what we think, and therefore who we are, in small groups of people who can become our friends. Debate in this sense is about intimacy rather than persuasion, a demonstration of trust: It’s easy to take mutual respect for granted when there’s nothing to disagree about, but a genuinely respectful relationship can accommodate disagreement. By respecting one another as debate partners, we become colleagues and collaborators in the pursuit of truth. We also inflate each other’s egos by conferring the status of philosopher on one another; two 18-year-olds who’ve read a chapter apiece of Plato’s Republic can make each other feel like cutting-edge intellectuals.

Inhabiting the platforms we share online can feel like walking down a dorm-room hall — some people are debating, but others are working, playing, talking, or flirting, and the intimacy we feel can be more persuasive than argument. At its best, social media allows us to see what other people care about. The consensual eavesdropping that Twitter or Instagram allow isn’t about testing one’s beliefs through logic, but it can offer a window onto other people’s worlds. Watching the clash of opinions can be much less instructive than listening to people who share a similar worldview and set of experiences talk freely to each other.

In his 2010 book The Honor Code: How Moral Revolutions Happen, philosopher Kwame Anthony Appiah shows that arguments are not what change people’s minds on moral questions — honor is. The end of dueling, or Chinese foot-binding, or the Atlantic slave trade, did not come about, Appiah writes, because of new or more convincing arguments — the arguments against these practices had been in place, sometimes for centuries, before most people were turned against them. What changed was the “honor world” — the group of people who understand and acknowledge the same codes of behavior. Dueling was illegal before it came to seem dishonorable, in part because a newly created popular press brought the aristocracy’s honor code into discussion in lower class circles. This exposure to ridicule or mimicry put a new complexion on a practice that had persisted despite all logical argument against it.

If debate doesn’t actually change minds, the rhetorical power of social media networks may work best as a way to insist on a broadening of our honor worlds. In the behavior of social media users posting under their real names, identity — contrary to logic-proponents’ assumptions — may be among the strongest persuasive tools. If an honor world is about acknowledging the same codes of behavior, an expanding sense of one’s world can bring unquestioned values or practices into sharp relief. Most Canadians, for instance, would not consider it honorable to rob someone of their land or to break a treaty.

Debate, in its formal and informal manifestations, is generally conceived as a force for good — indeed, as one of the great hallmarks of civilization. This is partly because it is viewed as the alternative to physical violence as a way of solving disputes. But argument as an intellectual contest may also have the effect of favoring a contestant who does not necessarily have right on their side. Winning an argument may mean bringing forward a true and good thesis, but it may also mean persuading one’s judges of something untrue through force of personality or canny rhetorical stratagems. If a consensus view emerges, it may have everything to do with who is participating, who is judged trustworthy, and how much skin is in the game. But debate or physical violence aren’t the only options for finding a way to live together despite our differences, or for finding out who we are and what we believe.

It’s possible for digital interactions to enlarge our honor worlds by bringing us into closer contact with one another. As novels propose moral arguments through character development, digital spaces are best designed not for debating universals, but for developing our capacity to identify through difference. Interacting with a wide range of people with differing worldviews and experiences in digital spaces means more subconscious absorption of alternatives to the life we know. The idea of “debate” imposes an adversarial framework on online interactions, as well as privileging logic as a tool of discovery.

This essay is part of a collection of essays on the theme of DEBATE FETISH. Also from this week: Rob Horning on being always already convinced.


Die DSGVO-Todesliste


Why it matters: The GDPR is changing the internet.

Announcements by companies that they are shutting down products or services because of the soon-to-take-effect GDPR should always be taken with a grain of salt. Sometimes the GDPR is just the last nail in a coffin that was already standing by; sometimes the GDPR serves as one more reason to complete a strategic pivot that was already underway. (As is likely the case with the coming shutdown of Klout.) In every case, though, it can at least be said that the GDPR did not help. (How could it.)

Nonetheless, here is a first list of the first major GDPR casualties.

Kim Crawley on Peerlyst, “The GDPR death toll”: is a Czech company. The GDPR isn’t killing the company as a whole, but it is killing its social network for students. The company believes that the academic social platform with about 20,000 daily users would have to completely change how they do everything in order to comply with the GDPR. So it’s easier for to cease that particular service of theirs.

Super Monday Night Combat

Video game developer Uber Entertainment has maintained Super Monday Night Combat, a massively multiplayer online game, since 2012. It will soon be no more. Uber Entertainment has confirmed that redeveloping the game to comply with the GDPR would be a pragmatic nightmare.



Whois

Most of you should know what Whois is. It’s a service that enables you to look up which person or organization owns a domain name. Well, Whois, at least as we know it, will cease operations on May 25th due to the GDPR. DNS organization ICANN may have to come up with some alternative which complies with the GDPR. The Whois service has been in operation in some form or another since the early 1980s, and its standard is documented in RFC 3912.
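
The protocol behind it is tiny. Per RFC 3912, a client opens a TCP connection to port 43, sends the query text followed by CRLF, and reads until the server closes the connection. A minimal sketch using only Python's standard library (the default server below is IANA's public Whois server):

```python
import socket

def build_query(name):
    """Frame a Whois query as RFC 3912 requires: the query text plus CRLF."""
    return name.encode("ascii") + b"\r\n"

def whois(name, server="whois.iana.org", port=43, timeout=10):
    """Send one Whois query and return the raw response as text."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(build_query(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server signals completion by closing
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# whois("example.com")  -- requires network access
```

Note there is no authentication, consent, or access tiering anywhere in the protocol; that simplicity is exactly what makes the service hard to square with the GDPR.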

The biggest casualty will, of course, be the adtech industry. At the same time, outside that industry, hardly anyone will mourn the coming massacre. Doc Searls:

To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:

Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.


The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.

Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.

Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”

We are not talking about one part of the industry here, but about the entire adtech industry. Searls:

Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.

One possibility is 95% of the way to zero. That outcome is suggested by results published in PageFair last October by Dr. Johnny Ryan (@JohnnyRyan) there.

Yesterday I also pointed to Techdirt, which sees the GDPR bringing about a splintering of the internet.

Online publishers in particular seem to be moving in this direction. Here, Lucia Moses describes on Digiday how US publishers are considering simply blocking European IP addresses rather than adapting to the GDPR:

Some U.S. publishers with small European audiences and businesses are considering dealing with the issue by just blocking European IP addresses outright from accessing their sites. The thought: Why even bother with the risk of fines that could total 4 percent of global revenue.

Well+Good, a health news and lifestyle publication, plans to block IP addresses from Europe while it figures out what compliance means for the site, said Christina Roberts, vp of digital media operations there. […]

It’s an option Chris Tolles, CEO of entertainment news site Topix, is also considering. With a monthly reach of 22.3 million unique visitors, Topix gets more than 10 percent of its revenue from the U.K., revenue he doesn’t want to lose. But the rest of Europe isn’t necessarily worth the cost because high ad-blocking rates in countries like Germany make it hard to monetize those audiences. “It’s a rounding error for a lot of people,” he said. […]

Tom Sly, svp of digital revenue at local broadcaster E.W. Scripps Co., said Scripps has been working for the past eight months to get compliant with GDPR. But if Scripps doesn’t think it can get compliance across the board, it’ll block EU IP addresses.

I fear that blocking European IP addresses will become common practice, especially for younger services that don’t yet have an audience or user base in the EU. That is simply easier and more pragmatic than taking on the effort of adapting all internal and external processes to the GDPR.

That can, of course, also mean that in the medium term new internet services won’t find their way into the EU at all. Retrofitting compliance won’t get any cheaper. And Facebook and Google, for instance, can dig themselves further into the EU market in the meantime, making it less attractive for newcomers. (This medium- to long-term outlook depends heavily on how attractive the EU market is and remains.)


The Software Code of Ethics



Software might be eating the world and all the social and organizational structures we strange naked apes have spent centuries establishing, but recently it seems to have become a truism that “something” needs to change.

Complex software systems (usually referred to as “algorithms” or – when things get super buzzwordy – “artificial intelligence”) keep automating more and more decisions that either were never feasible, if even thinkable, or that used to be made by people or social structures. Which is scary, given that the people building these systems tend to come from a very few select slices of the population, making deeply political decisions without any political oversight, democratic legitimization, or – sadly, way too often – any regard for the social and political consequences of their actions.

That last aspect isn’t really the engineers’ fault. At least not given our framework for thinking about engineering and STEM in general: Engineers solve problems. By applying standardized and measurable methods and techniques of problem analysis, modeling and solving they create a solution to whatever gets thrown their way. Your toaster burns the toast? Engineers will add sensors and a control system to keep your toast perfect. Annoyed by your typos? Some software engineer will add spell-checking and maybe even some probabilistic autocorrect system that makes writing correctly so much simpler.
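
The autocorrect example shows that framing well: a fuzzy human problem (typos) reduced to a measurable model (edit distance). Here is a toy version, with a four-word dictionary made up for illustration:

```python
# A toy version of the spell-checking example above: suggest the dictionary
# word with the smallest edit (Levenshtein) distance to the typed word.
# The dictionary is made up for illustration.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def autocorrect(word, dictionary):
    """Return the closest dictionary word to the (possibly misspelled) input."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

dictionary = ["engineer", "problem", "solution", "toaster"]
```

The engineering move is in the reduction itself: "what did this person mean?" becomes "which string minimizes a number?" — and everything that doesn't fit the metric disappears from view.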

That mindset is what engineers are trained with. And not just in formal education: there are more “learn to code in your browser” platforms than I can count, and many of their users have a certain will to solve issues they personally have (at least those who finish these courses successfully). Because learning to code is really hard if you have no idea what to do with that new skill. Not impossible, but it’s a lot harder to push through all the frustrating syntax and semantics bullshit if you don’t really know what you are doing it for.

Engineering, and doubly so software engineering (or programming, software development, or whatever you want to call it), as we think about it (at least in the West) is about the belief that everything can be put into a model and that that model can be prodded and poked and tweaked to fix any issues that might emerge. Of course that can fail – not all issues have a solution – but the mindset is to iterate through models and established algorithms/methods/data structures/approaches until one “fits”. “Fitting” meaning that your choice allows you to “solve” the issue.

This sounds super abstract and that’s actually the point. You can only call something “engineering” (in contrast to “tinkering” or similar terms) if you develop and establish standardized processes, metrics and generic methods for solving problems. When building an algorithm to plan the most effective route from point A to point B it doesn’t matter – in the abstract – whether you’ll walk, ride a bike or take your car. Sure, certain roads might only work for certain modes of transportation but the algorithm itself, the approach is the same. The parameters, the configuration and some of the base data changes.
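A minimal sketch of that point: the same shortest-path routine serves walking, cycling, or driving, and only the data fed into it changes. The road segments and speeds here are made up for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra: the algorithm itself is mode-agnostic."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Reconstruct the path by walking back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# The "domain" only shows up in the data: hypothetical road segments
# (lengths in km) and travel speeds per mode (km/h).
segments = {("A", "B"): 2.0, ("B", "C"): 3.0, ("A", "C"): 6.0}
speeds = {"walk": 5.0, "bike": 15.0, "car": 50.0}

def build_graph(mode):
    graph = {}
    for (u, v), km in segments.items():
        hours = km / speeds[mode]
        graph.setdefault(u, {})[v] = hours
        graph.setdefault(v, {})[u] = hours
    return graph

for mode in speeds:
    path, hours = shortest_path(build_graph(mode), "A", "C")
    print(mode, path, round(hours * 60, 1), "minutes")
```

Swapping the mode swaps the edge weights, not the algorithm – which is exactly why the approach feels so universal from inside the engineering mindset.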

Of course this doesn’t mean that domain knowledge isn’t important. Knowing your domain means knowing which approaches traditionally work and which hard and soft constraints – especially the implicit ones – need to be taken care of. Knowing your domain means knowing that transporting people and transporting packages might look similar but needs very different solutions.

But domain knowledge can be learned (and unlearned) quickly. The way software engineering or computer science is usually taught is meant to give people the basic methods and data structures to quickly work in all kinds of different domains. For many that is actually one of the most fascinating things about programming: you can work on something completely different every few weeks. First a content management system, then maybe a shop system, then some logistics thing. The tech you use stays the same, the approaches often differ only slightly. The domain knowledge you need you inhale in a crash course or on the job. It really can be intoxicating: you have a toolbox of methods and tech that works regardless of what the problem is. Transportation? Check. Communication? Check. Democracy? Check.

I’ve stopped counting how many software systems tried to sell me on solving democracy (these days usually with a blockchain because we live in an absurdist play). With new, byzantine election systems, software systems to somehow make something transparent, software systems to distribute resources or a budget, software systems to fight fake news or solve the issue of so-called filter bubbles. There is a new one every time you sneeze (yes you, the one in the back. Your fault!). And they all start out with the best of intentions, with an idea of what needs fixing and how to improve democracy.

The problem is: Democracy isn’t a problem to solve. It’s a class of sets of values and practices. A way of doing things that can obviously be changed and even improved but that never has a solution. And that is the engineering fallacy: When everything is a problem, you keep looking for solutions. But some things don’t need solving.

And while there are many ideas of how to handle this structural problem (like adding some stuff to the curriculum at university) today I want to focus on the idea of the “Software Code of Ethics”.

The idea is simple: software creators, programmers, engineers have such an impact on each of our worlds, and we keep seeing them build stupid, pointless, dangerous, offensive, horrible things (and awesome things as well, but those don’t concern us here). So why not create a code of ethics that software developers live and work by? A set of rules and guidelines that helps software engineers get out of the abstract and into the concrete consequences of their actions? Something like the Hippocratic Oath that doctors take. Wouldn’t that help stop coders from building systems of oppression and discrimination?

I see why that idea is starting to get so much traction. It’s super simple to demand, you can always point to the Hippocratic Oath as something very similar that works reasonably well. Certain subcultures that are very engineering-like have similar rulesets as well (even though they often are not so much a Code of Ethics as an ethos). And something needs to be done, right?

I don’t see a code of ethics as a bad idea in general. Especially for people who are looking to do better, an established set of rules and guidelines can be helpful to inform one’s own practices and actions. But I feel like many people arguing for one overlook a set of key facts about the situation at hand.

The first big issue is the lack of gatekeepers. Which sounds weird, because the Internet hates gatekeepers and fewer gatekeepers is more good, right? Freedom!!111 But the Hippocratic Oath works because it’s really hard to become a doctor in your basement. You have to go through medical school (which filters out all kinds of people based on skill but also money and other criteria). To go into practice you often have to join, or get certified by, some sort of board or organization. That organization serves as the gatekeeper to the Oath: while breaking the Hippocratic Oath itself might not be breaking the law (depending on how it was broken), the organization “governing” doctors can use its power to enforce the code of ethics. If they kick you out, your fancy medical degree looks good but doesn’t let you practice medicine – at least not as easily.

Software engineers, programmers etc. do have organizations you can voluntarily join and that can propose these ethical codes or practices but you can just as easily write code without ever having heard of them: Did you know that the ACM – the Association for Computing Machinery – has an elaborate Code of Ethics? The ACM isn’t a small organization but still most people in tech don’t know that their Code of Ethics exists. And we’re not even talking about enforcing it in any way, shape or form.

Many of these organizations have a very hefty academic bias, meaning that self-taught programmers or people who learned on the job might not even have heard of them – couldn’t really have heard of them. Basically, you can write code that changes billions of lives without ever having left your basement or put on pants. This makes enforcing, or even just motivating, a certain code of ethics extremely challenging. Especially when we think of the job market. Let’s assume that there is a very good, well-defined code of ethics for software developers that would stop them from building the next Cambridge Analytica or whatever horrible software of the day you want to look at. Say a company wants to hire a developer, and one candidate follows that code while the other doesn’t. There really isn’t a lot of incentive for the company to hire the ethical coder (unless that is their own pet project/idea). So proposing a code is all fine and good, but I fear it might just reach too few people and have too few teeth.

The second problem is the code itself. When I got my degree in computer science every student was forced to take a course called “Informatik und Gesellschaft” (computer science and society). It was a crash course in data protection, discrimination through software, etc., and it had an exam you had to pass for your degree. But in spite of everybody going through it and learning about dual use of software and algorithms, I know of a few people I went to university with who now work for weapons manufacturers (of course just defensive weapons, because that is a super duper nice way of saying “I get a lot of money to write code to kill people”) or who have worked on very dubious data analytics projects. Because writing a code of ethics for software development that actually means something, that you can’t easily sidestep, is hard.

I already wrote about how abstract and often disconnected software development is. Especially when working in teams on larger software systems, the small subsystem one person or team works on might look very unsuspicious: you just write code to detect eyes in a picture. That has great applications for cameras, for example. Or if you want to aim for a headshot. The software component is very similar in both cases.

If you work for a weapons manufacturer you can assume that even your most innocent little tool will be used to harm others, that’s what you get paid for. But what if you just write face detection for some client you don’t know? It’s not always simple to really follow where some piece of code might land and which consequences it will have. Especially with software engineering trying to create flexible, abstract, generic solutions that can be applied in many different ways.

Sure, we can ask the person doing the integration to be ethical. Not the person writing the face detection but the person integrating it into a drone. But fewer and fewer software systems are this simple, and the people doing the integration of the obviously problematic things usually have gone through enough hierarchy and shaping in the organization that they do align with its values, making ethical dissent all the less likely. If they even see an ethical issue. Facebook sees connecting people as generally good, and everything that creates more connections or makes it easier to connect is good. That puts everything that makes connecting harder, that creates blocks or slows connecting down, in a bad light. And it’s not even like that is an unreasonable argument. It’s wrong, but you can make it without looking ridiculous. A code of ethics only “works” if the things it wants to discourage can easily be parsed as “bad”. Which is a rare thing in software. (Unless you build software for weapons. Seriously. You are a horrible human being.)

With the advent of automated statistics (which is real talk for most of what gets called Artificial Intelligence these days) the consequences of these systems, the way they will behave get harder and harder to predict, especially when they leave the developer sandbox. How long did it take Twitter’s users to turn Microsoft’s Twitter chatbot into a racist hate machine? Of course we could ask for ethical coders not to release any such learning software system but a code of ethics shouldn’t be based on a specific kind of approach, a specific tech.

Through compartmentalization and growing abstraction and complexity, the consequences of the development and deployment of software systems get increasingly harder to understand. And that’s without even thinking about the unforeseeable things unknown social systems and groups can do with any given set of tools.

And even with a lot of training do we really believe that software developers can just predict how a certain software package might change a social system or structure? Something that sociologists and psychologists spend their life’s work understanding? I find that hard to believe.

The problem with existing codes of ethics for tech is that they try to cover everything that tech does (again, for an example see the ACM Code of Ethics), which forces them to stay abstract. And the more abstract (and short) the code is, the harder it is to apply consistently and meaningfully.

Slapping a Code of Ethics on tech is a very techy solution. It feels like adding another software component to the pile of libraries and algorithms that make up your project: “We use Python, Django and a Code of Ethics”. But that solution is too simple.

Getting “tech” as a huge pile of different, slightly overlapping communities to subscribe to some kind of Code of Ethics will require more than just finding the right wording. It would require changing the whole modus operandi of tech, it would require significant changes to how teams developing tech operate. And to be honest, I don’t think tech itself is able to really handle it.

Predicting the consequences of the application of some technology isn’t a skill we can teach tech people in a few courses. Realistically, what can be done is to get tech people to develop a feeling for when they need to ask someone else. People who have experience with psychology, sociology, philosophy, etc. People who are actually affected negatively by certain technologies or their applications. It just continues tech hubris to assume that “all that tech consequence stuff” is just a small thing tech people will easily pick up.

The issues of inhumane tech won’t be solved within tech. They can only be solved through interdisciplinary approaches. By building teams of people with very different skills and giving those who don’t know how to write code the same importance that is often assigned to programmers. Look at Facebook: if non-programmers (or non-ad-business people) had anything to say there, Facebook would stop throwing the term community around while obviously not knowing what it means. A code of ethics wouldn’t change that.

I’d love to see better, simpler, shorter codes of ethics for people in tech. Codes that do communicate a certain world view, a set of basic principles as they’d apply to software systems. But even if it was possible to build them (and it’s not like people haven’t tried), I feel like demanding a code of ethics for software might be confusing a small mechanism to improve tech with a solution for a structural and political problem. And that problem sadly is a little too complex to be solved by adding a few nice words.

If you want to support me having time to write these things or if you just enjoyed this essay you can buy me a drink (Paypal link).

Photo by DannonL

The Hippocratic Oath is not a very useful model for ethics in tech. It ignores the structure of the tech culture as well as its lack of structures...

Desert Island Economics


"Liberty Island" feels a little too real for my tastes ...
Using a desert island to look at the practical results of liberty, and even a bit of communism.