Tossed Salads And Scrumbled Eggs


With the decision to focus on our consultancy full-time in 2025, my time as an employee draws to a close. My attitude has become that of an entomologist observing a strange insect which can miraculously diagnose issues, but only if the diagnosis is "you weren't Agile enough". My rage has quickly morphed into relief because this is, broadly speaking, the competition.

'A beetle clock?' she said. She had turned away from the glass dome.

'Oh, er, yes... The Hershebian lawyer beetle has a very consistent daily routine,' said Jeremy. 'I, er, only keep it for, um, interest.'

While our team is now blessedly free of both the madness of corporate dysfunction and the grotesque world of VC-funded enterprise, we must still interface with such organizations, ideally in as pleasant a manner as is possible for both parties. But there is still one question from my soon-to-be old life that bears pondering, which I must understand if I am to overcome its terrible implications.

What the fuck is going on with all those dweebs talking about Scrum all day?

I.

I would rather cover myself in paper cuts and jump into a pool of lemon juice than attend one more standup where Amateur Alice and Blundering Bob pat each other on the back for doing absolutely fucking nothing... except using the learning stipend to get "Scrum Master Certified" on LinkedIn or whatever.
— A reader

You may be surprised to hear, given my previous writing, that I am more-or-less ambivalent about the specifics of Scrum. The inevitable protestations that "Scrum works well for my team" whenever it comes up are both tedious and very much beside the point.

The reason that these protestations are tedious is that, in the environment we find ourselves navigating, these are meaningless statements without context. Most people in management roles are executing the Agile vision in totally incompetent fashion, usually conflating Scrum with Agile. They also think that it's going just swimmingly, when a more accurate characterization would be drowningly. Given that you do not know who you are speaking to over the internet, whether they are competent engineers or self-proclaimed thought leaders, any statement about Agile "working for my team" does not convey much information, in the same way that someone proclaiming that they are totally not guilty is generally an insufficient defense in court.

The reason that they are beside the point is that the specifics of Scrum are much, much less interesting than what we can infer from the malformed version of the practice that we see spreading throughout the industry. I believe there are issues with Scrum, but those issues simply do not explain the breathtaking dysfunction in the industry writ large. Instead I believe that Scrum and the assorted mutations that it has acquired simply reflect a broader lack of understanding of the systems that drive knowledge work, and the industry has simply adopted the methodology that slots most neatly into our most widely-held misconceptions.

II. Oh Baby, I Hear The Blues A-Callin'

As you can see, I am not a data engineer like yourself, but we share the deep belief that Scrum is complete and utter bullcrap.
— A reader

When I first entered the industry, I was at a large institution that had recently decided to become more Agile.

It is worth taking the time to explain what this means for the non-technicians in the audience, both so that they can follow what is going on, and so that the technicians here can develop an appreciation for how fucking nuts this all sounds when you explain it to someone with some distance. Most of us are so deeply immersed in this lunacy that we no longer have full context on how bizarre this all is.

To begin with, "Agile" is a term for a massive industry around productivity related to software engineering. Astute adults, with no further context, will see the phrase "industry around productivity" and become appropriately alarmed. The industry is replete with Agile consultants, Agile coaches, Agile gurus, and Agile thought leaders. Note that these are all the same thing at different points on the narcissism axis. Agile is actually a philosophy with no concrete implementation details, so there are management methodologies that claim to be inspired by that broader philosophy. The most popular one is called Scrum.

As with any project management methodology, the dream goal with Scrum is for teams to work more quickly, to respond to changes in the business more rapidly, and to provide reliable estimates so that projects do not end up in dependency hell. This is typically accomplished through Jira. All-powerful Jira! All-knowing Jira! What is this miraculous Jira? It's a website that simulates a board of sticky notes!

[Image: a Jira board, a series of columns containing cards, with the leftmost column labelled "To Do" and the rightmost "Done"]

That's it, that's the whole thing. When something needs to be done, you put it in there and communicate on the card. Well, all right, that doesn't sound so bad yet.

In any case, this is paired with a meeting that runs every morning, called a Stand-Up. It is supposed to run for approximately ten minutes, as one would expect for a meeting that's going to happen every day. Instead, every team I've seen running Scrum has this meeting go on for an hour. Yikes, yes, a daily one-hour meeting. And since orthodoxy in the modern business world is that a "silo" is bad1, many people work on more than one team, so they attend two one-hour meetings per day. That is a full 25% of an organization's total attention dedicated to the same meeting every day.
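
For the non-technicians keeping score, the arithmetic really is that bad. Here's a back-of-the-envelope sketch, assuming an eight-hour day and the meeting lengths I keep observing (all numbers illustrative):

```python
# Back-of-the-envelope cost of Stand-Ups. All numbers are assumptions.
workday_hours = 8
meetings_per_day = 2        # one per team, for anyone split across two teams
meeting_length_hours = 1    # the observed length, not the promised ten minutes

overhead = meetings_per_day * meeting_length_hours / workday_hours
print(f"{overhead:.0%} of the day spent in the same meeting")  # 25%
```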

What on earth are you doing in daily one hour meetings?

Well, we discuss the cards.

Wait, I thought the whole point of Jira was so that all your notes are on the electronic cards?

You're asking too many questions, heretic. Guards, seize them!

Of course, while this is usually enough to provoke complete confusion when explained to people with enough distance from the field to retain their grasp on common sense, it gets worse. Prepare yourself for a brain-frying.

You typically don't just do work as it needs doing. In an effort to keep track of the team's commitments as time goes on, the team commits to Sprints, which basically means that you commit about two weeks worth of cards to the board, then only work on those cards on pain of haranguing. Sprints are usually arranged back-to-back with no breaks, and "sprinting" nonstop throughout the year is obviously a totally healthy choice of words which has definitely never driven anyone to burnout.

But to keep track of how much work is in each card, there is usually another meeting called Backlog Grooming, where the team sits around and estimates how much time each card is going to take. This is done by assigning Story Points to cards. What is a Story Point? Why, it's a number that is meant to represent complexity rather than time, because we know in the software world that time estimates are notoriously unreliable.

To make things even simpler, most teams actually still use them to mean time, enough so that there are all sorts of articles out there where people desperately try to explain to professionals in the industry that they shouldn't use the phrase "Story Points" incorrectly, even though knowing what one of the core phrases in the methodology means should be a given.

Okay, you're with me so far, right? Scrum is a project management methodology based on Agile, where you run daily Stand-Ups to reflect on how your Sprint is going, and the progress of your Sprint is measured by the number of Story Points you've completed in your cards, which may or may not be hours.

Fuck, wait, did I say cards? There are no cards, there are Stories and Tasks, and a long sequence of Stories and Tasks contributes to an Epic... wait, did I not explain what an Epic is? An Epic usually translates to some broader commitment to the business. Sorry, sorry, we'll try again.

So you do the Stand-Ups to evaluate how many Story Points you've completed in your Sprint — ah shit, wait, wait, I forgot. Okay, so the number of Story Points you've done in a Sprint is Velocity.

Yeah, right, so you want your Velocity to stay high.

So you run Backlog Grooming to produce Story Points for each of our Stories and Tasks, which are not time estimates except when they are, and then we try to estimate how many Story Points we can fit into a Sprint, which is explicitly a timespan of two weeks, again keeping in mind that Story Points are not time estimates, okay? If we do a good job, we'll have a high Velocity. And then we put that all into Jira, and you write down everything you're doing but then I also ask you about it every morning while I simultaneously try not to turn this into a "justify your last eight hours of work" ceremony.

Damn it, wait, wait, I forgot to tell you, these aren't meetings, okay? They're called Ceremonies, and the fact that I am demanding large swathes of people attend ceremonies against their will does not make this a cult.

Phew. Okay. Now, with all of that in mind, how many Story Points do you think it'll take you to update that API, Fred?

Four?

Fred, you dumb motherfucker, Story Points have to adhere to the Fibonacci sequence2, you stupid idiot. Four? You mud-soaked yokel. You've disrespected me, this team, and most of all, yourself. Christ, Fred, you're better than this. Fucking four? I wouldn't let that garbage-ass number near my business. I don't understand why you're struggling with th—

III. None Of These Words Are In The Bible

As someone whose part of their job was to write end-user documentation at REDACTED about these exact things, you have my wholehearted, eye-twitching encouragement from this section alone.
— A reader reluctantly working in Scrum advocacy

I'm going to stop there for a second.

You may understand now why, when first confronted by Jira and the Agile Cult, I elected not to read anything about it for about a year. It was, after all, an organization-wide transformation being pushed by many people with big ol' beards and very serious names like Deloitte in their work histories. Repeated references were made to the Agile "manifesto" by non-engineers, which caused me to avoid reading anything about it. It was a whole manifesto, for which the only frames of reference I have are voluminous works by Marx and Kaczynski which I haven't read either. Surely these people had been employed because they had some formidable skillset that I was missing.

Imagine my befuddlement when I realized that this is the entirety of the manifesto:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right,
we value the items on the left more.

Congratulations! You're all Agile certified! I'm so proud of all of us. This certificate is going right on the fridge, alongside the prettiest macaroni art.

A few things here are striking.

The first is that I don't see any Proper Nouns at all. So all that Scrum-Velocity-Story-Point-Epic-Fibonacci stuff is very much some sort of philosophy emerging from a weird nerd-aligned religious schism.

The second thing is that the text is actually reasonable, and able to provoke meaningful discourse. For example, should individuals be contrasted with processes? Is taking the individual into account necessarily at odds with process? In some ways yes, in some ways no, but the authors are merely stating a rough preference for when the two come into conflict. And the final sentence walks back the preceding four, so this is hardly the incendiary foundation for the monolith of bullshit I just described.

So where on earth is all this Scrum stuff coming from? There's nothing in the original manifesto that would suggest that this is sensible, nor is there anything that would even begin to send people down this strange pathway.

For all the flaws with Scrum, you will find no support for this stark madness anywhere within an authoritative source on the topic. Yes, it comes with a thousand confusing names and questionable value, but it doesn't actually suggest people dedicate up to half their time to meetings. In fact, Scrum is mostly embraced in a manner which implies some of its most fervent advocates have failed to spend even a few minutes reading about their primary job functions. How else can we explain the prevalence of Story Points as time estimates, and the one hour meetings every morning?

And this is exactly why I don't view Scrum itself as particularly problematic. The fundamental issue, the one that is only moved by small degrees by project management methodologies, is that many, many people simply have totally unsophisticated ideas around how knowledge work functions.

IV. Anxiety3 and Scrum Masters

Last week, someone was trying to bully me into estimating something. It took two hours of me saying "I can't estimate that, it has no end point, you don't understand what you're asking" for it to finally devolve into "Ok, you can't estimate it, I understand, but if you had to estimate it what would it be?"

I said fuck it, 8 hours for me to investigate and come back with an answer on how long the work would take... and they were happy with that. Nobody paused to consider it took 25% of my estimated time to have a meeting about why it was a dumb question.
— A reader

Work at large companies has a tendency to blow up, run far behind schedule, then ultimately limp past the finish line in a maimed state. One of my friends talks about how, when faced by his first failed project on a team, a management consultant responded to all critical self-reflection with "But you'd say that, overall, this was a success?" in a desperate bid to generate a misleading quote to put into a presentation to the board.

The core of this issue lies in the simple fact that time estimates will, with varying frequency based on domain and team skill, explode in spectacular fashion. We are not even talking about a task taking twice as long as initially estimated. I'm talking about missing deadlines by years. The software I mention in this blog post is now over ten years overdue. I am fairly certain that the majority of software projects collapse in this fashion, for reasons that would only fit into a post about Sturgeon's Law.

It is in the shadow of this threat that the Scrum Master lives. Yes, that's right, there are still exciting Important Words that we haven't introduced. The Scrum Master, who I usually call Scrum Lords because it's funnier, is some sort of weird role that's specialized entirely in managing the Jira board, providing Agile coaching, and generally doing ad hoc work for an engineering team. Keeping with the theme of Scrum being bad but people being even worse at implementing it as prescribed, they usually end up being the project manager as well. Atlassian's definition of a Scrum Master notes that one of their core roles is "Remove blockers: superhero time", which makes me want to passionately make out with a double-barreled shotgun. I can only assume that Scrum Masters feel deeply infantilized by this, and I am offended on their behalf.

[Image: remove_blockers.png, Atlassian's "Remove blockers: superhero time" graphic]

They are generally very sad and stressed out, while simultaneously pissing off everyone around them. I can just punch any software YouTuber's name into the search bar along with "Scrum Master" and be assured that I can find someone sneering. Putting that brief meanness aside, I am actually very sympathetic. They are, after all, people, and I take the bold stance that I'd prefer people be happy and self-actualized.

All of this, with the boards and the Stories and the Epics, they're all mechanisms for trying to construct some terrible fractal of estimation that will mystically transmute the act of software engineering into the act of bricklaying. And I'm guessing that bricklaying is also way more complicated than it looks, so this still wouldn't improve matters much even if it worked. This is further complicated by the fact that most Scrum Masters have either no understanding of the work under consideration, or have learned enough merely to be dangerous4. This puts them into an impossible position.

If companies are going to pay outsized compensation to perform a job that simply requires a degree and a willingness to endure tedium, I can hardly fault someone for taking that deal. Even the Atlassian definition of a Scrum Master notes that technical knowledge is "not mandatory", so who can blame them for not having technical knowledge? And once you're in that position, you have now become the shrieking avatar of the latent anxiety in the business. All projects are default dead barring exceptional talent, but this level of realism would fail to extract funding from the business, even if cool analysis reveals that the failure chance is still worth the risk.

The Scrum Master is thus reduced to a tragic figure. They worry about losing their overpaid role, are not developing skills that are easily packaged when pitching themselves to other businesses, and feel responsible for far too much inside a project. Yet they do not have the knowledge or the power to debug the machine that is the team, even if they are well-intentioned and otherwise talented.

Bad actors can more-or-less get away with saying anything to avoid doing work, because the truth is that only an engineer can tell when another engineer is making things up, which is precisely why we all live in fear of sketchy mechanics overcharging us for vehicle repairs. Even if someone is suspected of malingering, the Scrum Master is unable to initiate termination procedures, and will probably have to trust their gut to a degree that is unpalatable for most people if they want to escalate issues.

If the project is running late, they have no recourse other than to ask the engineers to re-prioritize work, then perform what I think of as "slow failure", which is normally the demesne of the project manager. When a project is failing, the typical step is not to pull the plug or take drastic action, it is to gradually announce a series of delays while everyone pretends not to notice the broader trend. By slowly failing, and at no point presenting anyone else in the business with a clear point where they should pull the plug, you can ultimately deliver nothing while tricking other people into implicitly accepting responsibility. The Scrum Master is generally not malicious, they are just failing to see the broader trend, and simply hoping for the sake of personal anxiety regulation that this task will indeed be accomplished by the next sprint.

When I run into someone in this position, I have very little trouble with my disdain when they're enjoying harassing everyone, but I mostly run into people who are actually struggling to be happy with 40 hours of their week. I know of Scrum Masters who have broken down crying when they hear that people are leaving teams — not due to a deep emotional connection with the person leaving, but because that anxiety is lying right below the surface, and almost any disruption can set it off. It is not unusual to hear people in this role flip between intensely rededicating themselves to "fixing the issues" and then despairing about their value to society, something that I personally went through on my first corporate team. It sucks.

I suspect that the impact of the organization manifesting their anxiety in one person in this way, then giving that person control of meetings and the ability to deliver that anxiety to their teams, is perhaps one of the most counter-productive configurations possible if you assume that the median Scrum Master is not a bastion of self-regulation. These people exist, but I wouldn't bet on being able to hire a half dozen of them at an affordable rate. For most of us, including me, attaining this level of equanimity is very much a lifelong work-in-progress. But even this is not a problem with Scrum, it's a much more serious problem — that organizations run default-dead projects and have cultures where people have to hide this while executives loot the treasury — that is simply made slightly worse by Scrum configurations.

V. The PowerPoint Is Not The Territory

Most engineers just go through the Scrum/Agile motions, finding clever ways to make that burndown chart progress at the right slant without questioning what they’re doing, and it’s nice to read someone articulating the negative thoughts that some of us have had for such a long time. Believe me when I say it’s been this way pretty much since the inception of this fad in the mid 90s.
— A reader

I have previously joked (I was actually dead serious) that the symbolic representation of the work, the card on the Jira board, is taken to be such a literal manifestation of the work that you can just move the pointless tasks to "Done" without actually doing anything and the business will largely not notice. If the card is in "Done", then the work is Done. The actual impact of the work is so low that no one notices, in a way that my electrician would never be able to get away with. Readers have written in to say that they have done exactly this, and nothing untoward has happened.

This conflation of management artefacts with the actual reality of the organization is widespread, and also not Scrum specific, but it is my contention that the production of these artefacts is core to these methodologies' appeal. A phrase I love is that "the map is not the territory", which more or less translates to the idea that maps merely contain abstract symbols of the territory they represent, and that while we may never have access to a perfect view of the whole territory, it is important to understand that we aren't looking at the real territory. That little doodle of a mountain is not what the mountain actually looks like. Despite the scribble of a sleeping dragon, Smaug may be awake when you get there.

The harsh truth is that, as with anything complicated enough in life, you cannot realistically de-risk it. We go through our days with complex five-year plans and have them utterly blown apart every year by Covid, assassinations, and coups; if you're super lucky, the best you can hope for is the dreadful experience of watching other people get cancer rather than yourself. Then, because this is terrifying, we immediately go back to pretending that the most important event in the next five years will also be predictable.

And the other thing that we do because risk is usually terrifying (it's actually quite fun when you learn to expose yourself to good rare events — say, writing in public), is we immediately cling to things that smooth this away. Software engineers do not like engaging with the business partially because they trend towards being nerds, but mostly because interfacing with true economic reality is confronting. And non-programmers seem like they're interfacing with the reality of the business, but frequently they are interfacing with Reality As PowerPoint, which is closer to the territory but still not the territory.

True reality is never accessible because no one has perfect information. We do not know whether our competitor's latest product is going to be far behind schedule or utterly obliterate us. We do not know if a pandemic is going to shut down the state for a year.

To make matters worse, reality that is accessible is usually not accessible from a high vantage point. From a bird's eye view, you have no way of knowing that 80% of a specific team's output is from Sarah, and Sarah's son just broke his arm playing soccer so that project is about to collapse as she scrambles to cope. This is totally visible to some people at the business, but is not going to be shared with the person making promises to the board. We could build a complex systems-thinking approach about our work, but that is very hard and will have obvious fuzziness.

Many of the mediocre executives I meet, particularly those I meet in the data governance space5, love their PowerPoints and Jira boards because while they are nonsense, they are nonsense that looks non-fuzzy and you will only have to deal with their inaccuracy once every few years, at which point so many people signed off on the clear-but-wrong vision of reality that it's hard to tell who is ultimately accountable for the failure. A more effective management methodology, one which accurately portrays the degree to which no one knows what is going on because life is chaotic, only makes sense for an entirely privately owned business where the owner needs to turn a profit rather than impress his employers or the markets.

This mode of non-fuzzy being is only available to those who are salaried to "run the business", which means that they are not accountable to the territory, much like a hedge fund manager who receives bonuses for good years and a simple firing (keeping their ill-gotten gains) during a bad year, allowing them to engage in strategies that have massively negative expected returns but only during rare events. This is in stark contrast to the reality of the bootstrapped business founder, such as the barber down the road, who will simply be on the hook for such losses. If you're looking for results rather than the appearance of precision, you want the symbols for your work to look as fuzzy as your actual uncertainty, rather than offering pseudoclarity. I want the rope bridges on my map to exist in a superposition of being intact and destroyed in the last big storm.

Of course, this is exactly what expensive PowerPoint reports and Jira provide. Pseudoclarity, for as long as you're willing to fork over enterprise license money. The version of reality where you can simply calculate how many Story Points you're completing per month, compare that to the number of Story Points in the project, then calculate that the project will be finished on time is very, very tempting, but the ability to do this is dictated by factors that are almost totally unrelated to Scrum itself. A team that can work this smoothly has probably already won, whatever you decide to do.
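
To see just how tempting it is, here is that calculation in full, as a sketch with invented numbers. Note how confident it looks, and how little it actually knows:

```python
# The seductive Velocity projection, reduced to its essence.
# Every number here is invented for illustration.
points_remaining = 260   # Story Points left on the board
velocity = 40            # points "completed" per two-week Sprint, on average

sprints_left = points_remaining / velocity
print(f"Done in {sprints_left:.1f} sprints, i.e. {sprints_left * 2:.0f} weeks")
# Precise-looking output, built entirely on whether the points mean anything.
```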

Some people have just lived with these symbols for so long that they think drawing a box on a PowerPoint slide that says "Secure Personally Identifiable Data" is the same thing as actually making that happen, as if one could conjure a forest into existence by drawing some trees.

VI. Dear God, The Meetings

Funny thing is that, every time my company tried to introduce OKRs and make it work, it was clear to me that nobody read the book nor understood the final goal. Like Agile or Scrum. People try to implement them as dogma, and they always backfire because of that. I guess it is always easier to be a cook and follow recipes than to be a chef and adjust based on current circumstances/ingredients.
— A reader

If there is a specific problem with Scrum, something that I genuinely think makes it stand out as uniquely bad rather than just reflecting baseline organizational pathology, meetings are it. People are not good at running meetings, and mandating that they hold more meetings does not merely reflect our weaknesses as a society, it greatly amplifies the consequences of that weakness.

So many meetings. So, so many meetings. Suffice it to say that anyone who runs a one-hour Stand-Up with any consistency should be immediately terminated if they are primarily Agile practitioners. There is very little to say here, save that people are so terrible at running meetings that, on average, the sanest thing to do for most businesses is pick a framework that minimizes the default number of them. I will appeal to authority here on the normalcy of meeting-running skill being low and simply quote Luke Kanies who has given meetings more thought than I have:

So a manager’s day is built around meetings, and there is a new crop of tools to help with them. What’s not to love?

Well. The tools are built by and for people who hate meetings, and often who aren’t very good at them. Instead, I want tools for people whose job is built around meetings, and who know they must be excellent at them.

In the absence of people who treat running meetings as seriously as we treat system design, try not to run many meetings. If this sounds unpalatable then get good, nerds. I'm turning the tattered remnants of my humility module off to say that the team at our consultancy runs a meeting at 8PM every Thursday, after most of the team has just worked their day job and struggled to send kids off to bed, and we actually look forward to it. This is attainable, though even then we constantly reflect on whether the meeting needs to keep existing before it wears out its welcome. I ask people very frequently, possibly too frequently, whether they're still having fun.

I currently believe that meeting-heavy methodologies are preferred because they feel like productivity if you aren't mindful enough to notice the difference between Talking About Things and Doing Things. Even some of the worst Agile consultants I know have periodically produced real output, but I suspect they can no longer differentiate between the things they do that have value and the things that do not.

VII. We Need To Talk About Jeff

A while ago, I wrote a short story about Scrum being a Lovecraftian plot designed to steal human souls. It ended with this quote:

In the future, historians may look back on human progress and draw a sharp line designating "before Scrum" and "after Scrum." Scrum is that ground-breaking. [...]

If you've ever been startled by how fast the world is changing, Scrum is one of the reasons why. Productivity gains of as much as 1200% have been recorded.

In this book you'll journey to Scrum's front lines where Jeff's system of deep accountability, team interaction, and constant iterative improvement is, among other feats, bringing the FBI into the 21st century, perfecting the design of an affordable 140 mile per hour/100 mile per gallon car, helping NPR report fast-moving action in the Middle East, changing the way pharmacists interact with patients, reducing poverty in the Third World, and even helping people plan their weddings and accomplish weekend chores.

This is so unhinged that readers thought this was something I made up. 1200% productivity improvements? You can use Scrum to report on wars and accomplish your weekend chores? This looks like I asked ChatGPT to produce erotica for terminally online LinkedIn power users.

I wish it was.

That's the blurb from one of Jeff Sutherland's books, one of the Main Agile Guys. The subtitle of the book is "doing twice the work in half the time", so this absolute weirdo is proposing that Scrum makes you four times faster than not doing Scrum. Jeff has also gone on record with amazing pearls of wisdom like this:

If Scrum team is ten times faster than a dysfunctional team and AI makes teams go four times faster, then when both use AI the Scrum team with still be be ten times faster than the dysfunctional AI team. Actually, what I am teaching today is not only for developers to use AI to generate 80% of the code and be five times faster. This will make each individual team member 5 times as productive and the whole team five times faster. But it you make the AI a developer on the team you will get a synergistic effect, potentially making the team 25 times faster.

Jeff, what the fuck are you saying? This is incomprehensible nonsense. You are throwing random numbers out faster than Ben Shapiro when he's flinging anti-trans rhetoric, a formidable accomplishment in and of itself, and they're all totally unsubstantiated. This is insane. This is demented. Scrum is ten times faster than a dysfunctional team? Are all non-Scrum teams dysfunctional? AI makes teams go four times faster? But you're teaching people to use AI to be five times faster? Then if you put AI on the team as a developer there's a synergistic effect and they're 25 times faster? Fucking what? What mystical AI do you have access to that can replace a developer on a team today and why aren't you worth a trillion dollars?

And if we throw Scrum into the mix for that sweet, sweet 10x speedup, can I get my team to be 250 times faster? Can our team of six people perform the work of, let me do some maths, 1500 normal developers?
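
Just to show my working, here is the Jeff maths as a sketch, using only the multipliers he supplies (the flaw is not in the arithmetic):

```python
# "Jeff maths": multiply every claimed speedup together and believe the result.
scrum_vs_dysfunctional = 10   # "ten times faster than a dysfunctional team"
ai_as_team_member = 25        # the "synergistic" AI-developer effect

combined_speedup = scrum_vs_dysfunctional * ai_as_team_member   # 250x
team_size = 6
equivalent_developers = team_size * combined_speedup            # 1500
print(f"{team_size} people doing the work of {equivalent_developers} developers")
```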

How have we defaulted to a methodology that has this raving fanatic at the helm?

VIII. Cognitive Biases and Doing Better

Contracting has exposed me to a variety of technical challenges and domains, ranging from work I remember with fondness and pride, to the kinds of unbearable, interminable corporate Scrum nightmares you describe so eloquently in your blog which seemed to be cooked up in a lab intent on undermining and punishing any sign of genuine ambition towards the improvement of human life.
— A reader

The reality is that teams are messy, filled with emotion, and that this is further compounded by the fact that our work requires a great deal of emotional well-being to deliver consistently. I once worked on the management team at a Southeast Asian startup, and while I was terribly depressed at that job, I was able to get my work done with some degree of consistency. Now that I am in IT, I basically cannot program when I am in a negative headspace because I cannot think clearly, and this dominates most of the productivity gains I see in a typical engineering team. Poor sleep, low psychological safety, and a thousand other little levers in the brain can disrupt functioning. There is no real shortcut for this.

With that said, I do have thoughts on how to do better, and you're going to get them no matter how annoying that is! Behold, the unbridled power of not relying on advertising revenue!

Names Matter And Simple Is Good

Let's start very, very simply.

Names matter.

Agile is popular because the word Agile has connotations of speed, and that is genuinely as sophisticated as many people are when designing their entire company's culture.

Sprints are popular because the word Sprint has connotations of speed. The fact that they are called Sprints has probably killed a few people, once you aggregate the harm of being told you are Sprinting every week across a few million anxious people. Don't give things idiotic names.

All methodologies should compete against a baseline of a bunch of sticky notes on a whiteboard, and you should question your soundness of mind every time you feel the need to introduce a Proper Noun, okay? Just have a big list of things to do, order in terms of what you want done first, and then do that. Just think about how much you'll save on onboarding and consultants for the exact same outcome in almost all cases. There are plenty of superior methods to this, but there are way more worse ones. If you have a meeting, just call it a meeting and prepare an agenda.

I swear to God, if you invent a Proper Noun and someone asks me to learn it for no reason, sweet merciful Jesus, I will find you and —

Cognitive Biases

Ahem.

A spectacular amount of the design that goes into these methodologies is based around avoiding cognitive biases around estimation, though they frequently fall short because there is no easy fix for a mind that craves only easy fixes. That one sentence describes 90% of dysfunction in all fields.

Consider the Fibonacci sequence restrictions, meaning that four Story Points can't exist (which is hilarious when adopted by teams using Story Points as time, because now four days can't exist). The generous reasoning behind this is that the variance in a highly-complex or time-intensive task is higher than that of a simple task, so it makes sense to force people into increasingly large numbers rather than stressing about a single point here or there. In reality, this is fucking silly and if someone suggested this ex nihilo, without the background of Scrum, we'd be absolutely baffled. But hey, an attempt was made.
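
For completeness, the mechanism itself is trivial. A minimal sketch of the bucketing logic, assuming the usual 1, 2, 3, 5, 8, 13 scale and a round-up rule (teams vary):

```python
# Snap a raw estimate onto the Fibonacci-ish Scrum scale by rounding up.
# The scale and the rounding rule are assumptions; every team does this differently.
SCALE = [1, 2, 3, 5, 8, 13, 21]

def to_story_points(raw_estimate: float) -> int:
    """Bigger tasks get coarser buckets; in-between values are forbidden."""
    return next(points for points in SCALE if points >= raw_estimate)

print(to_story_points(4))  # prints 5; sorry, Fred, four does not exist
```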

Most of the suggestions will be around different models for handling these biases. I'll indicate which of these I have actually tried. For the most part, I have found that the typical organization is completely unwilling to actually try these, and will only consider them when talking to me when I am presenting in my Consultant Mode, not my Employee Mode. Even though I am more deeply embedded in the workplace culture as an employee, most people can't stop seeing ICs as too low-status to take seriously. In these contexts, I've just gone rogue and experimented without management buy-in.

Bets and Sunk Costs

Basecamp has a free book out online that talks about a methodology they call ShapeUp. It's quite good despite my general disdain for business books. In it, they explicitly deal with the failure mode of tasks stretching well beyond their value to the business, existing in the perpetual zone of "almost done".

We combine this uninterrupted time with a tough but extremely powerful policy. Teams have to ship the work within the amount of time that we bet. If they don’t finish, by default the project doesn’t get an extension. We intentionally create a risk that the project—as pitched—won’t happen. This sounds severe but it’s extremely helpful for everyone involved.
First, it eliminates the risk of runaway projects. We defined our appetite at the start when the project was shaped and pitched. If the project was only worth six weeks, it would be foolish to spend two, three or ten times that.
[...]
Second, if a project doesn’t finish in the six weeks, it means we did something wrong in the shaping. Instead of investing more time in a bad approach, the circuit breaker pushes us to reframe the problem. We can use the shaping track on the next six weeks to come up with a new or better solution that avoids whatever rabbit hole we fell into on the first try. Then we’ll review the new pitch at the betting table to see if it really changes our odds of success before dedicating another six weeks to it.

All this does is turn off the sunk cost fallacy, forcibly. It's very smart.
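
If you prefer your policies as code, the circuit breaker reduces to something like this. A minimal sketch, with every name invented by me rather than taken from the book:

```python
# The Shape Up circuit breaker as a sketch. All names here are invented.
from dataclasses import dataclass

@dataclass
class Bet:
    project: str
    appetite_weeks: int   # how much time the work is worth, fixed up front
    shipped: bool = False

def end_of_cycle(bet: Bet) -> str:
    if bet.shipped:
        return f"{bet.project}: shipped within its appetite"
    # No extension by default: write off the sunk cost and send the
    # problem back to shaping before any new bet is made.
    return f"{bet.project}: not shipped, so reframe and re-pitch (no extension)"

print(end_of_cycle(Bet("billing-rewrite", appetite_weeks=6)))
```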

I ran this for a while with the other engineer mentioned in this blog post. Despite the broad horror of that story, it was the most productive work period of my life. It's also worth noting that the other engineer went on to become one of my co-founders, and that we both studied psychology together before getting into IT. A lot of our effectiveness came down to ruthless self-analysis and paying attention to failure modes.

Small Slippages Don't Exist

I like the advice given by P. Fagg, an experienced hardware engineer, "Take no small slips." That is, allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again. — Fred Brooks, The Mythical Man Month

This is the smartest and most practicable thing that I'm ever going to write on this blog. Unsubscribe after this because it's all downhill from here.

There is a rather strange phenomenon that arises around project lateness. When we estimate that something is going to be completed in a month, the natural temptation, when you are one day overdue, is to think that you are almost done, i.e., that it will be finished in one month and five days.

In reality, each day past the deadline increases the estimated deadline. Prepare yourself for one of my patented doodles, a thing which I am bullied for relentlessly at work, and enjoy the dreadful simulated experience of being one of my colleagues enduring an interminable lecture about some abstract concept that no one cares about.

[Image: a small chart showing a project that will probably finish between one and two months]

This is the distribution that people think they are sampling from. As you move past the one month mark, you are approaching the sad probability that it'll take two months, but the likelihood of that is astonishingly low. At each point, the odds are that it's the next sprint where you'll deliver.

The truth is that if you keep missing deadlines (or even miss one deadline), reality is gently, and eventually not-so-gently, informing you that you are not drawing from the distribution you thought you were. Instead, with each passing day, it is increasingly likely that you are drawing from some super cursed distribution that will ruin your project forever.

[Image: a chart showing two estimated distributions for the same project, one lasting two months at most, and the other taking over 18 months]

Each delay represents the accumulation of evidence that you are more likely to be drawing from the blue instead of the red, or something even worse than the blue. These days, when something important is late by one day, I immediately escalate to the highest alert level possible. This is unpalatable for political reasons, but it is the only appropriate response. It works.
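
If you want the statistical version of this argument, it's just Bayesian updating on survival. A toy model with invented priors and distributions, assuming only two candidates, the sane red one and the cursed blue one:

```python
# Toy Bayesian update: each overdue day is evidence for the cursed distribution.
# All priors and distribution parameters are invented for illustration.
from scipy import stats

red = stats.lognorm(s=0.3, scale=30)    # sane: finishes around a month (days)
blue = stats.lognorm(s=0.6, scale=300)  # cursed: finishes around ten months

p_cursed = 0.10   # prior belief that we're secretly in a cursed project

for day in [30, 35, 45, 60]:
    # Evidence so far: the project is still unfinished at `day`.
    like_red = red.sf(day)     # P(unfinished by now | red)
    like_blue = blue.sf(day)   # P(unfinished by now | blue)
    posterior = (p_cursed * like_blue) / (
        p_cursed * like_blue + (1 - p_cursed) * like_red
    )
    print(f"day {day}: P(cursed) = {posterior:.0%}")
```

Under these made-up numbers, P(cursed) climbs from roughly one-in-five at the deadline to near-certainty a month later, which is the whole point: each day of slippage is not noise, it is evidence.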

I've tried this out, and I have never regretted it. I also once warned an executive about a server that was two weeks late in being provisioned. I failed to adequately explain the idea because it was complicated, they were impatient, and I hadn't practiced the explanation enough times... and I think that it's too counter-intuitive for non-statisticians to actually act on. Doing unusual things is a genuinely hard skill, even if you absolutely believe that the unusual thing is better. That server was provisioned a year behind schedule. There are no small delays in my world, only early delivery and absolute catastrophes.

No Deadlines

Our consultancy doesn't do deadlines. This was a strange idea when I first came across it because it is so different from the corporate norm, but it's a much better model when you have trust with the parties involved. If you don't have trust, guess what, nothing else matters. We pair this with fixed price billing, but the core is that we try to only work projects where there's no real risk of a few weeks here or there affecting our client adversely. The fixed price billing means that we aren't rewarded for running late, and have a higher effective hourly rate if we deliver something the client is happy with in less time. It also means that clients don't feel bad when we do things like document comprehensively or improve test suites.

This is tricky, because it runs totally counter to how a large business operates, but there would also be very little to enjoy in starting a business only to do what everyone else is doing. There's common wisdom in the savvier parts of the IT world that you should limit the number of weird things you do with regard to your technology stack. Even that is actually an argument against bias (people drastically underestimate the difficulties of using weird technology and overestimate the value) — but that's also terrible advice if you actually know what you're doing. You want to maximize the amount of weird stuff you're doing across the business to generate asymmetry with your competitors, with the admittedly serious caveat that the pathway to this particular ancient ruin is littered with skulls. Pay attention to the skulls.

This isn't very useful for a larger business, as usually they have been architected to rely on conventional mechanisms, but it's worth thinking about the fact that this is possible. Authors like Jonathan Stark have it as part of their normal practice. You can choose to build a system that is not predicated on this idea of work flowing linearly through a system like a factory floor. In fact, one of the most famous books on IT operations is The Phoenix Project, but The Phoenix Project is self-admittedly just a reskinned version of The Goal, a totally different book that is explicitly about factory floors.

This is also totally beside the point, but the audiobook version of The Goal has a romantic subplot in a book about factory operations and the editors included smooth saxophone when the protagonist finally frees up enough time at work to attend to his ailing marriage, which caused me to exhale tea through my nose.

Finally, here is a boring disclaimer that some industries simply can't get away with experimenting along these dimensions. Microchip manufacturers need to deliver the product in time for the next iPhone to ship or Apple cancels the contract. C’est la vie.

Estimation Is Expensive

This is a point taken from a private conversation with Jesse Alford: tasks can obviously be estimated accurately, it's just expensive. I've done it before to a higher accuracy than I've seen on any Scrum team, with minimal practice, just by taking the time to have deep conversations with another engineer about the work. Unfortunately, it takes a non-trivial amount of engineering effort to do, and frequently has to be paired with actual work. Once again going to Basecamp, whose kool-aid I swear that I only drink sparingly even though it's delicious and refreshing, they have a specific chart on their platform called a hill chart. I love hill charts. They look like this:

[Image: a hill-shaped diagram, with tasks on the left climbing towards the crest and tasks on the right sliding down the hill]

Simply put, they reflect the reality that there is a phase of a project where scope increases as you run into new cases during implementation, and then a phase where you actually have a good idea of how long something is going to take. For example, if I was going to integrate an application with a third-party application, the period of horror where I learn about the Informatica API (a wretched abomination whose developers should be crucified) is the left side of the chart, as I learn things like "The security logs don't tell you if a login attempt was successful, just that someone clicked the login button". The right side of the chart, once I have painfully hiked up the hillside which is littered in caltrops, is where I say "This is still absolute torment, but I am now confident that there are only three days of pain left".
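
To make the left/right split concrete, a hill chart is really just tasks annotated with a position, where the crest marks the switch from discovering scope to executing on it. A sketch with invented tasks:

```python
# A hill chart as data: position runs 0-100, with 50 as the crest.
# Task names and positions are invented for illustration.
tasks = {
    "learn the Informatica API": 20,   # uphill: still finding new horrors
    "map the security log events": 55,
    "write the sync job": 85,          # downhill: three days of known pain
}

for name, position in tasks.items():
    phase = "uphill (figuring it out)" if position < 50 else "downhill (executing)"
    print(f"{name:<30} {phase}")
```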

People can have their gigantic Jira board, I guess, if they're willing to put that much time into something that isn't the work itself. And of course, that wouldn't be that great anyway, as it's possible to miss board deadlines, have to perform re-work, have teams you depend on screw up, lose people from the team or watch them become demotivated, etc. For the most part, businesses are best served by doing really, really simple things that have outsized value.

Even within the consultancy, the only things we've bothered setting internal deadlines for are those that we've been procrastinating on. Even our deadlines are deployed towards psychological ends. This will change over time, but it has been totally fine until now.

Autonomy Builds Morale

Nothing crushes software engineering productivity faster than low morale. Generally speaking, software engineers in the first world are well-paid enough, and their work is hard enough to measure, that you cannot intimidate them into working faster by standing behind them whilst demanding they flip burgers faster. Nor would I want to, on account of at least trying not to be a bastard.

Scrum teams, and any team that does not pay extremely close attention to the symbolic framework that the team is operating in, will collapse over the course of approximately a year. A core issue with Scrum specifically is that many of the ceremonies symbolize extant organizational issues much too clearly. One of the other important meetings, the Retrospective (I know, there's another word!) is where you sit down and evaluate how a sprint went, and what the team can change. I love the idea of the Retrospective. It is a fantastic idea that any team aspiring to greatness should adopt.

But if everyone in the Retrospective requires organizational permission to change things, or is on the generic team where changes are not addressed with violent purpose, the Retrospective becomes emblematic of everything that makes people feel disrespected. On my current conventional-employment team, only about 20% of the team continues to attend Retrospectives, which is the outcome I've gradually observed everywhere I've seen Scrum. Again, this isn't a problem with Scrum — it's that Retrospectives interact positively with good cultures and negatively with bad cultures, and since most cultures are bad, it follows that Scrum is actively harmful to deploy into a random environment. It's only appropriate to roll it out when the organization is ready to actually make changes, and the continuance of the process should be immediately interrogated, and possibly terminated, the moment an engineer reports that they feel it's a waste of time. You can bring it back after figuring out how it all went so wrong the first time.

IX. Scrumbled Eggs All Over My Face

There's probably some broad lesson in here about thoughtfulness, constantly evolving how you approach your craft and the structures that surround it, being suspicious of people selling 1200% improvements in productivity, and accepting that there's no substitute for reading and thinking deeply about your problems. There is no management methodology that will make up for having team members that proselytize full-time for a philosophy that is five lines long without having read those five lines6.

But that sounds really tiring! Someone has announced Agile 2.0, baby! Nothing can go wrong! Finally, all our problems are solved! Goodnight, everybody!


  1. This frequently boils down to professional responsibility cosplayers hoping that by repeating the phrase "break down silos" every day, they will be able to network every employee at the company into one gigantic Borg cube, somehow achieving all the upsides of specialization and none of the downsides. There is a nuanced middle-ground between these two points, and 90% of people are accordingly busy not acknowledging this. 

  2. I had to look this up despite seeing it in practice many times because it's so weird that I felt like I was just making things up. 

  3. The etymology of the word business reveals that it originally stemmed from something approximately meaning "anxiety". It is claimed that this sense of the word is now obsolete... but I think you and I both know that there's a sliver of it still in there. 

  4. This is a sexy phrase in business and software because people like the idea of being "dangerous" when they are in fact being extremely boring. News flash: things being dangerous is bad, you mad bastards. Programming is not skydiving and that is okay. Learn enough to be safe instead. Imagine learning enough surgery or cybersecurity to be dangerous then bragging about it. Even Michael Hartl's Learn Enough To Be Dangerous courses actually make you exceedingly safe. 

  5. This is where grifters are flocking to in my field, because it sounds responsible and the industry has decided it doesn't require technical ability. I attended a meeting a few weeks ago where someone had written a slide saying that "robotics" are part of our five-year plan. My brother in Christ, I'm a glorified database administrator, why do I need robots? 

  6. The wildest thing about all this is that I actually like the manifesto and I still think most of this is bullshit. I like the principles enough that our consultancy is currently leaning heavily towards adopting Extreme Programming practices. 


Reclaiming sovereignty in the digital age

The internet is at an inflection point. The platforms have cemented their power, generative AI and associated financial pressures are pushing companies to further degrade the online experience, and more than anything else, the notion that democratic governments should leave the internet alone is rapidly breaking down. Nothing shows that more than the recent arrest of Telegram CEO Pavel Durov in France and the suspension of Twitter/X in Brazil.

Make no mistake, governments’ stance on the internet has been changing for some time. Beyond the actions of French authorities and the Brazilian Supreme Court, Australia continues to try to craft a new framework for the internet that works for their society, Canada has advanced regulations of its own, with an Online Harms Act making its way through parliament, and the European Union arguably kicked off this whole movement in the first place. But as the United States hypocritically starts throwing up barriers of its own to try to protect Silicon Valley from Chinese competition, other countries see an opening to ensure what happens online better aligns with their domestic values, instead of those imposed from the United States.

This movement is more widespread than it might otherwise appear. Last month, the Global Digital Justice Forum, a group of civil society groups, published a letter about the ongoing negotiations over the United Nations’ Global Digital Compact. “It is eminently clear that the cyberlibertarian vision of yesteryears is at the root of the myriad problems confronting global digital governance today,” the group wrote. “Governments are needed in the digital space not only to tackle harm or abuse. They have a positive role to play in fulfilling a gamut of human rights for inclusive, equitable, and flourishing digital societies.” In most of the world, that isn’t a controversial statement, but it’s one that challenges the foundational ideas that emerged from the United States and have shaped the dominant approach to internet politics for several decades.

How internet politics were poisoned

In the 1990s, as the internet was being commercialized, cyberlibertarians grabbed the microphone and framed how many advocates would understand the online space for years to come. Despite the internet having been developed with military and government funds, cyberlibertarians treated the government as the enemy. “You are not welcome among us,” wrote Electronic Frontier Foundation (EFF) co-founder John Perry Barlow in his Declaration of the Independence of Cyberspace. “You have no sovereignty where we gather.” It was surely a welcome message to the global elite gathered at the World Economic Forum in 1996, where he published his manifesto. Governments, not corporations, were the great threat. That blind spot helped fuel the creation of the digital dystopia we now live in.

The cyberlibertarian approach that emerged out of the United States isn’t particularly surprising. The political dynamic in the United States has a stronger libertarian bent than in many other countries, especially the high-income Western countries it’s usually compared to. Digital politics in California had already integrated libertarianism and neoliberalism, so it wasn’t a big jump for it to define the approach to the internet. “The California Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism,” wrote Richard Barbrook and Andy Cameron in 1995. They described it as a “profoundly anti-statist dogma” that resulted from “the failure of renewal in the USA during the late ‘60s and early ‘70s.”

That perspective began being championed by publications like Wired and digital rights groups like the EFF, but the corporate sector along with Democratic and Republican politicians found a lot to like too. In the late 1980s, then-Senator Al Gore was laying out how he saw “high-performance computing” as a tool of American power on the world stage, while Newt Gingrich embraced the internet when he became Speaker of the House in 1995. Despite being positioned as an approach that prioritized internet users, cyberlibertarianism was very friendly to the corporate interests that wanted to control the internet and shape it to maximize their profits.


The digital rights movement’s focus on privacy and speech occasionally found it running up against the nascent internet companies, but more often they found themselves on the same side of the fight — whether it was against government regulation or traditional competitors that new internet companies wanted to usurp (and ultimately replace). Cyberlibertarians and the digital rights movement that grew out of it championed the notion that tech companies were exceptional; that traditional means of assessing communications and media technologies were no longer valid and that traditional rules couldn’t apply to these new companies. It was a gift to the rising internet companies; one that has created a lot of problems as we try to properly regulate them and rein them in today.

Traditionally, media and communications sectors were subject to strict rules, including expectations of a certain amount of domestic ownership, as the United States has with its broadcasters, or regulation of the type of programming or advertising that could be shown. In many countries, there was also public ownership as a bulwark against the private sector, such as with public broadcasters. The neoliberal turn had already started to change some of that, but with the internet it was all out the window; it had to be left to the private sector, with minimal regulation. If American tech companies that got a head start and easier access to capital dominated other countries’ internet sectors, their governments simply had to accept it, or else they would feel the combined pressure of company lobbyists, US diplomats, and digital rights groups that claimed such regulations were an inherent violation of internet freedom.

Probably one of the best examples of this dynamic is the copyright fight. For years, record labels, entertainment companies, and book publishers had pushed to lengthen copyright terms and worsen the deals given to artists. When file sharing came along, those companies worked with government to attempt a massive crackdown, but it wasn’t hard to turn public sentiment against them as it became incredibly easy to get access to far more music and media than people could ever have imagined. Anti-copyright campaigners sided with the tech companies that wanted to violate the copyrights held by those traditional firms instead of trying to find a middle ground, even defending Google when it began scanning millions of books as part of its Google Books project. At the time, it was very much a David vs Goliath situation, with the tech companies in the position of David. But that fight — and many others like it — helped enable the growth of tech companies into the monopolists they are today.

“Don’t be evil” has long been jettisoned at Google and beyond in favor of a “move fast and break things” approach. They want to increase their power and grow their wealth at any cost, and are driven to do so just like any other capitalist company. They are not unique in that, but for a long time their public relations teams successfully convinced people otherwise. Parts of the digital rights movement have evolved in recognition of that, paying closer attention to the economic and political power the companies wield, yet even then the focus too often narrows to competition policy. The organizations most responsible for this approach have never made amends for their role in helping empower tech companies to cause the many harms they do today. In some cases, they still go to bat for them.


Prominent digital rights groups defended the scam-laden crypto industry several years ago, even taking money from crypto and Web3 groups to fund their efforts, and now claim that when OpenAI, Google, or Meta steal any content they can get their hands on — from artists, writers, news organizations, or social media users (which is basically all of us) — those actions should be considered fair use. In short, some of the most powerful companies in the world should have no obligation to compensate or get permission from the people who made the posts or created the works because that would threaten the cyberlibertarian ideals they’ve built their worldview on.

Cyberlibertarianism helps Silicon Valley

As we move into a period where regulation of digital technology and actions against major tech companies are becoming the norm, disingenuous opposition from industry lobbyists and some digital rights activists alike has become all too common. With crypto, they often argued they weren’t supporting the scams but the idea of decentralization, even though, in practice, they were indeed defending a technology that was being commercialized as scam tech. A similar tactic is playing out with the defense of generative AI companies: advocates who argue that stealing everyone’s work should be considered fair use insist they’re not defending generative AI itself, but rather that if the mass theft these companies engage in isn’t defended, the entire practice of scraping would be in jeopardy.

Those arguments are intentionally overbroad and inherently deceptive. They make it seem like the entire foundation of the internet itself is at risk, playing on the libertarian reflexes within the tech community and taking advantage of the broader public’s lack of technical knowledge about how the internet works. But they’re exaggerations that serve the tech companies at the end of the day, and they have become commonplace in the opposition to efforts to rein in Silicon Valley’s power.

There are many examples of it. When Australia and Canada moved forward with legislation to force Google and Meta to bargain with news publishers so some of their enormous digital ad profits would go to local journalism, the cyberlibertarian response was to claim that the countries were implementing a “link tax” that would threaten one of the foundational aspects of the web: the hyperlinking between different web pages. Yet, while politicians and legislation often referred to the fact that the platforms do link to news articles, they never sought to actually put a price on links. In practice, the focus on links was rhetorical — a way to explain their plans to the public — with the ultimate goal being to force tech companies to sit down with news publishers and make a deal.

A similar process played out when Canada followed several European countries in regulating streaming platforms, something Australia is planning to do and the UK is investigating as well. The law forces foreign platforms like Netflix or Prime Video to commit to funding local content production and displaying a certain amount of Canadian content to users, as Canadian broadcasters have long been expected to do. Not only was this framed as a tax that would be passed on to consumers, but prominent digital rights campaigners picked up industry talking points claiming the legislation wouldn’t just apply to streaming companies, but to independent content creators on platforms like YouTube or TikTok — despite the government and the media regulator being clear that was not their plan. That fueled a deceptive news cycle and even got some online creators to publicly oppose the bill based on false information. As usual with cyberlibertarian approaches, the honest statements of government couldn’t be trusted.

The arrest of Durov and the suspension of Twitter/X also bring the issues of privacy and speech into the spotlight. For years, these have been the central focus of digital rights campaigns, yet framing the internet through those lenses leads to a specific understanding of the problem — one that positions the government as the central threat. That approach is based on an inherently American perspective, coming out of how the US First Amendment frames free speech, rather than the understanding of free expression in many other countries, which acknowledges the role of government in intervening against speech that threatens the broader society — which is exactly what the Brazilian Supreme Court is doing. There has been minimal outrage over the suspension of Twitter/X outside the right-wing echo chamber, which I would argue is a result of the hatred that’s developed for Elon Musk outside that circle. Under other circumstances, the banning of a social media platform seems like exactly the kind of case digital rights groups would jump on.


Telegram is another case entirely. For months before Durov was arrested, French authorities who specialize in investigating child abuse were collecting evidence of child predators using the platform to communicate with children, convince them to make explicit images of themselves, and brag about their abuse with other predators. Police tried to get Telegram to act, but it ignored the requests — to such a degree that until recently the company bragged on its website about not responding to authorities. Unsurprisingly, the police sought an arrest warrant for the chief executive, and when he landed in France, they arrested him.

While some commentators have tried to frame the arrest as a speech issue, many privacy advocates have tended to ignore the substance of the case to narrowly focus on the fact that two of Durov’s charges fall under an obscure 2004 French law that requires companies distributing encryption technology to declare it. It’s not hard to see why: the question of whether child predators and other criminals should be able to freely use these services is an uncomfortable one for them, because they explicitly argue in favor of it. The cyberlibertarian argument is that all communications must be encrypted to protect them from the governments they perceive as such a significant threat, and that means allowing the dregs of society to use them in criminal ways too; something the vast majority of the public would surely disagree with.

It’s an argument that once again treats digital technology and the internet as an exception where traditional norms cannot apply — particularly the fact that authorities have long been able to get warrants to search people’s mail, wiretap their phones, or obtain their text messages. That’s the trade-off we’ve collectively made, and one that the vast majority of people have never seen as a threat to their rights, freedoms, or liberty — because they’re not libertarians. The push for encryption also sets up an arms race, forcing authorities to seek out even more intrusive methods to identify criminals and collect necessary evidence, including procuring software that compromises devices themselves, similar to NSO Group’s Pegasus spyware. But once those tools exist, they can be obtained by many other groups that don’t have to follow the rules in democratic countries, and used against a much wider swath of people.

It’s also quite an ironic stance. The vast surveillance apparatus these campaigners decry is often not one owned and controlled by government. In fact, it was developed and rolled out by the private companies cyberlibertarians championed until very recently, and sometimes still find themselves defending. The internet has enabled the creation of the most intrusive and comprehensive global surveillance system in the history of humanity, as companies developed business models based on mass data collection to shape advertising and other means of targeting users. It’s an infrastructure that has increasingly moved into physical space as well, and one that everyone from hackers to intelligence agencies has been able to use to all manner of nefarious ends.

This internet, where corporate power was a lesser concern than government, was supposed to deliver “a civilization of the Mind in Cyberspace” that would turn out to be “more humane and fair than the world your governments have made before,” as Barlow put it in 1996. But that vision was compromised by its blind spots and exclusions — hindrances that are still central to how many people see the internet. Writing about Barlow in 2018, journalist April Glaser wondered what might have been if another approach had inspired the past two decades of internet politics. “I can’t help but ask what might have happened had the pioneers of the open web given us a different vision,” she wrote, “one that paired the insistence that we must defend cyberspace with a concern for justice, human rights, and open creativity, and not primarily personal liberty.” We’ll never know what could have been, but we can still jettison that perspective from our fights over the internet moving forward.

Embracing digital sovereignty

For a long time, it was hard to push back against an understanding of the internet framed through an individualist and anti-statist cyberlibertarian lens, even as a particular version of digital technology was being pushed on the world by a hegemonic United States to benefit its growing internet companies — and by extension its own global power. American politicians were not shy about that fact, but it largely escaped the digital rights movement — particularly its leading organizations in the United States — whose narrow obsession with privacy and an American interpretation of free speech also set the mold for how groups in other countries understood digital communication. But with US dominance no longer guaranteed and people around the world getting fed up with the abuses of major tech companies, there’s an opportunity to carve out a new approach to the internet.


Instead of solely fighting for digital rights, it’s time to expand that focus to digital sovereignty: one that considers not just privacy and speech, but the political economy of the internet and the right of people in different countries to carve out their own visions for digital futures that don’t align with the cyberlibertarian approach. When we look at the internet today, the primary threat we face comes from massive corporations and the billionaires that control them, and they can only be effectively challenged by wielding the power of government to push back on them. Ultimately, rights are about power, and ceding the power of the state to right-wing, anti-democratic forces is a recipe for disaster, not for the achievement of a libertarian digital utopia. We need to be on guard for when governments overstep, but the kneejerk opposition to internet regulation and the disingenuous criticism that comes from some digital rights groups do us no good.

The actions of France and Brazil do have implications for speech, particularly in the case of Twitter/X, but sometimes those restrictions are justified — whether it’s placing stricter rules on what content is allowable on social media platforms, limiting when platforms can knowingly ignore criminal activity, or even banning platforms outright for breaching a country’s local rules. We’re entering a period where internet restrictions can’t just be easily dismissed as abusive actions taken by authoritarian governments, but one where they’re implemented by democratic states with the support of voting publics that are fed up with the reality of what the internet has become. They have no time for cyberlibertarian fantasies.

Counter to the suggestions that come out of the United States, the Chinese model is not the only alternative to Silicon Valley’s continued dominance. There is an opportunity to chart a course that rejects both, along with the pressures for surveillance, profit, and control that drive their growth and expansion. Those geopolitical rivals are a threat to any alternative vision that rejects the existing neo-colonial model of digital technology in favor of one that gives countries authority over the digital domain and the ability for their citizens to consider what tech innovation for the public good could look like. Digital sovereignty will look quite different from the digital world we’ve come to expect, but if the internet has any hope for a future, it’s a path we must fight to be allowed to take.

tante
1 day ago
"Prominent digital rights groups defended the scam-laden crypto industry several years ago, even taking money from crypto and Web3 groups to fund their efforts, and now claim that when OpenAI, Google, or Meta steal any content they can get their hands on — from artists, writers, news organizations, or social media users (which is basically all of us) — those actions should be considered fair use. In short, some of the most powerful companies in the world should have no obligation to compensate or get permission from the people who made the posts or created the works because that would threaten the cyberlibertarian ideals they’ve built their worldview on."
Berlin/Germany

Gimmicks of Future Past

The work of art in the age of the gimmick.
tante
2 days ago
"Far from liberating creativity from the strictures of conventional mediums, technology like AI is only serving to constrain the artist’s vision. There are exceptions—like Albert Oehlen and Richard Prince, both of whom subvert tech optimism by prankishly misusing their computer programs to create scrappy, consciously juvenile collages—but in general, neo-modernist painters are indeed not using their new tools; the new tools are using them."

(I love that I am not the only one hating Refik Anadol's empty works that in German I tend to call "Pixel wichsen" (Pixel jerkoff))
Berlin/Germany

Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model

(Image: an illustration of gears shaped like a brain. Credit: Andriy Onufriyenko via Getty Images)

OpenAI truly does not want you to know what its latest AI model is "thinking." Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe into how the model works.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.
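To make that hiding concrete, here is a minimal sketch of what the same interaction looks like from the API side. This is a sketch, not OpenAI's documented example: it assumes the `openai` Python package (v1+), an `OPENAI_API_KEY` in the environment, access to `o1-preview`, and that the usage object exposes reasoning-token counts as described for the o1 family at launch.

```python
# Minimal sketch: query o1-preview and observe that the raw chain of
# thought never appears in the response. Assumes the `openai` package
# (v1+) and an OPENAI_API_KEY environment variable; model access and
# the usage fields are assumptions based on the o1 launch docs.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

# Only the final, filtered answer comes back.
print(response.choices[0].message.content)

# The hidden reasoning surfaces solely as a token count: billed to
# you, but never shown.
details = response.usage.completion_tokens_details
print("hidden reasoning tokens:", details.reasoning_tokens)
```

Trying to coax the model into revealing what sits behind that token count is exactly the probing OpenAI is now punishing.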

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.


tante
2 days ago
OpenAI is threatening to ban everyone who tries to research what their new model is actually doing.

While OpenAI uses a lot of language about "safeguards", it's mostly about keeping the illusion intact that o1 is a big leap when in fact it is a marginal patch on what they have been doing for a long time. But they are looking for money right now and need to keep the hype alive.
Berlin/Germany

Paul Graham and the Cult of the Founder


Paul Graham has been bad for Silicon Valley.

Without Paul Graham, we would not have YCombinator. And YCombinator is, chiefly, the Cult of the Founder. Silicon Valley would be so much better off without it. The companies that came out of YCombinator would be better off if their leaders weren’t so convinced of their own moral superiority.

And this has been a long time coming. YCombinator’s malign influence can be traced back to its very first class.

The photo below is of YCombinator’s first cohort, in the summer of 2005. You can see a young, tall, lanky Alexis Ohanian in the back left row. Sam Altman stands in the front, arms crossed, full of unearned swagger. Paul Graham (the proto-techbro) is to Altman’s right, dressed in an outfit that screams “goddammit I reserved this tennis court for half an hour ago!”

To Altman’s left is Aaron Swartz. Aaron cofounded Reddit, but left the company when it was sold to Conde Nast. He couldn’t stand the YCombinator vibes. I first met Aaron a couple years later, after he cofounded the Progressive Change Campaign Committee. He would go on to cofound Demand Progress and successfully wage a major campaign against SOPA/PIPA, all while contributing to Creative Commons and RSS and blogging and making a dozen other types of good trouble.

It occurs to me that Aaron and Altman represent two archetypes of what Silicon Valley might value. Sam Altman embodied the ideals of the founder. He so impressed Paul Graham that, even though Altman’s company (Loopt) was a failure, Graham named him the next President of YCombinator. Say what you will about the guy, but he has a remarkable flair for failing upward.

Aaron, meanwhile, was a hacker in the classical sense of the word. He was intensely curious, brilliant, and generous. He was kind, yet uncompromising. He had a temper, but he pretty much only directed it toward idiots in positions of power. When he saw something wrong, he would build his own solution.

Aaron died in 2013. He took his own life, having been hounded by the Department of Justice for years over the crime of (literally) downloading too many articles from JSTOR. Upon his death, the entire internet mourned. Books have been written about him; documentaries have been produced. It felt back then as though there was this massive, Aaron-shaped hole. There kind of still is, even today.

Sam Altman and OpenAI have scraped practically the entire Internet. JSTOR, YouTube, Reddit… so long as the content is publicly accessible, OpenAI’s stance appears to be that copyright law is only for little people.

For this, Altman has been crowned the new boy-king of Silicon Valley. It strikes me that present-day Silicon Valley, thanks largely to the influence of networks like YCombinator, is almost entirely Altman wannabes. Altman is the template. It’s him and Peter Thiel and Elon Musk and Marc Andreessen and David droopy-dog Sacks. They have constructed an altar to founders and think disruption is inherently good because it enables such marvelous financial engineering. They don’t build shit, and they think the employees and managers who run their actual companies ought to show more deference.


I’ve been thinking about this lately because Paul Graham’s latest essay, “Founder Mode,” has been making the rounds. The essay is, on one level, an ode to micromanagement:

Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.

But more than that, it’s a paean to the ethereal qualities that elevate “founders” above the rest of us.

There are things founders can do that managers can't, and not doing them feels wrong to founders, because it is.

(…)

Founders feel like they're being gaslit from both sides — by the people telling them they have to run their companies like managers, and by the people working for them when they do. Usually when everyone around you disagrees with you, your default assumption should be that you're mistaken. But this is one of the rare exceptions. VCs who haven't been founders themselves don't know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world.

Graham comes to this realization by hanging out with his founder buddies. These are some of the richest men in the world! And sometimes, the people around them push back on their ideas! But those people aren’t founders. It just isn’t right for a founder to be questioned like that.

The single practical suggestion in the essay is that companies should follow in the footsteps of Steve Jobs (of course) and hold retreats of “the 100 most important people working at [the company],” regardless of job title. Graham insists this is unique to Jobs, that he has never heard of another company doing it. Dan Davies counters that this is, in fact, quite common, remarking:

“When I was at Credit Bloody Suisse, they used to have global offsites with 100 key employees from different levels (…) I don’t necessarily want to gainsay Paul Graham’s experience here, but if VCs are in the habit of imposing the kind of structures that he describes on their portfolio companies, then I think every business school prof in the world would agree with me that they are being dicks and should stop.”

The mood that animates Graham’s essay, though, is just sheer outrage that professional managers might constrain the vision and ambition of founders. In Graham’s rendering, the founders are akin to Steve Jobs, while the professional managers are like John Sculley. (Nevermind that young-Steve-Jobs was a horrendous manager — not just in the sense that he was a cruel boss, but also in the sense that his products didn’t hit their sales targets and the company bled money. The notion that young-Jobs failed, older-Jobs succeeded, and he maybe learned something in between contains too much nuance for the YCombinator-class.)

Notice that, in this rendering, the story of Apple becomes Jobs-vs-Sculley, rather than Jobs-vs-Wozniak. The original legend of Apple is that the company combined Jobs’s once-in-a-generation talent for envisioning the trajectory of consumer technologies with Steve Wozniak’s generational skill for building an ahead-of-its-time product. And then it cast aside Woz, because he got in the way.

The Silicon Valley of the 1980s, 90s, and even the 00s still culturally elevated hackers like Woz. The “founders” (entrepreneurs, really) didn’t understand the tech stack, but they knew how to bring a product to market. Steve Jobs couldn’t code for shit, and for much of its history, Silicon Valley revered Woz as much as it did Jobs.

Aaron was, in a sense, my generation’s equivalent of Woz. It isn’t a perfect analogy. But as archetypes go, it fits well enough. They don’t even try to produce Aarons anymore. Everyone is trying to be Sam frickin’ Altman now.


YCombinator was one of the major sources of that cultural change, because YCombinator proved so effective at perpetuating its own mythology. Paul Graham developed a successful formula: bring together the best young entrepreneurs with the best potential ideas. Tell them they are special. Give them advice, connect them to funders, do everything in your power to help them succeed. Most of the companies will fail, but you can trumpet the ones that succeed as proof of your special skill at identifying the character traits of true founders. (In practice, this is quite simple: Paul Graham and his successors — Sam Altman and Garry Tan — just look for people that remind them of themselves.)

Notice the self-reinforcing nature of this model. If you have a ton of resources, and you get to pick first, it’s a lot easier to pick winners. Peter Thiel is a good example — much of Peter Thiel’s vast fortune comes from having been the first guy to invest in Mark Zuckerberg. Good for him, but he was also basically the first guy given the opportunity to invest in Mark Zuckerberg. Declaring that this wealth is due to a unique capacity to identify the special qualities of founders is a bit like saying the San Antonio Spurs are a uniquely well-run basketball franchise because they drafted Victor Wembanyama. We can recognize their good fortune without constructing a whole mythology around “Spurs mode.”

And YCombinator has indeed spawned many successful companies. It counts the founders of Reddit, AirBnB, Doordash, Dropbox, Coinbase, Stripe, and Twitch among its alumni. But less clear is how these companies would have fared in the absence of YCombinator. Did Paul Graham impart genuinely original knowledge to them, or just fete them with stories about what special boys they all were, while opening the doors to copious amounts of seed funding?

The Cult of the Founder says that founders are all Steve Jobses. They are unique visionaries. They can make mistakes, but their mistakes are of the got-too-far-ahead-of-society variation. Non-founders just cannot understand them. Other techies can’t either. The most talented hackers in the world are really just employees, after all.

We could dismiss the Cult of the Founder if not for the tremendous capital that it has accumulated. The Cult of the Founder only really matters because of the gravitational force of money. To the Founder-class — Graham, Andreessen, Musk, et al — this is proof of their special brilliance. They’re rich and you’re poor, and the difference is their special skills and knowledge. But of course it’s more complicated than that. They have all that money because we created institutional rules that they learned to exploit. They get to invest early in promising ventures and cash out huge returns before the company ever has to think about generating a profit. They’ve found a dozen different ways to never pay taxes on those windfall profits. And existing regulations are for other people.

This is all of a piece with Andreessen’s techno-optimist manifesto and Balaji Srinivasan’s batshit bitcoin declarations. A small, cloistered elite of not-especially-bright billionaires have decided that they are very, very special, and that the problem with society these days is that people keep treating them like everyone else.

The tech industry was never perfect. It never lived up to its lofty ambitions. But it has gotten demonstrably worse. And I think the fork-in-the-road moment was when the industry stopped trying to celebrate old-school hackers like Aaron Swartz and started working full-time to build monuments to Sam Altman instead.

Paul Graham did that. More than anything, Graham’s cultural influence has been elevating and exalting “founders” as unique and special boys. And the broader tech industry is worse off as a result.


tante
3 days ago
"A small, cloistered elite of not-especially-bright billionaires have decided that they are very, very special, and that the problem with society these days is that people keep treating them like everyone else. "
Berlin/Germany

Video Game Developers Are Leaving The Industry And Doing Something, Anything Else


From line cooks to bike repairs, people who used to be video game developers are being forced to try something new

tante
7 days ago
This story about video game developers just illustrates how the short-term thinking in businesses based on pumping stock value (for example by firing a bunch of people) is cancerous: you lose your teams, you lose so many talented people and their years, sometimes decades, of experience. Sure, right now everyone hopes "AI" will fix things, but let's be grown up here: #AI can't really do any of that.
Berlin/Germany