Thinking loudly about networked beings. Commonist. Projektionsfläche (projection surface). License: CC-BY

Reclaiming sovereignty in the digital age


The internet is at an inflection point. The platforms have cemented their power, generative AI and associated financial pressures are pushing companies to further degrade the online experience, and more than anything else, the notion that democratic governments should leave the internet alone is rapidly breaking down. Nothing shows that more than the recent arrest of Telegram CEO Pavel Durov in France and the suspension of Twitter/X in Brazil.

Make no mistake, governments’ stance on the internet has been changing for some time. Beyond the actions of French authorities and the Brazilian Supreme Court, Australia continues to try to craft a new framework for the internet that works for their society, Canada has advanced regulations of its own, with an Online Harms Act making its way through parliament, and the European Union arguably kicked off this whole movement in the first place. But as the United States hypocritically starts throwing up barriers of its own to try to protect Silicon Valley from Chinese competition, other countries see an opening to ensure what happens online better aligns with their domestic values, instead of those imposed from the United States.

This movement is more widespread than it might otherwise appear. Last month, the Global Digital Justice Forum, a group of civil society groups, published a letter about the ongoing negotiations over the United Nations’ Global Digital Compact. “It is eminently clear that the cyberlibertarian vision of yesteryears is at the root of the myriad problems confronting global digital governance today,” the group wrote. “Governments are needed in the digital space not only to tackle harm or abuse. They have a positive role to play in fulfilling a gamut of human rights for inclusive, equitable, and flourishing digital societies.” In most of the world, that isn’t a controversial statement, but it’s one that challenges the foundational ideas that emerged from the United States and have shaped the dominant approach to internet politics for several decades.

How internet politics were poisoned

In the 1990s, as the internet was being commercialized, cyberlibertarians grabbed the microphone and framed how many advocates would understand the online space for years to come. Despite the internet having been developed with military and government funds, cyberlibertarians treated the government as the enemy. “You are not welcome among us,” wrote Electronic Frontier Foundation (EFF) co-founder John Perry Barlow in his Declaration of the Independence of Cyberspace. “You have no sovereignty where we gather.” It was surely a welcome message to the global elite gathered at the World Economic Forum in 1996, where he published his manifesto. Governments, not corporations, were the great threat. That blind spot helped fuel the creation of the digital dystopia we now live in.

The cyberlibertarian approach that emerged out of the United States isn’t particularly surprising. The political dynamic in the United States has a stronger libertarian bent than in many other countries, especially the high-income Western countries it’s usually compared to. Digital politics in California had already integrated libertarianism and neoliberalism, so it wasn’t a big jump for it to define the approach to the internet. “The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism,” wrote Richard Barbrook and Andy Cameron in 1995. They described it as a “profoundly anti-statist dogma” that resulted from “the failure of renewal in the USA during the late ‘60s and early ‘70s.”

That perspective began being championed by publications like Wired and digital rights groups like the EFF, but the corporate sector along with Democratic and Republican politicians found a lot to like too. In the late 1980s, then-Senator Al Gore was laying out how he saw “high-performance computing” as a tool of American power on the world stage, while Newt Gingrich embraced the internet when he became Speaker of the House in 1995. Despite being positioned as an approach that prioritized internet users, cyberlibertarianism was very friendly to the corporate interests that wanted to control the internet and shape it to maximize their profits.

The TikTok ban is all about preserving US power
The platform isn’t a national security threat, but a challenge to Silicon Valley’s dominance

The digital rights movement’s focus on privacy and speech occasionally put it at odds with the nascent internet companies, but more often the two found themselves on the same side of the fight — whether it was against government regulation or traditional competitors that new internet companies wanted to usurp (and ultimately replace). Cyberlibertarians and the digital rights movement that grew out of it championed the notion that tech companies were exceptional: that traditional means of assessing communications and media technologies were no longer valid and that traditional rules couldn’t apply to these new companies. It was a gift to the rising internet companies, and one that has created a lot of problems as we try to properly regulate them and rein them in today.

Traditionally, media and communications sectors were subject to strict rules, including expectations of a certain amount of domestic ownership like the United States has with its broadcasters or regulation of the type of programming or advertising that could be shown. In many countries, there was also public ownership as a bulwark against the private sector, such as with public broadcasters. The neoliberal turn had already started to change some of that, but with the internet it was all out the window; it had to be left to the private sector, with minimal regulation. If American tech companies that got a head start and easier access to capital dominated other countries’ internet sectors, their governments simply had to accept it, or else they would feel the combined pressure of company lobbyists, US diplomats, and digital rights groups that claimed such regulations were an inherent violation of internet freedom.

Probably one of the best examples of this dynamic is the copyright fight. For years, record labels, entertainment companies, and book publishers had pushed to increase copyright terms and worsen the terms of the deals given to artists. When file sharing came along, those companies worked with governments to attempt a massive crackdown, but it wasn’t hard to turn public sentiment against them as it became incredibly easy to get access to far more music and media than people could ever imagine. Anti-copyright campaigners sided with the tech companies that wanted to violate the copyrights held by those traditional firms instead of trying to find a middle ground, even defending Google when it began scanning millions of books as part of its Google Books project. At the time, it was very much a David vs Goliath situation, with the tech companies in the position of David. But that fight — and many others like it — helped enable the growth of tech companies into the monopolists they are today.

“Don’t be evil” has long been jettisoned at Google and beyond in favor of a “move fast and break things” approach. They want to increase their power and grow their wealth at any cost, and are driven to do so just like any other capitalist company. They are not unique in that, but for a long time their public relations teams successfully convinced people otherwise. Parts of the digital rights movement have evolved in recognition of that, paying closer attention to the economic and political power the companies wield, yet even then it too often becomes narrowly focused on competition policy. The organizations most responsible for this approach have never made amends for their role in helping empower tech companies to cause the many harms they do today. In some cases, they still go to bat for them.

The Google monopoly ruling won’t save the internet
More competition won’t be enough to dismantle Silicon Valley’s power

Prominent digital rights groups defended the scam-laden crypto industry several years ago, even taking money from crypto and Web3 groups to fund their efforts, and now claim that when OpenAI, Google, or Meta steal any content they can get their hands on — from artists, writers, news organizations, or social media users (which is basically all of us) — those actions should be considered fair use. In short, some of the most powerful companies in the world should have no obligation to compensate or get permission from the people who made the posts or created the works because that would threaten the cyberlibertarian ideals they’ve built their worldview on.

Cyberlibertarianism helps Silicon Valley

As we move into a period where regulation of digital technology and actions against major tech companies are becoming the norm, disingenuous opposition from industry lobbyists and some digital rights activists alike has become all too common. With crypto, they often argued they weren’t supporting the scams but the idea of decentralization, even though, in practice, they were defending a technology that was being commercialized as scam tech. A similar tactic is playing out in the defense of generative AI companies: advocates arguing that stealing everyone’s work should be considered fair use say they’re not defending generative AI itself, but rather that if they don’t defend the mass theft these companies are engaging in, the entire practice of scraping would be in jeopardy.

Those arguments are intentionally overbroad and inherently deceptive. They make it seem like the entire foundation of the internet itself is at risk, playing on the libertarian reflexes within the tech community and taking advantage of the broader public’s lack of technical knowledge about how the internet works. But they’re exaggerations that serve the tech companies at the end of the day, and they have become commonplace in the opposition to efforts to rein in Silicon Valley’s power.

There are many examples of it. When Australia and Canada moved forward with legislation to force Google and Meta to bargain with news publishers so some of their enormous digital ad profits would go to local journalism, the cyberlibertarian response was to claim that the countries were implementing a “link tax” that would threaten one of the foundational aspects of the web: the hyperlinking between different web pages. Yet, while politicians and legislation often referred to the fact that the platforms do link to news articles, they never sought to actually put a price on links. In practice, the focus on links was rhetorical — a way to explain their plans to the public — with the ultimate goal being to force tech companies to sit down with news publishers and make a deal.

A similar process played out when Canada followed several European countries in regulating streaming platforms, something Australia is planning to do and the UK is investigating as well. The law forces foreign platforms like Netflix or Prime Video to commit to funding local content production and to displaying a certain amount of Canadian content to users, as Canadian broadcasters have long been expected to do. Not only was this framed as a tax that would be passed on to consumers, but prominent digital rights campaigners picked up industry talking points claiming the legislation wouldn’t just apply to streaming companies but also to independent content creators on platforms like YouTube or TikTok — despite the government and the media regulator being clear that was not their plan. That fueled a deceptive news cycle and even got some online creators to publicly oppose the bill based on false information. As usual with cyberlibertarian approaches, the honest statements of government couldn’t be trusted.

The arrest of Durov and suspension of Twitter/X also bring the issues of privacy and speech into the spotlight. For years, these have been the central focus of digital rights campaigns, yet framing the internet through those lenses leads to a specific understanding of the problem — one that positions the government as the central threat. That approach is based on an inherently American perspective, coming out of how the US First Amendment frames free speech, rather than the understanding of free expression in many other countries that acknowledges a role for government in intervening against speech that threatens the broader society — which is exactly what the Brazilian Supreme Court is doing. There has been minimal outrage over the suspension of Twitter/X outside the right-wing echo chamber, which I would argue is a result of the hatred that’s developed for Elon Musk outside that circle. In any other circumstance, the banning of a social media platform seems like exactly the kind of case digital rights groups would jump on.

Pavel Durov and Elon Musk are not free speech champions
The actions against Telegram and Twitter/X are about sovereignty, not speech

Telegram is another case entirely. For months before Durov was arrested, French authorities who specialized in investigating child abuse were collecting evidence of child predators using the platform to communicate with children, convincing them to make explicit images of themselves, and bragging about their abuse of children with other predators. Police tried to get Telegram to act, but it ignored the requests — to such a degree that the company until recently bragged on its website of not responding to authorities. Unsurprisingly, the police sought an arrest warrant for the chief executive and when he landed in France, they arrested him.

While some commentators have tried to frame the arrest as a speech issue, many privacy advocates have tended to ignore the substance of the case to narrowly focus on the fact that two of Durov’s charges fall under an obscure 2004 French law that requires companies distributing encryption technology to declare it. The reason is no surprise: debating whether child predators and other criminals should be able to freely use these services is uncomfortable for them, because they explicitly argue in favor of it. The cyberlibertarian argument is that all communications must be encrypted to protect them from the governments they perceive as such a significant threat, and that means allowing the dregs of society to use them in criminal ways too, something the vast majority of the public would surely disagree with.

It’s an argument that once again treats digital technology and the internet as an exception where traditional norms cannot apply — particularly the fact that authorities have long been able to get warrants to search people’s mail, wiretap their phones, or obtain their text messages. That’s the trade-off we’ve collectively made, and one that the vast majority of people have never seen as a threat to their rights, freedoms, or liberty — because they’re not libertarians. The push for encryption also sets up an arms race, forcing authorities to seek out even more intrusive methods to identify criminals and collect necessary evidence, including procuring software that compromises devices themselves, similar to NSO Group’s Pegasus spyware. But once those tools exist, they can be obtained by many other groups that don’t have to follow the rules in democratic countries and use them against a much wider swath of people.

It’s also quite an ironic stance. The vast surveillance apparatus these campaigners decry is often not one owned and controlled by government. In fact, it was developed and rolled out by the private companies cyberlibertarians championed up until very recently, and sometimes still find themselves defending. The internet has enabled the creation of the most intrusive and comprehensive global surveillance system in the history of humanity, as companies developed business models based on mass data collection to shape advertising and other means of targeting users. It’s an infrastructure that has increasingly moved into physical space as well, and one that everyone from hackers to intelligence agencies has been able to use for all manner of nefarious ends.

This internet, where corporate power was a lesser concern than government, was supposed to deliver “a civilization of the Mind in Cyberspace” that would turn out to be “more humane and fair than the world your governments have made before,” as Barlow put it in 1996. But that vision was compromised by its blind spots and exclusions — hindrances that are still central to how many people see the internet. Writing about Barlow in 2018, journalist April Glaser wondered what might have been if another approach had inspired the past two decades of internet politics. “I can’t help but ask what might have happened had the pioneers of the open web given us a different vision,” she wrote, “one that paired the insistence that we must defend cyberspace with a concern for justice, human rights, and open creativity, and not primarily personal liberty.” We’ll never know what could have been, but we can still jettison that perspective from our fights over the internet moving forward.

Embracing digital sovereignty

For a long time, it was hard to push back against an understanding of the internet framed through an individualist and anti-statist cyberlibertarian lens, even as a particular version of digital technology was being pushed on the world by a hegemonic United States to benefit its growing internet companies — and by extension its own global power. American politicians were not shy about that fact, but it largely escaped the digital rights movement — particularly its leading organizations in the United States — whose narrow obsession with privacy and an American interpretation of free speech also set the mold for how groups in other countries understood digital communication. But with US dominance no longer guaranteed and people around the world getting fed up with the abuses of major tech companies, there’s an opportunity to carve out a new approach to the internet.

Embrace the splinternet
We’re being told to pick between US and Chinese tech. What if we don’t choose either?

Instead of solely fighting for digital rights, it’s time to expand that focus to digital sovereignty that considers not just privacy and speech, but the political economy of the internet and the rights of people in different countries to carve out their own visions for their digital futures that don’t align with a cyberlibertarian approach. When we look at the internet today, the primary threat we face comes from massive corporations and the billionaires that control them, and they can only be effectively challenged by wielding the power of government to push back on them. Ultimately, rights are about power, and ceding the power of the state to right-wing, anti-democratic forces is a recipe for disaster, not for the achievement of a libertarian digital utopia. We need to be on guard for when governments overstep, but the kneejerk opposition to internet regulation and disingenuous criticism that comes from some digital rights groups do us no good.

The actions of France and Brazil do have implications for speech, particularly in the case of Twitter/X, but sometimes those restrictions are justified — whether it’s placing stricter rules on what content is allowable on social media platforms, limiting when platforms can knowingly ignore criminal activity, or even banning platforms outright for breaching a country’s local rules. We’re entering a period where internet restrictions can’t just be easily dismissed as abusive actions taken by authoritarian governments, but one where they’re implemented by democratic states with the support of voting publics that are fed up with the reality of what the internet has become. They have no time for cyberlibertarian fantasies.

Counter to the suggestions that come out of the United States, the Chinese model is not the only alternative to Silicon Valley’s continued dominance. There is an opportunity to chart a course that rejects both, along with the pressures for surveillance, profit, and control that drive their growth and expansion. Those geopolitical rivals are a threat to any alternative vision that rejects the existing neo-colonial model of digital technology in favor of one that gives countries authority over the digital domain and the ability for their citizens to consider what tech innovation for the public good could look like. Digital sovereignty will look quite different from the digital world we’ve come to expect, but if the internet has any hope for a future, it’s a path we must fight to be allowed to take.


Gimmicks of Future Past

The work of art in the age of the gimmick.
"Far from liberating creativity from the strictures of conventional mediums, technology like AI is only serving to constrain the artist’s vision. There are exceptions—like Albert Oehlen and Richard Prince, both of whom subvert tech optimism by prankishly misusing their computer programs to create scrappy, consciously juvenile collages—but in general, neo-modernist painters are indeed not using their new tools; the new tools are using them."

(I love that I am not the only one hating Refik Anadol's empty works that in German I tend to call "Pixel wichsen" (Pixel jerkoff))

Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model

[Image: an illustration of gears shaped like a brain (credit: Andriy Onufriyenko via Getty Images)]

OpenAI truly does not want you to know what its latest AI model is "thinking." Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe into how the model works.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, users have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.


OpenAI is threatening to ban everyone who tries to research what their new model is actually doing.

While OpenAI uses a lot of language about "safeguards" it's mostly about keeping the illusion intact that o1 is a big leap when in fact it is a marginal patch to what they have been doing for a long time now. But they are looking for money right now and need to keep the hype active.

Paul Graham and the Cult of the Founder

1 Comment

Paul Graham has been bad for Silicon Valley.

Without Paul Graham, we would not have YCombinator. And YCombinator is, chiefly, the Cult of the Founder. Silicon Valley would be so much better off without it. The companies that came out of YCombinator would be better off if their leaders weren’t so convinced of their own moral superiority.

And this has been a long time coming. YCombinator’s malign influence can be traced back to its very first class.

The photo below is of YCombinator’s first cohort, in 2005. You can see a young, tall, lanky Alexis Ohanian in the back left row. Sam Altman stands in the front, arms crossed, full of unearned swagger. Paul Graham (the proto-techbro) is to Altman’s right, dressed in an outfit that screams “goddammit I reserved this tennis court for half an hour ago!”

To Altman’s left is Aaron Swartz. Aaron cofounded Reddit, but left the company when it was sold to Condé Nast. He couldn’t stand the YCombinator vibes. I first met Aaron a couple of years later, after he cofounded the Progressive Change Campaign Committee. He would go on to cofound Demand Progress and successfully wage a major campaign against SOPA/PIPA, all while contributing to Creative Commons and RSS and blogging and making a dozen other types of good trouble.

It occurs to me that Aaron and Altman represent two archetypes of what Silicon Valley might value. Sam Altman embodied the ideals of the founder. He so impressed Paul Graham that, even though Altman’s company (Loopt) was a failure, Graham named him the next President of YCombinator. Say what you will about the guy, but he has a remarkable flair for failing upward.

Aaron, meanwhile, was a hacker in the classical sense of the word. He was intensely curious, brilliant, and generous. He was kind, yet uncompromising. He had a temper, but he pretty much only directed it toward idiots in positions of power. When he saw something wrong, he would build his own solution.

Aaron died in 2013. He took his own life, having been hounded by the Department of Justice for years over the crime of (literally) downloading too many articles from JSTOR. Upon his death, the entire internet mourned. Books have been written about him, documentaries have been produced. It felt back then as though there was this massive, Aaron-shaped hole. There kind of still is, even today.

Sam Altman and OpenAI have scraped practically the entire Internet. JSTOR, YouTube, Reddit… so long as the content is publicly accessible, OpenAI’s stance appears to be that copyright law is only for little people.

For this, Altman has been crowned the new boy-king of Silicon Valley. It strikes me that present-day Silicon Valley, thanks largely to the influence of networks like YCombinator, is almost entirely Altman wannabes. Altman is the template. It’s him and Peter Thiel and Elon Musk and Marc Andreessen and David droopy-dog Sacks. They have constructed an altar to founders and think disruption is inherently good because it enables such marvelous financial engineering. They don’t build shit, and they think the employees and managers who run their actual companies ought to show more deference.


I’ve been thinking about this lately because Paul Graham’s latest essay, “Founder Mode,” has been making the rounds. The essay is, on one level, an ode to micromanagement:

Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.

But more than that, it’s a paean to the ethereal qualities that elevate “founders” from the rest of us.

There are things founders can do that managers can't, and not doing them feels wrong to founders, because it is.

(…)

Founders feel like they're being gaslit from both sides — by the people telling them they have to run their companies like managers, and by the people working for them when they do. Usually when everyone around you disagrees with you, your default assumption should be that you're mistaken. But this is one of the rare exceptions. VCs who haven't been founders themselves don't know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world.

Graham comes to this realization by hanging out with his founder buddies. These are some of the richest men in the world! And sometimes, the people around them push back on their ideas! But those people aren’t founders. It just isn’t right, for a founder to be questioned like that.

The single practical suggestion in the essay is that companies should follow in the footsteps of Steve Jobs (of course) and hold retreats of “the 100 most important people working at [the company],” regardless of job title. Graham insists this is unique to Jobs, that he has never heard of another company doing it. Dan Davies counters that this is, in fact, quite common, remarking:

“When I was at Credit Bloody Suisse, they used to have global offsites with 100 key employees from different levels (…) I don’t necessarily want to gainsay Paul Graham’s experience here, but if VCs are in the habit of imposing the kind of structures that he describes on their portfolio companies, then I think every business school prof in the world would agree with me that they are being dicks and should stop.”

The mood that animates Graham’s essay, though, is just sheer outrage that professional managers might constrain the vision and ambition of founders. In Graham’s rendering, the founders are akin to Steve Jobs, while the professional managers are like John Sculley. (Nevermind that young-Steve-Jobs was a horrendous manager — not just in the sense that he was a cruel boss, but also in the sense that his products didn’t hit their sales targets and the company bled money. The notion that young-Jobs failed, older-Jobs succeeded, and he maybe learned something in between contains too much nuance for the YCombinator-class.)

Notice that, in this rendering, the story of Apple becomes Jobs-vs-Sculley, rather than Jobs-vs-Wozniak. The original legend of Apple is that the company combined Jobs’s once-in-a-generation talent for envisioning the trajectory of consumer technologies with Steve Wozniak’s generational skill for building an ahead-of-its-time product. And then it cast aside Woz, because he got in the way.

The Silicon Valley of the 1980s, 90s, and even the 00s still culturally elevated hackers like Woz. The “founders” (entrepreneurs, really) didn’t understand the tech stack, but they knew how to bring a product to market. Steve Jobs couldn’t code for shit, and for much of its history, Silicon Valley revered Woz as much as it did Jobs.

Aaron was, in a sense, my generation’s equivalent of Woz. It isn’t a perfect analogy. But as archetypes go, it fits well enough. They don’t even try to produce Aarons anymore. Everyone is trying to be Sam frickin’ Altman now.


YCombinator was one of the major sources of that cultural change, because YCombinator proved so effective at perpetuating its own mythology. Paul Graham developed a successful formula: bring together the best young entrepreneurs with the best potential ideas. Tell them they are special. Give them advice, connect them to funders, do everything in your power to help them succeed. Most of the companies will fail, but you can trumpet the ones that succeed as proof of your special skill at identifying the character traits of true founders. (In practice, this is quite simple: Paul Graham and his successors — Sam Altman and Garry Tan — just look for people that remind them of themselves.)

Notice the self-reinforcing nature of this model. If you have a ton of resources, and you get to pick first, it’s a lot easier to pick winners. Peter Thiel is a good example — much of Peter Thiel’s vast fortune comes from having been the first guy to invest in Mark Zuckerberg. Good for him, but he was also basically the first guy given the opportunity to invest in Mark Zuckerberg. Declaring that this wealth is due to a unique capacity to identify the special qualities of founders is a bit like saying the San Antonio Spurs are a uniquely well-run basketball franchise because they drafted Victor Wembanyama. We can recognize their good fortune without constructing a whole mythology around “Spurs mode.”

And YCombinator has indeed spawned many successful companies. It counts the founders of Reddit, Airbnb, DoorDash, Dropbox, Coinbase, Stripe, and Twitch among its alumni. But less clear is how these companies would have fared in the absence of YCombinator. Did Paul Graham impart genuinely original knowledge to them, or just fete them with stories about what special boys they all were, while opening the doors to copious amounts of seed funding?

The Cult of the Founder says that founders are all Steve Jobses. They are unique visionaries. They can make mistakes, but their mistakes are of the got-too-far-ahead-of-society variation. Non-founders just cannot understand them. Other techies can’t either. The most talented hackers in the world are really just employees, after all.

We could dismiss the Cult of the Founder if not for the tremendous capital that it has accumulated. The Cult of the Founder only really matters because of the gravitational force of money. To the Founder-class — Graham, Andreessen, Musk, et al — this is proof of their special brilliance. They’re rich and you’re poor, and the difference is their special skills and knowledge. But of course it’s more complicated than that. They have all that money because we created institutional rules that they learned to exploit. They get to invest early in promising ventures and cash out huge returns before the company ever has to think about generating a profit. They’ve found a dozen different ways to never pay taxes on those windfall profits. And existing regulations are for other people.

This is all of a piece with Andreessen’s techno-optimist manifesto and Balaji Srinivasan’s batshit bitcoin declarations. A small, cloistered elite of not-especially-bright billionaires have decided that they are very, very special, and that the problem with society these days is that people keep treating them like everyone else.

The tech industry was never perfect. It never lived up to its lofty ambitions. But it has gotten demonstrably worse. And I think the fork-in-the-road moment was when the industry stopped trying to celebrate old-school hackers like Aaron Swartz and started working full-time to build monuments to Sam Altman instead.

Paul Graham did that. More than anything, Graham’s cultural influence has been elevating and exalting “founders” as unique and special boys. And the broader tech industry is worse off as a result.


tante, 2 days ago (Berlin/Germany):
“A small, cloistered elite of not-especially-bright billionaires have decided that they are very, very special, and that the problem with society these days is that people keep treating them like everyone else.”

Video Game Developers Are Leaving The Industry And Doing Something, Anything Else


From line cooks to bike repair, people who used to be video game developers are being forced to try something new

The post Video Game Developers Are Leaving The Industry And Doing Something, Anything Else appeared first on Aftermath.



tante, 6 days ago (Berlin/Germany):
This story about video game developers just illustrates how the short-term thinking in businesses based on pumping stock value (for example by firing a bunch of people) is cancerous: you lose your teams, you lose so many talented people and their years, sometimes decades, of experience. Sure, right now everyone hopes “AI” will fix things, but let’s be grown up here: #AI can’t really do any of that.

On “AI” Art


Many of my readers have probably seen Ted Chiang’s recent essay on “Why A.I. Isn’t Going to Make Art“. If you have not read it yet, do; it is excellent.

And while I don’t agree with everything Chiang wrote I think he got the core right:

Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.

Now there have been some criticisms of Chiang’s argument – some better, some worse. Of course the “idea guys” who’d love it if having an idea meant that the thing appeared out of thin air (probably to sell it) feel offended, but they are not serious people.

A somewhat meta argument against Chiang is that he’s not the boss who gets to say what is art and what is not, and that he doesn’t get to delegitimize AI artists. Fair, but that also doesn’t address Chiang’s thinking in the least. It’s a strategy of avoidance. It works if you want it to, but it does not move the conversation forward an iota.

A better argument is that Chiang puts a lot of weight on “decisions” during the process that make up the work, while not giving people massaging their prompts credit for their decisions. At first glance this hits: AI systems generate very boring stuff, and it takes some trying and massaging of prompts to get things to look less obvious, less mediocre. Isn’t that a form of (curating) decision making?

Chiang paints things with a somewhat broad brush, which I get. He is writing against an onslaught of tech influencers and investors, and a mainstream built on the foundation of computationalism: if you want to stop an avalanche, nuance might not be your focus.

I do think that people can make art with AI. I have seen art made with AI. But more than 99% of things labeled “AI art” are nothing of the sort – regardless of how long someone retried to generate exactly the right image. So maybe it’s not just decisions? (Chiang’s attempt to quantify decisions is the weakest part of his argument, IMO.) I think that Chiang himself kinda touched the actual argument without getting it fully in focus.

In the quote I pulled from his essay at the beginning of this text he speaks about mediums. About art being a way to engage with a medium and the requirements and limitations it brings: if you want to carve a marble statue you need to learn how to do that, but you also need to understand the material you work with, its properties, what it allows you to do and what it hinders. A moving marble statue, for example, is kind of a problem if you want to stick to the traditional medium – just as a movie gives you all kinds of ways to show movement while restricting the viewer’s degrees of freedom (you cannot walk around a movie).

Art is about perspective. It’s a way for an artist to express their feelings, beliefs, lived experience, their perception of reality through a medium. The choice of medium gives artists a certain palette, a “toolbox” of sorts that comes with abilities and restrictions, with cultural traditions and learned ways to read them, etc. A choice of medium is not random, because it connects the work to other works and their traditions – even in opposition to them.

When people say “AI art” they often mean “producing an image using a statistical system”, claiming that the medium is the digital image. That is not true. The medium is “statistically generated images”, which is related but different, and has a different meaning.

It makes me think about a recent Mastodon post by Kieran Healy:

Framing “AI” generated images as images tries to give them a different context, a different level of appreciation. It tries to circumvent the fact that people start associating “AI” with slop, with something bad and worthless of low quality. With something nobody gave a shit to actually make.

I get that defense, but it is cheating. It’s trying to borrow a context with a higher reputation for one’s own work, and it is read as dishonest. Because it is.

Good “AI” art tries to dive into the qualities of the – somewhat new – medium. What properties does statistical generation have as a material of expression? Where do the properties of that material connect with our real-world experiences? What do they allow an artist to say in a way that other mediums cannot?

And that, I believe, is why there is so very little “AI” art. Because people using those systems use them as a shortcut to create digital images without working with the material. That’s why it feels meaningless. Because – a lot of the time – it kinda is.

On the other hand: What do I know, I am neither an artist nor a creative 😉

tante, 9 days ago (Berlin/Germany):
“Framing “AI” generated images as images tries to give them a different context, a different level of appreciation. It tries to circumvent the fact that people start associating “AI” with slop, with something bad and worthless of low quality. With something nobody gave a shit to actually make.”