
Paul Graham and the Cult of the Founder


Paul Graham has been bad for Silicon Valley.

Without Paul Graham, we would not have YCombinator. And YCombinator is, chiefly, the Cult of the Founder. Silicon Valley would be so much better off without it. The companies that came out of YCombinator would be better off if their leaders weren’t so convinced of their own moral superiority.

And this has been a long time coming. YCombinator’s malign influence can be traced back to its very first class.

The photo below is of YCombinator’s first cohort, in 2005. You can see a young, tall, lanky Alexis Ohanian in the back left row. Sam Altman stands in the front, arms crossed, full of unearned swagger. Paul Graham (the proto-techbro) is to Altman’s right, dressed in an outfit that screams “goddammit I reserved this tennis court for half an hour ago!”

To Altman’s left is Aaron Swartz. Aaron cofounded Reddit, but left the company when it was sold to Condé Nast. He couldn’t stand the YCombinator vibes. I first met Aaron a couple of years later, after he cofounded the Progressive Change Campaign Committee. He would go on to cofound Demand Progress and successfully wage a major campaign against SOPA/PIPA, all while contributing to Creative Commons and RSS and blogging and making a dozen other types of good trouble.

It occurs to me that Aaron and Altman represent two archetypes of what Silicon Valley might value. Sam Altman embodied the ideals of the founder. He so impressed Paul Graham that, even though Altman’s company (Loopt) was a failure, Graham named him the next President of YCombinator. Say what you will about the guy, but he has a remarkable flair for failing upward.

Aaron, meanwhile, was a hacker in the classical sense of the word. He was intensely curious, brilliant, and generous. He was kind, yet uncompromising. He had a temper, but he pretty much only directed it toward idiots in positions of power. When he saw something wrong, he would build his own solution.

Aaron died in 2013. He took his own life, having been hounded by the Department of Justice for years over the crime of (literally) downloading too many articles from JSTOR. Upon his death, the entire internet mourned. Books have been written about him, documentaries have been produced. It felt back then as though there was this massive, Aaron-shaped hole. There kind of still is, even today.

Sam Altman and OpenAI have scraped practically the entire Internet. JSTOR, YouTube, Reddit… so long as the content is publicly accessible, OpenAI’s stance appears to be that copyright law is only for little people.

For this, Altman has been crowned the new boy-king of Silicon Valley. It strikes me that present-day Silicon Valley, thanks largely to the influence of networks like YCombinator, is almost entirely Altman wannabes. Altman is the template. It’s him and Peter Thiel and Elon Musk and Marc Andreessen and David droopy-dog Sacks. They have constructed an altar to founders and think disruption is inherently good because it enables such marvelous financial engineering. They don’t build shit, and they think the employees and managers who run their actual companies ought to show more deference.


I’ve been thinking about this lately because Paul Graham’s latest essay, “Founder Mode,” has been making the rounds. The essay is, on one level, an ode to micromanagement:

Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.

But more than that, it’s a paean to the ethereal qualities that elevate “founders” above the rest of us.

There are things founders can do that managers can't, and not doing them feels wrong to founders, because it is.

(…)

Founders feel like they're being gaslit from both sides — by the people telling them they have to run their companies like managers, and by the people working for them when they do. Usually when everyone around you disagrees with you, your default assumption should be that you're mistaken. But this is one of the rare exceptions. VCs who haven't been founders themselves don't know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world.

Graham comes to this realization by hanging out with his founder buddies. These are some of the richest men in the world! And sometimes, the people around them push back on their ideas! But those people aren’t founders. It just isn’t right for a founder to be questioned like that.

The single practical suggestion in the essay is that companies should follow in the footsteps of Steve Jobs (of course) and hold retreats of “the 100 most important people working at [the company],” regardless of job title. Graham insists this is unique to Jobs, that he has never heard of another company doing it. Dan Davies counters that this is, in fact, quite common, remarking:

“When I was at Credit Bloody Suisse, they used to have global offsites with 100 key employees from different levels (…) I don’t necessarily want to gainsay Paul Graham’s experience here, but if VCs are in the habit of imposing the kind of structures that he describes on their portfolio companies, then I think every business school prof in the world would agree with me that they are being dicks and should stop.”

The mood that animates Graham’s essay, though, is just sheer outrage that professional managers might constrain the vision and ambition of founders. In Graham’s rendering, the founders are akin to Steve Jobs, while the professional managers are like John Sculley. (Never mind that young-Steve-Jobs was a horrendous manager — not just in the sense that he was a cruel boss, but also in the sense that his products didn’t hit their sales targets and the company bled money. The notion that young-Jobs failed, older-Jobs succeeded, and he maybe learned something in between contains too much nuance for the YCombinator-class.)

Notice that, in this rendering, the story of Apple becomes Jobs-vs-Sculley, rather than Jobs-vs-Wozniak. The original legend of Apple is that the company combined Jobs’s once-in-a-generation talent for envisioning the trajectory of consumer technologies with Steve Wozniak’s generational skill for building an ahead-of-its-time product. And then it cast aside Woz, because he got in the way.

The Silicon Valley of the 1980s, 90s, and even the 00s still culturally elevated hackers like Woz. The “founders” (entrepreneurs, really) didn’t understand the tech stack, but they knew how to bring a product to market. Steve Jobs couldn’t code for shit, and for much of its history, Silicon Valley revered Woz as much as it did Jobs.

Aaron was, in a sense, my generation’s equivalent of Woz. It isn’t a perfect analogy. But as archetypes go, it fits well enough. They don’t even try to produce Aarons anymore. Everyone is trying to be Sam frickin’ Altman now.


YCombinator was one of the major sources of that cultural change, because YCombinator proved so effective at perpetuating its own mythology. Paul Graham developed a successful formula: bring together the best young entrepreneurs with the best potential ideas. Tell them they are special. Give them advice, connect them to funders, do everything in your power to help them succeed. Most of the companies will fail, but you can trumpet the ones that succeed as proof of your special skill at identifying the character traits of true founders. (In practice, this is quite simple: Paul Graham and his successors — Sam Altman and Garry Tan — just look for people that remind them of themselves.)

Notice the self-reinforcing nature of this model. If you have a ton of resources, and you get to pick first, it’s a lot easier to pick winners. Peter Thiel is a good example — much of Peter Thiel’s vast fortune comes from having been the first guy to invest in Mark Zuckerberg. Good for him, but he was also basically the first guy given the opportunity to invest in Mark Zuckerberg. Declaring that this wealth is due to a unique capacity to identify the special qualities of founders is a bit like saying the San Antonio Spurs are a uniquely well-run basketball franchise because they drafted Victor Wembanyama. We can recognize their good fortune without constructing a whole mythology around “Spurs mode.”

And YCombinator has indeed spawned many successful companies. It counts the founders of Reddit, AirBnB, Doordash, Dropbox, Coinbase, Stripe, and Twitch among its alumni. But less clear is how these companies would have fared in the absence of YCombinator. Did Paul Graham impart genuinely original knowledge to them, or just fete them with stories about what special boys they all were, while opening the doors to copious amounts of seed funding?

The Cult of the Founder says that founders are all Steve Jobses. They are unique visionaries. They can make mistakes, but their mistakes are of the got-too-far-ahead-of-society variation. Non-founders just cannot understand them. Other techies can’t either. The most talented hackers in the world are really just employees, after all.

We could dismiss the Cult of the Founder if not for the tremendous capital that it has accumulated. The Cult of the Founder only really matters because of the gravitational force of money. To the Founder-class — Graham, Andreessen, Musk, et al — this is proof of their special brilliance. They’re rich and you’re poor, and the difference is their special skills and knowledge. But of course it’s more complicated than that. They have all that money because we created institutional rules that they learned to exploit. They get to invest early in promising ventures and cash out huge returns before the company ever has to think about generating a profit. They’ve found a dozen different ways to never pay taxes on those windfall profits. And existing regulations are for other people.

This is all of a piece with Andreessen’s techno-optimist manifesto and Balaji Srinivasan’s batshit bitcoin declarations. A small, cloistered elite of not-especially-bright billionaires have decided that they are very, very special, and that the problem with society these days is that people keep treating them like everyone else.

The tech industry was never perfect. It never lived up to its lofty ambitions. But it has gotten demonstrably worse. And I think the fork-in-the-road moment was when the industry stopped trying to celebrate old-school hackers like Aaron Swartz and started working full-time to build monuments to Sam Altman instead.

Paul Graham did that. More than anything, Graham’s cultural influence has been elevating and exalting “founders” as unique and special boys. And the broader tech industry is worse off as a result.


Video Game Developers Are Leaving The Industry And Doing Something, Anything Else


From working as line cooks to repairing bikes, people who used to be video game developers are being forced to try something new

The post Video Game Developers Are Leaving The Industry And Doing Something, Anything Else appeared first on Aftermath.



tante:
This story about video game developers just illustrates how short-term thinking in businesses based on pumping stock value (for example, by firing a bunch of people) is cancerous: you lose your teams, you lose so many talented people and their years, sometimes decades, of experience. Sure, right now everyone hopes “AI” will fix things, but let’s be grown up here: “AI” can’t really do any of that.

On “AI” Art


Many of my readers have probably seen Ted Chiang’s recent essay “Why A.I. Isn’t Going to Make Art”. If you have not, do so; it is excellent.

And while I don’t agree with everything Chiang wrote, I think he got the core right:

Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.

Now there have been some criticisms of Chiang’s argument – some better, some worse. Of course the “idea guys”, who’d love it if having an idea meant that the thing appeared out of thin air (probably to sell it), feel offended, but they are not serious people.

A somewhat meta argument against Chiang is that he’s not the boss who gets to say what is art and what is not, and that he doesn’t get to delegitimize AI artists. Fair, but that also doesn’t address Chiang’s thinking in the least. It’s a strategy of avoidance. Works if you want it to, but it does not move the conversation forward an iota.

A better argument is that Chiang puts a lot of weight on the “decisions” made during the process of creating the work, while not giving people massaging their prompts credit for their decisions. At first glance this hits: AI systems generate very boring stuff, and it takes some trying and massaging of prompts to get things to look less obvious, less mediocre. Isn’t that a form of (curating) decision making?

Chiang paints things with a somewhat broad brush. Which I get. He is writing against an onslaught of tech influencers and investors, and a mainstream based on the foundation of computationalism: if you want to stop an avalanche, nuance might not be your focus.

I do think that people can make art with AI. I have seen art made with AI. But more than 99% of things labeled “AI art” are nothing of the sort – regardless of how long someone retried to generate exactly the right image. So maybe it’s not just decisions? (Chiang’s attempt to quantify decisions is the weakest part of his argument, IMO.) I think that Chiang himself kinda touched the actual argument without getting it fully into focus.

In the quote I pulled from his essay at the beginning of this text, he speaks about mediums. About art being a way to engage with a medium and the requirements and limitations it brings: if you want to carve a marble statue, you need to learn how to do that, but you also need to understand the material you work with, its properties, what it allows you to do, and what it hinders. A moving marble statue, for example, is kind of a problem if you want to stick to the traditional medium. Just as a movie gives you all kinds of ways to show movement while restricting the viewer’s degrees of freedom (you cannot walk around a movie).

Art is about perspective. It’s a way for an artist to express their feelings, beliefs, lived experience, their perception of reality through a medium. The choice of medium gives artists a certain palette, a “toolbox” of sorts that comes with abilities and restrictions, with cultural traditions and learned ways to read them, etc. A choice of medium is not random, because it connects the work to other works and their traditions – even in opposition to them.

When people say “AI Art” they often mean “producing an image using a statistical system”, claiming that the medium is the digital image. Which is not true. The medium is “statistically generated images”, which is related but different, and has a different meaning.

It makes me think of a recent Mastodon post by Kieran Healy.

Framing “AI”-generated images as just images tries to give them a different context, a different level of appreciation. It tries to circumvent the fact that people are starting to associate “AI” with slop, with something bad, worthless, and of low quality. With something nobody gave a shit to actually make.

I get that defense but it is cheating. It’s trying to borrow a context with higher reputation for one’s own work and it is read as dishonest. Because it is.

Good “AI” Art tries to dive into the qualities of the – somewhat new – medium. What properties does statistical generation have as a material of expression? Where do the properties of that material connect with our real-world experiences? What do they allow an artist to say in a way that other mediums cannot?

And that, I believe, is why there is so very little “AI” Art: because people using those systems use them as a shortcut to create digital images without working with the material. That’s why it feels meaningless. Because – a lot of the time – it kinda is.

On the other hand: What do I know, I am neither an artist nor a creative 😉


Google’s GameNGen AI Doom video game generator: dissecting a rigged demo


Did you know you can play Doom on a diffusion model now? It’s true, Google just announced it! Just don’t read the paper too closely.

In their paper “Diffusion models are real-time game engines,” Google researchers try to make out that they’ve run the venerable 3D first-person shooter game Doom as a playable game on a diffusion model — the AI models that generate images and video. [arXiv]

Lots of people read the paper’s abstract and proclaimed that a whole playable video game could now be AI-simulated:

We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. 

But the researchers don’t show that at all. The paper’s claims fall apart under the slightest examination.

How long can the simulation keep a game running for a human player? How many choices did a human make in the gameplay videos? The paper says the simulation manages “long trajectories” — but none of the segments in the demo video are longer than a few seconds.

The video sure looks like someone playing Doom! But it’s glitchy, the numbers don’t work, and you’ll notice the player never looks behind them. This will be the best demo reel they managed to put together. [YouTube]

The researchers say that human evaluators could distinguish the simulation from the real game only 60% of the time. But the evaluators didn’t play the game — they were only shown random video clips of 1.6 seconds or 3.2 seconds next to clips of the original game. If you were trying to get a near-coinflip result, this is how you would get it.

The thing they’ve achieved seems to be:

  1. Imitate game video in a diffusion model, trained on repeated automatic gameplay.
  2. Humans can play the result for a few seconds.

This is arguably an interesting result worth writing up — a generated video that follows at least one user choice in real time.
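
For concreteness, here is a minimal sketch of the kind of action-conditioned generation loop the paper describes. This is not the paper’s code; the FramePredictor stand-in and its method are invented for illustration:

    # Hypothetical sketch of GameNGen-style frame generation, not the
    # paper's code. A diffusion model predicts each new frame from a
    # short window of previous frames and player actions.

    from collections import deque

    class FramePredictor:
        """Stand-in for a trained diffusion model that denoises the next
        frame, conditioned on recent frames and player actions."""

        def predict_next_frame(self, past_frames, past_actions):
            raise NotImplementedError("the trained model is the hard part")

    def play(model, get_player_action, first_frame, context_len=32, n_frames=300):
        frames = deque([first_frame], maxlen=context_len)
        actions = deque(maxlen=context_len)
        for _ in range(n_frames):
            actions.append(get_player_action())  # e.g. "turn_left", "shoot"
            # The model only ever sees pixels it previously produced plus
            # the action log: no map, no monster positions, no ammo state.
            frame = model.predict_next_frame(list(frames), list(actions))
            frames.append(frame)  # older frames fall out of the window
            yield frame

Note what never appears in that loop: any actual game state. Everything off-screen simply does not exist, and small errors compound frame by frame, which is consistent with the glitches, the broken numbers, and the player who never looks behind them.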

But the researchers then dive head-first into wild claims:

GameNGen answers one of the important questions on the road towards a new paradigm for game engines, one where games are automatically generated, similarly to how images and videos are generated by neural models in recent years.

To which the obvious reply is: no it doesn’t, where did you get any of this? You’ve generated three seconds of fake gameplay video where your player shoots something and it shoots back. None of the mechanics of the game work. Nothing other than what’s on-screen can be known to the engine.

You don’t get to fake some video and then claim this will let games be generated from now on — not even that this is how they’ll be generated, but that you’ve even shown there’s a way forward.

Note the passive voice: “are automatically generated.” From what source material?

The researchers do note the minor detail that someone has to actually write the game and create the display assets before it can all be imitated:

Key questions remain, such as how these neural game engines would be trained and how games would be effectively created in the first place, including how to best leverage human inputs.

This is like using ChatGPT to simulate a calculator that gets wrong answers — they used stupendous computational resources to imitate a game that ran on a 386 in 1993, for three seconds.

This paper is a funding pitch directed at the sort of game studio that desperately wants nothing more than to replace its developers and artists with robots. Studios are already using low-quality generated art assets of dubious origin. [Wired, archive]

The researchers are telling those game executives that the magical robot developer they don’t have to pay and that won’t unionize will totally happen.

Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph. [YouTube]


Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’

Eric Schmidt at Collision 2022. | Photo by Lukas Schulze/Sportsfile for Collision via Getty Images

Former Google CEO and chairman Eric Schmidt has made headlines for saying that Google was blindsided by the early rise of ChatGPT because its employees decided that “working from home was more important than winning.”

The comment was made in front of Stanford students during a recent interview, video of which was removed from the university’s YouTube channel after Schmidt’s gaffe was widely picked up by the press. I managed to watch most of Schmidt’s chat with Stanford’s Erik Brynjolfsson before it was taken down, however, and something else he said stands out. (You can still read the full transcript here.)

While talking about a future world in which AI agents can do complex tasks on behalf of humans, Schmidt says:

If TikTok is...


tante:
Not only does Eric Schmidt show here how Silicon Valley thinks, he also shows that he really has no idea what LLMs can ever do or how software works.

Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger

A still video capture of X user João Fiadeiro replacing his face with J.D. Vance in a test of Deep-Live-Cam.

Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing—and how the capability to deceive others remotely is getting dramatically easier over time.
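
Mechanically, the loop is simple to sketch; the trained face-swap model is the whole trick. Here is a minimal sketch of such a real-time loop, where the swap_face placeholder is hypothetical rather than the project’s actual API:

    # Sketch of a Deep-Live-Cam-style real-time loop. swap_face() is a
    # hypothetical placeholder for the trained face-swap model; the
    # webcam plumbing around it is ordinary OpenCV.

    import cv2

    def swap_face(frame, source_face):
        """Placeholder: re-render the face in `frame` with the identity
        from `source_face`, following the webcam subject's pose,
        lighting, and expression. The real project wires a model in here."""
        return frame

    source_face = cv2.imread("single_photo.jpg")  # one still photo is the only input
    cap = cv2.VideoCapture(0)                     # live webcam feed

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("doppelganger", swap_face(frame, source_face))
        if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()

The point of the sketch is how little surrounds the model call: everything that makes this alarming lives inside that one placeholder function.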

The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free.

"Weird how all the major innovations coming out of tech lately are under the Fraud skill tree," wrote illustrator Corey Brickley in an X thread reacting to an example video of Deep-Live-Cam in action. In another post, he wrote, "Nice remember to establish code words with your parents everyone," referring to the potential for similar tools to be used for remote deception—and the concept of using a safe word, shared among friends and family, to establish your true identity.

