Sam Altman and America’s celebrity-billionaire problem

The collision of Elon Musk’s Twitter takeover and the chaos at OpenAI reveals the extent of the society-shaping power wielded by a very small cadre of Silicon Valley titans.

Nov 21, 2023 - 20:23

Where did you follow the OpenAI drama over the weekend?

For most people, the chaos of former OpenAI CEO Sam Altman’s ouster unfolded — and is still unfolding — largely on X.

For all the controversy around the platform formerly known as Twitter, it’s still as central to conversations in Silicon Valley as it was in its early days. (You could argue that Elon Musk’s takeover has performed a sort of version rollback, returning it to an early-Obama-era Wild West feel and a laser focus on internecine techie politics.)

But the collision of Musk’s Twitter takeover and the chaos at OpenAI reveals something even bigger than social media’s shifting tectonic plates — the extent of the society-shaping power wielded by a very small cadre of Silicon Valley titans.

One of Sam Altman’s co-founders at OpenAI was, after all… Elon Musk. Musk turned his back on the project after a complicated disagreement with Altman over its progress relative to Google DeepMind, as well as his own beliefs about how AI should be developed, and has since launched his own AI venture.

Meanwhile, Greg Brockman, the OpenAI president ousted alongside Altman, was the first CTO of Stripe, which raised early money from Musk and fellow OpenAI co-founder Peter Thiel. Musk also used his huge X account to try to intervene personally in the OpenAI drama this weekend, tweeting directly at Ilya Sutskever, the board member and company chief scientist he says “has a good moral compass and does not seek power.”

And if you think OpenAI’s governance shakeup is a chance to shake off these ties, think again. New OpenAI CEO Emmett Shear is a visiting partner at Y Combinator, the startup accelerator Sam Altman once led as president. YC still serves as a Silicon Valley business and social hub, and two of its co-founders, Jessica Livingston and Trevor Blackwell, are — you guessed it — also OpenAI co-founders. Still following?

With “how to govern AI” still topic A in the Washington policy space (or close to it), the blowup at the most high-profile AI company shines a light on a particularly thorny challenge for regulators trying to shape the future.

Individual personalities — and individual fortunes — matter far more in the world of Silicon Valley startups than they do in corporate America’s more consensus-oriented, traditional bureaucracies. Once, industrial names like Morgan or Rockefeller or Ford drove national policy from their boardroom chairs, a version of America we might have thought we’d put to rest. Not in tech: Today we take it for granted that Bezos, Zuckerberg and Musk are more or less synonymous with their corporate empires. (Perhaps blame Steve Jobs, the charismatic Apple co-founder and world-shaper who looms large over them all in the minds of business builders.)

Big organizations move slowly and respond to rules. Startup titans, not so much. It’s extremely difficult to imagine more established tech giants like IBM or Microsoft changing their business model on a personal whim or passion, the way Musk did with his free speech crusade at Twitter, Mark Zuckerberg did with his sudden commitment to the metaverse, or Altman has with his belief in human-like AI superintelligence.

OpenAI, in particular, was intended to serve a greater mission under its unconventional nonprofit structure, but it’s become clear just how much the company is shaped by a single person, its ousted CEO. Samuel Hammond, a senior economist at the Foundation for American Innovation and a blogger focused on AI and governance, describes a “cultish and borderline messianic employee culture, as shown in their willingness to all resign in solidarity,” citing social media reports that Altman personally interviewed every new hire at the company, a practice he once advocated for in a blog post.

He described to me how Sam Altman’s personal beliefs have come to define the company, and therefore the larger existential debates around the potential existence (and risk) of superhuman “AGI,” or artificial general intelligence.

“Over the last year, Altman reoriented OpenAI to be even more mission driven, changing their core values to emphasize that anything that didn't advance AGI was ‘out of scope,’” Hammond said.

The lesson, not just for America but for humanity writ large, is that a very small group of people has come to wield total, personalized control over many of the systems shaping society’s present and future, whether Musk’s social media platform or Altman’s intelligence machines.

Regulators and critics have proposed strategies for reining in that influence, from the European Union’s elaborate regulatory regime to Federal Trade Commission chair Lina Khan’s belief in antitrust enforcement to some proposals to emulate Silicon Valley governance itself.

None have yet succeeded. Consumer Financial Protection Bureau Director Rohit Chopra told Morning Money's Sam Sutton this week that the dawn of powerful AI creates new urgency for tech regulators: “There is a race to develop the foundational AI models. There will probably not be tons of those models. It may in fact be a natural oligopoly,” he said. “The fact that Big Tech companies are now overlapping with the major foundational AI models is adding even more questions about what we do to make sure that they do not have outsized power.”

Personal rule in Silicon Valley had major, well-documented ramifications for the era of startup culture that was dominated by app-based social and connectivity companies like Facebook or Uber. It will have even larger ones in the AI era, where, realistic or not, the discourse is characterized by arguments about the very fate of humanity.