Altman’s OpenAI ouster leaves D.C. asking: Now what?
As chaos engulfs Capitol Hill’s favorite AI company, policymakers are poised to weigh the emerging antitrust risks — and scrutinize the ideological brawl over AI safety that’s splitting Silicon Valley.
Four days and counting after the OpenAI board abruptly fired company co-founder and CEO Sam Altman, the corporate melodrama is far from resolved. And Washington policymakers already struggling to catch up with a fast-moving technology were caught off guard by an even faster-moving boardroom drama.
“We lived through three, maybe four seasons of Succession this weekend,” said Divyansh Kaushik, an AI policy researcher and associate director for emerging technologies and national security at the Federation of American Scientists.
There’s no finale in sight. Reports emerged on Tuesday that Altman, who has since taken a job at Microsoft, could still return to OpenAI in some capacity. Regardless of how things shake out, the chaos is likely to impact how Congress and the Biden administration approach AI regulation.
Altman has played a central role as a kind of AI guide for Congress ever since OpenAI’s uncannily human chatbot, ChatGPT, triggered Washington’s newfound fixation on AI. His Senate testimony in May was lauded by both sides of the aisle, with lawmakers praising his “genuine and authentic” desire to help craft new AI rules.
The upheaval could spark new antitrust concerns. If Microsoft, already OpenAI’s top investor, emerges as the direct owner of most or all of the company’s infrastructure and talent, its direct clout in the space will only grow, raising new questions about corporate concentration in the AI industry.
“I wouldn't be surprised if antitrust advocates on the Hill want more scrutiny on Microsoft's new AI lab by making the argument that essentially Microsoft acquired OpenAI without any regulatory scrutiny,” said Kaushik. He added that the issue will be “even more pronounced” if most OpenAI employees follow through on their threat to join Altman at Microsoft.
Rohit Chopra, director of the Consumer Financial Protection Bureau, is already warning that the AI industry could be heading toward oligopoly. Altman’s ouster, and a Microsoft consolidation of power, could send it further in that direction.
The OpenAI board’s chaotic coup, reportedly undertaken in the name of AI safety, could also heighten Washington’s interest in existential risks posed by advanced AI — or discredit those fears and the tech-world ideologies associated with them, including effective altruism and longtermism.
AI experts across Silicon Valley are increasingly at odds over the risks and rewards associated with advanced AI, a debate that is also shaping Washington’s efforts to corral it.
While some researchers and venture capitalists push for the technology’s rapid development, others warn that long-term risks posed by advanced AI systems — including, in some readings, possible human extinction — require an incredibly cautious approach.
Both sides of the fight are already battling for influence in Washington.
Zach Graves, executive director at the Foundation for American Innovation, said how Washington responds to future arguments about existential AI risk will depend on why the OpenAI board chose to oust Altman.
“It might just be that they made a very big miscalculation, [and] they didn’t have much to back it up,” Graves said.
But if details emerge that suggest real safety concerns — including the possible development of “artificial general intelligence,” which some on OpenAI’s board see as a dangerous technological line to cross — policymakers could become even more interested in AI’s cataclysmic potential and the organizations sounding the alarm.
“I think there's going to be more scrutiny on [effective altruist] stuff in general, probably from both sides of the aisle,” Graves said. “They broke through into the mainstream discourse.”
With the dust still swirling around OpenAI and Altman, even lawmakers at the forefront of Washington’s AI efforts were reluctant to immediately weigh in.
A spokesperson for Senate Majority Leader Chuck Schumer did not respond to a request for comment on the kerfuffle. Neither did spokespeople for Sens. Martin Heinrich (D-N.M.) and Richard Blumenthal (D-Conn.). Spokespeople for Sens. Todd Young (R-Ind.) and Josh Hawley (R-Mo.) declined to comment. Only Sen. Mike Rounds (R-S.D.) — who serves as part of the Senate’s AI “gang of four” policy leaders alongside Schumer, Heinrich and Young — had thoughts.
“I’ve had the opportunity to work with Sam Altman several times on Capitol Hill and have appreciated his insight into the development of artificial intelligence,” Rounds said in a statement to POLITICO. The senator added that Altman’s “willingness to work with policymakers has been extremely helpful,” and said he’s “confident [Altman] will continue to contribute to the advancement of this valuable new tool.”
Expect more scrutiny of OpenAI’s leadership shift once the dust settles. As of Tuesday, the company is now being led by Emmett Shear, the former CEO of video streaming service Twitch, whose ties to effective altruism dovetail with his deep skepticism of rapid AI development. Shear has previously put the risk of AI “doom” at somewhere between five and 50 percent — and if he sticks around as CEO, he may take that apocalyptic message to Washington.
Kaushik said Shear’s appointment “might intensify calls in D.C. for stricter regulation and more rigorous AI safety measures.” He pointed to a June tweet from Shear that suggested a Nazi world takeover would be preferable to an advanced AI that goes rogue.
“I don't see how that bodes well for OpenAI's relationships in D.C., or anywhere for that matter,” Kaushik said.