Dispatches from the Empire

Before OpenAI, Sam Altman was fired from Y Combinator by his mentor

Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture, said two of the people, as well as an additional person, all of whom spoke on the condition of anonymity to candidly describe private deliberations. The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.

A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.

“It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.

I only now learned about this from Helen Toner's newly released interview about Altman's firing from OpenAI in November. From The Verge:

Toner says that one reason the board stopped trusting Altman was his failure to tell the board that he owned the OpenAI Startup Fund; another was how he gave inaccurate info about the company’s safety processes “on multiple occasions.” Additionally, Toner says she was personally targeted by the CEO after she published a research paper that angered him. “Sam started lying to other board members in order to try and push me off the board,” she says.

After two executives spoke directly to the board about their own experiences with Altman, describing a toxic atmosphere at OpenAI, accusing him of “psychological abuse,” and providing evidence of Altman “lying and being manipulative in different situations,” the board finally made its move.

Perhaps not the guy we want in charge of one of the largest AI companies, given that the employees responsible for safety have been leaving in droves. From Vox:

If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. 

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him. 

“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” a person with inside knowledge of the company told me, speaking on condition of anonymity. 

Not many employees are willing to speak about this publicly. That’s partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.

Just what we want: an artificial intelligence company motivated by profit that creates something truly dangerous and disruptive, run by a guy like Altman.

Here are two fun excerpts from his Wikipedia page:

He is an apocalypse preparer. Altman said in 2016: "I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur I can fly to."

In 2021, Altman's sister Annie wrote on Twitter, accusing Sam of "sexual, physical, emotional, verbal, and financial abuse."