Andy Haymaker, Author

What Just Happened With Sam Altman

I’m not going to post often enough to comment much on current events, but the fall and redemption of Sam Altman is too big to ignore. I’m convinced that historians will still be dissecting this event decades from now, assuming humanity still exists then. So, what happened? Just boardroom drama, a conflict of egos? No, it’s much bigger than that. The accelerationists won, at least for now.

It’s a little hard to analyze this event since the board has, bizarrely, refused to provide any detail on why they lost confidence in Altman’s ability to lead OpenAI in accordance with the company’s mission to develop AGI “to benefit all humanity”. We don’t know exactly what happened, or even whether it was a sudden event, like a dramatic advance at the company that Altman kept secret, or a gradual erosion of trust. It doesn’t help that Ilya Sutskever inexplicably flip-flopped from being the board member who delivered the news that Altman was fired to being a contrite member of Team Altman.

While we certainly don’t have all the key facts yet, we saw enough to know that it was in fact a battle between safety and profits, and thus ultimately for the soul of OpenAI. The main events that led me to that conclusion are:

  • Board member Helen Toner saying that if Altman’s firing led to the company being destroyed, this would be “consistent with the mission”. 
  • The board reversing themselves and falling on their swords only after Satya Nadella hired Altman, Brockman, and any OpenAI colleagues who wanted to join them into a new division of Microsoft created for the purpose.

I read these two developments as confirming that it was about safety. This was hinted at in the board’s initial announcement, which contained the phrase “[Altman] was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” The only ‘responsibilities’ that could possibly be significant enough to cause the board to fire Altman are its obligations to keep AGI development safe. And Toner’s remark about being okay with the company being destroyed shows how the board prioritized safety over everything else; it only makes sense from an AI safety perspective. From the perspective of the profit-seeking business talk we’re familiar with, it sounds insane.

The other event, Nadella’s Microsoft offering safe haven to all of OpenAI’s staff, is significant because OpenAI’s board only reversed course after this happened. They had already had to consider taking Altman back once they learned how badly they had miscalculated the blowback his firing would create, but they initially rejected Altman’s terms for reinstatement, preferring to let the company implode or merge with Anthropic (which Anthropic’s CEO rejected). Once they saw what was going to happen with Microsoft, though, they realized they couldn’t win. If Altman and the OpenAI staff loyal to him joined Microsoft, they would not only be able to pick up where they left off, they would have effectively unlimited funding from then on and no more worries about a non-profit board asserting the priority of safety. This was far worse for humanity than bringing Altman back to OpenAI on his own terms, since at OpenAI the non-profit board would at least theoretically still be in charge. So the board had to stop the Microsoft move from happening at all costs.

But the board itself had to resign to satisfy Altman and Microsoft. Will the new board pay any attention at all to safety? It’s possible, but I think the boards of OpenAI and other companies will now see the writing on the wall. If you oppose accelerationism or profits in any way, you’ll be removed from the game. Future historians, if there are any, will see this as the battle where capital and accelerationism started winning the war. If they keep winning, then this will be seen as the beginning of the end of our civilization, as observed by any survivors.

AI alignment is so tricky that we might not be able to do it even if everyone is on the same page. What we must do is immediately halt development of models more powerful than GPT-4. We could spend ten years absorbing what these models can do and using them to solve our most critical problems, including AI alignment research. Only then, assuming AI alignment is solvable, will it be safe to try advancing to superhuman models. With capital in the driver’s seat, AI alignment will not be solved in time, and we have no chance of making a smooth transition to a post-work protopian society.

This is personally sad, since I’ve been a big fan of ChatGPT and OpenAI so far. I’ll need to reevaluate what tools I use, but it’s not clear how my boycotting OpenAI would help. I think I’ll just need to focus on writing the rest of the books in the series. We can all do our best to keep highlighting the need for AI safety, and maybe we can turn the tide back towards sanity.