
The fear and tension that led to Sam Altman’s ouster at OpenAI


Over the past year, Sam Altman led OpenAI to the adult table of the technology industry. Thanks to its hugely popular ChatGPT chatbot, the San Francisco startup was at the center of an artificial intelligence boom, and Altman, OpenAI’s CEO, had become one of the most recognizable people in tech.

But that success raised tensions inside the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Altman and nine other people, was increasingly worried that OpenAI’s technology could be dangerous and that Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role in the company, according to two of the people.

That conflict between fast growth and AI safety came into focus Friday afternoon, when Altman was pushed out of his job by four of OpenAI’s six board members, led by Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders were saying the split was as significant as when Steve Jobs was forced out of Apple in 1985.

Sam Altman, then CEO of OpenAI, at the company’s headquarters in San Francisco, on March 13, 2023. Altman, who became the face of the tech industry’s artificial intelligence boom, has been pushed out of the company by its board of directors, OpenAI said in a blog post on Friday, Nov. 17, 2023. (Jim Wilson/The New York Times)

But on Saturday, in a head-spinning turn, Altman was said to be in discussions with OpenAI’s board about returning to the company.


The ouster Friday of Altman, 38, drew attention to a longtime rift in the AI community between people who believe AI is the biggest business opportunity in a generation and others who worry that moving too fast could be dangerous. And the vote to remove him showed how a philosophical movement devoted to the fear of AI had become an unavoidable part of tech culture.

Since ChatGPT was released almost a year ago, artificial intelligence has captured the public’s imagination, with hopes that it could be used for important work such as drug research or to help teach children. But some AI scientists and political leaders worry about its risks, such as jobs getting automated out of existence or autonomous warfare that grows beyond human control.

Fears that AI researchers were building a dangerous thing have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build it.

OpenAI’s board has not offered a specific reason for why it pushed out Altman, other than to say in a blog post that it did not believe he was communicating honestly with it. OpenAI employees were told Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practice,” according to a message viewed by The New York Times.

Greg Brockman, another co-founder and the company’s president, quit in protest Friday night. So did OpenAI’s director of research. By Saturday morning, the company was in chaos, according to a half dozen current and former employees, and its roughly 700 employees were struggling to understand why the board made its move.

The senior executives of OpenAI, from left: Mira Murati, then chief technology officer; Sam Altman, then chief executive; Greg Brockman, president; and Ilya Sutskever, chief scientist, at the company’s headquarters in San Francisco on Monday, March 13, 2023. Altman’s departure drew attention to a philosophical rift among the people building new AI systems. (Jim Wilson/The New York Times)

“I’m sure you all are feeling confusion, sadness, and perhaps some fear,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are fully focused on handling this, pushing toward resolution and clarity, and getting back to work.”

On Friday, Altman was asked to join a board meeting via video at noon in San Francisco. There, Sutskever, 37, read from a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post said that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

But in the hours that followed, OpenAI employees and others focused not only on what Altman may have done, but on the way the San Francisco startup is structured and the extreme views on the dangers of AI embedded in the company’s work since it was created in 2015.

Sutskever and Altman could not be reached for comment Saturday.

In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. Previously in a position below Sutskever, he was elevated to a role alongside him, according to two people familiar with the matter.

Pachocki quit the company late Friday, the people said, soon after Brockman. Earlier in the day, OpenAI said Brockman had been removed as chair of the board and would report to the new interim CEO, Mira Murati. Other allies of Altman — including two senior researchers, Szymon Sidor and Alexander Madry — have also left the company.

Brockman said in a post on X, formerly known as Twitter, that even though he was the chair of the board, he was not part of the board meeting where Altman was ousted. That left Sutskever and three other board members: Adam D’Angelo, CEO of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at Rand Corp.; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.
They could not be reached for comment Saturday.

McCauley and Toner have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot destroy humanity, but this community believes that as the technology grows increasingly powerful, those dangers will arise.

In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new AI company called Anthropic.

Sutskever was increasingly aligned with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and emigrated to Canada as a teenager. As an undergraduate at the University of Toronto, he helped create a breakthrough in an AI technology called neural networks.

In 2015, Sutskever left a job at Google and helped found OpenAI alongside Altman, Brockman and Tesla CEO Elon Musk. They built the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or AGI, a machine that can do anything the brain can do.

Altman transformed OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such enormous sums of money are essential to building technologies such as GPT-4, which was released this year. Since its initial investment, Microsoft has put another $12 billion into the company.

The company was still governed by the nonprofit board. Investors such as Microsoft do receive profits from OpenAI, but their profits are capped. Any money over the cap is funneled back into the nonprofit.

As he saw the power of GPT-4, Sutskever helped create a new Superalignment team in the company that would explore ways of ensuring that future versions of the technology would not do harm.

Altman was open to those concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Altman flew to the Middle East for a meeting with investors, according to two people familiar with the matter. He sought as much as $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running AI technologies such as ChatGPT.

OpenAI is also in talks for “tender offer” funding that would allow employees to cash out shares in the company. That deal would value OpenAI at more than $80 billion, nearly triple its worth about six months ago.

But the company’s success appears to have only heightened concerns that something could go wrong with AI.

“It doesn’t seem at all implausible that we will have computers — data centers — that are much smarter than people,” Sutskever said on a podcast Nov. 2. “What would such AIs do? I don’t know.”

This article originally appeared in The New York Times.
