AI & RoboticsNews

OpenAI drama a reflection of AI’s pivotal moment

The philosophical battle between AI “accelerationists” and “doomers” has erupted into full view at OpenAI. The accelerationists advocate rapid advancement of AI technology, emphasizing its enormous potential benefits. The “doomers,” by contrast, favor a cautious approach that emphasizes the risks of unbridled AI development.

Various reports suggest a conflict between (now once again) CEO Sam Altman, who sought to further monetize OpenAI’s technology, and the board, which prioritized safety measures. The board was acting in accordance with its nonprofit charter, while the CEO was exploring avenues to secure the funding needed for ongoing development in this highly competitive field. The board won the initial conflict, dismissing Altman in what appears to have been a “palace coup.”

Altman is the protagonist in this story; chief AI scientist and board member Ilya Sutskever is the antagonist. An early developer of deep learning and a star former student of AI pioneer Geoffrey Hinton, Sutskever has a strong understanding of the issues in play. He was allegedly the one pushing for a CEO change. Axios suggested that Sutskever may have persuaded board members that Altman’s accelerated approach to AI deployment was too risky, perhaps even dangerous.

In reporting by The Information, Sutskever told employees in an emergency meeting last Friday that the “board [was] doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.”

However, industry watchers, OpenAI investors and many OpenAI employees have largely sided with Altman. “Revolt” might not be too strong a word for their reaction. Consequently, the board, perhaps sheepishly, went back to Altman to negotiate his possible return to the CEO post — which, in a dramatic turnaround (and for now at least), is the outcome.

Altman’s vision and values

To call Altman flamboyant would be an overstatement. In fact, he often comes off as measured — a proponent of advancing AI while also warning of potential existential dangers. He famously went to Washington, D.C. last spring to warn about these risks, and he has often called for government regulation of “frontier” AI models. Of course, some have labeled this a disingenuous attempt at regulatory capture to freeze out smaller competitors.

It is true, too, that Altman has his hand in several side projects, including Worldcoin, a cryptocurrency project that verifies identity through iris scans, intended to facilitate future payments of universal basic income should AI eliminate jobs.

A Fortune story describes how Altman had recently been working on “Tigris,” an initiative to create “an AI-focused chip company that could produce semiconductors that compete against those from Nvidia.” Similarly, he has been raising money for a hardware device that he’s been developing in tandem with design specialist Jony Ive.

Whatever else may be true about Altman, his capitalist and venture capitalist credentials are first-rate. These instincts stand in sharp relief against OpenAI’s nonprofit mission to build artificial general intelligence (AGI) that benefits all of humanity.

Underscoring America’s values

In America, we value the “rainmaker.” Altman’s track record, both before joining OpenAI and while at the company, speaks to his rainmaking ability: driving innovation, securing funding and leading initiatives that push the boundaries of technology and business. In short, his philosophy and accomplishments exemplify what Americans value most. The support for Altman after the OpenAI coup is therefore not surprising.

It is also not surprising that Altman had options beyond OpenAI. Microsoft CEO Satya Nadella had pledged to support him in whatever comes next.

As the sun rose only three days after his firing, we learned that Altman and others, including OpenAI cofounder and president Greg Brockman, would join Microsoft to lead a new AI research team. More than 700 OpenAI employees out of a total workforce of about 770 signed a letter threatening to leave and follow Altman, saying: “We will take this step imminently, unless all current board members resign.”

The other shoe drops

As of Monday morning, the board did not resign and instead appointed former Twitch CEO Emmett Shear as interim CEO. However, at least one member of the board has expressed misgivings — surprisingly, that was Sutskever.

Source: https://twitter.com/ilyasut/status/1726590052392956028

According to CNN, Shear said the process of firing Altman was “handled very badly, which has seriously damaged our trust.”

Now, with the stunning announcement that Altman and Brockman are returning to the company — which presumably will assuage employees and investors — we can only speculate on what happens next. Notably, the prior board has been replaced with new members who presumably have less of a doomer focus.

As of this writing, it is not clear if Sutskever will remain with OpenAI. As this settles out, some of the players may have shifted where they sit, but the race in AI development is likely to continue.

We can safely assume Microsoft will push forward with the same sense of urgency it has displayed since ChatGPT first appeared almost exactly a year ago, when it began pivoting its business model and aggressively incorporating OpenAI technology into many of its products. If anything, it is clear that OpenAI and Microsoft are now even more tightly bound together than before.

Anthropic this week also released Claude 2.1, featuring an enormous context window that far surpasses GPT-4’s and purportedly halving hallucination rates.

OpenAI a microcosm of a larger ongoing debate

The drama at OpenAI serves as a microcosm of a larger ongoing debate: How do we balance the ambitious drive for AI innovation, which promises unprecedented benefits, with the prudent need for safety and ethical considerations?

There is validity to the cautious voices urging thoughtful reflection on the potential downsides of unfettered AI advancement. This was the argument put forward by the now-former OpenAI board.

This balance is not just a matter of corporate policy or strategy; it’s a reflection of our societal values and the future we aspire to create. The story of Sam Altman and OpenAI is not just about a clash of personalities or corporate strategies; it’s a reflection of a pivotal moment in our technological journey.

Gary Grossman is the EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.





Author: Gary Grossman, Edelman
Source: Venturebeat
Reviewed By: Editorial Team
