The tech world got its popcorn out last night after OpenAI dropped a new blog post that responded to the lawsuit Elon Musk filed last week against OpenAI, CEO Sam Altman and president Greg Brockman. Musk’s claims include breach of contract, breach of fiduciary duty, and unfair competition — all circling around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of its duty to protect the public good.
But OpenAI responded aggressively and brought receipts — posting copies of emails between Musk, Altman, Brockman and chief scientist Ilya Sutskever. Here are five of the most revealing details from the emails:
1. Chief scientist Ilya Sutskever clarified that the ‘open’ in ‘OpenAI’ didn’t necessarily mean “open source.”
“As we get closer to building AI,” Sutskever wrote, “it will make sense to start being less open.”
The ‘open’ in OpenAI, he continued, “means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).”
2. Elon Musk agreed that AI should not necessarily be open-sourced
In a response to Sutskever saying that starting to be less open “will make sense” as OpenAI gets closer to building AI, and that “it’s totally OK to not share the science,” Musk responded with a single word: “Yup.”
3. Musk suggested ‘attaching’ OpenAI to Tesla as its ‘cash cow’ to meet its funding goals
Musk forwarded an email to Sutskever and Brockman that said OpenAI was “burning cash” and that the funding model “cannot reach the scale to seriously compete with Google.” The most promising option, the email said, “would be for OpenAI to attach to Tesla as its cash cow.”
Musk added, “We may wish it otherwise, but, in my…opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn’t zero.”
4. An OpenAI shift to for-profit was discussed as early as 2018
While OpenAI remains ostensibly a nonprofit, a for-profit pivot — to its current unusual and complex nonprofit/capped-profit structure — was discussed as early as 2018.
The email forwarded by Musk to Sutskever and Brockman, from February 2018, pointed out that a for-profit pivot “might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment.” However, the email cautioned, building a product from scratch would be difficult to scale and would steal focus from research.
The most promising option, the author wrote, was for OpenAI to “attach to Tesla as its cash cow,” helping to build a self-driving solution. “If we do this really well, the transportation industry is large enough that we could increase Tesla’s market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.”
5. Sutskever was worried about a ‘hard takeoff’ where safe AI was harder to build than unsafe AI
Sutskever was clearly concerned that open sourcing everything could enable unsafe AI, especially in a ‘hard takeoff’ scenario.
“If a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff,” he wrote.
Author: Sharon Goldman
Source: VentureBeat