
As Anthropic seeks billions to take on OpenAI, ‘industrial capture’ is nigh. Or is it?




In May 2021, Dario Amodei, former VP of research at OpenAI, co-founded Anthropic with his sister Daniela (also a former OpenAI employee) and explained that the company was primarily focused on AI safety research. But even back then, he said he could see many opportunities for its work to create commercial value.

That commercial focus was on full display in a TechCrunch article yesterday which, citing Anthropic company documents it had obtained, reported that the company is “working to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries.”

The news seems to underscore recent reports that “industrial capture” is looming over the AI landscape — that is, “a handful of individuals and corporations now control much of the resources and knowledge in the sector, and will ultimately shape its impact on our collective future.”

Anthropic was founded by former OpenAI employees

For those who feel like it was just yesterday that Anthropic was touting itself as an “AI safety and research” company, this news may come as a shock. That’s because the Amodei siblings purportedly left OpenAI (bringing nine other OpenAI employees with them) because of “differences over the group’s direction after it took a landmark $1 billion investment from Microsoft in 2019.”


But Anthropic has also been raising big bucks since its launch, when it announced $124 million in funding. Less than a year later, it suddenly announced a whopping $580 million in a Series B round. Most of that money, it turns out, came from Sam Bankman-Fried and the folks at FTX, the now-bankrupt cryptocurrency platform accused of fraud, and there have been questions about whether that money could be recovered by a bankruptcy court.

And in early February, Google announced a $300 million investment in Anthropic, which by then had already developed its AI chatbot, Claude, using a training process called “constitutional AI” that the company says is based on concepts such as beneficence, non-maleficence and autonomy.

Open-source alternatives are saying ‘hold my beer’

There are certainly signs that “industrial capture” is nigh. Just this morning, the New York Times reported on Google’s and Microsoft’s aggressive moves to control generative AI since OpenAI launched ChatGPT in November 2022.

According to the article, Sam Schillace, a technology executive at Microsoft, wrote in an internal email that when the tech industry suddenly shifts toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first.” “Sometimes the difference is measured in weeks,” he added.

But last week, the leaders at Hugging Face, whose open-source AI meetup in San Francisco quickly morphed into a 5,000-strong “Woodstock of AI” moment, were keen to point out the power and influence of open-source AI:

“If [it] wasn’t for the ‘Attention Is All You Need’ paper, for ‘The BERT’ paper, and for the ‘Latent Diffusion’ paper, we might be 20, 30, 40 or 50 years away from where we are today in terms of capabilities and possibilities for AI,” said Hugging Face CEO Clement Delangue. “If it wasn’t for open-source libraries or languages, if it wasn’t for frameworks like PyTorch, TensorFlow, Keras, Hugging Face, transformers and diffusers, we wouldn’t be where we are today.”

And March was a particularly strong month for open-source AI, according to Sebastian Raschka, who in a blog post wrote that “after concerns that AI development was trending towards closed source (see GPT-4), open source got a second wind and is trending upwards, which is awesome.”

In addition to PyTorch 2.0 and Lightning 2.0, there was Lit-LLaMA, an open-source rewrite of Meta’s LLaMA large language models. There were also several instruction-tuned models jumping off of LLaMA and its offshoots, including Alpaca (released by Stanford), Vicuna (released by a group of universities) and Dolly (released by Databricks). All of these are smaller, less costly and more flexible than the Big Tech/Big Lab LLMs, and they avoid vendor lock-in.
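To make that last point concrete, here is a minimal sketch (not drawn from any of the projects above) of what avoiding vendor lock-in looks like in practice: an open model checkpoint can be pulled from the Hugging Face Hub and run locally with PyTorch and the transformers library, with no proprietary API involved. The model ID below is a hypothetical placeholder, not a specific release named in this article.

# Minimal, illustrative sketch: run an open LLM checkpoint locally
# with Hugging Face transformers on top of PyTorch.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical placeholder -- swap in the Hub ID of whichever open
# checkpoint (a Dolly or Vicuna release, for example) you want to try.
model_id = "some-org/open-llm-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain 'industrial capture' in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))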

Still, even the most well-funded open-source companies are not invulnerable. For example, Stability AI, which launched Stable Diffusion, is on shaky ground, according to new reporting from Semafor Tech, which said Stability AI has “burned through a significant chunk of the $100 million it raised late last year, and two venture investors who spoke to Semafor on condition of anonymity are having second thoughts about participating in a fundraising round that would quadruple the firm’s valuation to $4 billion, according to people briefed on the plans.”

The biggest companies and labs still have a leg up

Clearly, the biggest companies and labs have a strong leg up on the generative AI competition, most notably because of access to servers and compute — OpenAI runs on Microsoft Azure, for example, while Anthropic and DeepMind are aligned with Google.

On the other hand, in an era of high demand, startups and other companies are struggling to access what they need. This morning, The Information reported that such companies “can’t find enough specialized computers to make their own AI software.” Major cloud-server providers, including Amazon Web Services, Microsoft, Google and Oracle, are limiting the availability of that hardware for customers, the article said, and some customers have reported long wait times to rent it.

DeepMind is currently missing from the discussion

DeepMind, the AI lab that was considered OpenAI’s biggest competitor when the latter launched, has been notably quiet of late. The much-hailed developer of AlphaFold, which predicted the 3D structures of nearly all the proteins known to humanity, has been mostly silent amid the loud AI debates, issuing few announcements and little news since before ChatGPT was released.

The only exception was a Time interview in January with CEO Demis Hassabis, in which Hassabis delivered a rare, stark message of caution: “I would advocate not moving fast and breaking things,” he said.

Hassabis did tell Time that DeepMind is considering releasing its own chatbot, called Sparrow, in a “private beta” sometime in 2023 (it is delaying the release because “it’s right to be cautious on that front”). But he admitted that DeepMind might need to change its tune:

“We’re getting into an era where we have to start thinking about the freeloaders, or people who are reading but not contributing to that information base,” he said. “And that includes nation states as well.” According to Time, he declined “to name which states he means — ‘it’s pretty obvious, who you might think’ — but he suggests that the AI industry’s culture of publishing its findings openly may soon need to end.”



Author: Sharon Goldman
Source: VentureBeat

