
EU AI Act’s possible open-source regulation sparks Twitter debate



Alex Engler, research fellow at the Brookings Institution, never expected his recent article, “The EU’s attempt to regulate open source AI is counterproductive,” to stir up a Twitter debate. 

As the European Union continues to develop its Artificial Intelligence Act (AI Act), one step under consideration, Engler notes, is regulating open-source general-purpose AI (GPAI). The EU AI Act defines GPAI as “AI systems that have a wide range of possible uses, both intended and unintended by the developers … these systems are sometimes referred to as ‘foundation models’ and are characterized by their widespread use as pre-trained models for other, more specialized AI systems.”  

In the piece, Engler argued that while the proposal is meant to enable safer use of these artificial intelligence tools, it “would create legal liability for open-source GPAI models, undermining their development.” The result, he maintained, would “further concentrate power over the future of AI in large technology companies” and prevent critical research. 

“It’s an interesting issue that I did not expect to get any attention,” he told VentureBeat. 


“I’m always surprised to get press calls, frankly.” 

But after Emily Bender, professor of linguistics at the University of Washington and a regular critic of how AI is covered on social media and in mainstream media, wrote a thread about a piece that quoted from Engler’s article, a spirited Twitter back-and-forth began. 

EU AI Act’s open-source discussion in the hot seat

“I have not studied the AI Act, and I am not a lawyer, so I can’t really comment on whether it will work well as regulation,” Bender tweeted, pointing out later in her thread: “How do people get away with pretending, in 2022, that regulation isn’t needed to direct the innovation away from exploitative, harmful, unsustainable, etc practices?” 

Engler responded to Bender’s thread with his own. Broadly, he said, “I am in favor of AI regulation … still I don’t think regulating models at the point of open-sourcing helps at all. Instead, what is better and what the original EU Commission proposal did, is to regulate whenever a model is used for something dangerous or harmful, regardless of whether it is open-source.” 

He also maintained that he does not want to exempt open-source models from the EU AI Act, but rather to exempt the act of open-sourcing AI. If releasing open-source AI models becomes harder, he argued, the same requirements will not prevent commercialization of those models behind APIs. “We end up with more OpenAIs, and fewer OS alternatives — not my favorite outcome,” he tweeted. 

Bender responded to Engler’s thread by emphasizing that if part of the purpose of the regulation is to require documentation, “the only people in a position to actually thoroughly document training data are those who collect it.” 

Perhaps this could be handled by disallowing any commercial products based on under-documented models, leaving the liability with the corporate interests doing the commercializing, she wrote, but added, “What about when HF [Hugging Face] or similar hosts GPT-4chan or Stable Diffusion and private individuals download copies & then maliciously use them to flood various online spaces with toxic content?” 

Obviously, she continued, the “Googles and Metas of the world should also be subject to strict regulation around the ways in which data can be amassed and deployed. But I think there’s enough danger in creating collections of data/models trained on those that OSS devs shouldn’t have free rein.” 

Engler, who studies the implications of AI and emerging data technologies on society, admitted to VentureBeat that “this issue is pretty complicated, even for people who broadly share fairly similar perspectives.” He and Bender, he said, “share a concern about where regulatory responsibility and commercialization should fall … it’s interesting that people with relatively similar perspectives land in a somewhat different place.”

The impact of open-source AI regulation

Engler made several points to VentureBeat about his views on the EU regulating open-source AI. First, he said, the regulation’s limited territorial reach is a practical concern. “The EU passing requirements doesn’t affect the rest of the world, so you can still release this somewhere else and the EU requirements will have a very minimal impact,” he said. 

In addition, “the idea that a well-built, well-trained model that meets these regulatory requirements somehow wouldn’t be applicable for harmful uses just isn’t true,” he said. “I think we haven’t clearly shown that regulatory requirements and making good models will necessarily make them safe in malicious hands,” he added, pointing out that there is a lot of other software that people use for malicious purposes that would be hard to start regulating. 

“Even the software that automates how you interact with a browser has the same problem,” he said. “So if I’m trying to make lots of fake accounts to spam social media, the software that lets me do that has been public for 20 years, at least. So [the open-source issue] is a bit of a departure.” 

Finally, he said, the vast majority of open-source software is created without the goal of selling it. “So you’re taking an already uphill battle, which is that they’re trying to build these large, expensive models that can even get close to competing with the big firms and you’re adding a legal and regulatory barrier as well,” he said. 

What the EU AI Act will and won’t do

Engler emphasized that the EU AI Act won’t be a cure-all for AI ills. What the EU AI Act will broadly help with, he said, is “preventing kind of fly-by-night AI applications for things it can’t really do or is doing very poorly.” 

In addition, Engler thinks the EU is doing a fairly good job attempting to “meaningfully solve a pretty hard problem about the proliferation of AI into dangerous and risky areas,” adding that he wishes the U.S. would take a more proactive regulatory role in the space (though he credits the Equal Employment Opportunity Commission’s work on bias and AI hiring systems). 

What the EU AI Act will not really address is the creation and public availability of models that people simply use nefariously. 

“I think that is a different question that the EU AI Act doesn’t really address,” he said. “I’m not sure we’ve seen anything that prevents them from being out there, in a way that is actually going to work,” he added, saying the open-source discussion feels a bit “tacked on.”

“If there was a part of the EU AI Act that said, hey, the proliferation of these large models is dangerous and we want to slow them down, that would be one thing, but it doesn’t say that,” he said. 

Debate will surely continue

Clearly, the Twitter debate around the EU AI Act and other AI regulations will continue, as stakeholders from across the AI research and industry spectrum weigh in on dozens of recommendations on a comprehensive AI regulatory framework that could potentially be a model for a global standard. 

And the debate continues offline, as well: Engler said that one of the European Parliament’s committees, advised by digital policy advisor Kai Zenner, plans to introduce a change to the EU AI Act that would take up the issue surrounding open-source AI. 



Author: Sharon Goldman
Source: VentureBeat

