
Hugging Face, GitHub and more unite to defend open source in EU AI legislation



A coalition of a half-dozen open source AI stakeholders (Hugging Face, GitHub, EleutherAI, Creative Commons, LAION and Open Future) is calling on EU policymakers to protect open source innovation as they finalize the EU AI Act, which will be the world’s first comprehensive AI law.

In a policy paper released today, “Supporting Open Source and Open Science in the EU AI Act,” the open source AI leaders offered recommendations “for how to ensure the AI Act works for open source” — with the “aim to ensure that open AI development practices are not confronted with obligations that are structurally impractical to comply with or that would be otherwise counterproductive.”

According to the paper, “overbroad obligations” that favor closed and proprietary AI development — like models from top AI companies such as OpenAI, Anthropic and Google — “threaten to disadvantage the open AI ecosystem.”

The paper was released as the European Commission, Council and Parliament debate the final EU AI Act in what is known as the “trilogue,” which began after the European Parliament passed its version of the bill on June 14. The goal is to finish and pass the AI Act by the end of 2023 before the next European Parliament elections.


Open source AI innovation is at stake

Yacine Jernite, ML and society lead at Hugging Face, a popular hub for open-source code and models, told VentureBeat that while the policy paper is detailed, the first main point the coalition wants to make is around innovation. “We think that it is important for people to be able to choose between base models, between components, to mix and match as they need,” he said.

In addition, the coalition seeks to emphasize that open source AI is necessary — and that regulation should not hinder open source AI innovation.

“Openness by itself does not guarantee responsible development,” Jernite explained. “But openness and transparency is necessary to responsible governance — so it is not that openness [should be] exempt from requirements, but requirements should not preclude open development.”

The EU AI Act is focused on application risk

Since the European Commission proposed the first EU regulatory framework for AI in April 2021, the effort has focused on analyzing and classifying AI systems according to the risk they pose to users. The higher the risk level, the more regulation.

Peter Cihon, senior policy manager at GitHub, pointed out that as the EU Council, and subsequently the EU Parliament, developed their drafts of the AI Act, policymakers began to look up the value chain for ways to mitigate some of these risks at an earlier stage of AI development.

“With that kind of step, we really redoubled our efforts to make sure that they were not inadvertently imposing expectations that might make a lot of sense for companies or well-resourced actors, but would instead place them onto open source developers who are often hobbyists, nonprofits, or students,” he told VentureBeat. “Ultimately, policymakers have been quite focused on one particular value chain, one particular model, and that tends to be the API model — but that doesn’t really apply in the context of open source.”

The ‘Brussels Effect’

Cihon added that he is optimistic that providing clear information about the open source approach to development will be very useful as the trilogue, which began in June, continues. “The provisions in the sections of the act that we’re talking about have not yet come up for discussion,” he said.

In addition, the EU has historically been a trendsetter when it comes to tech regulation, as it was with the GDPR — in what has become known as the “Brussels Effect.” So policymakers around the world, including in the U.S., are surely taking note.

“It certainly starts the global regulatory conversation,” said Cihon. “So we’re optimistic that this can have benefits in DC and beyond.” In particular, he noted that the AI-focused “Insight Forums” Senator Chuck Schumer has announced for this fall are “a great opportunity to get more diverse input into the policymaking process than might be traditionally seen, and I’m really hopeful that open source developers will be given a seat at that table.”



Author: Sharon Goldman
Source: VentureBeat
