AI Act: What does general purpose AI (GPAI) even mean?

The AI space is laden with acronyms — but arguably, one of the most-discussed right now is GPAI (general purpose AI).

As anyone paying attention to the AI landscape is well-aware, this term could eventually define — and regulate — systems in the European Union’s AI Act.

But since the term was proposed in an amendment earlier this year, many have questioned its specificity (or lack thereof) and its implications.

The GPAI definition in the AI Act is “far from being very robust,” Alexandra Belias, international public policy manager for DeepMind, said during a panel discussion hosted this week by the Information Technology and Innovation Foundation’s (ITIF) Center for Data Innovation.

GPAI, in fact, is an acronym that no one was even using, or aware of, just a few months ago, she said. Researchers and the AI community can’t yet agree on an adequate term because, as she put it, “how can you define something without having adequately scoped it?”

The AI Act and amendments: What’s on the table

The European Commission first proposed the AI Act in April 2021. Several member states and European Parliament committees have since weighed in and introduced amendments (and continually fueled worldwide debate). Most recently, proposed amendments seek to define GPAI systems, classify them as high-risk and regulate them, potentially including open-source models.

The Act assigns AI applications and systems to three risk categories: unacceptable risk (to be banned), high risk (requiring regulation) and applications that are largely left unregulated.

One proposed article defines a ‘General purpose AI system’ as:

“An AI system that is able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation, etc., and is able to have multiple intended and unintended purposes.”

Another suggested article would require providers of GPAI systems to:

  • Ensure compliance with the Act’s requirements.
  • Assess “the reasonably foreseeable misuse” of their systems.
  • Provide instructions and information about the safety of these systems to users and other relevant stakeholders in the supply chain.
  • Regularly assess whether the AI systems have presented any new risks, including risks discovered when investigating novel use cases.
  • Register their systems in the EU database.

Why a regulatory framework?

There are three main reasons why the European Parliament is seeking to address and define GPAI systems, said Kai Zenner, head of office and digital policy advisor to MEP Axel Voss.

First, “definitely there is a fear of technology and different developments,” said Zenner.

“There are a lot of possibilities; you do not really know what general purpose AI systems, and all these technologies falling under the umbrella term of AI, will be looking like in five or 10 years,” he said. “Many people are really scared about it. So it feels like there is a threat.”

The second is competition: GPAI systems could come to be dominated by big tech companies. The third is responsibility along the value chain. These systems are not only technologically complex, they also involve several market players, so some are of the opinion that upstream providers or companies should at least be considered from a compliance standpoint, Zenner explained.

Shifting definitions, meanings

Still, many question the derivation of GPAI as a term — and what, ultimately, a regulatory definition should or could include or exclude. How can you identify AI’s current, expected and not-yet-foreseen applications? And should the EU be regulating it at all?

These were some of the questions addressed (and ultimately yet to be answered) in the Center for Data Innovation discussion moderated by senior analyst Hodan Omaar.

As explained by Anthony Aguirre, vice president for policy and strategy at the nonprofit Future of Life Institute, the colloquial notion of artificial general intelligence (AGI) is a “selective, human-like, broad, flexible kind of intelligence that can do all kinds of learning and perform all kinds of tasks.”

And, while that doesn’t yet exist, it eventually will, he said.

“We have to talk about how both today’s and tomorrow’s systems behave and talk about them in a way that captures this key characteristic,” said Aguirre. The reality is that they can perform and learn certain tasks, including those for which they weren’t originally intended, designed, or trained.

With larger scale, more parameters and broader datasets, GPAI systems are able to take on a wider set of tasks and activities. But one of the difficulties is anticipating how they could be used downstream, said DeepMind’s Belias.

In legislation, definitions and lists of criteria should specify the depth and breadth of performance, the number of tasks a system can perform, and whether it can perform tasks it hasn’t been trained on before, she said.

But, “that would still not solve the fact that GPAI is not a perfect term.” Belias also emphasized that it is important for regulators and organizations alike to identify responsible development, establish best practices, and build an overall culture of trust and trustworthy AI.

What is ‘future-proof’?

Irene Solaiman, policy director for community and data science platform Hugging Face, pointed to the vagueness of the proposed conditions and the fact that they are not “future-proof.”

“My understanding of ‘future-proof’ is that it implies some level of faith,” she said. It would be helpful, instead, to taxonomize sets of systems and use cases. “The way that AI advancement is going, ‘future-proof’ is a real tough one to hit.”

When it comes to evaluating AI systems based on metrics or performance, Solaiman said her “dorky dream” is that state-of-the-art performance will someday not just mean technical accuracy, but will include qualitative aspects. This means fairness, privacy protection, ethical considerations and value alignment, among other factors.

Also, when benchmarking, what specific tasks should be considered?

“Never underestimate a bored teenager with decent coding skills and an internet connection to find use cases that you might not have thought of,” said Solaiman.

In crafting such landmark legislation with broad-reaching implications, it is critical to work with many disciplines, she said. This includes technical experts, ethical practitioners, social scientists and underrepresented groups, “to provide specific, technically implementable guidance on general purpose systems to not hinder, but to guide innovation.”

Establishing benchmarks, responsibility

Given that the AI Act’s definition of GPAI systems can involve a plurality of contexts, benchmarking and considering accuracy in different tasks — and how many tasks should be considered — can be a challenge, panelists largely agreed.

A percentage accuracy model wouldn’t be useful, said Future of Life’s Aguirre; rather, benchmarking should be context-dependent and compared to extant systems that are performing those same tasks. There should also be some sort of human-comparable threshold.

Andrea Miotti, head of AI policy and governance at research group Conjecture, agreed that GPAI systems such as GPT-3 can have thousands or tens of thousands of downstream customer-facing applications.

“I would especially focus on their capabilities angle, on the fact that they are able to be adaptive to a variety of downstream tasks,” he said.

He also pointed to the open-source quandary: Open models can result in great things, but they can also pose risk. “It can be irresponsible to have everything out there all the time,” said Miotti.

Open-source model exemptions, if implemented, could generate confusion and create loopholes. For example, a developer could release an initially closed-source model as open-source to skirt regulation. Ultimately, there should be an “equitable distribution of regulatory priority between developers and deployers,” said Miotti.

Aguirre agreed that there’s a balance to be struck when it comes to model responsibility.

“It takes a lot of expertise and labor to keep these models safe — in some cases huge teams to really bring them just into basic civility from their untamed initial state,” he said.

He added that “we don’t want a system where one party is responsible for the safety but they can’t really do that.”

A work in progress

Some lawmakers and other stakeholders are of the opinion that everyone working on an open-source system should be responsible or liable, Zenner pointed out.

But, “this would really destroy the whole concept, or the whole idea behind the open-source community,” he said.

Ultimately, this is all a good start, he said, and there is open, positive cooperation between regulatory parties. Regulators are “very much open to feedback,” said Zenner. Moving forward, the process should also involve “a lot of different actors and have a lot of different perspectives,” including from those not usually sought out by policymakers.

“We are aware that there is still a lot of work to be done,” said Zenner.

Author: Taryn Plumb
Source: VentureBeat
