AI & RoboticsNews

The EU AI Act is near. US AI regulation is coming. Here’s what you need to know | The AI Beat

Six months after ChatGPT became an overnight success, the U.S. and the EU are racing to develop rules and draft laws to address both the benefits and risks of generative AI.

These days, the news on AI regulation efforts is piling up so fast that it’s hard to keep track. But now is definitely the time to perk up and pay attention — because AI regulation is coming, whether organizations are ready for it or not.

Companies are certainly champing at the bit to take advantage of generative AI: According to a new McKinsey study, generative AI’s impact on productivity could add trillions of dollars in value to the global economy. But there are also a host of risks tied to powerful AI that can’t be ignored, from AI systems that produce biased results to unauthorized deepfakes, cybersecurity concerns and high-risk military use cases.

So the U.S. and the EU are moving as fast as … well, as fast as governments can. Here’s an overview of where AI regulation is at:

The EU AI Act is not a done deal yet

Sorry folks, the EU AI Act isn’t signed, sealed and delivered. But two years after draft rules were proposed and many months after negotiations began, the legislation, which would establish the first comprehensive AI regulation covering high-risk AI systems, transparency requirements for AI that interacts with humans, and AI systems embedded in regulated products, is headed into the final stretch. Last week, the European Parliament became the third of the three core EU institutions to pass a draft of the law, after the Council of the European Union and the European Commission. The next stage is the trilogue, in which EU lawmakers and member states negotiate the final details of the bill.

According to Brookings, this trilogue is expected to progress fairly quickly: the European Commission hopes to vote on the AI Act by the end of 2023, before any political fallout from the 2024 European Parliament elections.

Plans for U.S. AI regulation are far behind the EU … for now

U.S. AI regulation efforts are nowhere near the finish line, though multiple states and municipalities have passed or introduced a variety of AI-related bills. The federal government, however, is in the midst of a flurry of hearings and forums on possible AI regulation, aiming to establish priorities for what should be regulated and how.

For example, President Biden was in San Francisco yesterday meeting with AI experts and researchers, and the White House chief of staff’s office is meeting multiple times a week to develop ways for the federal government to ensure the safe use of artificial intelligence, the Biden administration said. Meanwhile, Sam Altman’s testimony to a Senate subcommittee was just the start of a series of congressional hearings on everything from AI and human rights to how AI can advance “innovation towards the national interest.”

Of course, the U.S. is hardly starting from scratch: Last October, the White House released its Blueprint for an AI Bill of Rights. Developed by the White House Office of Science and Technology Policy (OSTP), the Blueprint is a non-binding document that outlines five principles that should guide the design, use and deployment of automated systems, as well as technical guidance toward implementing the principles, including recommended action for a variety of federal agencies. In January, the NIST AI Risk Management Framework for trustworthy AI was released. And in May, the Biden Administration announced that it would publicly assess existing generative AI systems and that the Office of Management and Budget would release for public comment draft policy guidance on the use of AI systems by the U.S. government.

OpenAI lobbied the EU to get less regulation, while telling the U.S. it wants more

According to a new TIME investigation, while Sam Altman may have told U.S. senators that OpenAI welcomed — no, wanted — increased regulation, behind the scenes he has lobbied to water down elements of the EU AI Act to reduce the company’s regulatory burden.

In 2022, OpenAI argued that the EU AI Act should not consider GPT-3, the precursor to ChatGPT and DALL-E 2, to be “high risk,” which would have required increased transparency, traceability and human oversight.

But in his May Senate testimony, Altman said “we need a new framework” that goes beyond Section 230 to regulate AI, and that the power to issue licenses and take them away “clearly … should be part of what an agency can do.”

Public support is growing, but Congress has had little success regulating tech

The U.S. Congress is making almost-daily moves when it comes to AI regulation. Today, Senator Chuck Schumer unveiled a long-awaited legislative framework in a speech, warning that “Congress must join the AI revolution” now or risk losing its only chance to regulate the powerful technology.

And public support is clearly growing for AI regulation: A poll released last month found that a majority — 54% — believe Congress “should take swift action to regulate AI in a way that promotes privacy, fairness, and safety, and ensures maximum benefit to society with minimal risks.”

But unfortunately, Congress doesn’t have much to show for years of trying to regulate technology. Despite multiple hearings, reports and proposals last year, Congress ended 2022 without taking major steps to regulate Big Tech.

Expect a long, hot summer when it comes to AI regulation

No matter how AI regulation efforts play out, you can expect lots of news on the topic over the next couple of months. The White House considers AI to be a “top priority,” while there are more Congressional hearings planned. Even at the state and local level, there is plenty on the table: Enforcement begins in July, for example, on New York City’s Automated Employment Decision Tool (AEDT) law, one of the first in the U.S. aimed at reducing bias in AI-driven recruitment and employment decisions.

And in the EU, lawmakers will be working to get the EU AI Act to the finish line — and if they have any chance of getting final approval of the bill by the end of this year, they will certainly be negotiating all summer long.

Author: Sharon Goldman
Source: Venturebeat
