Nearly three years after draft rules were proposed, European Parliament lawmakers approved the AI Act today. The approval, which came a month earlier than expected, is the final endorsement of the first comprehensive regulation around high-risk AI systems, transparency for AI that interacts with humans, and AI systems in regulated products.
The act, which required the final endorsement of the EU Parliament, will now most likely enter into force this May. Italian lawmaker Brando Benifei, an AI Act co-lead, described it as “a historic day” in a press conference.
US companies need to prepare for EU AI Act regulation
The news is important for US companies, say experts, who must make sure they comply with the EU AI Act while still moving forward with AI adoption plans. Steve Chase, vice chair of AI and digital innovation at consultancy KPMG US, said the EU AI Act “will have far-reaching implications not only for the European market, but also for the global AI landscape and for U.S. businesses,” adding that “US companies must ensure they have the right guardrails in place to comply with the EU AI Act and forthcoming regulation, without hitting the brakes on the path to value with generative AI.”
Forrester Principal Analyst Enza Iannopollo added: “Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI. Every other region can only play catch-up.” The fact that the EU moved the vote forward by a month also shows a recognition of the fast-moving space, she said.
The extraterritorial effect of the rules, the “hefty” fines, and the pervasiveness of the requirements across the AI value chain mean most global organizations using AI must comply with the Act — with some of its requirements set to be enforced later this year, she said.
“There is a lot to do and little time to do it,” she explained. “Organizations must assemble their ‘AI compliance team’ to get started. Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”
IBM and Salesforce shared comment on the EU AI Act
While foundation model companies like OpenAI, Google and Anthropic have not commented on the European Parliament’s approval of the EU AI Act, other tech leaders have voiced support.
In a statement, Christina Montgomery, vice president and chief privacy and trust officer at IBM, commended the EU. “The risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems,” she said. “IBM stands ready to lend our technology and expertise – including our watsonx.governance product – to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI.”
And Eric Loeb, executive vice president of global government affairs at Salesforce, said in a blog post: “We believe that by creating risk-based frameworks such as the EU AI Act, pushing for commitments to ethical and trustworthy AI, and convening multi-stakeholder groups, regulators can make a substantial positive impact. Salesforce applauds EU institutions for taking leadership in this domain.”
Author: Sharon Goldman
Source: Venturebeat