AI regulation in peril: Navigating uncertain times

The Supreme Court recently took a sledgehammer to federal agencies’ powers, as noted by Morning Brew.

Less than a year ago, the drive for AI regulation was gaining significant momentum, marked by key milestones such as the AI Safety Summit in the U.K., the Biden Administration’s AI Executive Order, and the EU AI Act. However, a recent judicial decision and potential political shifts are leading to more uncertainty about the future of AI regulation in the U.S. This article explores the implications of these developments for AI regulation and the potential challenges ahead.

The Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo weakens federal agencies’ authority to regulate various sectors, including AI. By overturning the forty-year-old precedent known as “Chevron deference,” the decision shifts the power to interpret ambiguous laws passed by Congress from federal agencies to the judiciary.

Agency expertise vs. judicial oversight

Existing laws are often vague in many fields, including those related to the environment and technology, leaving interpretation and regulation to the agencies. This vagueness in legislation is often intentional, for both political and practical reasons. Now, however, any regulatory decision by a federal agency based on those laws can be more easily challenged in court, and federal judges have more power to decide what a law means. This shift could have significant consequences for AI regulation. Proponents argue that it ensures a more consistent interpretation of laws, free from potential agency overreach.

However, the danger of this ruling is that in a fast-moving field like AI, agencies often have more expertise than the courts. For example, the Federal Trade Commission (FTC) focuses on consumer protection and antitrust issues related to AI; the Equal Employment Opportunity Commission (EEOC) addresses AI use in hiring and employment decisions to prevent discrimination; and the Food and Drug Administration (FDA) regulates AI in medical devices and software as a medical device (SaMD).

These agencies purposely hire people with AI knowledge for these activities. The judicial branch has no such existing expertise. Nevertheless, the majority opinion said that “…agencies have no special competence in resolving statutory ambiguities. Courts do.” 

Challenges and legislative needs

The net effect of Loper Bright Enterprises v. Raimondo could be to undermine the ability to set up and enforce AI regulations. As stated by the New Lines Institute: “This change [to invalidate Chevron deference] means agencies must somehow develop arguments that involve complex technical details yet are sufficiently persuasive to an audience unfamiliar with the field to justify every regulation they impose.”

In her dissent, Justice Elena Kagan disagreed about which branch could more effectively provide useful regulation: “In one fell swoop, the [court] majority today gives itself exclusive power over every open issue — no matter how expertise-driven or policy-laden — involving the meaning of regulatory law. As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.” Specific to AI, Kagan said during oral arguments in the case: “And what Congress wants, we presume, is for people who actually know about AI to decide those questions.”

Going forward, then, when passing a new law affecting the development or use of AI, if Congress wished for federal agencies to lead on regulation, it would need to state this explicitly within the legislation. Otherwise, that authority would reside with the federal courts. Ellen Goodman, a Rutgers University professor who specializes in information policy law, said in FedScoop that the solution was always getting clear legislation from Congress, but “that’s even more true now.”

Political landscape

However, there is no guarantee that Congress would include such a stipulation, as that depends on the makeup of the body. The recently adopted platform of the Republican Party clearly states an intention to overturn the existing AI Executive Order: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” Per AI industry commentator Lance Eliot in Forbes: “This would presumably involve striking out the stipulations on AI-related reporting requirements, AI evaluation approaches, [and] AI uses and disuses limitations.”

Based on reporting in another Forbes article, one of the people influencing the drive to repeal the AI Executive Order is tech entrepreneur Jacob Helberg, who “believes that existing laws already govern AI appropriately, and that ‘a morass of red tape’ would harm U.S. competition with China.” However, it is those same laws, and the ensuing interpretation and regulation by federal agencies, that have now been undercut by the decision in Loper Bright Enterprises v. Raimondo.

In lieu of the current executive order, the platform adds: “In its place, Republicans support AI development rooted in free speech and human flourishing.” New reporting from the Washington Post cites an effort led by allies of former President Donald Trump to create a new framework that would, among other things, “make America first in AI.” That could include reduced regulations, as the platform states an intention to “cut costly and burdensome regulations,” especially those that, in their view, “stifle jobs, freedom, innovation and make everything more expensive.”

Regulatory outlook

Regardless of which political party wins the White House and control of Congress, there will be a different AI regulatory environment in the U.S. 

Foremost, the Supreme Court’s decision in Loper Bright Enterprises v. Raimondo raises significant concerns about the ability of specialized federal agencies to enforce meaningful AI regulations. In a field as dynamic and technical as AI, the likely impact will be to slow or even thwart effective regulation.

A change in leadership at the White House or in Congress could also change AI regulatory efforts. Should conservatives win, there will likely be less regulation, and the regulation that remains will likely be less restrictive on businesses developing and using AI technologies.

This approach would be in stark contrast to the U.K., where the recently elected Labour Party promised in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models.” The U.S. would also have a far different AI regulatory environment than the EU with its recently passed AI Act.

The net effect of all these changes could be less global alignment on AI regulation, with uncertain consequences for AI development and international cooperation. This regulatory mismatch could complicate international research partnerships, data-sharing agreements and the development of global AI standards. Less regulation of AI could indeed spur innovation in the U.S., but it could also heighten concerns about AI ethics and safety and the potential impact of AI on jobs. This unease could in turn erode trust in AI technologies and the companies that build them.

It is possible that, in the face of weakened regulations, major AI companies would proactively collaborate on ethical-use and safety guidelines. Similarly, there could be a greater focus on developing AI systems that are more interpretable and easier to audit. This could help companies stay ahead of potential backlash and demonstrate responsible development.

At a minimum, there will be a period of greater uncertainty about AI regulation. As the political landscape shifts and regulations change, it is crucial for policymakers, industry leaders and the tech community to collaborate effectively. Unified efforts are essential to ensure that AI development remains ethical, safe and beneficial for society.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

Author: Gary Grossman, Edelman
Source: Venturebeat
Reviewed By: Editorial Team