In the push for development, is the U.S. prepared to regulate AI?

This article was contributed by Ching-Fong (CF) Su, VP of Machine Learning, Hyperscience

The emergence of artificial intelligence (AI) has captured the attention of nearly every industry, including the public sector. In fact, this June, Lynne Parker became the United States’ first AI czar, tasked with evaluating societal risks associated with AI and preventing harm from new technologies. More recently, the U.S. Department of Commerce announced plans to form a committee to advise federal agencies on AI research and development, and the National Artificial Intelligence Advisory Committee said it plans to focus on several issues, including U.S. competitiveness and how AI can expand opportunities across geographic regions.

What’s become increasingly clear today, supported by continued growth in AI spending, is that the administration is adamant about bolstering AI efforts and addressing AI regulation across the public and private sectors. Yet many still wonder whether the increased focus on AI will produce any tangible outcomes.

To regulate or not to regulate AI?

Since AI’s inception, we’ve experienced the incredible ways it can assist humanity, from Siri to Amazon Alexa to Tesla, and even Carnegie Mellon University’s planned laboratory designed to automate labor-intensive experiments with robotics and AI. However, concerns about AI’s unknowns continue to prevail.

Despite conversations reminiscent of sci-fi conventions, AI is just an algorithm: a series of numbers and equations that doesn’t warrant the same suspicion as an alien supervillain. Historically, fundamental fields like applied math, computer science, and physics have not been regulated. Medicine is the lone exception among comparable areas of innovation, given its unique nature of interacting with living things. The government’s push for AI development is the right path forward, but when it comes to the best way to regulate the technology, we will continue to walk a very fine line.

When it comes to AI research and development (R&D), it’s impractical for the government to regulate at scale. The barrier to entry for AI development is low, since it primarily relies on widely accessible computing resources and data, which makes regulating AI advancement nearly impossible. As of right now, and for the foreseeable future, China and the U.S. are the two clear AI superpowers. To remain competitive, the U.S. should ramp up AI R&D efforts rather than incite fear and slow the promising advancement of the technology.

AI oversight is still a necessity

While we should continue accelerating AI R&D efforts, human oversight and legal-specific regulations can institute protections for the vulnerable. After all, as AI brings new societal advancements and improvements, those in the industry must protect individuals and societies from unintended outcomes even as they accelerate adoption. For example, a legal-specific regulation could actually speed up, not slow down, the adoption of AI technology that has a direct impact on public safety, such as autonomous vehicles.

There are three levels of unintended outcomes that AI regulators and practitioners must monitor:

  • AI systems: They won’t deliver the correct results every single time. We need to design human oversight into AI systems, often referred to as “human-in-the-loop,” to create a system of checks and balances (see the sketch after this list). Regulations and standards are especially needed in applications such as autonomous vehicles, where failures can cause direct and immediate harm.
  • The business world: AI is creating a new way for companies to compete, and increasing access to data greatly changes market dynamics. We need regulations, analogous to antitrust law, for businesses utilizing AI technology and collecting data.
  • Societal impact: In the future, AI could displace human workers. However, strong regulations and restrictions on AI applications are neither appropriate nor adequate remedies for potential job loss.
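To make the “human-in-the-loop” idea concrete, here is a minimal sketch in Python of one common pattern: confidence-based escalation, where the system acts autonomously on high-confidence predictions and routes everything else to a person. All names and the threshold are hypothetical and purely illustrative, not drawn from any particular product or API.

```python
"""Minimal human-in-the-loop sketch: the model handles high-confidence
cases automatically and escalates the rest to a human reviewer.
Every name and value here is illustrative, not a real system's API."""

import random

CONFIDENCE_THRESHOLD = 0.90  # below this, a person makes the call


def model_predict(document: str) -> tuple[str, float]:
    # Stand-in for a real model: returns (label, confidence score).
    return "invoice", random.random()


def human_review(document: str, suggested_label: str) -> str:
    # Stand-in for a review queue where a person confirms or corrects.
    print(f"Escalated for review: {document!r} (model suggested {suggested_label!r})")
    return suggested_label


def process(document: str) -> str:
    label, confidence = model_predict(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # machine decides autonomously
    return human_review(document, label)  # human provides the check


if __name__ == "__main__":
    print(process("scanned_form_001"))
```

The design choice worth noting is that the threshold, not the model, encodes the oversight policy: lowering it shifts more decisions to people, which is exactly the kind of dial regulators and practitioners can reason about.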

Strengthening human-machine collaboration to address today’s largest societal issues

As in most policy debates, AI regulation requires a delicate balance: continuously fostering advancement while carefully considering the technology’s widespread use in society.

On the one hand, regulation can strengthen human-machine collaboration: AI offers enormous potential, and setting standards will be essential to creating equitable outcomes and ensuring a balanced, collaborative future between humans and machines. On the other hand, as AI adoption continues its upward trend, we must educate and upskill our society and workforce to work successfully alongside the technology.

If it is designed, implemented, and applied in the right ways, the possibilities for AI technology could be infinite. However, society must remain educated about the technology and able to trust in its future. The Biden administration’s efforts and goals ultimately highlight the importance of ethical, regulated AI in ensuring a balanced, coexistent future.

While ethical, unbiased AI is the way of the future, its biggest test will be ensuring that advancement and regulation can coexist harmoniously to address some of today’s most significant global challenges.

By CF Su, VP of Machine Learning, Hyperscience




