As AI booms, reducing risks of algorithmic systems is a must, says new ACM brief

AI might be booming, but a new brief from the Association for Computing Machinery’s (ACM) global Technology Policy Council, to be published tomorrow, notes that the ubiquity of algorithmic systems “creates serious risks that are not being adequately addressed.” 

According to the ACM brief, which the organization says is the first in a series on systems and trust, perfectly safe algorithmic systems are not possible. However, achievable steps can be taken to make them safer, and doing so should be a high research and policy priority for governments and all stakeholders.

The brief’s key conclusions:

  • To promote safer algorithmic systems, research is needed on both human-centered and technical software development methods, improved testing, audit trails and monitoring mechanisms, as well as training and governance (see the audit-trail sketch after this list).
  • Building organizational safety cultures requires management leadership, focus in hiring and training, adoption of safety-related practices, and continuous attention.
  • Internal and independent human-centered oversight mechanisms, both within government and organizations, are necessary to promote safer algorithmic systems.
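
The brief stays at the level of principles, but as a concrete illustration of the kind of audit-trail mechanism it calls for, here is a minimal sketch in Python. The `AuditTrail` class, its record fields and the loan-screening example are hypothetical assumptions for illustration, not drawn from the ACM brief.

```python
import json
import time
import uuid


class AuditTrail:
    """Minimal append-only audit log for algorithmic decisions (illustrative sketch)."""

    def __init__(self, path: str):
        self.path = path  # one JSON record per line (JSONL)

    def record(self, model_id: str, inputs: dict, output, operator: str) -> str:
        """Append one decision record and return its unique ID."""
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_id": model_id,   # which model/version made the decision
            "inputs": inputs,       # what the system saw
            "output": output,       # what it decided
            "operator": operator,   # the human accountable at the time
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["id"]


# Hypothetical usage: log a loan-screening decision so auditors can review it later.
trail = AuditTrail("decisions.jsonl")
trail.record("credit-model-v2", {"income": 52000, "score": 640}, "deny", "analyst-17")
```

An append-only record of who decided what, with which model and inputs, is the simplest version of the oversight the brief describes; a real deployment would layer on tamper resistance, retention policies and access controls.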

AI systems need safeguards and rigorous review

Computer scientist Ben Shneiderman, Professor Emeritus at the University of Maryland and author of Human-Centered AI, was the lead author on the brief, which is the latest in a series of short technical bulletins on the impact and policy implications of specific tech developments. 

While algorithmic systems — which go beyond AI and ML technology and involve people, organizations and management structures — have improved an immense number of products and processes, he noted, unsafe systems can cause profound harm (think self-driving cars or facial recognition).

Governments and stakeholders, he explained, need to prioritize and implement safeguards, in the same way a new food product or pharmaceutical must go through a rigorous review process before it is made available to the public.

Comparing AI to the civil aviation model

Shneiderman compared creating safer algorithmic systems to civil aviation — which still has risks but is generally acknowledged to be safe.

“That’s what we want for AI,” he explained in an interview with VentureBeat. “It’s hard to do. It takes a while to get there. It takes resources, effort and focus, but that’s what’s going to make people’s companies competitive and make them durable. Otherwise, they will succumb to a failure that will potentially threaten their existence.”

The effort towards safer algorithmic systems is a shift from focusing on AI ethics, he added.

“Ethics are fine, we all want them as a good foundation, but the shift is towards what do we do?” he said. “How do we make these things practical?”

That is particularly important when dealing with applications of AI that are not lightweight — that is, consequential decisions such as financial trading, legal issues and hiring and firing, as well as life-critical medical, transportation or military applications.

“We want to avoid the Chernobyl of AI, or the Three Mile Island of AI,” Shneiderman said. “The degree of effort we put into safety has to rise as the risks grow.”

Developing an organizational safety culture

According to the ACM brief, organizations need to develop a “safety culture that embraces human factors engineering” — that is, how systems work in actual practice, with human beings at the controls — which must be “woven” into algorithmic system design.

The brief also noted that methods that have proven effective in cybersecurity — including adversarial “red team” tests, in which expert users try to break the system, and “bug bounties” offered to users who report omissions and errors capable of leading to major failures — could be useful in making algorithmic systems safer.
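
To make the cybersecurity analogy concrete, here is a minimal, hypothetical sketch of what an automated “red team” probe of a classifier could look like: it searches for small input perturbations that flip the model’s decision. The function, the scikit-learn toy model and all parameters are illustrative assumptions, not methods prescribed by the brief.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def red_team_probe(model, x, n_trials=1000, eps=0.05, seed=0):
    """Toy adversarial probe: try random perturbations of size eps and
    return one that changes the model's prediction, if any is found.
    Real red teams use far more systematic, domain-specific attacks."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(x.reshape(1, -1))[0]
    for _ in range(n_trials):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)  # small random noise
        if model.predict(candidate.reshape(1, -1))[0] != baseline:
            return candidate  # an input that flips the decision
    return None  # no flip found within the trial budget


# Demo on synthetic data: fit a simple classifier, then probe one test point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
adversarial = red_team_probe(clf, X[0], eps=0.5)
print("decision flipped" if adversarial is not None else "no flip found")
```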

Many governments are already at work on these issues, through efforts such as the U.S. Blueprint for an AI Bill of Rights and the EU AI Act. But for enterprise businesses, these efforts could offer a competitive advantage, Shneiderman emphasized.

“This is not just good guy stuff,” he said. “This is a good business decision for you to make, and a good decision for you to invest in the notion of safety and the larger notion of a safety culture.”

Author: Sharon Goldman
Source: VentureBeat
