Stanford and NYU: Only 15% of AI used by federal agencies is highly sophisticated

More than 40% of U.S. federal agencies and departments have experimented with AI tools, but only 15% currently use highly sophisticated AI, according to an analysis by Stanford University computer scientists published today in “Government by Algorithm,” a joint report from Stanford and New York University.

“This is concerning because agencies will find it harder to realize gains in accuracy and efficiency with less sophisticated tools. This result also underscores AI’s potential to widen, not narrow, the public-private technology gap,” the report reads.

The warning comes from an analysis, released today, of 142 federal agencies and departments and of the legal and policy implications of government use of machine learning, or “algorithmic governance.” The report excludes military and intelligence agencies and any federal agency with fewer than 400 employees.

AI in use today includes an autonomous vehicle project at the U.S. Postal Service; Food and Drug Administration detection of adverse drug events; and facial recognition by the U.S. Department of Homeland Security and ICE. Major use cases today focus heavily on enforcement of regulatory mandates, adjudication of benefits and privileges, service delivery, citizen engagement, regulation analysis, and personnel management.

The “Government by Algorithm” report found that 53% of AI use stems from in-house development by agency technologists, with the remainder coming from contractors. It recommends that federal agencies build up in-house AI talent to vet systems from contractors and to create AI that is policy compliant, customized to agency needs, and accountable.

It also warns that government use of AI could “fuel political anxieties” and creates the risk of AI systems being gamed by “better-heeled groups with resources and know-how.”

“An enforcement agency’s algorithmic predictions, for example, may fall more heavily on smaller businesses that, unlike larger firms, lack a stable of computer scientists who can reverse-engineer the agency’s model and keep out of its cross-hairs. If citizens come to believe that AI systems are rigged, political support for a more effective and tech-savvy government will evaporate quickly,” the report reads.

The report, put together by a group of lawyers, computer scientists, and social scientists, also acknowledges concerns that wider public-sector use of AI can expand government power and disempower marginalized groups, concerns AI Now Institute’s Meredith Whittaker and Algorithmic Justice League’s Joy Buolamwini raised in relation to facial recognition in congressional testimony over the past year.

The report calls its systematic survey of federal government use of AI essential for lawmakers to create “sensible and working prescriptions.”

“To achieve meaningful accountability, concrete and technically informed thinking within and across contexts — not facile calls for prohibition, nor blind faith in innovation — is urgently needed,” the report reads.

Drawing on resources from Stanford Law School, the Stanford Institute for Human-Centered AI, and the Stanford Institute for Economic Policy Research, the report arrives as lawmakers from Washington state to Washington, D.C. consider facial recognition regulation. Last week, Senators Cory Booker (D-NJ) and Jeff Merkley (D-OR) proposed the Ethical Use of Facial Recognition Act, which would impose a facial recognition moratorium on federal agencies and employees until limits can be put in place.

The European Commission today presented a set of initiatives to attract billions in AI investment in member nations and to require that high-risk AI used in policing, health care, or applications affecting people’s rights be tested and certified.

“We want the application of these new technologies to deserve the trust of our citizens,” EU Commission president Ursula von der Leyen said in a statement.

The Trump administration is drafting its own set of regulatory AI principles for federal agencies that White House CTO Michael Kratsios said other nations should emulate.

An earlier Stanford Institute for Human-Centered AI report called for a $120 billion federal investment in AI to maintain U.S. supremacy in the field, something government officials have called essential to U.S. national defense and the economy.

Author: Khari Johnson.
Source: VentureBeat
