
This AI attorney says companies need a chief AI officer — pronto

When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, “people were laughing at me,” he said. 

Newman, who leads global law firm Baker McKenzie’s machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, “What’s that?”

But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation and legislation currently swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said.

This recognition led to a new Baker McKenzie report, released in March, called “Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence.” The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organization’s adoption, use and management of AI-enabled tools. 

In a press release accompanying the survey, Newman said: “Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly.” 

Corporate blind spots about AI risk

According to Newman, the survey found significant corporate blind spots around AI risk. For one thing, C-level executives inflated the risk of AI cyber intrusions while downplaying AI risks related to algorithmic bias and reputation. And while all executives surveyed said their board of directors has some awareness of AI’s potential enterprise risk, just 4% called these risks ‘significant,’ while more than half considered them only ‘somewhat significant.’ 

The survey also found that organizations “lack a solid grasp on bias management once AI-enabled tools are in place.” When managing implicit bias in AI tools in-house, for example, just 61% of organizations have a team in place to up-rank or down-rank data, and only 50% say they can override some – not all – AI-enabled outcomes. 

In addition, the survey found that two-thirds of companies do not have a chief artificial intelligence officer, leaving AI oversight to fall to the CTO or CIO. At the same time, only 41% of corporate boards include an expert in AI. 

An AI regulation inflection point

Newman emphasized that a greater focus on AI in the C-suite, and particularly in the boardroom, is a must. 

“We’re at an inflection point where Europe and the U.S. are going to be regulating AI,” he said. “I think corporations are going to be woefully on their back feet reacting, because they just don’t get it – they have a false sense of security.”

While he is anti-regulation in many areas, Newman claims that AI is profoundly different. “AI has to have an asterisk by it because of its impact,” he said. “It’s not just computer science, it’s about human ethics…it goes to the essence of who we are as humans and the fact that we are a Western liberal democratic society with a strong view of individual rights.” 

From a corporate governance standpoint, AI is different as well, he continued: “Unlike, for example, the financial function, which is the dollars and cents accounted for and reported properly within the corporate structure and disclosed to our shareholders, artificial intelligence and data science involves law, human resources and ethics,” he said. “There are a multitude of examples of things that are legally permissible, but are not in tune with the corporate culture.” 

However, AI in the enterprise tends to be fragmented and disparate, he explained. 

“There’s no omnibus regulation where that person who’s meaning well could go into the C-suite and say, ‘We need to follow this. We need to train. We need compliance.’ So, it’s still sort of theoretical, and C-suites do not usually respond to theoretical,” he said. 

Finally, Newman added, there are many internal political constituencies around AI, including data science and supply chain. “They all say, ‘it’s mine,’” he said. 

The need for a chief AI officer

What will help, said Newman, is to appoint a chief AI officer (CAIO) – that is, a C-suite-level executive who reports to the CEO and sits at the same level as a CIO, CISO or CFO. The CAIO would have ultimate responsibility for oversight of all things AI in the corporation. 

“Many people want to know how one person can fit that role, but we’re not saying the CFO knows every calculation of financial aspects going on deep in the corporation – but it reports up to her,” he said.

A CAIO would also be charged with reporting to shareholders and, externally, to regulators and governing bodies.

“Most importantly, they would have a role for corporate governance, oversight, monitoring and compliance of all things AI,” Newman added. 

Newman admits, though, that installing a CAIO wouldn’t solve every AI-related challenge.

“Would it be perfect? No, nothing is – but it would be a large step forward,” he said.

The chief AI officer should have a background spanning AI and computer science, as well as some grounding in ethics and the law.

While just over a third of Baker McKenzie’s survey respondents said they currently have “something like” a chief artificial intelligence officer, Newman thinks that’s a “generous” statistic. 

“I think most boards are woefully behind, relying on a patchwork of chief information officers, chief security officers, or heads of HR sitting in the C-suite,” he said. “It’s very cobbled together and is not a true job description held by one person with the type of oversight and matrix responsibility I’m talking about as far as a real CAIO.” 

The future of the chief AI officer

These days, Newman says, people no longer ask ‘What is a chief AI officer?’ as often. Instead, organizations claim they are “ethical” and that their AI is not implicitly biased.

“There’s a growing awareness that the corporation’s going to have to have oversight, as well as a false sense of security that the oversight that exists in most organizations right now is enough,” he continued. “It isn’t going to be enough when the regulators, the enforcers and the plaintiffs’ lawyers come – if I were to switch sides and start representing the consumers and the plaintiffs, I could poke giant-sized holes in the majority of corporate oversight and governance for AI.” 

Organizations need a chief AI officer, he emphasized, because “the questions being posed by this technology far transcend the zeros, the ones, the data sets.” 

Organizations are “playing with live ammo,” he said. “AI is not an area that should be left solely to the data scientist.” 


Author: Sharon Goldman
Source: VentureBeat
