Building and securing a governed AI infrastructure for the future

Unlocking AI’s potential to deliver greater efficiency, cost savings and deeper customer insights requires a consistent balance between cybersecurity and governance.

AI infrastructure must be designed to adapt and flex as a business’s direction changes. Cybersecurity must protect revenue, and governance must stay in sync with compliance requirements both internally and across a company’s entire footprint.

Any business looking to scale AI safely must continually look for new ways to strengthen the core infrastructure components. Just as importantly, cybersecurity, governance and compliance must share a common data platform that enables real-time insights.

“AI governance defines a structured approach to managing, monitoring and controlling the effective operation of a domain and human-centric use and development of AI systems,” Venky Yerrapotu, founder and CEO of 4CRisk, told VentureBeat. “Packaged or integrated AI tools do come with risks, including biases in the AI models, data privacy issues and the potential for misuse.”

A robust AI infrastructure makes audits easier to automate, helps AI teams find roadblocks and identifies the most significant gaps in cybersecurity, governance and compliance.

“With little to no current industry-approved governance or compliance frameworks to follow, organizations must implement the proper guardrails to innovate safely with AI,” Anand Oswal, SVP and GM of network security at Palo Alto Networks, told VentureBeat. “The alternative is too costly, as adversaries are actively looking to exploit the newest path of least resistance: AI.”

Defending against threats to AI infrastructure

While malicious attackers’ goals vary from financial gain to disrupting or destroying rival nations’ AI infrastructure, all seek to improve their tradecraft. Cybercrime gangs and nation-state actors alike are moving faster than even the most advanced enterprise or cybersecurity vendor.

“Regulations and AI are like a race between a mule and a Porsche,” Etay Maor, chief security strategist at Cato Networks, told VentureBeat. “There’s no competition. Regulators always play catch-up with technology, but in the case of AI, that’s particularly true. But here’s the thing: Threat actors don’t play nice. They’re not confined by regulations and are actively finding ways to jailbreak the restrictions on new AI tech.”

Chinese, North Korean and Russian cybercriminal and state-sponsored groups are actively targeting both physical and AI infrastructure, using AI-generated malware to exploit vulnerabilities more efficiently and in ways that often evade traditional cybersecurity defenses.

Security teams are still at risk of losing the AI war as well-funded cybercriminal organizations and nation-states target AI infrastructures of countries and companies alike.

One effective security measure is model watermarking, which embeds a unique identifier into AI models to detect unauthorized use or tampering. Additionally, AI-driven anomaly detection tools are indispensable for real-time threat monitoring.
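As a minimal sketch of the tamper-detection side of watermarking, assuming a PyTorch model and a secret key held outside the codebase, a keyed fingerprint of the weights can be recorded at release time and re-checked before serving. The key and model here are illustrative, not from the article:

```python
import hashlib
import hmac

import torch

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; keep real keys in a secrets manager

def fingerprint_model(model: torch.nn.Module) -> str:
    """Keyed fingerprint over the weights; any tampering changes the digest."""
    mac = hmac.new(SECRET_KEY, digestmod=hashlib.sha256)
    for name, tensor in sorted(model.state_dict().items()):
        mac.update(name.encode())
        mac.update(tensor.detach().cpu().numpy().tobytes())
    return mac.hexdigest()

model = torch.nn.Linear(4, 2)
release_fp = fingerprint_model(model)          # record at release time
assert fingerprint_model(model) == release_fp  # re-check before serving
```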

All of the companies VentureBeat spoke with on the condition of anonymity are actively using red-teaming techniques. Anthropic, for one, has demonstrated the value of human-in-the-middle design for closing security gaps in model testing.

“I think human-in-the-middle design is with us for the foreseeable future to provide contextual intelligence, human intuition to fine-tune an LLM [large language model] and to reduce the incidence of hallucinations,” Itamar Sher, CEO of Seal Security, told VentureBeat.

Models are the high-risk threat surfaces of an AI infrastructure

Every model released into production is a new threat surface an organization needs to protect. Gartner’s annual AI adoption survey found that 73% of enterprises have deployed hundreds or thousands of models.

Malicious attackers exploit weaknesses in models using a broad base of tradecraft techniques. NIST’s Artificial Intelligence Risk Management Framework is an indispensable document for anyone building AI infrastructure and provides insights into the most prevalent types of attacks, including data poisoning, evasion and model stealing.

AI Security writes, “AI models are often targeted through API queries to reverse-engineer their functionality.”
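Because extraction attacks of this kind typically require large volumes of queries, one common first-line control is per-client query monitoring and throttling. A minimal sketch, with hypothetical thresholds and client identifiers:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 500  # hypothetical threshold; tune per model and client tier

_history = defaultdict(deque)  # client_id -> timestamps of recent queries

def allow_query(client_id):
    """Sliding-window rate limit: a crude first defense against extraction-style query floods."""
    now = time.time()
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop timestamps outside the window
    if len(q) >= MAX_QUERIES_PER_WINDOW:
        return False  # throttle and flag the client for review
    q.append(now)
    return True
```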

Getting AI infrastructure right is also a moving target, CISOs warn. “Even if you’re not using AI in explicitly security-centric ways, you’re using AI in ways that matter for your ability to know and secure your environment,” Merritt Baer, CISO at Reco, told VentureBeat.

Put design-for-trust at the center of AI infrastructure

Just as an operating system has specific design goals that strive to deliver accountability, explainability, fairness, robustness and transparency, so too does AI infrastructure.

Implicit throughout the NIST framework is a design-for-trust roadmap, which offers a practical definition to guide infrastructure architects. NIST emphasizes that validity and reliability are must-have design goals for AI infrastructure to deliver trustworthy results and consistent performance.

Figure source: NIST, January 2023, DOI: 10.6028/NIST.AI.100-1.

The critical role of governance in AI infrastructure

AI systems and models must be developed, deployed and maintained ethically, securely and responsibly. Governance must be designed to deliver workflows, visibility and real-time updates on algorithmic transparency, fairness, accountability and privacy. Strong governance starts with models that are continuously monitored, audited and aligned with societal values.

Governance frameworks should be integrated into AI infrastructure from the first phases of development. “Governance by design” embeds these principles into the process.
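One way to make “governance by design” concrete is to attach machine-readable governance metadata to every model and gate deployment on it. The fields below are a hypothetical sketch, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    """Hypothetical governance-by-design metadata attached to every model release."""
    model_id: str
    owner: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    bias_audit_passed: bool = False
    privacy_review_passed: bool = False
    last_audited: str = ""  # ISO-8601 date

def approve_for_deployment(record: ModelGovernanceRecord) -> bool:
    # A simple deployment gate: block releases that skipped required reviews.
    return record.bias_audit_passed and record.privacy_review_passed and bool(record.last_audited)
```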

“Implementing an ethical AI framework requires focus on security, bias and data privacy aspects not only during the designing process of the solution but also throughout the testing and validation of all the guardrails before deploying the solutions to end users,” WinWire CTO Vineet Arora told VentureBeat.

Designing AI infrastructures to reduce bias

Identifying and reducing bias in AI models is critical to delivering accurate, ethically sound results. Organizations need to take accountability for how their AI infrastructures monitor, control and improve models to reduce and, where possible, eliminate bias.

Organizations that take accountability for their AI infrastructures rely on adversarial debiasing, which trains models to minimize the relationship between protected attributes (such as race or gender) and outcomes, reducing the risk of discrimination. Another approach is resampling training data to ensure balanced representation across the groups relevant to a given industry; a minimal sketch follows.
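This sketch, with placeholder column names, oversamples each protected group to match the size of the largest group:

```python
import pandas as pd

def resample_balanced(df: pd.DataFrame, group_col: str, random_state: int = 0) -> pd.DataFrame:
    """Oversample each group to the size of the largest, so protected groups are equally represented."""
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=len(g) < target, random_state=random_state)
        for _, g in df.groupby(group_col)
    ]
    # Shuffle so the oversampled rows are not clustered together.
    return pd.concat(parts).sample(frac=1, random_state=random_state).reset_index(drop=True)
```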

“Embedding transparency and explainability into the design of AI systems enables organizations to understand better how decisions are being made, allowing for more effective detection and correction of biased outputs,” says NIST. Providing transparent insights into how AI models make decisions allows organizations to better detect, correct and learn from biases.

How IBM is managing AI governance

IBM’s AI Ethics Board oversees the company’s AI infrastructure and AI projects, ensuring each stays compliant with industry and internal ethics standards. IBM established a governance framework that includes what it calls “focal points”: mid-level executives with AI expertise who review projects in development to ensure compliance with IBM’s Principles of Trust and Transparency.

IBM says this framework helps reduce and control risks at the project level, lowering the overall risk to its AI infrastructure.

Christina Montgomery, IBM’s chief privacy and trust officer, says, “Our AI ethics board plays a critical role in overseeing our internal AI governance process, creating reasonable internal guardrails to ensure we introduce technology into the world responsibly and safely.”

AI infrastructure must deliver explainable AI

Efforts to close the gaps between cybersecurity, compliance and governance are accelerating across AI infrastructure use cases. Two trends emerged from VentureBeat’s research: agentic AI and explainable AI. Organizations with AI infrastructure in place are looking to flex and adapt their platforms to make the most of each.

Of the two, explainable AI is the more nascent, offering insights that improve model transparency and help troubleshoot bias. “Just as we expect transparency and rationale in business decisions, AI systems should be able to provide clear explanations of how they reach their conclusions,” Joe Burton, CEO of Reputation, told VentureBeat. “This fosters trust and ensures accountability and continuous improvement.”

Burton added: “By focusing on these governance pillars — data rights, regulatory compliance, access control and transparency — we can leverage AI’s capabilities to drive innovation and success while upholding the highest standards of integrity and responsibility.”
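As one illustration of the kind of per-decision explanation Burton describes, the open-source SHAP library attributes a model’s prediction to individual input features. A minimal sketch on a public dataset:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])
# Each attribution row explains one prediction feature by feature and
# can be logged alongside the decision for audit and bias review.
```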


Author: Louis Columbus
Source: VentureBeat
Reviewed By: Editorial Team
