Protect AI, an AI and machine learning (ML) security company, announced it has successfully raised $35 million in a series A funding round. Evolution Equity Partners led the round and saw participation from Salesforce Ventures and existing investors Acrew Capital, boldstart ventures, Knollwood Capital and Pelion Ventures.
Founded by Ian Swanson, who previously led Amazon Web Services’ worldwide AI and ML business, the company aims to strengthen ML systems and AI applications against security vulnerabilities, data breaches and emerging threats.
The AI/ML security challenge has become increasingly complex for companies striving to maintain comprehensive inventories of assets and elements in their ML systems. The rapid growth of supply chain assets, such as foundational models and external third-party training datasets, amplifies this difficulty.
These security challenges expose organizations to risks around regulatory compliance, PII leakages, data manipulation and model poisoning.
To address these concerns, Protect AI has developed a security platform, AI Radar, that provides AI developers, ML engineers and AppSec professionals real-time visibility, detection and management capabilities for their ML environments.
“Machine learning models and AI applications are typically built using an assortment of open-source libraries, foundational models and third-party datasets. AI Radar creates an immutable record to track all these components used in an ML model or AI application in the form of a ‘machine learning bill of materials (MLBOM),’” Ian Swanson, CEO and cofounder of Protect AI, told VentureBeat. “It then implements continuous security checks that can find and remediate vulnerabilities.”
Having secured total funding of $48.5 million to date, the company intends to use the newly acquired funds to scale sales and marketing efforts, enhance go-to-market activities, invest in research and development and strengthen customer success initiatives.
As part of the funding deal, Richard Seewald, founder and managing partner at Evolution Equity Partners, will join the Protect AI board of directors.
Securing AI/ML models through proactive threat visibility
The company claims that traditional security tools lack the necessary visibility to monitor dynamic ML systems and data workflows, leaving organizations ill-equipped to detect threats and vulnerabilities in the ML supply chain.
To mitigate this concern, AI Radar incorporates continuously integrated security checks to safeguard ML environments against active data leakages, model vulnerabilities and other AI security risks.
The platform uses integrated model scanning tools for LLMs and other ML inference workloads to detect security policy violations, model vulnerabilities and malicious code injection attacks. Additionally, AI Radar can integrate with third-party AppSec and CI/CD orchestration tools and model robustness frameworks.
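The article doesn't describe how such model scanning works internally, but a common technique in this space is inspecting a serialized model's pickle stream for opcodes that can execute arbitrary code when the file is loaded. The following is an illustrative sketch of that idea, not Protect AI's actual implementation:

```python
import io
import pickletools

# Pickle opcodes that can trigger code execution when the stream is loaded.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> list[str]:
    """Return descriptions of potentially dangerous opcodes in a pickle stream."""
    findings = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPS:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings
```

A benign pickle of plain data produces no findings, while a pickle that references an importable callable (the usual injection vector) surfaces a `GLOBAL`/`STACK_GLOBAL` lookup.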
The company stated that the platform’s visualization layer provides real-time insights into an ML system’s attack surface. It also automatically generates and updates a secure, dynamic MLBOM that tracks all components and dependencies within the ML system.
Protect AI emphasizes that this approach guarantees comprehensive visibility and auditability in the AI/ML supply chain. The system maintains immutable time-stamped records, capturing any policy violations and changes made.
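Protect AI hasn't published the mechanics of these records, but the property described (append-only, time-stamped, tamper-evident) is commonly achieved with a hash chain, where each record's hash covers its predecessor's. A minimal sketch of that pattern, not the product's implementation:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_record(chain: list[dict], event: dict) -> None:
    """Append a time-stamped record whose hash commits to the previous record."""
    body = {
        "timestamp": time.time(),
        "event": event,
        "prev_hash": chain[-1]["hash"] if chain else GENESIS,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to a past record breaks verification."""
    prev = GENESIS
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"] or rec["prev_hash"] != prev:
            return False
        prev = rec["hash"]
    return True
```

Because each entry commits to the one before it, retroactively altering a logged policy violation invalidates every subsequent record.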
“AI Radar employs a code-first approach, allowing customers to enable their ML pipeline and CI/CD system to collect metadata during every pipeline execution. As a result, it creates an MLBOM containing comprehensive details about the data, model artifacts and code utilized in ML models and AI applications,” explained Protect AI’s Swanson. “Each time the pipeline runs, a version of the MLBOM is captured, enabling real-time querying and implementation of policies to assess vulnerabilities, PII leakages, model poisoning, infrastructure risks and regulatory compliance.”
Regarding the platform’s MLBOM compared to a traditional software bill of materials (SBOM), Swanson highlighted that while an SBOM constitutes a complete inventory of a codebase, an MLBOM encompasses a comprehensive inventory of data, model artifacts and code.
“The components of an MLBOM can include the data that was used in training, testing and validating an ML model, how the model was tuned, the features in the model, model package formatting, OSS supply chain artifacts and much more,” explained Swanson. “Unlike SBOM, our platform provides a list of all components and dependencies in an ML system so that users have full provenance of their AI/ML models.”
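Taken together, Swanson's description suggests an MLBOM is a structured inventory keyed to a single pipeline run: hashed datasets, a model artifact digest, the code revision, and pinned dependencies. A hypothetical schema illustrating the concept (field names are assumptions, not Protect AI's actual format):

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class MLBOM:
    """One MLBOM snapshot captured per pipeline run (hypothetical schema)."""
    pipeline_run_id: str
    captured_at: str
    datasets: dict            # dataset name -> content hash (train/test/validation)
    model_artifact_sha256: str
    code_commit: str          # git revision of the training code
    dependencies: dict        # package -> pinned version (OSS supply chain)

def capture_mlbom(run_id: str, datasets: dict, model_bytes: bytes,
                  commit: str, deps: dict) -> str:
    """Serialize a versioned MLBOM that can be stored and queried later."""
    bom = MLBOM(
        pipeline_run_id=run_id,
        captured_at=datetime.now(timezone.utc).isoformat(),
        datasets={name: hashlib.sha256(blob).hexdigest()
                  for name, blob in datasets.items()},
        model_artifact_sha256=hashlib.sha256(model_bytes).hexdigest(),
        code_commit=commit,
        dependencies=deps,
    )
    return json.dumps(asdict(bom), indent=2)
```

Capturing one such document per pipeline execution is what would allow the "real-time querying" across versions that Swanson describes.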
Swanson pointed out that numerous large enterprises use multiple ML software vendors, such as Amazon SageMaker, Azure Machine Learning and Dataiku, resulting in widely varying configurations of their ML pipelines.
In contrast, he highlighted that AI Radar remains vendor-agnostic and integrates all these diverse ML systems into a unified abstraction, or “single pane of glass.” Through this, customers can readily see where any ML model resides, where it originated, and which data and components were used in its creation.
Swanson said that the platform also aggregates metadata on users’ machine learning usage and workloads across all organizational environments.
“The metadata collected can be used to create policies, deliver model BoMs (bills of materials) to stakeholders, and to identify the impact and remediate risk of any component in your ML ecosystem over every platform in use,” he told VentureBeat. “The solution dashboards … user roles/permissions that bridge the gap between ML builder teams and app security professionals.”
What’s next for Protect AI?
Swanson told VentureBeat that the company plans to maintain R&D investment in three crucial areas: enhancing AI Radar’s capabilities, expanding research to identify and report additional critical vulnerabilities in the ML supply chain of both open-source and vendor offerings, and furthering investments in the company’s open-source projects NB Defense and Rebuff AI.
A successful AI deployment, he pointed out, can swiftly enhance company value through innovation, improved customer experience and increased efficiency. Hence, safeguarding AI in proportion to the value it generates becomes paramount.
“We aim to educate the industry about the distinctions between typical application security and security of ML systems and AI applications. Simultaneously, we deliver easy-to-deploy solutions that ensure the security of the entire ML development lifecycle,” said Swanson. “Our focus lies in providing practical threat solutions, and we have introduced the industry’s first ML bill of materials (MLBOM) to identify and address risks in the ML supply chain.”
Author: Victor Dey
Source: Venturebeat