
Opaque Systems unveils confidential AI and analytics tools ahead of Confidential Computing Summit

AI and analytics company Opaque Systems today announced new innovations for its confidential computing platform. The new offerings are designed to keep organizational data confidential while it is used with large language models (LLMs).

Opaque will showcase the innovations during its keynote address at the inaugural Confidential Computing Summit, to be held June 29 in San Francisco.

The offerings comprise privacy-preserving generative AI optimized for Microsoft Azure’s Confidential Computing Cloud and a zero-trust analytics platform, Data Clean Room (DCR). According to the company, the generative AI offering layers multiple protections by combining secure hardware enclaves with cryptographic fortification.

“The Opaque platform ensures data remains encrypted end to end during model training, fine-tuning and inference, thus guaranteeing that privacy is preserved,” Jay Harel, VP of product at Opaque Systems, told VentureBeat. “To minimize the likelihood of data breaches throughout the lifecycle, our platform safeguards data at rest, in transit and while in use.”


Through these offerings, Opaque aims to enable organizations to analyze confidential data while keeping it protected from unauthorized access.

To support confidential AI use cases, Opaque has expanded the platform to safeguard machine learning and AI models by executing them on encrypted data within trusted execution environments (TEEs), preventing unauthorized access to the underlying data.
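Opaque has not published its API, but the general pattern the article describes can be sketched. In the minimal Python sketch below, the enclave boundary is simulated in-process and a stub stands in for a real model; all names are hypothetical, and only the `cryptography` package’s standard Fernet interface is used.

```python
# Minimal sketch of the TEE pattern: data stays encrypted everywhere
# except inside the trusted boundary, where it is decrypted, processed
# and re-encrypted. Requires: pip install cryptography
from cryptography.fernet import Fernet

data_key = Fernet.generate_key()          # held by the data owner
owner = Fernet(data_key)

# Outside the enclave, the host only ever sees ciphertext.
ciphertext = owner.encrypt(b"confidential training record")

def run_in_enclave(token: bytes, key: bytes) -> bytes:
    """Stands in for code running inside a TEE. In a real deployment the
    key is released only to an attested enclave (see the next sketch)."""
    enclave = Fernet(key)
    plaintext = enclave.decrypt(token)    # plaintext exists only in here
    result = f"processed {len(plaintext)} bytes".encode()  # a model would run here
    return enclave.encrypt(result)        # result leaves the enclave encrypted

encrypted_result = run_in_enclave(ciphertext, data_key)
print(owner.decrypt(encrypted_result).decode())  # only the key holder can read it
```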

The company asserts that its zero-trust Data Clean Rooms (DCRs) encrypt data at rest, in transit and in use, keeping all data sent to the clean room confidential throughout the process.


Ensuring data security through confidential computing 

LLMs like those behind ChatGPT rely on public data for training. Opaque asserts that these models’ true potential can only be realized by training them on an organization’s confidential data without risk of exposure.

Opaque recommends that companies adopt confidential computing to mitigate this risk. Confidential computing is a method that can safeguard data during the entire model training and inference process. The company claims that the method can unlock the transformative capabilities of LLMs.
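What makes that trust chain work in practice is remote attestation: the decryption key is released only to an enclave whose measured code matches what the data owner approved. The self-contained simulation below illustrates the gate; real TEEs such as Intel SGX or AMD SEV verify hardware-signed attestation reports rather than the bare hashes used here, and every name is illustrative.

```python
# Simulated attestation-gated key release: the broker hands out the key
# only if the enclave's reported code measurement matches the one approved.
import hashlib

ENCLAVE_CODE = b"def infer(x): ..."  # code the enclave claims to be running
APPROVED = hashlib.sha256(ENCLAVE_CODE).hexdigest()

def release_key(reported_measurement: str, key: bytes) -> bytes:
    # Real confidential computing verifies a hardware-signed report here.
    if reported_measurement != APPROVED:
        raise PermissionError("measurement not approved; key withheld")
    return key

# The attested enclave receives the key; tampered code does not.
release_key(hashlib.sha256(ENCLAVE_CODE).hexdigest(), b"secret-key")
try:
    tampered = hashlib.sha256(b"def infer(x): exfiltrate(x)").hexdigest()
    release_key(tampered, b"secret-key")
except PermissionError as err:
    print(err)
```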

“We utilize Confidential Computing technology to leverage specialized hardware made available by cloud providers,” Opaque’s Harel told VentureBeat. “This privacy-enhancing technology ensures that datasets are encrypted end-to-end throughout the machine learning lifecycle. With Opaque’s platform, the model, prompt and context remain encrypted during training and while running inference.”

Harel said that, lacking a way to share and analyze data securely, organizations with multiple data owners have resorted to restricting data access, eliminating data sets, masking data fields and preventing data sharing outright.

He said there are three main issues when it comes to generative AI and privacy, particularly with LLMs:

  • Queries: LLM providers have visibility into user queries, raising the possibility of access to sensitive information like proprietary code or personally identifiable information (PII). This privacy concern intensifies with the growing risk of hacking.
  • Training models: To improve AI models, providers access and analyze their internal training data. However, this retention of training data can lead to an accumulation of confidential information, increasing vulnerability to data breaches.
  • IP issues for organizations with proprietary models: Fine-tuning models using company data necessitates granting proprietary LLM providers access to the data, or deploying proprietary models within the organization. As external individuals access private and sensitive data, the risk of hacking and data breaches increases.

The company has developed its generative AI technology with these issues in mind. It aims to enable secure collaboration among organizations and data owners while ensuring regulatory compliance. 

For instance, one company can train and fine-tune a specialized LLM while another uses it for inference; both companies’ data remains private, with neither gaining access to the other’s.
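Extending the earlier sketch, that two-party arrangement can be illustrated as follows: each party encrypts under its own key, and only the simulated enclave ever holds both, so neither side can read the other’s inputs. The party roles and stub logic are invented for illustration, not Opaque’s implementation.

```python
# Two-party sketch: party A supplies encrypted training data, party B an
# encrypted prompt; plaintext for both exists only inside the enclave.
from cryptography.fernet import Fernet

key_a, key_b = Fernet.generate_key(), Fernet.generate_key()
training_data = Fernet(key_a).encrypt(b"party A's proprietary records")
prompt = Fernet(key_b).encrypt(b"party B's confidential prompt")

def enclave(data_token, prompt_token, ka, kb):
    records = Fernet(ka).decrypt(data_token)   # A's data, never seen by B
    query = Fernet(kb).decrypt(prompt_token)   # B's prompt, never seen by A
    answer = f"answer to {query!r} from model tuned on {len(records)} bytes".encode()
    return Fernet(kb).encrypt(answer)          # only B can decrypt the response

print(Fernet(key_b).decrypt(enclave(training_data, prompt, key_a, key_b)).decode())
```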

“With Opaque’s platform ensuring that all data is encrypted throughout its entire lifecycle, organizations would be able to train, fine-tune and run inference on LLMs without actually gaining access to the raw data itself,” said Harel.

The company highlighted its use of secure hardware enclaves and cryptographic fortification for the zero-trust Data Clean Room (DCR) offering. It claims that this confidential computing approach provides multiple layers of protection against cyberattacks and data breaches.

Operating in a cloud-native environment, the system executes within a secure enclave on the user’s cloud instance (such as Azure or GCP). This setup restricts data movement, enabling businesses to retain their existing data infrastructure.

“Our mission is to ensure everybody can trust the privacy of their confidential data, be it customer PII or proprietary business process data. For AI workloads, we enable businesses to keep their data encrypted and secure throughout the lifecycle, from model training and fine-tuning to inference, thus guaranteeing that privacy is preserved,” added Harel. “Data is kept confidential at rest, in transit and while in use, significantly reducing the likelihood of loss.”



Author: Victor Dey
Source: VentureBeat
