
Are you sure you want to share that with ChatGPT? How Metomic helps stop data leaks

OpenAI's ChatGPT is one of the most powerful tools to come along in a lifetime, set to revolutionize the way many of us work. 

But its use in the enterprise is still a quandary: Businesses know that generative AI is a competitive force, yet the consequences of leaking sensitive information to the platforms are significant.

Workers aren't content to wait until organizations work this question out, however: Many are already using ChatGPT and inadvertently leaking sensitive data, without their employers' knowledge. 

Companies need a gatekeeper, and Metomic aims to be one: The data security software company today released its new browser plugin Metomic for ChatGPT, which tracks user activity in OpenAI’s powerful large language model (LLM).

“There’s no perimeter to these apps, it’s a wild west of data sharing activities,” Rich Vibert, Metomic CEO, told VentureBeat. “No one’s actually got any visibility at all.” 

Research has shown that 15% of employees regularly paste company data into ChatGPT, the leading types being internal business information (43%), source code (31%) and personally identifiable information (PII) (12%). The top departments importing data into the model include R&D, finance, and sales and marketing. 

“It’s a brand new problem,” said Vibert, adding that there is “massive fear” among enterprises. “They’re just naturally concerned about what employees could be putting into these tools. There’s no barrier to entry — you just need a browser.”

Metomic has found that employees are leaking financial data such as balance sheets, “whole snippets of code” and credentials including passwords. But one of the most significant data exposures comes from customer chat transcripts, said Vibert.

Customer chats that stretch on for hours, or even days and weeks, can accumulate "lines and lines and lines of text," he said. Customer support teams are increasingly turning to ChatGPT to summarize all this, but the material is rife with sensitive data, including not only names and email addresses but credit card numbers and other financial information. 
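The exposure Vibert describes can be narrowed by masking obvious identifiers before a transcript ever reaches an LLM. The sketch below is illustrative only, not Metomic's implementation; the patterns and function names are assumptions, and production classifiers are far richer than two regexes.

```python
import re

# Illustrative patterns only; real data-loss-prevention tools use
# many more classifiers with validation (e.g. Luhn checks for cards).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(transcript: str) -> str:
    """Replace likely PII with typed placeholders before sending text to an LLM."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()} REDACTED]", transcript)
    return transcript

chat = "Customer: my email is jane@example.com, card 4111 1111 1111 1111."
print(redact(chat))
```

A support agent could run a transcript through a filter like this before pasting it into a summarization prompt, so the model sees the shape of the conversation without the identifying values.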

“Basically complete customer profiles are being put into these tools,” said Vibert. 

Competitors and hackers can easily get hold of this information, he noted, and its loss can also put companies in breach of contract.

Beyond inadvertent leaks from unsuspecting users, other employees who may be departing a company can use gen AI tools in an attempt to take data with them (customer contacts, for instance, or login credentials). Then there’s the whole malicious insider problem, in which workers look to deliberately cause harm to a company by stealing or leaking company information. 

While some enterprises have moved to outright block the use of ChatGPT and other rival platforms among their workers, Vibert says this simply isn’t a viable option. 

“These tools are here to stay,” he said, adding that ChatGPT offers “massive value” and great competitive advantage. “It is the ultimate productivity platform, making entire workforces exponentially more efficient.” 

Metomic’s ChatGPT integration sits within a browser, identifying when an employee logs into the platform and performing real-time scanning of the data being uploaded. 

If sensitive data such as PII, security credentials or IP is detected, human users are notified in the browser or other platform — such as Slack — and they can redact or strip out sensitive data or respond to prompts such as ‘remind me tomorrow’ or ‘that’s not sensitive.’
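The scan-and-notify flow described above can be sketched in a few lines. This is a hypothetical stand-in for the plugin's logic, not its actual code; the detector patterns, class names and return values are all assumptions.

```python
import re
from dataclasses import dataclass

# Hypothetical detectors standing in for the plugin's real-time classifiers.
DETECTORS = {
    "pii_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credential": re.compile(r"(?i)\b(password|api[_-]?key)\s*[:=]\s*\S+"),
}

@dataclass
class Finding:
    category: str
    snippet: str

def scan_prompt(text: str) -> list:
    """Scan text about to be submitted and return any sensitive matches."""
    findings = []
    for category, pattern in DETECTORS.items():
        for match in pattern.finditer(text):
            findings.append(Finding(category, match.group()))
    return findings

def handle(text: str) -> str:
    """Surface findings instead of blocking, mirroring the approach described."""
    findings = scan_prompt(text)
    if not findings:
        return "send"
    # In the product, the user then picks an action such as redact,
    # 'remind me tomorrow' or 'that's not sensitive', and security
    # teams can optionally be alerted in parallel.
    return f"prompt user: {len(findings)} finding(s)"

print(handle("Summarize this: password: hunter2, contact bob@corp.com"))
```

The key design point, per Vibert, is that a clean prompt passes straight through ("send"), while a risky one pauses for a human decision rather than being silently blocked.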

Security teams can also receive alerts when employees upload sensitive data. 

Vibert emphasized that the platform does not block activities or tools, instead providing enterprises visibility and control over how they are being used to minimize their risk exposure.

“This is data security through the lens of employees,” he said. “It’s putting the controls in the hands of employees and feeding data back to the analytics team.” 

Otherwise it’s “just noise and noise and noise” that can be impossible for security and analytics teams to sift through, Vibert noted. 

“IT teams can’t solve this general problem of SaaS gen AI sharing,” he said. “That brings alert fatigue to whole new levels.”

Today’s enterprises are using a multitude of SaaS tools: A staggering 991 by one estimate — yet just a quarter of those are connected. 

“We’re seeing a massive rise in the number of SaaS apps being used across organizations,” said Vibert. 

Metomic’s platform connects to other SaaS tools across the business environment and is pre-built with 150 data classifiers to recognize common critical data risks based on context such as industry or geography-specific regulation. Enterprises can also create data classifiers to identify their most vulnerable information.
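One way such context-aware classifiers could be organized is a registry filtered by attributes like geography, with room for enterprise-defined entries. The structure and names below are assumptions for illustration, not Metomic's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Classifier:
    name: str
    pattern: str                                 # regex for the sensitive value
    regions: set = field(default_factory=set)    # empty set = applies everywhere

# A few stand-ins for the ~150 pre-built classifiers mentioned above.
BUILT_IN = [
    Classifier("us_ssn", r"\b\d{3}-\d{2}-\d{4}\b", {"US"}),
    Classifier("uk_nino", r"\b[A-Z]{2}\d{6}[A-D]\b", {"UK"}),
    Classifier("email", r"[\w.+-]+@[\w-]+\.[\w.]+"),
]

def active_classifiers(registry: list, region: str) -> list:
    """Select the classifiers relevant to an organization's geography."""
    return [c for c in registry if not c.regions or region in c.regions]

# Enterprises can append their own classifiers for proprietary data,
# e.g. an internal project-code format.
registry = BUILT_IN + [Classifier("project_code", r"\bPRJ-\d{4}\b")]
print([c.name for c in active_classifiers(registry, "UK")])
```

Filtering by context keeps irrelevant detectors (a US Social Security number format, say) from firing in a UK deployment, which is one way to cut the "noise" Vibert warns about.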

“Just knowing where people are putting data into one tool or another doesn’t really work, it’s when you put all this together,” said Vibert. 

IT teams can look beyond just data to “data hot spots” among certain departments or even particular employees, he explained. For example, they can determine how a marketing team is using ChatGPT and compare that to use in other apps such as Slack or Notion. Similarly, the platform can determine if data is in the wrong place or accessible to non-relevant people.

“It’s this idea of finding risks that matter,” said Vibert. 

He pointed out that there's not only a browser version of ChatGPT — many apps simply have the model built in. For instance, data imported into Slack may end up in ChatGPT one way or another along the way. 

“It’s hard to say where that supply chain ends,” said Vibert. “It’s complete lack of visibility, let alone controls.” 

Going forward, the number of SaaS apps will only continue to increase, as will the use of ChatGPT and other powerful gen AI tools and LLMs.

As Vibert put it: “It’s not even day zero of a long journey ahead of us.”



Author: Taryn Plumb
Source: Venturebeat
Reviewed By: Editorial Team
