At the Microsoft Ignite conference today, the software giant unveiled new data security and compliance capabilities in Microsoft Purview aimed at protecting information used in generative AI systems like Copilot.
The new features will allow Copilot users on Microsoft 365 to control what data the AI assistant can access, automatically classify sensitive data in responses, and institute compliance controls around LLM usage.
Herain Oberoi, general manager of Microsoft data security, compliance, and privacy, and Rudra Mitra, corporate vice president at Microsoft, spoke with VentureBeat ahead of the announcement about the new capabilities.
“Data is effectively the foundation on which AI is built. AI is only as good as the data that goes in. And so it turns out, it’s an extremely important part of it,” Oberoi said, highlighting the critical role data plays in AI applications.
“With Purview, now, if you connect those two dots, Microsoft is looking to secure the future of AI or secure the future of the data with AI. And I think that’s just such a responsible approach,” Mitra told VentureBeat.
Visibility into Copilot risks and usage
A new AI hub in Purview will give administrators visibility into Copilot usage across the organization. They can see which employees are interacting with the AI and assess associated risks.
Sensitive data will also be blocked from being input into Copilot based on user risk profiles. And output from the AI will inherit protective labels from source data.
“It’s not just visibility across Microsoft’s Copilots, we think the complete picture is what’s important for the customer here,” Mitra said.
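Taken together, the mechanics described above amount to two checks: a gate on what a given user can feed the assistant, and label inheritance on whatever comes back. The Python sketch below is purely illustrative; the label names, risk tiers, and policy thresholds are hypothetical stand-ins for this example, not Purview's actual API or defaults.

```python
# Illustrative sketch only: the labels, risk tiers, and policy logic here are
# hypothetical, not the Purview API. It models the two behaviors described
# above: blocking sensitive input based on user risk, and output inheriting
# the most restrictive label among its source documents.
from dataclasses import dataclass
from enum import IntEnum


class Label(IntEnum):
    """Sensitivity labels, ordered least to most restrictive."""
    PUBLIC = 0
    GENERAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3


@dataclass
class User:
    name: str
    risk_level: int  # e.g., 0 = low, 1 = medium, 2 = high


def allow_prompt(user: User, prompt_label: Label) -> bool:
    """Block sensitive data from entering the assistant for risky users."""
    # Hypothetical policy: high-risk users may only submit PUBLIC content,
    # medium-risk users up to GENERAL, low-risk users up to CONFIDENTIAL.
    max_label_by_risk = {0: Label.CONFIDENTIAL, 1: Label.GENERAL, 2: Label.PUBLIC}
    return prompt_label <= max_label_by_risk[user.risk_level]


def inherit_label(source_labels: list[Label]) -> Label:
    """AI output inherits the most restrictive label among its sources."""
    return max(source_labels, default=Label.PUBLIC)


if __name__ == "__main__":
    analyst = User("analyst", risk_level=2)
    print(allow_prompt(analyst, Label.CONFIDENTIAL))  # False: input blocked
    print(inherit_label([Label.GENERAL, Label.HIGHLY_CONFIDENTIAL]).name)
    # HIGHLY_CONFIDENTIAL: the response carries the strictest source label
```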
Compliance policies extended to Copilot
On the compliance side, Purview’s auditing, retention, and communication monitoring will now extend to Copilot interactions.
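Concretely, that means Copilot prompts and responses become searchable records alongside other audited activity, subject to the same retention policies. The sketch below is a hypothetical illustration of that idea in Python; the record shape and the "CopilotInteraction" type name are assumptions made for the example, not Purview's audit API.

```python
# Hypothetical sketch: filter an exported audit log for Copilot interactions
# newer than a retention cutoff. The record fields and the "CopilotInteraction"
# record type are assumptions for illustration, not the Purview audit API.
from datetime import datetime, timezone


def copilot_records(records: list[dict], cutoff: datetime) -> list[dict]:
    """Return Copilot interaction records created at or after the cutoff."""
    return [
        r for r in records
        if r.get("RecordType") == "CopilotInteraction"
        and datetime.fromisoformat(r["CreationTime"]) >= cutoff
    ]


if __name__ == "__main__":
    sample = [
        {"RecordType": "CopilotInteraction",
         "CreationTime": "2023-11-15T12:00:00+00:00",
         "UserId": "analyst@contoso.com"},
        {"RecordType": "ExchangeItem",  # unrelated activity, filtered out
         "CreationTime": "2023-11-15T12:05:00+00:00",
         "UserId": "analyst@contoso.com"},
    ]
    cutoff = datetime(2023, 1, 1, tzinfo=timezone.utc)
    for rec in copilot_records(sample, cutoff):
        print(rec["UserId"], rec["CreationTime"])
```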
But this is just the beginning: Microsoft plans to expand Purview’s protections beyond Copilot to AI built in-house and to third-party consumer apps like ChatGPT.
With AI poised for greater adoption, Microsoft is positioning itself at the forefront of responsible and ethical data use in enterprise AI systems. Robust data governance will be key to ensuring privacy and preventing misuse in this next frontier of technology.
However, truly responsible AI will require buy-in across the entire tech industry. Competitors like Google, Amazon, and IBM will need to make data ethics a priority as well; if users can’t trust AI, it will never reach its full potential.
The path forward is clear: enterprises want both cutting-edge innovation and cast-iron data protection. Whichever company makes trust job one will lead us into the AI-powered future.
Author: Michael Nuñez
Source: VentureBeat