The issue of trust is never far behind the implementation of AI, especially with a technology as newly significant as generative AI. At Transform 2023, Hillary Ashton, CPO at Teradata, dove into how businesses must navigate the complexities of data governance, privacy and transparency to ensure trust in their AI-driven operations.
“Generative AI has brought the entire arena of advanced analytics to the boardroom as a discussion, whereas before it was a bit more of a back-office discussion, with a lot of untapped potential,” Ashton said. “It’s an exciting space, but also something that folks have to think about carefully from a trust perspective. Generative AI is really putting the point on what is protected data, what is PII data. How do you want to treat that data, especially with large language models and generative AI?”
The role of trust in generative AI adoption
Enterprises must inherently be trustworthy for their customers, Ashton said, and it starts with foundational data quality and clear separation of PII data.
“That sounds very table stakes, but it’s very difficult for many enterprises to actually put that into place,” she added. “They need technology and people and process to be able to do that.”
Enterprises must establish robust data governance frameworks, treating data as a product and ensuring that clean, non-PII data is made available to users. Regulatory compliance is paramount, and organizations should be transparent about their use of generative AI and its impact on data privacy. It’s also crucial to safeguard intellectual property (IP) and protect proprietary information when collaborating with third-party vendors or utilizing LLMs.
“That’s where I go back to having a clear understanding of how you want to use advanced analytics — you need to understand what is not only your PII, but what is your IP?” she said. “How are you protecting that? You might contemplate that if you have your senior data scientist writing the prompts, that’s your IP as an organization. That’s not IP that you want to give away for free to a competitor. That’s maybe not explicitly stated when we think about things like PII data. Now prompts become your IP. Now you have a whole new legal practice around prompt protection and IP.”
That even includes how you’ve chosen to structure your data, which is highly proprietary. If you’re Bank A competing with Bank B, you don’t want to give your competitor a leg up through a vendor using an LLM built on your data structure, no matter how sanitized it is.
From there, she said, it’s about “making sure that you understand whatever market you’re in, what the regulatory compliance looks like, and you’re building with that end state in mind, and then being transparent to your own customers about how you’re using gen AI, how you’re not using gen AI, so [it] can be trusted.”
Trust is also not limited to privacy; it extends to the reliability and accuracy of model outcomes. Regular evaluation of models and proactive measures to rectify underperformance are essential to maintain user trust.
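As a minimal illustration of the PII separation Ashton describes, the Python sketch below drops known PII columns and masks PII-looking strings before a record is shared with downstream users or an LLM. The field names and regex patterns are hypothetical stand-ins; a real deployment would lean on a governance catalog and dedicated PII-detection tooling rather than hand-written rules.

```python
import re

# Hypothetical field-level policy: these column names and patterns are
# illustrative stand-ins, not a real governance catalog.
PII_FIELDS = {"name", "email", "ssn", "phone"}

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_text(text: str) -> str:
    """Replace PII-looking substrings with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def sanitize_record(record: dict) -> dict:
    """Drop known PII columns, then scrub any free text that remains."""
    kept = {k: v for k, v in record.items() if k not in PII_FIELDS}
    return {k: scrub_text(v) if isinstance(v, str) else v for k, v in kept.items()}

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "note": "Call 555-123-4567 about SSN 123-45-6789.",
    "balance": 1042.17,
}
print(sanitize_record(record))
# {'note': 'Call [PHONE] about SSN [SSN].', 'balance': 1042.17}
```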
So where to get started?
Getting started means working back from the outcomes you want to achieve, Ashton said, and those outcomes fall into two buckets. The first is an area where you already have advanced capabilities and want to maintain that leadership advantage with advanced analytics. The second is addressing table-stakes capabilities that the competition may already have but you don’t.
From there, the first consideration is data governance and respecting the sovereignty of IP and PII data. The second is being able to do that at scale. The last is model ops: managing models as they go into production and recognizing when a model starts to underperform or return unacceptable results.
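A minimal sketch of that model-ops piece, assuming a labeled feedback stream is available: track the production model’s accuracy over a rolling window and flag it when quality falls below an agreed floor. The window size, threshold and simulated drift below are all illustrative placeholders.

```python
import random
from collections import deque

class ModelMonitor:
    """Rolling-window quality check for a model in production.

    The window size and accuracy threshold are illustrative; real values
    would be tuned per use case and wired into alerting or retraining.
    """

    def __init__(self, window: int = 200, min_accuracy: float = 0.90):
        self.results = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def is_underperforming(self) -> bool:
        # Only judge once a full window has accumulated, to avoid noisy alarms.
        if len(self.results) < self.results.maxlen:
            return False
        return sum(self.results) / len(self.results) < self.min_accuracy

# Simulated feedback stream: the model starts accurate, then drifts.
monitor = ModelMonitor()
for step in range(2000):
    accuracy_now = 0.95 if step < 1000 else 0.80  # drift kicks in halfway
    monitor.record(random.random() < accuracy_now)
    if monitor.is_underperforming():
        print(f"Underperformance detected at step {step}: trigger review or retraining.")
        break
```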
Finally, it’s crucial not to get lost in the novelty, but to evaluate ROI and price-performance.
“We’re all super excited about LLMs and gen AI in general,” Ashton said. “The cost of running some of that may become prohibitive over time for use cases that aren’t high-value. Make sure that you are not solving something that could be done with a BI pivot chart with an LLM just because it’s cool — it seems like a crazy example, but it’s not that crazy.”
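To make the price-performance point concrete, a back-of-envelope comparison like the one below can be run before committing to an LLM. Every figure is a hypothetical placeholder, not a quoted rate.

```python
# Back-of-envelope price-performance check: would an LLM-backed answer pay
# for itself against the existing BI report? All figures are assumptions.
queries_per_month = 50_000
tokens_per_query = 1_500       # assumed average prompt + completion size
price_per_1k_tokens = 0.01     # assumed blended $ per 1,000 tokens

llm_monthly_cost = queries_per_month * tokens_per_query / 1_000 * price_per_1k_tokens
bi_monthly_cost = 300.0        # assumed amortized cost of the existing dashboard

print(f"LLM route: ${llm_monthly_cost:,.2f}/month")  # $750.00/month here
print(f"BI route:  ${bi_monthly_cost:,.2f}/month")
if llm_monthly_cost > bi_monthly_cost:
    print("The pivot chart wins unless the LLM adds value worth the difference.")
```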
Author: VB Staff
Source: VentureBeat