
Threat modeling can protect data used by AI assistants



The growth of AI-driven voice assistants has brought new privacy and security challenges. Jamie Tomasello, head of security programs and GRC at Gusto, sat down with VentureBeat executive editor Fahmida Rashid at a session of VentureBeat’s Transform 2021 summit to discuss the data security questions that need to be considered when developing or deploying AI assistants.

This interview has been edited for clarity and brevity.

VentureBeat: What should organizations be thinking about as they collect and store user data?

Jamie Tomasello: Established security, compliance, and IT professionals should rely on vendor security questionnaires such as the Standardized Information Gathering (SIG) questionnaire or the Consensus Assessments Initiative Questionnaire (CAIQ). There are also third-party audits, assessments, and certifications, such as SOC 2 reports or the ISO 27000 series.

Newer companies, such as AI startups, may not yet have that security maturity or business prioritization. For them, the focus needs to be on several deeper questions, including:

  • What is the security road map for controls in the organization and security features in the product?
  • What is the plan for security maturity, from a staffing and development perspective?
  • What data is the AI product trained on? Is it representative of your data or the people this AI product will be interacting with?

Finally, organizations need to determine if the product or service they are developing exists within their organization’s risk tolerance.
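
To make that determination concrete, the answers to questions like these can be tracked against the organization’s risk tolerance. Below is a minimal, hypothetical sketch in Python; the vendor name, questionnaire fields, and gap-counting threshold are illustrative assumptions, not part of SIG, CAIQ, or any standard tooling.

    # Hypothetical vendor-review record: tie questionnaire answers to the
    # organization's risk tolerance. Names and scoring are illustrative.
    VENDOR_REVIEW = {
        "vendor": "example-ai-startup",
        "questionnaire": "CAIQ",  # could also be SIG, a SOC 2 report, etc.
        "answers": {
            "security_roadmap_documented": True,
            "security_staffing_plan": False,
            "training_data_representative": True,
        },
    }

    RISK_TOLERANCE = 1  # max unresolved gaps the organization will accept

    # Count unresolved gaps and decide whether the vendor fits the
    # organization's risk tolerance or needs escalation.
    gaps = [q for q, ok in VENDOR_REVIEW["answers"].items() if not ok]
    decision = "within tolerance" if len(gaps) <= RISK_TOLERANCE else "escalate"
    print(f"{VENDOR_REVIEW['vendor']}: {len(gaps)} gap(s) -> {decision}")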

VentureBeat: What should organizations keep in mind while they are designing and deploying applications that use data they don’t own?

Tomasello: Typically, security teams or privacy counsel are brought in at the last minute of the product lifecycle, after a feature or product is built. To eliminate last-minute surprises or delays in release, it is key to incorporate security and privacy earlier in the design and development process. Product managers should include security and privacy during specification development.

There is a tool called “threat modeling,” which, as described by the Electronic Frontier Foundation, poses a series of questions that help you think about your approach to data security, such as “What do I want to protect?” and “Who do I want to protect it from?”

When we think about who we want to protect, in particular, we need to think about not only our company or our data as a target … we also need to think about our customers or our users as the target. We really need to think about how an AI product, or the implementation of an AI service into our products, could be abused.
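
As an editorial illustration, here is a minimal sketch of how those threat-modeling questions might be captured in code. The Asset and Threat structures and the likelihood-times-impact scoring are hypothetical assumptions for this example, not the EFF’s methodology or any standard library.

    from dataclasses import dataclass, field

    # Hypothetical worksheet built around the questions quoted above.
    @dataclass
    class Threat:
        adversary: str    # "Who do I want to protect it from?"
        consequence: str  # what happens if the adversary succeeds
        likelihood: int   # 1 (rare) .. 5 (expected)
        impact: int       # 1 (minor) .. 5 (severe)

        @property
        def risk(self) -> int:
            # Simple likelihood x impact score; real programs use richer models.
            return self.likelihood * self.impact

    @dataclass
    class Asset:
        name: str  # "What do I want to protect?"
        threats: list[Threat] = field(default_factory=list)

    # Model users, not just the company, as potential targets.
    transcripts = Asset("customer voice transcripts")
    transcripts.threats.append(Threat(
        adversary="third party abusing the assistant integration",
        consequence="exposure of customers' personal data",
        likelihood=3,
        impact=5,
    ))

    # Review the highest-risk threats first.
    for t in sorted(transcripts.threats, key=lambda t: t.risk, reverse=True):
        print(f"{transcripts.name}: {t.adversary} (risk {t.risk})")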

VentureBeat: With AI assistants used as accessibility tools, how should companies address the potential of bias?

Tomasello: Your team should be representative of the people you serve, and so should your user personas. If we know that AI assistants are being used as accessibility tools, then we need to ensure that we continue to include a user persona with accessibility challenges, even as a product evolves for a more general audience.

We also need to be willing to accept feedback on our products and team composition, examine the inherent bias within them, and take action. Good intentions are not enough, especially considering how much impact AI solutions can have on a person’s life, whether they’re using the AI solution for an accessibility need or just for general use.

VentureBeat: How would you recommend organizations think of their regulatory and compliance requirements right now?

Tomasello: Data mapping is an important practice for figuring out where your data comes from and whom it describes, which in turn determines what laws it falls under. Different countries, as well as the U.S. federal and state governments, have different laws meant to protect general consumer data and specialized financial and health data. Managing the applicable laws takes a robust data governance program. It all comes down to what you are doing to protect the confidentiality, the integrity, and the availability of this information and of your entire organization.
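
As a rough illustration of data mapping, the sketch below tags each data field with its origin and subject and derives candidate laws from that. The field names and the GDPR/GLBA rules are simplified assumptions for this example; real determinations require counsel.

    # Hypothetical data-mapping inventory: where each field comes from,
    # whom it describes, and what category of data it is.
    DATA_MAP = [
        {"field": "voice_transcript", "source": "AI assistant",
         "subject": "EU customer", "categories": ["personal"]},
        {"field": "bank_account", "source": "payroll onboarding",
         "subject": "US employee", "categories": ["personal", "financial"]},
    ]

    # Oversimplified rules that flag which laws may apply to a record.
    def candidate_laws(record: dict) -> list[str]:
        laws = []
        if record["subject"].startswith("EU"):
            laws.append("GDPR")  # EU personal data
        if record["subject"].startswith("US") and "financial" in record["categories"]:
            laws.append("GLBA")  # U.S. financial data
        return laws

    for record in DATA_MAP:
        print(record["field"], "->", candidate_laws(record) or ["needs review"])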



Author: Zachariah Chou
Source: VentureBeat
