Only 17% of consumers say they’d be comfortable having claims on their home, renters, or auto insurance policies reviewed exclusively by AI. That’s according to a new survey commissioned by Policygenius, which also found that 60% of consumers would rather switch insurance companies than let AI review their claims.
The results point to a general reluctance to trust AI systems, particularly “black box” systems that lack an explainability component. For example, just 12% of respondents to a recent AAA survey said they’d be comfortable riding in an autonomous car. High-profile failures in recent years haven’t instilled much confidence, with AI-powered recruitment tools showing bias against women, algorithms unfairly downgrading students’ grades, and facial recognition tech leading to false arrests.
The survey suggests that in the insurance domain, people, particularly drivers and homeowners, are wary of sacrificing privacy, even if doing so nets them policy discounts. More than half (58%) of auto insurance customers told Policygenius that no amount of savings would be worth using an app that collected data about their driving behavior and location. And just one in three respondents (32%) said they’d be willing to install a smart home device that collected personal data, such as a doorbell camera, water sensor, or smart thermostat.
The findings agree with another survey, this one from insurtech company Breeze, which found that 56% of consumers don’t believe insurance companies should be allowed to use “big data” (e.g., personal daily health data and purchasing behavior) to determine insurance policy pricing. According to Policygenius property and casualty insurance expert Pat Howard, consumer sentiment hasn’t shifted much in this regard and shows little sign of doing so.
“We’re seeing home and auto insurers integrate various data collection and analysis technology into policy distribution, pricing, and claims, but it’s clear consumers aren’t readily willing to trade personal data or give up the human touch for marginal savings,” Howard said in a press release.
Importance of explainability
In a recent report, McKinsey predicted insurance will shift from its current state of “detect and repair” to “predict and prevent,” transforming every aspect of the industry in the process. As AI becomes more deeply integrated in the industry, carriers must position themselves to respond to the changing business landscape, the firm wrote, while insurance executives must understand the factors that will contribute to this change.
As an online insurance marketplace, Policygenius has a horse in the race. But its survey is salient in light of efforts by the European Commission’s High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” Explainability continues to present major hurdles for companies adopting AI. According to FICO, 65% of employees can’t explain how AI model decisions or predictions are made.
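To make the explainability gap concrete, below is a minimal sketch of one widely used technique, permutation importance, applied to a hypothetical claims-approval model. The synthetic data, feature names, and model choice are all invented for illustration and aren’t drawn from any insurer’s actual pipeline.

```python
# A minimal explainability sketch (illustrative only): permutation importance
# asks how much a model's accuracy drops when each feature is shuffled,
# giving a rough, model-agnostic answer to "what is this model relying on?"
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical claim features: claim amount, policy age in years, prior claims.
X = np.column_stack([
    rng.gamma(2.0, 1500.0, n),   # claim_amount
    rng.uniform(0, 20, n),       # policy_age
    rng.poisson(0.5, n),         # prior_claims
])
# Synthetic approval labels, deliberately tied to only two of the features.
y = (X[:, 0] < 4000) & (X[:, 2] < 2)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and measure the average drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["claim_amount", "policy_age", "prior_claims"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

On this synthetic data, claim_amount and prior_claims should dominate, since policy_age never enters the labels; a report of this kind is one starting point for answering a policyholder’s “how was this decision made?”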
That’s not to say every expert is convinced AI can become truly “trustworthy.” But researchers like Manoj Saxena, who chairs the Responsible AI Institute, assert that “checks” can build awareness of the context in which an AI system will be used and of the conditions that could create biased outcomes. By engaging product owners, risk assessors, and users (insurance policyholders, for example) in conversations about an AI system’s potential flaws, organizations can create processes that expose, test, and fix those flaws.
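As a concrete (and heavily simplified) illustration of such a check, the sketch below tests whether a hypothetical model approves claims at similar rates across two groups, a basic demographic-parity audit. The group labels, decision rates, and tolerance threshold are all assumptions made for this example.

```python
# Illustrative demographic-parity check: do model decisions (True = approved)
# land at similar rates across groups? All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1000)

# Stand-in for a model's approval decisions, with a deliberate skew built in.
approved = np.where(groups == "group_a",
                    rng.random(1000) < 0.80,
                    rng.random(1000) < 0.65)

rates = {g: approved[groups == g].mean() for g in ("group_a", "group_b")}
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, f"gap={gap:.2f}")

# A real process would choose this tolerance deliberately and route failures
# to human review, per the stakeholder engagement Saxena describes.
if gap > 0.10:
    print("Approval-rate gap exceeds tolerance; flag the model for audit.")
```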
For the insurance market specifically, the Dutch Association of Insurers (DAI) offers a possible model for adopting AI responsibly. The organization’s Ethical Framework for the Application of AI in the Insurance Sector, which became binding in January, requires companies to consider how best to explain the outcomes of AI and other data-driven apps to customers before those apps are deployed.
“Human governance is hugely important; there can’t be total reliance on technology and algorithms. Human involvement is essential to continuous learning and responding to questions and dilemmas that will inevitably occur,” DAI general director Richard Weurding told KPMG, which worked with DAI on an educational campaign around the framework’s rollout. “Companies want to use technology to build trust with customers, and human involvement is critical to achieving that.”
Responsible AI practices can also deliver major business value. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them. Companies that don’t approach the issue thoughtfully could incur both reputational risk and a direct hit to their bottom line, according to Saxena.
“[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact,” Saxena told VentureBeat in a recent interview. “[They also need to] invest more to ensure members who are designing the systems are diverse.”
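A simplified version of the representativeness check Saxena describes might compare the demographic mix of a training set against reference population shares. The group labels and reference figures below are invented for illustration, not taken from any real dataset.

```python
# Illustrative representativeness check: compare group shares in a
# (hypothetical) training set against assumed population-level shares.
from collections import Counter

training_groups = ["urban", "urban", "suburban", "rural", "urban", "suburban"]
reference_shares = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}  # assumed figures

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    flag = "  <-- underrepresented" if observed < 0.5 * expected else ""
    print(f"{group}: observed {observed:.2f} vs expected {expected:.2f}{flag}")
```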
Author: Kyle Wiggers
Source: VentureBeat