
What the EU AI Act means for the insurance industry



Artificial Intelligence (AI) is being put to work across the insurance industry, providing insurers with a powerful edge in working more efficiently and improving customer-facing services. AI advancements are also helping insurers make better use of the growing volume of data. From visual to sensor data, AI enables real-time data analysis to improve everything from underwriting decisions and claims settlements to fraud detection and prevention.

According to McKinsey, 25% of the insurance industry will be automated by 2025 thanks to AI and machine learning (ML) techniques, helping companies generate significant cost savings. Juniper Research forecasts that across property, health, life and motor insurance, annual cost savings will exceed $1.2 billion by 2023, a five-fold increase over 2018.

These savings are also being passed on to consumers, as insurers are able to provide more customized, accurate and competitively priced products and services. However, as the use of AI in the insurance sector continues to rise, so do concerns about AI transparency and explainability.

Coming soon: The EU AI Act

In April 2021, the European Commission presented a draft of a new EU AI Act, which outlined rules for the development, commercialization and use of AI-driven products and services. The legislation will apply to any company operating within the EU, in any industry.


The EU AI Act introduces a framework that groups AI systems into four categories based on the application’s level of “risk.” The goal is to encourage the development of responsible, trustworthy AI systems, starting from the first line of code. It will also prohibit the use of AI applications that create “unacceptable risk.” Currently, the AI systems that fall under the “unacceptable risk” category include biometric identification systems in public spaces and social scoring applications.
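To make the risk framework more concrete, below is a minimal sketch in Python of how an insurer might keep an internal inventory mapping its AI use cases to the Act’s tiers. The tier names beyond “unacceptable” and “high” (limited and minimal), the example use cases and the obligation summaries are illustrative assumptions based on common readings of the draft, not classifications taken from the regulation itself; any real mapping would need legal review.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers described in the draft EU AI Act (names are illustrative)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, e.g., social scoring
    HIGH = "high"                  # permitted, but with strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical internal inventory; the use-case names and their tiers are
# assumptions for this sketch, not legal classifications.
USE_CASE_RISK = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "premium_setting": RiskTier.HIGH,
    "claims_assessment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "internal_spam_filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return a coarse, illustrative summary of what a tier implies."""
    tier = USE_CASE_RISK.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return "Prohibited: do not build or deploy."
    if tier is RiskTier.HIGH:
        return "Allowed with conformity assessment, documentation and human oversight."
    if tier is RiskTier.LIMITED:
        return "Allowed with transparency and disclosure obligations."
    return "Allowed with no additional obligations."


if __name__ == "__main__":
    for case in USE_CASE_RISK:
        print(f"{case}: {obligations(case)}")
```

Even a simple inventory like this makes it easier to see which systems would attract the heaviest obligations, including the premium-setting and claims-assessment use cases discussed below.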

While it will certainly take time for the EU AI Act to come into law, some countries, like Spain, are looking to test the risk framework as early as October 2022 in a sandbox environment. During this period, companies will be able to test AI systems intended for law enforcement, health or educational purposes in compliance with the rules outlined in the EU AI Act and under regulator oversight.

According to Politico, “The [Spain] project seeks to give a head start to European startups and medium-sized companies, which make up a large part of Europe’s economic fabric, at a time when innovation in artificial intelligence is largely driven by Big Tech firms including Google, Microsoft, IBM and Meta.”

What the EU AI Act means for the insurance sector

The insurance industry is already highly regulated in most markets, with some regulatory frameworks already covering the uses of AI. However, both insurance companies and insurtech startups still need to be aware of what applications the EU AI Act might consider “high risk” — especially if using AI to perform tasks related to credit and insurance policy decisions — and to start planning for any potential impact on their offerings.

An updated draft of the EU AI Act, released in November 2021, does, in fact, classify “AI systems intended to be used for insurance purposes” under the high-risk category. Specifically, it refers to “AI systems intended to be used for insurance premium setting, underwritings and claims assessments.”

According to the draft regulation, “AI systems are increasingly used in insurance for premium setting, underwriting and claims assessment which, if not duly designed, developed and used, can lead to serious consequences for people’s life, including financial exclusion and discrimination.”

How companies can prepare for compliance

As with the early stages of GDPR, we are still many months, and possibly years, away from knowing exactly how the EU will enforce the measures outlined in the AI Act. However, it is certain that regulatory oversight of AI, and of the data fed into AI systems, will only increase, both in Europe and globally.

Companies should already be thinking about how to prepare for and comply with responsible AI policies. As Cognizant points out, “AI applications that learn from historical underwriting decisions could pick up gender or racial bias hidden in data.” Businesses should understand where bias might creep into their systems.
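As an illustration of where such checks might begin, here is a minimal sketch, assuming a pandas DataFrame of historical underwriting decisions with hypothetical “gender” and “approved” columns, that compares approval rates across groups. The 80% threshold borrows from the US “four-fifths” guideline and is an assumption for this sketch, not a requirement of the EU AI Act; a production fairness review would examine many more attributes and metrics.

```python
import pandas as pd

# Hypothetical sample of historical underwriting decisions; the column names
# and values are illustrative, not a real insurer's schema.
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,    1,   0,   1,   0,   1,   1,   1],
})

# Approval rate per group: a first, coarse signal of disparate impact.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)

# Flag the dataset if any group's approval rate falls below 80% of the
# best-treated group's rate (the "four-fifths" screen).
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential disparate impact: selection-rate ratio {ratio:.2f} < 0.80")
else:
    print(f"Selection-rate ratio {ratio:.2f} passes the 80% screen")
```

The same kind of check can be repeated on model outputs rather than historical labels, which is where bias learned from past decisions would actually surface.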

Establishing internal responsible and ethical AI policies is a good first step. Companies can identify key stakeholders and involve them in creating these policies, specifically those directing the strategy and development of AI projects. The next important step is establishing an internal governance system, which should also cover any outside vendors used for AI development. Those teams should be involved from the start and help identify any potential AI risks.

Numerous organizations have published guidance to support companies in the ethical use of data and AI, including the OECD, European Commission (EC), and the Financial Conduct Authority (FCA) and Open Data Institute (ODI) in the UK. The Business Roundtable, an organization made up of more than 200 CEOs and executives based in the US, has also released a thorough and helpful roadmap highlighting 10 principles for businesses to achieve responsible AI within their organizations.

From underwriting to claims processing, AI is radically transforming the insurance industry, and there is now a global focus on making sure the technology is used fairly and ethically. Businesses must be able to explain precisely where their data comes from and how AI is deployed throughout their strategy and operations, especially as regulatory pressure intensifies and consumers increasingly demand transparency around insurance policies, pricing and procedures.

Julio Pernía Aznar is CEO of Bdeo



Author: Julio Pernía Aznar, Bdeo
Source: VentureBeat
