Europe’s AI laws will cost companies a small fortune – but the payoff is trust

Artificial intelligence isn’t tomorrow’s technology; it’s already here. So, now, is the legislation that proposes to regulate it.

Earlier this year, the European Union outlined its proposed artificial intelligence legislation and gathered feedback from hundreds of companies and organizations. The European Commission closed the consultation period in August, and next comes further debate in the European Parliament.

As well as banning some uses outright (facial recognition for identification in public spaces and social “scoring,” for instance), the proposal focuses on regulation and review, especially for AI systems deemed “high risk,” such as those used in education or employment decisions.

Any company with a software product deemed high risk will require Conformité Européenne (CE) marking to enter the market. The product must be designed to be overseen by humans, avoid automation bias, and be accurate to a level proportionate to its use.

Some are concerned about the knock-on effects of this. They argue that it could stifle European innovation as talent is lured to regions where restrictions aren’t as strict, such as the US. And the anticipated compliance costs for high-risk AI products in the region (perhaps as much as €400,000, or $452,000, per system, according to one US think tank) could deter initial investment too.

So the argument goes. But I embrace the legislation and the risk-based approach the EU has taken.

Why should I care? I live in the UK, and my company, Healx, which uses AI to help discover new treatment opportunities for rare diseases, is based in Cambridge.

This autumn, the UK published its own national AI strategy, which has been designed to keep regulation at a “minimum,” according to a minister. But no tech company can afford to ignore what goes on in the EU.

The EU’s General Data Protection Regulation (GDPR) required just about every company with a website on either side of the Atlantic to react and adapt when it took effect in 2018. It would be naive to think that any company with an international outlook won’t run up against these proposed rules too. If you want to do business in Europe, you will still have to adhere to them from outside it.

And for areas like health, this is incredibly important. The use of artificial intelligence in healthcare will almost inevitably fall under the “high risk” label. And rightly so: Decisions that affect patient outcomes change lives.

Mistakes at the very start of this new era could damage public perception irrevocably. We already know how well-intentioned AI healthcare initiatives can end up perpetuating structural racism, for instance. Left unchecked, they will continue to do so.

That’s why the legislation’s focus on reducing bias in AI, and on setting a gold standard for building public trust, is vital for the industry. If an AI system is fed patient data that does not accurately represent a target group (women and minority groups are often underrepresented in clinical trials, for instance), the results can be skewed.

That damages trust, and trust is crucial in healthcare: a lack of it limits effectiveness. That’s part of the reason such large swathes of people in the West are still declining to get vaccinated against COVID-19, and the problems that is causing are plain to see.

AI breakthroughs will mean nothing if patients are suspicious of a diagnosis or therapy produced by an algorithm, or don’t understand how its conclusions have been drawn. Either results in a damaging lack of trust.

In 2019, Harvard Business Review reported that patients were wary of medical AI even when it was shown to outperform doctors, simply because we believe our health issues to be unique. We can’t begin to shift that perception without trust.

Artificial intelligence has proven its potential to revolutionize healthcare, saving lives en route to becoming an estimated $200 billion industry by 2030.

The next step won’t just be to build on these breakthroughs, but to build trust so that they can be implemented safely, without disregarding vulnerable groups, and with enough transparency that worried individuals can understand how a decision has been made.

This is something that will always, and should always, be monitored. That’s why we should all take notice of the spirit of the EU’s proposed AI legislation, and embrace it, wherever we operate.

Tim Guilliams is a co-founder and CEO of drug discovery startup Healx.



