
AI Weekly: MIT aims to reconcile data sharing with EU AI regulations

This week, the European Union (EU) unveiled regulations to govern the use of AI across the bloc’s 27 member states. The first-of-its-kind proposal spans more than 100 pages and will take years to implement, but the ramifications are far-reaching. It imposes a ban — with some exceptions — on the use of biometric identification systems in public, including facial recognition. Other prohibited applications of AI include social credit scoring, the infliction of harm, and subliminal behavior manipulation.

The regulations are emblematic of an increased desire on the part of consumers for privacy-preserving, responsible implementations of AI and machine learning. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. And 87% of executives told Juniper in a recent survey that they believe organizations have a responsibility to adopt policies that minimize the negative impacts of AI.

Dovetailing with the EU proposal is a data privacy-focused initiative to bring computer science research together with public policy engagement, also announced this week. MIT professor Srini Devadas says that the MIT Future of Data, Trust, and Privacy (FOD) initiative, which will involve collaboration between experts in specific technical areas, gets at the heart of what the EU AI regulations hope to accomplish.

“Enterprises would like to legally share data to do collaborative analytics and consumers want privacy of their data, location, and pictures, and we would like to address both scenarios,” Devadas, a co-director of the initiative, told VentureBeat via email. “The initiative is focused on legal sharing, use and processing of data … Some of the new [EU] regulations ban the use of certain technologies, except for security reasons. I can imagine, for example, that surveillance cameras produce encrypted video streams, and face recognition technology is only applied in a scenario where safety and security considerations are paramount. This might mean that the encrypted streams are processed without decryption — something that is actually possible with available technologies.”
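The capability Devadas alludes to, processing encrypted data without decrypting it, is homomorphic encryption. Below is a minimal, deliberately insecure Python sketch of the underlying idea, an additively homomorphic scheme in the style of Paillier; the toy primes and function names are illustrative assumptions, not code from MIT’s initiative.

```python
import math
import random

# Toy Paillier-style additively homomorphic encryption. The tiny primes
# make this insecure and illustrative only; real systems use ~2048-bit primes.
p, q = 293, 433
n = p * q
n_sq = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # modular inverse; valid because we use g = n + 1 below

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g = n + 1)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return pow(n + 1, m, n_sq) * pow(r, n, n_sq) % n_sq

def decrypt(c: int) -> int:
    """Recover the plaintext using the private key (lam, mu)."""
    return (pow(c, lam, n_sq) - 1) // n * mu % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate values it never sees in the clear.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt(c1 * c2 % n_sq) == 42
```

Real deployments use far larger keys and more elaborate schemes, which is where the steep performance costs discussed below come from.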

The initiative is the brainchild of MIT Computer Science and Artificial Intelligence Laboratory managing director Lori Glover and Danny Weitzner, who runs the Internet Policy Research Initiative at MIT. As Devadas explains, the goal is to integrate research on privacy policy with work on privacy-preserving technologies, creating a virtuous cycle of R&D and regulation. “In many fields, such as medical diagnostics and finance, sharing of data can produce significantly better outcomes and predictions, but sharing is disallowed by [laws] such as HIPAA,” Devadas said. “There are private collaborative analytics techniques that can help with this problem, but it is not always clear if particular techniques or approaches satisfy regulations, because regulations are oftentimes vague. The initiative would like to address this issue.”

Particularly in health care, consumers often aren’t fully aware their information is included in the datasets used to train AI systems. In 2019, The Wall Street Journal reported on Project Nightingale, Google’s partnership with Ascension, America’s second-largest health system, to collect the personal health data of tens of millions of patients for the purpose of developing AI-based services for medical providers. A U.K. regulator concluded that the Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Google parent company Alphabet’s DeepMind with data on 1.6 million patients without their consent. And in 2019, Google and the University of Chicago Medical Center were sued for allegedly failing to scrub timestamps from anonymized medical records. (A judge tossed the suit in September.)

The EU regulations impose requirements on “high-risk” applications of AI, including medical devices and equipment. Companies developing them will have to use “high-quality” training data to avoid bias, agree to “human oversight,” and create detailed documentation that explains how the software works to both regulators and users. Moreover, in an effort to provide transparency about what technologies are in use, all high-risk AI systems will be indexed in an EU-wide database.

Devadas sees a number of technologies enabling ethical approaches to AI that align with the EU regulations. For example, secure multiparty computation can be used to compute an aggregate statistic without exposing any individual user’s data. On the horizon are approaches like secure hardware and homomorphic encryption, which allow AI to derive insights from encrypted data. While homomorphic encryption is possible today, it remains upwards of 100,000 times slower than computing on unencrypted data. Secure hardware, for its part, has a history of security vulnerabilities and requires complete trust in hardware manufacturers.
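To make the secure multiparty computation example concrete, here is a minimal additive secret-sharing sketch in Python. It is a standard textbook construction, not code from the MIT initiative; the salary figures and server setup are invented for illustration.

```python
import random

# Additive secret sharing: each party splits its private value into random
# shares that sum to it, so no single server learns the value, yet the
# servers' combined totals reveal the aggregate statistic.
MOD = 2 ** 61 - 1  # arbitrary large modulus for the demo

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties random shares summing to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

salaries = [64_000, 71_000, 58_000]  # each party's private input
parties = len(salaries)

# Party i sends its j-th share to server j; any single share looks random.
all_shares = [share(s, parties) for s in salaries]

# Each server sums the shares it received...
server_totals = [sum(column) % MOD for column in zip(*all_shares)]

# ...and only the combination of all servers' totals yields the aggregate.
aggregate = sum(server_totals) % MOD
assert aggregate == sum(salaries)
print(f"Average salary: {aggregate / parties:,.0f}")
```

Because each share is uniformly random on its own, a single server learns nothing about any individual input; only the combination of every server’s total reveals the aggregate, which is exactly the property Devadas describes.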

“There are a lot of techniques that are perhaps close to being deployable but not quite there,” Devadas said. “[However], multiple projects in the initiative will directly address this performance gap.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat




