It has been a challenging week for OpenAI, as calls for generative AI regulation grow louder: Today, Italy’s data protection agency said it was blocking access to OpenAI’s popular ChatGPT chatbot and had opened a probe into a suspected data collection breach.
The agency said the restriction was temporary, pending OpenAI’s compliance with the EU’s General Data Protection Regulation (GDPR). A translation of the announcement said that “a data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service had been reported on 20 March.” It added that “no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
A week of calls for large-scale AI regulation
The announcement comes just a day after the Federal Trade Commission (FTC) received a complaint from the Center for AI and Digital Policy (CAIDP) calling for an investigation of OpenAI and its product GPT-4. The complaint noted that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” and argued that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.”
And on Wednesday, an open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4 highlighted the fast-growing, fierce debate around AI’s risks, both short-term and long-term.
Critics of the letter — which was signed by Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and other AI experts, researchers and industry leaders — say it fosters unhelpful alarm around hypothetical dangers, leading to misinformation and disinformation about actual, real-world concerns. Others pointed out the unrealistic nature of a “pause” and said the letter did not address current efforts towards global AI regulation and legislation.
Questions about how the GDPR applies to ChatGPT
The EU is currently working on developing a proposed Artificial Intelligence Act. Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s Cybersecurity, Privacy and Artificial Intelligence Practice Group, told VentureBeat in December that the EU Act would be a “risk-based regime to address the highest-risk outcomes of artificial intelligence.”
However, the EU AI Act won’t be fully baked or take effect for some time, so some are turning to the GDPR, which was enacted in 2018, for regulatory authority on issues related to ChatGPT. In fact, according to an Infosecurity article from January, some experts are questioning “the very existence of OpenAI’s chatbot for privacy reasons.”
Infosecurity quoted Alexander Hanff, a member of the European Data Protection Board’s (EDPB) support pool of experts, who said that “If OpenAI obtained its training data through trawling the internet, it’s unlawful.”
“Just because something is online doesn’t mean it’s legal to take it,” he added. “Scraping billions or trillions of data points from sites with terms and conditions which, in themselves, said that the data couldn’t be scraped by a third party, is a breach of the contract. Then, you also need to consider the rights of individuals to have their data protected under the EU’s GDPR, ePrivacy directive and Charter of Fundamental Rights.”
Author: Sharon Goldman
Source: VentureBeat