ChatGPT had no clue what I was talking about when I solicited its opinion on the Senate’s first “AI Insight Forum,” to be held this Wednesday and spearheaded by Senate Majority Leader Chuck Schumer.
“I apologize for any confusion, but I don’t have access to real-time information or the ability to provide opinions on events happening after my last knowledge update in September 2021,” wrote ChatGPT when I asked for information on what Schumer called “one of the most important conversations of the year.” The chatbot continued: “I’m not aware of specific AI regulation forums taking place this week or any other recent events.”
But while ChatGPT might be blissfully unaware of the constant chatter about regulating artificial intelligence, it seems like everyone else in the world of AI has an opinion on what Schumer calls a “coming together of top voices in business, civil rights, defense, research, labor, the arts, all together, in one room, having a much-needed conversation about how Congress can tackle AI.”
Calls for cabinet-level positions or a ‘Department of AI’
For example, Mustafa Suleyman, who co-founded AI research lab DeepMind and is now CEO of Inflection AI, says in his new book, The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, that there should be cabinet-level positions for emerging technologies like AI.
“In the twenty-first century it doesn’t make sense to have cabinet positions addressing matters like the economy, education, security, and defense without a similarly empowered and democratically accountable position in technology,” he wrote. He also says the United States should use its dominance in advanced chips to enforce global standards, and has called for the creation of a governance regime modeled on the Intergovernmental Panel on Climate Change.
Meanwhile, a new Axios survey found that AI experts at leading universities favor creating a federal “Department of AI” or a global regulator to govern artificial intelligence over leaving that to Congress, the White House, or the private sector. And in Congress, Rep. Ted Lieu wants to create a “blue-ribbon commission” to study AI and advise lawmakers on how to regulate it. “My view is Congress doesn’t have the bandwidth to be able to regulate AI and every single possible application,” Lieu said. “That’s why I think we need a commission to give us some models to look at as to how we can regulate AI going into the future.”
Focus on systemic risks
Adding his thoughts to the mix is Alex Engler, a research fellow at the Brookings Institution, who last week offered a new idea for AI regulation that gives more authority to federal agencies. While he says it is less flashy than other ideas — including the notion of passing the EU AI Act in the US, or creating an FDA for algorithms — he maintains that his new regulatory tool, the Critical Algorithmic System Classification, would put a renewed focus on serious, systemic AI risks that have existed for decades, like discrimination and data privacy, rather than the less-demonstrated harms and risks of generative AI.
“I think the weight of the conversation should be on the harms that have really strong evidence behind them,” he told VentureBeat in a video interview. “Algorithms making large-scale determinations on critical socioeconomic decisions and determinations is still the core, first-order question of governance.” He said he is “sympathetic and interested” in ways to make sure the future of AI is a positive one, including issues like transparency requirements on large language models, but says “let’s make sure we haven’t lost the ball here.”
The ‘Schumer-ian’ process of AI regulation
Still, he said he expects that what he calls the “Schumer-ian process” will lead to a number of proposals from offices and a lot of public discussion. And while I recently questioned the closed-door format of Wednesday’s AI Insight Forum, Engler said “there is potentially value in not having public forums, if your actual, meaningful goal is knowledge development.”
What is distinct this time, he said, is the framing of Schumer’s build-up. “I feel like they kind of want the public’s attention, but still want to get the quality of more serious closed door questions,” he explained. “Frankly, if you look at the history of [public] technology policy hearings, they are not the most encouraging, the most substantive.”
According to a copy of the notice for this Wednesday’s event obtained by Axios, the kick-off AI Insight Forum will begin in the morning with a three-hour panel, in which the invited speakers — who include a gaggle of Big Tech CEOs — will give remarks, followed by a three-hour afternoon session that explores “big questions in AI” and topics to be discussed in future forums. The notice states that Senators will not be able to provide remarks or ask questions. Instead, the event is structured as a dialogue between experts rather than a traditional committee hearing.
Too bad ChatGPT is missing out on all those varied opinions, telling me “I don’t have personal opinions or feelings.” It sounds like we’re in for a packed autumn of debate around AI regulation, so I’m getting my own opinions — and popcorn — ready.
Author: Sharon Goldman
Source: Venturebeat
Reviewed By: Editorial Team