On Sunday night, Senator Chris Murphy (D-CT) tweeted a shocking claim about ChatGPT — that the model “taught itself” to do advanced chemistry — and AI researchers immediately pushed back in frustration: “Your description of ChatGPT is dangerously misinformed,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote in a tweet. “Every sentence is incorrect. I hope you will learn more about how this system actually works, how it was trained, and what its limitations are.”
Suresh Venkatasubramanian, former White House AI policy advisor to the Biden Administration from 2021-2022 (where he helped develop The Blueprint for an AI Bill of Rights) and professor of computer science at Brown University, said Murphy’s tweets are “perpetuating fear-mongering around generative AI.” Venkatasubramanian recently shared his thoughts with VentureBeat in a phone interview. He talked about the dangers of perpetuating discussions about “sentient” AI that does not exist, as well as what he considers to be an organized campaign around AI disinformation. (This interview has been edited and condensed for clarity.)
VentureBeat: What were your thoughts on Christopher Murphy’s tweets?
Suresh Venkatasubramanian: Overall, I think the senator’s comments are disappointing because they perpetuate fear-mongering around generative AI that is not very constructive and prevents us from actually engaging with the real issues with AI systems that are not generative. And to the extent there’s an issue, it is with the generative part and not the AI part. And no, alien intelligence is not coming for us, in spite of what you’ve all heard. Sorry, I’m trying to be polite, but I’m struggling a little bit.
VB: What did you think of his response to the response, where he still maintained something is coming and we’re not ready for it?
Venkatasubramanian: I would say something is already here and we haven’t been ready for it and we should do something about that, rather than worrying about a hypothetical that might be coming that hasn’t done anything yet. Focus on the harms that are already seen with AI, then worry about the potential takeover of the universe by generative AI.
VB: This made me think of our chat from last week or the week before where you talked about miscommunication between the policy people and the tech people. Do you feel like this falls under that context?
Venkatasubramanian: This is worse. It’s not misinformation, it’s disinformation. In other words, it’s overt and organized. It’s an organized campaign of fear-mongering. I have to figure out to what end, but I feel like the goal, if anything, is to push a reaction against sentient AI that doesn’t exist so that we can ignore all the real problems of AI that do exist. I think it’s terrible. I think it’s really corrupting our policy discourse around the real impacts that AI is having — you know, when Black taxpayers are being audited at three times the rate of white taxpayers, that is not a sentient AI problem. That is an automated decision system problem. We need to fix that problem.
VB: Do you think Sen. Murphy just doesn’t understand, or do you think he’s actually trying to promote disinformation?
Venkatasubramanian: I don’t think the Senator is trying to promote disinformation. I think he’s just genuinely concerned. I think everyone is generally concerned. ChatGPT has heralded a new democratization of fear. Those of us who have been fearful and terrified for the last decade or so are now being joined by everyone in the country because of ChatGPT. So they are seeing now what we’ve been concerned about for a long time. I think it’s good to have that level of elevation of the concerns around AI. I just wish the Senator was not falling into the trap laid by the rhetoric around alien intelligence that frankly has forced people who are otherwise thoughtful to succumb to it. When you get New York Times op-eds by people who should know better, then you have a problem.
VB: Others pointed out on Twitter that anthropomorphizing ChatGPT in this way is also a problem. Do you think that’s a concern?
Venkatasubramanian: This is a deliberate design choice by ChatGPT in particular. You know, Google Bard doesn’t do this. Google Bard is a system for making queries and getting answers. ChatGPT shows three little dots [as if it’s] “thinking,” just like your text messages do. ChatGPT puts out words one at a time as if it’s typing. The system is designed to make it look like there’s a person at the other end of it. That is deceptive. And that is not right, frankly.
VB: Do you think Senator Murphy’s comments are an example of what’s going to come from other leaders with the same sources of information about generative AI?
Venkatasubramanian: I think there’s, again, a concerted campaign to send only that message to the folks at the highest levels of power. I don’t know by whom. But when you have a show-and-tell in D.C. and San Francisco with deepfakes, and when you have op-eds being written talking about sentience, either it’s a collective mass freakout or it’s a freakout driven by the same group of people.
I will also say that this is a reflection of my own frustration with the discourse, where I feel like we were heading in a good direction at some point, and I think we still are among the people who are more thoughtful and are thinking about this in government and in policy circles. But ChatGPT has changed the discourse, which I think is appropriate because it has changed things.
But it has also changed things in ways that are not helpful, because the hypotheticals around generative AI are not as critical as the real harms. If ChatGPT is going to be used, as is being claimed, in a suicide hotline, people are going to get hurt. We can wait till then, or we can start saying that any system that gets used as a suicide hotline needs to be under strict guidance. And it doesn’t matter if it’s ChatGPT or not. That’s my point.
Author: Sharon Goldman
Source: VentureBeat