Princeton University’s ‘AI Snake Oil’ authors say generative AI hype has ‘spiraled out of control’

Back in 2019, Princeton University’s Arvind Narayanan, a professor of computer science and expert on algorithmic fairness, AI and privacy, shared a set of slides on Twitter called “AI Snake Oil.” The presentation, which claimed that “much of what’s being sold as ‘AI’ today is snake oil. It does not and cannot work,” quickly went viral.

Narayanan, who was recently named director of Princeton’s Center for Information Technology Policy, went on to start an “AI Snake Oil” Substack with his Ph.D. student Sayash Kapoor, previously a software engineer at Facebook, and the pair snagged a book deal to “explore what makes AI click, what makes certain problems resistant to AI, and how to tell the difference.”

Now, with the generative AI craze, Narayanan and Kapoor are about to hand in a book draft that goes beyond their original thesis to tackle today’s gen AI hype, some of which they say has “spiraled out of control.” 

I drove down the New Jersey Turnpike to Princeton University a few weeks ago to chat with Narayanan and Kapoor in person. This interview has been edited and condensed for clarity.

VentureBeat: The AI landscape has changed so much since you first started publishing the AI Snake Oil Substack and announced the forthcoming book. Has your outlook on the idea of “AI snake oil” changed? 

Narayanan: When I first started speaking about AI snake oil, it was almost entirely focused on predictive AI. In fact, one of the main things we’ve been trying to do in our writing is make clear the distinction between generative and predictive and other types of AI — and why the rapid progress in one might not imply anything for the other. 

We were very clear as we started the process that we thought the progress in gen AI was real. But like almost everybody else, we were caught off guard by the extent to which things have been progressing — especially the way in which it’s become a consumer technology. That’s something I would not have predicted.

When something becomes a consumer tech, it just takes on a massively different kind of significance in people’s minds. So we had to refocus a lot of what our book was about. We didn’t change any of our arguments or positions, of course, but there’s a much more balanced focus between predictive and gen AI now.

Kapoor: Going one step further, with consumer technology there are also things like product safety that come in, which might not have been a big concern for companies like OpenAI in the past, but they become huge when you have 200 million people using your products every day. 

So the focus has shifted from debunking predictive AI — pointing out why these technologies cannot work in any possible domain, no matter what models you use, no matter how much data you have — to gen AI, where we feel there need to be more guardrails, more responsible tech. 

VentureBeat: When we think of snake oil, we think of salespeople. So in a way, that is a consumer-focused idea. So when you use that term now, what is your biggest message to people, whether they’re consumers or businesses?

Narayanan: We still want people to think about different types of AI differently — that is our core message. If somebody is trying to tell you how to think about all types of AI across the board, we think they’re definitely oversimplifying things. 

When it comes to gen AI, we clearly and repeatedly acknowledge in the book that this is a powerful technology and it’s already having useful impacts for a lot of people. But at the same time, there’s a lot of hype around it. While it’s very capable, some of the hype has spiraled out of control.

There are many risks. There are many harmful things already happening. There are many unethical development practices. So we want people to be mindful of all of that and to use their collective power, whether it’s in the workplace when they make decisions about what technology to adopt for their offices, or whether it’s in their personal life, to use that power to make change. 

VentureBeat: What kind of pushback or feedback do you get from the wider community, not just on Twitter, but among other researchers in the academic community?

Kapoor: We started the blog last August and we did not expect it to become as big as it has. More importantly, we did not expect to receive so much good feedback, which has helped us shape many of the arguments in our book. We still receive feedback from academics and entrepreneurs, and in some cases large companies have reached out to us to talk about how they should be shaping their policy. In other cases, there has been some criticism, which has also helped us reflect on how we’re presenting our arguments, both on the blog and in the book. 

For example, when we started writing about large language models (LLMs) and security, we had a blog post out when the original LLaMA model came out — people were taken aback by our stance when we argued that AI was not uniquely positioned to make disinformation worse. Based on that feedback, we did a lot more research and engagement with current and past literature, and talked to a few people, which really helped us refine our thinking.

Narayanan: We’ve also had pushback on ethical grounds. Some people are very concerned about the labor exploitation that goes into building gen AI. We are as well; we very much advocate for that to change and for policies that force companies to change those practices. But for some of our critics, those concerns are so dominant that the only ethical course of action for someone who’s concerned about them is to not use gen AI at all. I respect that position, but we have a different one, and we accept that people are going to criticize us for that. I think individual abstinence is not a solution to exploitative practices. Change in company policy should be the response.

VentureBeat: As you lay out your arguments in “AI Snake Oil,” what would you like to see happen with gen AI in terms of action steps?

Kapoor: At the top of the list for me is usage transparency around gen AI: how people actually use these platforms. Compare that to, say, Facebook, which puts out a quarterly transparency report saying, “Oh, this many people use it for hate speech and this is what we’re doing to address it.” For gen AI, we have none of that — absolutely nothing. I think something similar is possible for gen AI companies, especially if they have a consumer product at the end of the pipeline. 

Narayanan: Taking it up a level from specific interventions, there’s the question of what might need to change structurally when it comes to policymaking. There need to be more technologists in government, so better funding of our enforcement agencies would help. People often think about AI policy as an issue where we have to start from scratch and figure out some silver bullet. That’s not at all the case. Something like 80% of what needs to happen is just enforcing laws that we already have and avoiding loopholes.

VentureBeat: What are your biggest pet peeves as far as AI hype goes? Or what do you want people, whether individuals or enterprise companies using AI, to keep in mind? For me, for example, it’s the anthropomorphizing of AI.

Kapoor: Okay, this might turn out to be a bit controversial, but we’ll see. In the last few months, there has been this increasing so-called rift between the AI ethics and AI safety communities. There is a lot of talk about how this is an academic rift that needs to be resolved, how these communities are basically aiming for the same purpose. I think the thing that annoys me most about the discourse around this is that people don’t recognize this as a power struggle.

It is not really about the intellectual merit of these ideas. Of course, there are lots of bad intellectual and academic claims that have been made on both sides. But that isn’t what this is really about. It’s about who gets funding and which concerns are prioritized. So looking at it as if it is a clash of individuals or a clash of personalities really undersells the whole thing; it makes it sound like people are out there bickering, whereas in fact it’s about something much deeper.

Narayanan: In terms of what everyday people should keep in mind when they’re reading a press story about AI, it’s not to be too impressed by numbers. We see all kinds of numbers and claims around AI — that ChatGPT scored 70% on the bar exam, or let’s say there’s an earthquake-detection AI that’s 80% accurate, or whatever.

Our view in the book is that these numbers mean virtually nothing. Because really, the whole ballgame is in how well the evaluation that someone conducted in the lab matches the conditions in which the AI will have to operate in the real world. And those can be so different. We’ve had, for instance, very promising proclamations about how close we are to self-driving. But when you put cars out in the world, you start noticing these problems. 
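That gap between a lab evaluation and real-world conditions is what machine learning researchers call distribution shift, and a toy experiment makes the point concrete. The sketch below is ours, not the authors’ (the data is entirely synthetic and the numbers purely illustrative): a classifier scores around 92% when tested under the same conditions it was trained on, then falls to near chance once the input distribution drifts.

```python
# Minimal sketch of distribution shift (synthetic data, illustrative only):
# the same model, evaluated two ways, yields very different "accuracy."
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Class 0 clusters around (0, 0), class 1 around (2, 2);
    # `shift` moves the whole feature distribution, as deployment might.
    X = np.vstack([
        rng.normal(0.0 + shift, 1.0, size=(n, 2)),
        rng.normal(2.0 + shift, 1.0, size=(n, 2)),
    ])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(2000)
model = LogisticRegression().fit(X_train, y_train)

X_lab, y_lab = make_data(2000)                 # same conditions as training
X_field, y_field = make_data(2000, shift=2.0)  # deployment conditions drifted

print("lab accuracy:  ", accuracy_score(y_lab, model.predict(X_lab)))      # ~0.92
print("field accuracy:", accuracy_score(y_field, model.predict(X_field)))  # near chance
```

The headline “lab” number says little about the “field” number; that mismatch, scaled up, is the self-driving story Narayanan describes.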

VentureBeat: How optimistic are you that we can deal with “AI Snake Oil”?

Narayanan: I’ll speak for myself: I approach all of this from a place of optimism. I do tech criticism because of the belief that things can be better. And if we look at all kinds of past crises, things worked out in the end, but that’s because people worried about them at key moments.


Author: Sharon Goldman
Source: Venturebeat
