How to avoid pitfalls and navigate the ethical landscape of generative AI

We still have a lot to figure out. That was my impression of a panel at our Transform 2023 event yesterday that drilled into the ethics of generative AI.

The panel was moderated by Philip Lawson, AI policy lead at Armilla AI and the Schwartz Reisman Institute for Technology and Society. It included Jen Carter, global head of technology at Google.org, and Ravi Jain, chair of the Association for Computing Machinery (ACM) working group on generative AI.

Lawson said that the aim was to dive deeper into better understanding the pitfalls of generative AI and how to successfully navigate its risks.

He noted that the underlying technology and Transformer-based architectures have been around for a number of years, though we’re all aware of the surge in attention in the last eight to 10 months or so with the launch of large language models like ChatGPT.

Ravi Jain of ACM talks about AI ethics at Transform 2023.

Carter noted that people have been building on advances in AI since the 1950s, and that neural networks marked a major step forward. But the Transformer architecture, which emerged around 2017, was a significant advance in its own right. More recently, the field has taken off again with ChatGPT, giving large language models far more breadth and depth in what they can do in response to queries. That’s been truly exciting, she said.

“There’s a tremendous amount of hype,” Jain said. “But for once, I think the hype is really worth it. The speed of development that I’ve seen in the last year — or eight months — in this area has just been unprecedented, in terms of just the technical capabilities and applications. It’s something I’ve never seen before at this scale in the industry. So that’s been tremendously exciting.”


He added, “What we’re seeing is the kinds of applications which even a couple of years ago would have required these models being built [by those with] lots of data, compute and lots of expertise. Now, you can have applications and broad domains for search, ecommerce, augmented reality, virtual reality. Using these foundational models, which work really well out of the box, they can be fine-tuned. They can be used as components and become part of an application ecosystem to build really complex applications that would not have been really possible a few years ago.”
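
That “out of the box” usability is easy to see in practice. As a rough sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (illustrative choices, not anything the panel named), a pretrained foundation model can generate text in a few lines and could later be fine-tuned on domain data:

```python
# Minimal sketch: using a pretrained foundation model "out of the box".
# Assumes the Hugging Face `transformers` library and the small `gpt2`
# checkpoint -- illustrative choices, not something the panel named.
from transformers import pipeline

# Load a pretrained text-generation model; no task-specific training needed.
generator = pipeline("text-generation", model="gpt2")

# Use it directly as a component in a larger application.
result = generator(
    "Customer question: How do I reset my password?\nHelpful answer:",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```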

The risks of generative AI

But plenty of people are pointing out the risks that come with such rapid advances. These aren’t just theoretical problems: the applications are so broad and moving so fast that the risks are real, Jain said.

The risks cover areas such as privacy, ownership and copyright disputes, a lack of transparency, security and even national security, Jain said.

“The thing that’s important in my mind is really, as we are developing this, we have to make sure that the benefits are commensurate with the risks,” he said. “For instance, if somebody is talking with or having a conversation with an AI agent, then the human should always be informed [that] that conversation is with an AI.”
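
In practice, that disclosure can be built into the interface itself, so the user is told up front that they are talking to an AI. A hypothetical sketch, with function names invented purely for illustration:

```python
# Hypothetical sketch of Jain's disclosure principle: the user is always
# told up front that they are talking to an AI, not a person.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_conversation(send_message, get_reply):
    """Run a chat loop that leads with an explicit AI disclosure.

    `send_message` shows text to the user; `get_reply` is any
    model-backed function that maps user input to a response.
    """
    send_message(AI_DISCLOSURE)  # disclosure happens before any exchange
    while True:
        user_input = input("> ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        send_message(get_reply(user_input))

# Example wiring with trivial stand-ins for the real components.
if __name__ == "__main__":
    start_conversation(print, lambda text: f"(AI) You said: {text}")
```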

Jain said that the risks cannot be mitigated by a single company or a government.

“It really has to be a multi-stakeholder conversation that has to be an open, transparent conversation where all sectors of society that might be impacted are part of that. And I’m not sure we fully have that yet,” he said.

Carter is part of Google.org, Google’s philanthropic arm. She sees generative AI offering nonprofits all kinds of potential benefits, but she also agreed there are serious risks, and the ones top of mind for her are those with social impact.

“The first is always around bias and fairness,” she said.

Generative AI has the potential to reduce bias, but it “absolutely also has the risk of reflecting back or reinforcing the existing biases. And so that’s something that’s always top-of-mind for us. Our work is typically trying to serve underserved populations. And so if you are just reflecting that bias, you’re going to be doing harm to already vulnerable communities that you’re trying to help. And so it’s especially important for us to look at and understand those risks as one basic example.”

For instance, AI has been used to make risk assessments in the criminal justice system, but if it’s “retraining off of historical data that obviously itself is biased,” then that’s an example of how the technology can be misused.
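
One basic way to surface that kind of bias is to compare outcome rates across groups in the historical data before training on it. A minimal sketch in plain Python, with records invented purely for illustration:

```python
# Minimal sketch: flag group-level disparities in historical decisions
# before using that data to train a risk-assessment model.
# The records below are invented purely for illustration.
from collections import defaultdict

records = [
    {"group": "A", "flagged_high_risk": True},
    {"group": "A", "flagged_high_risk": True},
    {"group": "A", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": True},
]

totals, positives = defaultdict(int), defaultdict(int)
for row in records:
    totals[row["group"]] += 1
    positives[row["group"]] += row["flagged_high_risk"]

rates = {g: positives[g] / totals[g] for g in totals}
print("High-risk rate by group:", rates)

# A large gap between groups is a warning sign that a model trained on
# this data may simply reproduce the historical bias.
gap = max(rates.values()) - min(rates.values())
print(f"Disparity between groups: {gap:.2f}")
```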

And while generative AI has the potential to help everyone, it also risks further exacerbating the digital divide, helping wealthy businesses or corporations advance while leaving nonprofits and vulnerable communities behind, Carter said.

“What are the things that we can do to ensure that this new technology is going to be accessible and useful to everyone so that we can all benefit?” she said.

Carter noted that compiling all of this data is often very expensive, so the data isn’t always representative, or simply doesn’t exist, for low- and middle-income countries.

She also noted there is potential for environmental impact.

Jen Carter of Google.org talks about AI ethics at Transform 2023.

“It’s computationally heavy to do a lot of this work. And we’re doing a lot of work reducing carbon emissions. And there’s a risk here of increasing emissions as more and more people are training these models and making use of them in the real world,” Carter said.
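
The emissions concern can be made concrete with a back-of-the-envelope estimate: the energy drawn by the accelerators over a training run, multiplied by the carbon intensity of the grid. A rough sketch, with every figure an invented assumption rather than a measurement:

```python
# Back-of-the-envelope sketch of training emissions.
# All figures below are illustrative assumptions, not measurements.
NUM_GPUS = 64              # accelerators used for the run
GPU_POWER_KW = 0.4         # average draw per GPU, in kilowatts
TRAINING_HOURS = 24 * 14   # two-week training run
PUE = 1.1                  # data center power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4  # carbon intensity of the local grid

energy_kwh = NUM_GPUS * GPU_POWER_KW * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:  {emissions_kg:,.0f} kg CO2")
```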

A holistic look at technology’s implications

As a professional computer science society, Jain’s organization, the ACM, is concerned about these risks.

“We’re building these models, but we really have to look at it much more holistically,” Jain said.

Engineers might focus on accuracy, technical performance and other technical aspects, but Jain said researchers also need to look holistically at the implications of the technology. Is it an inclusive technology that can serve all populations? The ACM recently issued a statement of principles to help guide communities and lay out a framework for thinking through these issues, such as identifying the appropriate limits of the technology. Another question is what the technology means for transparency: should it be used only if people are told that content is AI-generated rather than human-generated? According to Jain, the ACM says researchers have a responsibility on this front.

Carter also pointed out that Google has released its own AI principles to guide its work in a responsible direction and in a way that can be held accountable. Researchers, she said, have to think about the reviews that should accompany any new technology, and she noted that nonprofits such as the Rockefeller Foundation are looking at the impact of AI on vulnerable communities.

AI risks for enterprises

Enterprises, meanwhile, also face risks as they acquire the technology and start to apply it inside their walls.

“The first thing for an enterprise is, before rushing headlong into adopting the technology, actually spend the time and the effort to have an internal conversation about what are the organization’s objectives and values,” Jain said.

That kind of conversation should consider the unintended consequences of adopting new technology, which is again why more stakeholders need to be involved. The impact on labor is going to be a big risk, Jain said, and organizations don’t want to be blindsided by it.

Carter agreed it’s important for organizations to develop their own principles. She pointed to the disability rights movement’s principle of “nothing about us without us”: any affected populations should be involved in the discussion of the new technology.

“The best ideas come from those who are closest to the issues,” Carter said. “I’d also just encourage organizations that are getting into this space to take a really collaborative approach. We work really closely with governments and academics and nonprofits and communities in all these different groups, policymakers, and all of these different groups together.”

Government’s role

Jain said the government’s role is to create a regulatory framework, whether through an office or an agency at the government level. Such guardrails would level the playing field so adoption can happen faster and in a responsible way. On top of that, he argued for something like an Advanced Research Projects Agency for AI, since the private sector has its limits when it comes to the long-term research needed to address risks such as national security and safety.

Carter said that AI is “too important not to regulate, and it’s too important not to regulate well, and so policymakers have just a critically important role across what is a really complex issue.”

She said education is a core part of enlisting government and building collaboration across sectors, in a kind of AI campus.

“The goal is to train up policymakers both in our foundational principles and ideas, but also in the latest understanding of capabilities and risks,” Carter said.



Author: Dean Takahashi
Source: VentureBeat
