Is OpenAI’s ‘moonshot’ to integrate democracy into AI tech more than PR? | The AI Beat

Last week, an OpenAI PR rep reached out by email to let me know the company had formed a new “Collective Alignment” team that would focus on “prototyping processes” that allow OpenAI to “incorporate public input to guide AI model behavior.” The goal? Nothing less than democratic AI governance — building on the work of ten recipients of OpenAI’s Democratic Inputs to AI grant program.

I immediately giggled. The cynic in me enjoyed rolling my eyes at the idea of OpenAI, a company with lofty ideals of ‘creating safe AGI that benefits all of humanity’ and the mundane reality of hawking APIs and GPT stores, scouring for more compute, and fending off copyright lawsuits, attempting to tackle one of the thorniest challenges in human history: crowdsourcing a democratic, public consensus about anything.

After all, isn’t American democracy itself currently being tested like never before? Aren’t AI systems at the core of deep-seated fears about deepfakes and disinformation threatening democracy in the 2024 elections? How could something as subjective as public opinion ever be applied to the rules of AI systems — and by OpenAI, no less, a company which I think can objectively be described as the king of today’s commercial AI?

Still, I was fascinated by the idea that there are people at OpenAI whose full-time job is to make a go at creating a more democratic AI guided by humans — which is, undeniably, a hopeful, optimistic and important goal. But is this effort more than a PR stunt, a gesture by an AI company under increased scrutiny by regulators?

OpenAI researcher admits collective alignment could be a ‘moonshot’

I wanted to know more, so I got on a Zoom with the two current members of the new Collective Alignment team: Tyna Eloundou, an OpenAI researcher focused on the societal impacts of technology, and Teddy Lee, a product manager at OpenAI who previously led human data labeling products and operations to ensure responsible deployment of GPT, ChatGPT, DALL-E, and the OpenAI API. The team is “actively looking” to add a research engineer and a research scientist, who will work closely with OpenAI’s “Human Data” team, which “builds infrastructure for collecting human input” on the company’s AI models, as well as with other research teams.

I asked Eloundou how challenging it would be to reach the team’s goals of developing democratic processes for deciding what rules AI systems should follow. In an OpenAI blog post in May 2023 that announced the grant program, “democratic processes” were defined as “a process in which a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process.”

Eloundou admitted that many would call it a “moonshot.”

“But as a society, we’ve had to face up to this challenge,” she added. “Democracy itself is complicated, messy, and we arrange ourselves in different ways to have some hope of governing our societies or respective societies.” For example, she explained, it is people who decide on all the parameters of democracy — how many representatives, what voting looks like — and people decide whether the rules make sense and whether to revise the rules.

Lee pointed out that one anxiety-producing challenge is the myriad directions in which an attempt to integrate democracy into AI systems can go.

“Part of the reason for having a grant program in the first place is to see what other people who are already doing a lot of exciting work in the space are doing, what are they going to focus on,” he said. “It’s a very intimidating space to step into — the socio-technical world of how do you see these models collectively, but at the same time, there’s a lot of low-hanging fruit, a lot of ways that we can see our own blind spots.”

10 teams designed, built and tested ideas using democratic methods

According to a new OpenAI blog post published last week, the Democratic Inputs to AI grant program awarded $100,000 each to 10 diverse teams, selected from nearly 1,000 applicants, to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. “Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public,” the blog post says.

Each team tackled these challenges in different ways — they included “novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior.”
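To make that last category a bit more concrete, here is a minimal, purely illustrative sketch (my own, not drawn from any grantee’s actual code) of what “mapping beliefs to dimensions” might look like: participants rate policy statements, a bare-bones principal component analysis finds the main axes of disagreement, and the group’s median position along each axis becomes a summary that could, in principle, inform fine-tuning targets. Every name and number below is hypothetical.

```python
# A toy illustration (not OpenAI's or any grantee's actual method) of mapping
# participants' stated beliefs onto a small number of dimensions, then
# summarizing the group's position along each one. All data is hypothetical.

import numpy as np

# Rows: participants. Columns: agreement (-2 to 2) with policy statements such
# as "the model should refuse medical advice" or "the model may discuss elections".
ratings = np.array([
    [ 2,  1, -1,  0],
    [ 1,  2, -2,  1],
    [-1,  0,  2,  2],
    [ 0, -1,  1,  2],
    [ 2,  2, -1, -1],
], dtype=float)

# Center each statement's ratings, then use SVD to find the main axes of
# disagreement (a bare-bones principal component analysis).
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axes = vt[:2]                  # keep the two strongest belief dimensions

# Project every participant onto those dimensions.
positions = centered @ axes.T  # shape: (participants, 2)

# A robust summary of where the group sits on each dimension; a real process
# would weight for representativeness rather than take a raw median.
consensus = np.median(positions, axis=0)
print("group position along each belief dimension:", consensus)
```

A real deliberative process would of course go much further, weighting participants for representativeness and validating the recovered dimensions with the participants themselves, which is exactly the kind of work the grantees were exploring.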

There were, not surprisingly, immediate roadblocks. Many of the ten teams quickly learned that public opinion can change on a dime, even day-to-day. Reaching the right participants across digital and cultural divides is tough and can skew results. Finding agreement among polarized groups? You guessed it — hard.

But OpenAI’s Collective Alignment team is undeterred. In addition to advisors on the original grant program, including Hélène Landemore, a professor of political science at Yale, Eloundou said the team has reached out to several researchers in the social sciences, “in particular those who are involved in citizens assemblies — I think those are the closest modern corollary.” (I had to look that one up — a citizens’ assembly is “a group of people selected by lottery from the general population to deliberate on important public questions so as to exert an influence.”)

Giving democratic processes in AI ‘our best shot’

One of the grant program’s starting points, said Lee, was “we don’t know what we don’t know.” The grantees came from domains like journalism, medicine, law, and social science; some had worked on U.N. peace negotiations. The sheer amount of excitement and expertise in the space, he explained, imbued the projects with a sense of energy. “We just need to help to focus that towards our own technology,” he said. “That’s been pretty exciting and also humbling.”

But is the Collective Alignment team’s goal ultimately doable? “I think it’s just like democracy itself,” he said. “It’s a bit of a continual effort. We won’t solve it. As long as people are involved, as people’s views change and people interact with these models in new ways, we’ll have to keep working at it.”

Eloundou agreed. “We’ll definitely give it our best shot,” she said.

PR stunt or not, I can’t argue with that — at a moment when democratic processes seem to be hanging by a thread, any effort to bolster them in AI decision-making deserves applause. So, I say to OpenAI: Hit me with your best shot.



Author: Sharon Goldman
Source: VentureBeat
