Is AI moving too fast for ethics? | The AI Beat

Last weekend may have been a holiday in Silicon Valley, with AI researchers wolfing down turkey before flying to New Orleans for the start of NeurIPS (which one researcher called an “annual gala” of AI, and another called the “Burning Man” of AI). But nothing seems to stop the pace of news, or the debate, about AI models and research, not even Thanksgiving in the U.S. My question: Is it all moving too fast and furiously for responsible and ethical AI efforts to keep up?

For example, it was the end of the day on November 23, when most Americans were likely in holiday travel mode, that Stability AI announced the release of Stable Diffusion 2.0, an updated version of its open-source text-to-image generator, which became wildly popular when it was released just three months ago.

By the time most of the U.S. tech crowd was munching on Thanksgiving turkey leftovers, there was already controversy afoot. Stable Diffusion 2.0 includes welcome new features, among them a new text encoder called OpenCLIP that “greatly improves the quality of the generated images compared to earlier V1 releases” and a text-guided inpainting model that simplifies swapping out parts of an image. Still, some users complained about a newfound inability to generate pictures in the styles of specific artists or to generate “not safe for work” (NSFW) images.

Others hailed the fact that Stability AI removed nude and pornographic images from the training data, since such images can be used to generate photorealistic and anime-style pictures, including non-consensual pornography and imagery of child abuse. Still others pointed out that because the model is open source, developers can still train Stable Diffusion on NSFW data, or on an artist’s work without their consent.
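For readers curious about the new inpainting workflow, here is a minimal sketch of how it can be driven from Python, assuming the Hugging Face diffusers library (not part of Stability AI’s announcement) and its published stabilityai/stable-diffusion-2-inpainting checkpoint; the file names and prompt are hypothetical placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load the published Stable Diffusion 2 inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# init_image is the original picture; in mask_image, white pixels mark
# the region to replace. Both file names are hypothetical placeholders.
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# The prompt describes what should fill the masked region.
result = pipe(
    prompt="a wooden park bench",  # hypothetical prompt
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```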


Is filtering for NSFW data enough?

But are Stable Diffusion’s data-filtering efforts enough? When one Twitter thread highlighted a debate over whether the removal of the NSFW training data constituted “censorship,” Sara Hooker, head of Cohere For AI and a former Google Brain researcher, weighed in.

“Why is this even presented as a reasonable debate?” she tweeted. “Completely absurd. I honestly give up on our ML community sometimes.”

In addition, she said that “the lack of awareness of the safety issues these models present is appalling. Frankly, it is not clear to me that only filtering for NSFW is sufficient.” 

Part of the risk, she added, is that this is all “moving too fast”: “We have readily available models with very limited safety checks in place.” She pointed to a paper showcasing some of the shortcomings of the safety filter in an earlier version of Stable Diffusion.
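To make the filtering question concrete, here is a toy sketch of the general approach: dropping training images whose predicted “unsafe” score exceeds a threshold. This is an illustration of the idea being debated, not Stability AI’s actual pipeline, and score_unsafe is a hypothetical stand-in for a trained classifier.

```python
from typing import Callable, Iterable

def filter_training_set(
    images: Iterable[str],
    score_unsafe: Callable[[str], float],
    threshold: float = 0.1,
) -> list[str]:
    """Keep only images whose predicted 'unsafe' probability is below
    the threshold. A lower threshold filters more aggressively, but no
    threshold catches everything, which is one reason critics argue
    filtering alone may be insufficient."""
    return [path for path in images if score_unsafe(path) < threshold]

# Usage with a stubbed classifier (hypothetical scores):
fake_scores = {"a.png": 0.02, "b.png": 0.45, "c.png": 0.08}
kept = filter_training_set(fake_scores, lambda p: fake_scores[p])
print(kept)  # ['a.png', 'c.png']
```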

AI for negotiation and persuasion

The Stable Diffusion news nearly drowned out the applause and chatter of the previous two days, which centered on Meta’s latest AI research announcement: Cicero, an AI agent that masters the difficult and popular strategy game Diplomacy, showing off the machine’s ability to negotiate, persuade and cooperate with humans. According to a paper published last week in Science, Cicero ranked in the top 10 percent of players in an online Diplomacy league and achieved more than double the average score of the human players by combining language models with strategic reasoning.

Even AI critics like Gary Marcus found plenty to cheer about regarding Cicero’s prowess: “Cicero is in many ways a marvel,” he said. “It has achieved by far the deepest and most extensive integration of language and action in a dynamic world of any AI system built to date. It has also succeeded in carrying out complex interactions with humans of a form not previously seen.” 

Still, with the Cicero news coming just six days after Meta took its widely criticized demo of Galactica offline, there were some questions about what the Cicero research means for the future of AI. Is AI that is increasingly cunning and manipulative coming down the pike? 

Athul Paul Jacob, one of the Cicero researchers and a Ph.D. student at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), pointed out that in order to play Diplomacy well, honesty is the best policy.

“Most of the best players will tell you that, so there’s a lot of effort into actually making sure the system tries to be as honest as possible,” he told VentureBeat. 

That said, so far Cicero has been trained only on Diplomacy. While Jacob said that future applications of the techniques created for Cicero could range from self-driving cars to customer service bots, it’s clear that there is still a long way to go.

Meta’s Cicero is open-source

Noam Brown, the lead author of the Cicero paper and a research scientist working on multi-agent artificial intelligence at Meta AI’s Fundamental AI Research (FAIR) lab, emphasized that Cicero is not intended for a particular product. “We’re part of an organization that’s doing fundamental research and we’re really just trying to push the boundaries of what AI is capable of,” he told VentureBeat.

However, Brown added that he hopes that by open-sourcing the code and the models and making the data accessible to researchers (something that Google subsidiary DeepMind has not done, for example, with its AlphaGo), others will be able to build on the work and take it even further.

“I think that it’s an excellent domain for investigating multi-agent artificial intelligence, cooperative AI, and dialogue models that are grounded,” he said. “There are several things we learned from this project, like the fact that relying on human data is so effective in multi-agent settings, and that conditioning dialogue generation on planning ends up being so helpful. That is a general lesson that is quite broadly applicable.”
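As a rough illustration of what “conditioning dialogue generation on planning” means in practice, here is a toy sketch in Python: a planner picks an intended action, and the dialogue model generates a message consistent with that plan. This is not Meta’s code, and every name in it (plan_intent, generate_message, the stubbed language model) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    power: str   # which player the agent controls, e.g. "FRANCE"
    action: str  # the move the planner has committed to

def plan_intent(game_state: dict) -> Intent:
    # Stand-in for strategic reasoning: score candidate actions with a
    # value estimate and pick the best one.
    best = max(game_state["candidate_actions"],
               key=lambda a: a["expected_value"])
    return Intent(power=game_state["power"], action=best["name"])

def generate_message(intent: Intent, recipient: str, lm) -> str:
    # Condition the language model on the planned intent, so the
    # generated dialogue stays consistent with the agent's actual plan.
    prompt = (
        f"You are {intent.power}. Your plan: {intent.action}. "
        f"Write a short, honest message to {recipient} proposing this."
    )
    return lm(prompt)

# Usage with a stubbed language model:
fake_lm = lambda prompt: f"[LM output for: {prompt}]"
state = {
    "power": "FRANCE",
    "candidate_actions": [
        {"name": "support ENGLAND into BELGIUM", "expected_value": 0.7},
        {"name": "hold in PARIS", "expected_value": 0.2},
    ],
}
intent = plan_intent(state)
print(generate_message(intent, "ENGLAND", fake_lm))
```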

A responsible approach to AI research

The response to Cicero since he arrived at NeurIPS, he added, has been overwhelmingly positive.

“Honestly, I’ve been so happy by the reception in the community,” he said. “We just did an impromptu talk a couple of hours ago and it was just overflowing, people were sitting on the floor, because they didn’t have enough seats for everybody — I think the community is excited that there is this combination of strategic reasoning with language models, and they see that as a path forward for progress in AI.” 

When it comes to ethics, Brown said he could only speak to his work specifically on Cicero. 

“I can only comment on our own projects and [ethics] certainly was for us a priority,” he said. “That’s why we’re making the data, the models, accessible to the academic community. It’s really at the core of what FAIR (Fundamental AI Research) stands for. I think that we’re trying to take a responsible approach to our research.”

That said, Brown agreed that AI research is progressing very quickly. “It’s incredible to see the progress that’s being made across the field of AI, not just in our domain,” he said. “But I think it’s important to keep in mind that when you see these kinds of results, it might seem like they happen so quickly, but it has built on top of a lot and we spent years getting to this point.”

Will slow and steady win the AI race?

I liked what Andrew Ng had to say in his newsletter, The Batch, this week about Meta’s Galactica, in the aftermath of controversy around the model’s potential to generate false or misleading scientific articles:

One problem with the way Galactica was released is that we don’t yet have a robust framework for understanding the balance of benefit versus harm for this model, and different people have very different opinions. Prior to a careful analysis of benefit versus harm, I would not recommend “move fast and break things” as a recipe for releasing any product with potential for significant harm. I would love to see more extensive work — perhaps through limited-access trials — that validates the product’s utility to third parties, explores and develops ways to ameliorate harm, and documents this thinking clearly.

So perhaps slow and steady will win the race, both for AI research and ethics? To add another cliché to the mix, time will tell. In the meantime, no rest for the weary in New Orleans: Keep me updated on all things NeurIPS!



Author: Sharon Goldman
Source: VentureBeat
