
NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest

For the first time, researchers submitting papers to NeurIPS, one of the world’s largest AI research conferences, must state the “potential broader impact of their work” on society, as well as any financial conflict of interest, conference organizers told VentureBeat.

NeurIPS is among the first, and the largest, of the AI research conferences to enact such requirements. The societal impact statement will require AI researchers to confront and account for both the positive and negative potential outcomes of their work, while the financial disclosure requirement may illuminate the role industry and Big Tech companies play in the field. Disclosures must cover both potential conflicts of interest directly related to the submitted research and any potential conflicts of interest unrelated to it.

NeurIPS 2020 communications chair Michael Littman told VentureBeat in an email that the “ethical aspects and future societal consequences” statements will be published with each paper. However, they’ll appear only in the camera-ready versions of the papers so they do not compromise the double-blind nature of the reviewing process.

Research with potential ethical concerns will be given special consideration. “Reviewers and area chairs’ assessment will be done on the basis of technical contributions only. However, if a paper is flagged for potential ethical concerns, then the paper will be sent to another set of reviewers with expertise in ethics and machine learning. The final acceptance of these papers is contingent on the positive assessment by [this] second set of reviewers as well,” Littman said.

At a town hall last year, NeurIPS 2019 organizers suggested that researchers might this year be required to state their models’ carbon footprint, perhaps using calculators like ML CO2 Impact. The impact a model has on climate change certainly falls under “future societal impact,” but no such explicit requirement appears in the 2020 call for papers.
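To make that concrete, here is a rough back-of-the-envelope sketch of the kind of estimate such calculators produce: hardware energy draw, scaled by data-center overhead and the local grid’s carbon intensity. Every number and name below is an illustrative assumption, not a figure from NeurIPS or the ML CO2 Impact tool.

```python
# Back-of-the-envelope training-emissions estimate, in the spirit of
# calculators like ML CO2 Impact. All default values are assumptions.

def training_emissions_kg(
    gpu_power_kw: float,           # average draw per GPU, in kilowatts
    num_gpus: int,                 # number of GPUs used for the run
    hours: float,                  # wall-clock training time
    pue: float = 1.6,              # assumed data-center power usage effectiveness
    grid_kg_per_kwh: float = 0.4,  # assumed grid carbon intensity, kg CO2 per kWh
) -> float:
    """Estimate CO2 emitted by a training run: energy * overhead * grid intensity."""
    energy_kwh = gpu_power_kw * num_gpus * hours
    return energy_kwh * pue * grid_kg_per_kwh

# Hypothetical run: 8 GPUs drawing 0.3 kW each for 72 hours.
print(f"{training_emissions_kg(0.3, 8, 72):.1f} kg CO2")  # ~110.6 kg CO2
```

Real calculators refine this with measured hardware power, regional grid data, and cloud-provider specifics; the sketch only shows the shape of the arithmetic.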

“The norms around the societal consequences statements are not yet well established,” Littman said. “We expect them to take form over the next several conferences and, very likely, to evolve over time with the concerns of the society more broadly. Note that there are many papers submitted to the conference that are conceptual in nature and do not require the use of large scale computational resources, so this particular concern, while extremely important, is not universally relevant.”

Responses to the new rules vary.

To be clear, I don’t think this is a positive step. Societal impacts of AI is a tough field, and there are researchers and organizations that study it professionally. Most authors do not have expertise in the area and won’t do good enough scholarship to say something meaningful.

— Roger Grosse (@RogerGrosse) February 20, 2020

Roger Grosse, a faculty member at the Vector Institute for AI at the University of Toronto, complained that the new policy will “lead to trivialization of important issues.” Grosse, a member of CIFAR leadership, suggested that corporate sponsors also be required to share statements about their broader impact on society.

In recent years, NeurIPS has faced criticism for the growing role of the major AI research arms of tech giants like Google AI, OpenAI, and Facebook AI Research (FAIR).

In response to Grosse’s argument that assessing societal impact should be left to researchers who focus on ethics, Joe Redmon said he stopped doing computer vision research because of his concerns about the potentially dangerous impacts it could have on the world. Redmon created the YOLO real-time object detection system with Ali Farhadi, whose company, Xnor, was recently acquired by Apple.

Redmon described himself as someone who used to buy into the myth that science is apolitical, and declared that virtually no facial recognition research would get published if broader societal impact were taken seriously.

I stopped doing CV research because I saw the impact my work was having. I loved the work but the military applications and privacy concerns eventually became impossible to ignore. https://t.co/DMa6evaQZr

— Joe Redmon (@pjreddie) February 20, 2020

Last month, Deborah Raji, a fellow at the AI Now Institute at New York University, introduced an internal auditing framework for businesses to evaluate the ethical performance of their AI systems and close what she and coauthors call an “AI accountability gap.”

The framework draws on impact-documentation tools like datasheets for datasets and model cards from Google AI, as well as on regulatory practices from other industries, such as aviation and pharmaceutical drug testing.
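For readers unfamiliar with model cards, the sketch below shows, in rough outline, the kind of structured documentation they capture. The field names and example values are hypothetical illustrations, not part of Raji’s framework or Google AI’s specification.

```python
# Hypothetical sketch of the fields a model card might record, loosely
# inspired by Google AI's "model cards" idea. Names and values are invented.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    model_name: str
    intended_use: str                  # what the model is meant for
    out_of_scope_uses: list[str]       # uses the authors advise against
    training_data: str                 # provenance of the training set
    evaluation_data: str               # what the model was tested on
    known_limitations: list[str]       # documented failure modes
    ethical_considerations: str        # risks and misuse potential
    metrics: dict = field(default_factory=dict)  # disaggregated results


card = ModelCard(
    model_name="example-detector-v1",  # hypothetical model
    intended_use="Research demo of object detection",
    out_of_scope_uses=["surveillance", "biometric identification"],
    training_data="Public benchmark images (illustrative)",
    evaluation_data="Held-out benchmark split (illustrative)",
    known_limitations=["accuracy degrades in low light"],
    ethical_considerations="Could be misused to track individuals",
    metrics={"mAP": 0.41},
)
print(card.model_name, card.metrics)
```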

“I think there’s a lot of things that we don’t do that other fields do,” Raji told VentureBeat in a phone interview.

Raji said the requirement of societal impact statements at conferences like NeurIPS may be emerging in response to the publication of ethically questionable research at conferences in the past year, such as a comment-generating algorithm capable of spreading misinformation on social media.

She calls the new rules a step in the right direction.

“Industry is just very present [in the AI research field], so the way that things get operationalized, the rapidness through which it enters the market and industry, is so quick that it makes sense for you to actually be thinking about how your work is going to be interpreted and how it’s going to be used, and I think that forcing researchers to take the time to do that is going to be important with respect to getting them to be more reflective on the impact of their work,” she said.

Raji calls attitudes that say only researchers focused on ethics or social impact should evaluate a model “a misunderstanding of your responsibility as a researcher.”

“In my education, we weren’t really given a vocabulary around any of this. I think that’s why people are freaking out a little bit, but ultimately it’s not that big a deal,” she said.

Conference organizers shared the new rules last week alongside a call for papers for the 34th annual Neural Information Processing Systems conference (NeurIPS), which will take place in December in Vancouver, Canada. Papers are due May 12.


Author: Khari Johnson.
Source: VentureBeat

