Generative AI models have been co-opted to synthesize everything from faces and apartments to butterflies, but a novel subcategory seeks to raise awareness of climate change by illustrating the consequences of catastrophic flooding. In an effort to establish metrics that quantify the realism of these synthetic climate change images, researchers at the University of Montreal and Stanford University recently detailed several evaluation methods in a preprint paper. They say that their work, while preliminary, begins to bridge the gap between automated and human-based evaluation of generative models.
The research was notably coauthored by Turing Award winner and University of Montreal professor Yoshua Bengio, who was one of the first to combine neural networks with probabilistic models of sequences. In a paper published nearly two decades ago, he introduced the concept of word embeddings, a language modeling and feature learning paradigm in which words or phrases from a vocabulary are mapped to vectors of real numbers. Embeddings, along with Bengio’s more recent work with computer scientist and Google Brain researcher Ian Goodfellow on generative adversarial networks (GANs), have revolutionized machine translation, image generation, audio synthesis, and text-to-speech systems.
“Historically, climate change has been an issue around which it is hard to mobilize collective action … One reason [is] that it is difficult for people to mentally simulate the complex and probabilistic effects of climate change, which are often perceived to be distant in terms of time and space,” wrote the paper’s coauthors. “Climate communication literature has asserted that effective communication arises from messages that are emotionally charged and personally relevant over traditional forms of expert communication such as scientific reports, and that images in particular are key in increasing the awareness and concern regarding the issue of climate change.”
The researchers note that existing evaluation methods that could be applied to generated climate change images have “strong limitations” in that they don’t correlate with human judgment, which makes it difficult to measure the sophistication of image generation models. They propose an alternative: a manual process in which human volunteers evaluate image-style combinations drawn from models, based on input images of diverse locations and building types (houses, farms, streets, cities), each with over a dozen AI-generated styles. The evaluators judge which images in a set of half real and half generated images are real, and an average error rate is computed reflecting the proportion of evaluators who judged a generated image as real, with higher values indicating more realistic images.
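The error rate described there is a simple proportion. The sketch below, in Python, illustrates how such a score could be tallied; the function names and data layout are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the human-evaluation metric described above:
# for each generated image, the "error rate" is the fraction of evaluators
# who judged it to be real. Names and data layout are illustrative only.

def image_error_rate(judgments):
    """judgments: list of booleans, True if an evaluator labeled the
    generated image as real."""
    return sum(judgments) / len(judgments)

def average_error_rate(per_image_judgments):
    """Average over all generated images; higher values mean the
    generator's outputs were mistaken for real photos more often."""
    rates = [image_error_rate(j) for j in per_image_judgments]
    return sum(rates) / len(rates)

# Example: three evaluators per image, two generated images
print(average_error_rate([[True, False, True], [False, False, True]]))  # 0.5
```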
In pursuit of a less expensive and time-consuming approach, the team evaluated eight automated methods in total. They report that the best performer used embeddings from an intermediate layer of an AI model in conjunction with Fréchet Inception Distance, a metric that takes photos from both the target distribution and the model being evaluated and uses an object recognition system to suss out similarities among important features.
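For reference, Fréchet Inception Distance compares the statistics (mean and covariance) of feature embeddings extracted from real and generated images. The following is a minimal sketch of that computation, assuming embeddings have already been produced by an object recognition network; the functions and toy data are illustrative, not the researchers’ implementation.

```python
# Minimal sketch of Fréchet Inception Distance (FID) over precomputed
# feature embeddings (e.g., activations from an intermediate layer of an
# Inception-style network). Inputs here are toy arrays, not real features.
import numpy as np
from scipy import linalg

def frechet_distance(real_feats, gen_feats):
    """real_feats, gen_feats: arrays of shape (num_images, feature_dim).
    Lower values indicate the two feature distributions are more similar."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # Matrix square root of the product of the two covariance matrices
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)

# Toy usage with random "embeddings" standing in for network activations
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(100, 16)), rng.normal(size=(100, 16))))
```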
The team leaves for future work the exploration of better evaluation methods and the development of a state-of-the-art generative climate change image synthesizer of its own.
“The ultimate vision of this work is to create an ML architecture which, given an image from Google StreetView based on a user-chosen location, is able to generate the most realistic image of climate-change induced extreme weather phenomena, given the contextual characteristics of that given image,” wrote the paper’s contributors. “While representing flooding realistically is the first step to achieve this goal, we later aim to represent other catastrophic events that are being bolstered by climate change (e.g. tropical cyclones or wildfires) using a similar approach.”
Author: Kyle Wiggers
Source: VentureBeat