In their work, the researchers surveyed academic papers, online platforms, and apps that generate art using AI, selecting examples that focused on simulating established art schools and styles. To investigate biases, they considered state-of-the-art AI systems trained on movements (e.g., Renaissance art, cubism, futurism, impressionism, expressionism, post-impressionism, and romanticism), genres (landscapes, portraits, battle paintings, sketches, and illustrations), materials (woodblock prints, engravings, paint), and artists (Clementine Hunter, Mary Cassatt, Van Gogh, Gustave Doré, Gino Severini).
By using causal models called directed acyclic graphs, or DAGs, the coauthors say they were able to identify aspects relevant to AI-generated pieces of art and how these different aspects influenced each other. In one example, they found that DeepArt, a platform that lets users repaint pictures in the style of other artists, failed to account for movement in translating the Cubist artwork Propellers by Fernand Léger into a Futurist style. In another, they report that a piece of realism translated into expressionism by DeepArt, Mary Cassatt’s Miss Mary Ellison, didn’t have the hallmark distorted subjects of expressionism.
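To make the idea concrete, a causal DAG is just a graph whose edges point from cause to effect and which contains no cycles. The sketch below is purely illustrative, not the researchers' actual model; the node names are hypothetical examples of aspects an AI art system might model, and the topological-sort check simply verifies that the graph is a valid DAG.

```python
# Illustrative sketch (not the researchers' actual model): a causal DAG
# whose edges point from cause to effect among aspects of generated art.
# Node names here are hypothetical.
edges = {
    "art movement": ["color palette", "subject distortion"],
    "genre": ["subject matter"],
    "color palette": ["generated output"],
    "subject distortion": ["generated output"],
    "subject matter": ["generated output"],
    "generated output": [],
}

def topological_order(graph):
    """Return nodes in cause-before-effect order; raise if the graph
    has a cycle (a valid causal DAG must be acyclic)."""
    indegree = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = [n for n, d in indegree.items() if d == 0]
    order = []
    while queue:
        node = queue.pop()
        order.append(node)
        for t in graph[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(graph):
        raise ValueError("graph contains a cycle, so it is not a DAG")
    return order

order = topological_order(edges)
```

In a causal analysis like the one described, each edge encodes a claim such as "the art movement influences the color palette," and tracing paths through the graph shows how a choice upstream (say, the movement) propagates into the generated output.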
Some of these biases are more harmful than others. GoArt, a platform similar to DeepArt, changes the face color of Clementine Hunter’s Black Matriarch from Black to red when translating it to an expressionist style, while preserving the color of artwork with white faces like Desiderio da Settignano’s Giovinetto, a sculpture. And another generative art tool, Abacus, mistook young men with long hair in artwork by Raphael and Cosimo for women.
The researchers pin the blame on imbalances in the datasets used to train generative AI models, which they note might be influenced by dataset curators’ preferences. One app referenced in the study, AI Portraits, was trained using 45,000 Renaissance portraits of mostly white people, for example. Another potential source of bias, according to the researchers, is inconsistency in labeling, the process of annotating datasets with the labels from which models learn. Different annotators have different preferences, cultures, and beliefs that might be reflected in the labels they create.
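The mechanism behind this kind of representational bias can be illustrated with a toy example. The numbers and group names below are hypothetical, loosely echoing the article's example of a portrait dataset dominated by one demographic: when one group vastly outnumbers another, a model can score well overall while failing the under-represented group entirely.

```python
# Hypothetical illustration of representational bias from an imbalanced
# training set: a trivial "model" that always predicts the majority group
# looks accurate overall while failing the under-represented group.
from collections import Counter

# Assumed toy label distribution (hypothetical numbers).
training_labels = ["group_a"] * 45000 + ["group_b"] * 500

majority = Counter(training_labels).most_common(1)[0][0]

def predict(_example):
    # The degenerate strategy that an imbalanced dataset rewards:
    # always output the majority group.
    return majority

# A toy evaluation set: 90 examples from group_a, 10 from group_b.
test_set = [("x", "group_a")] * 90 + [("y", "group_b")] * 10

accuracy = sum(predict(x) == y for x, y in test_set) / len(test_set)
minority_recall = sum(
    predict(x) == y for x, y in test_set if y == "group_b"
) / 10
# Overall accuracy looks strong (0.9), but recall on the minority
# group is 0.0: the imbalance hides the failure.
```

Real generative models fail in subtler ways than this always-predict-the-majority strategy, but the underlying dynamic is the same: aggregate metrics computed over a skewed dataset can mask systematic errors on under-represented subjects.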
“There may be imbalances with respect to art genres (e.g. large number of photographs vs few sculptures), artists (e.g. mostly European artists vs few native artists), art movements (large number of works concerning Renaissance and modern art movements as opposed to others), and so on,” the coauthors wrote. “Faces depicting different races, appearances, etc. have not been pooled into the dataset, thus contributing to representational bias.”
The researchers warn that by wrongly modeling or overlooking certain subtle components, generative art can contribute to false perceptions about the social, cultural, and political aspects of past times and hinder awareness of important historical events. For this reason, they urge AI researchers and practitioners to inspect the design choices behind these systems and the sociopolitical contexts that shape their use.
Author: Kyle Wiggers
Source: VentureBeat