We’ve all been there: You’re snapping pics with your phone — perhaps of a high-speed bike ride or of a hockey match — and don’t think to check whether the autofocus is in lockstep with the action. It isn’t, as you later discover, and you’re stuck with a gallery of unusably blurry photos.
In search of a solution, scientists at the Inception Institute of Artificial Intelligence in the United Arab Emirates, the Beijing Institute of Technology, and Stony Brook University developed an AI system that removes blur from images in post-production. They note in a paper that it’s human-aware, meaning it’s able to deblur human faces, and that it performs “favorably” against state-of-the-art motion deblurring methods.
Due to the relative motion between a camera and the objects it captures, the foreground and background often undergo different types of degradation. In addition, subjects exhibit varied motion depending on their distance from the image plane.
The researchers’ model, then, learns human and background masks and uses them to disentangle foreground and background blurs. Training it required compiling a data set — Human-aware Image Deblurring (HIDE) — of blurry images paired with ground-truth sharp images captured with a camera, covering thousands of outdoor scenes, complex backgrounds, and diverse foreground motions and sizes. Each pair was fed through a human detection model that produced “roughly accurate” bounding boxes around subjects, which human annotators later refined.
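To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of mask-guided disentangling: a predicted soft human mask routes encoder features into separate foreground and background deblurring branches before they are fused back together. The layer sizes, mask head, and simple additive fusion are illustrative assumptions, not the paper’s actual architecture.

```python
import torch
import torch.nn as nn

class MaskGuidedDeblur(nn.Module):
    """Toy sketch: split encoder features into human (foreground) and
    background streams using a predicted soft human mask, process each
    stream separately, then fuse them to predict the sharp image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Predicts a per-pixel probability that a pixel belongs to a human.
        self.mask_head = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        # Separate branches for human-related and background blur.
        self.fg_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.bg_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, blurry: torch.Tensor):
        feats = self.encoder(blurry)
        mask = self.mask_head(feats)              # (N, 1, H, W) human mask
        fg = self.fg_branch(feats * mask)         # features in human regions
        bg = self.bg_branch(feats * (1 - mask))   # features in the background
        fused = fg + bg                           # naive fusion for the sketch
        return self.decoder(fused) + blurry, mask # residual prediction


if __name__ == "__main__":
    model = MaskGuidedDeblur()
    x = torch.randn(1, 3, 64, 64)                 # stand-in for a blurry frame
    sharp, human_mask = model(x)
    print(sharp.shape, human_mask.shape)
```

In a real system of this kind, the mask would be supervised by the annotated bounding boxes described above, so the network learns where to apply the human-specific deblurring branch.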
Using a machine with a single Nvidia Titan X graphics card, the researchers trained their deblurring model on both a portion of HIDE and a supplementary GoPro Hero data set of video frames for a total of 10,742 images. (The GoPro data set was used only to train the background-recognizing portion of the system, because it contained very few pedestrians.) The researchers say that their model achieves state-of-the-art performance with respect to dynamic deblurring and leads to better restoration results compared with several baselines.
“By comprehensively fusing the deblurring features from different domains, [our model is] able to reconstruct the image with explicit structure and semantic details,” wrote the paper’s coauthors. “Such a design leads to a unified, human-aware, and attentive deblurring network. By explicitly and separately modeling the human-related and [background] blurs, our method can better capture the diverse motion patterns and rich semantics of humans, leading to better deblurring results for both [foreground] humans and [backgrounds].”
The researchers aren’t the first to tap AI to clean up messy photos. Nvidia, MIT, and Aalto University recently proposed a machine learning technique to reduce image noise, and Chinese smartphone giant Xiaomi devised a model that restores details and enhances colors in poorly exposed photos.
Author: Kyle Wiggers.
Source: VentureBeat