Google hopes to tap AI and machine learning to make speedy local weather predictions. In a paper and accompanying blog post, the tech giant detailed an AI system that uses satellite images to produce "nearly instantaneous" high-resolution forecasts: roughly one-kilometer resolution, with a latency of only 5 to 10 minutes. The researchers behind it say it outperforms traditional models "even at these early stages of development."
The system takes a data-driven and physics-free approach to weather modeling, meaning it learns to approximate atmospheric physics from examples alone and not by incorporating prior knowledge. Underpinning it is a convolutional neural network that takes as input images of weather patterns and transforms them into new output images.
As the Google researchers explain, a convolutional network comprises a sequence of layers, where each layer is a set of mathematical operations. In this case, it's a U-Net: the layers are arranged in an encoding phase that progressively decreases the resolution of images passing through it, followed by a decoding phase that expands the low-dimensional representations created during encoding back to full resolution.
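To make that architecture concrete, here is a minimal sketch of a U-Net-style encoder-decoder in PyTorch. The depth, layer sizes, and channel counts are illustrative assumptions, not the architecture from Google's paper.

```python
# A minimal U-Net-style image-to-image sketch in PyTorch. The exact
# architecture in Google's paper is not reproduced here; shapes and
# channel counts are assumptions for illustration.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_channels=100, out_channels=1):
        super().__init__()
        # Encoding phase: convolutions plus pooling reduce resolution.
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        # Decoding phase: upsampling expands the low-resolution representation.
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, out_channels, 1)  # per-pixel output image

    def forward(self, x):
        s1 = self.enc1(x)               # full-resolution features
        s2 = self.enc2(self.pool(s1))   # half-resolution features
        u = self.up(s2)                 # expand back to full resolution
        u = torch.cat([u, s1], dim=1)   # skip connection, characteristic of U-Nets
        return self.head(self.dec1(u))

model = TinyUNet()
frames = torch.randn(1, 100, 256, 256)  # one hour of stacked satellite channels
pred = model(frames)                    # (1, 1, 256, 256) output image
```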
Inputs to the U-Net contain one channel per multispectral satellite image in a sequence of observations over the given hour. For example, if there were 10 satellite images collected in an hour and each of those images was taken at 10 wavelengths, the image input would have 100 channels.
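As a concrete illustration of that channel layout, the snippet below stacks a hypothetical hour of observations into a single 100-channel input array; the array names and image size are assumptions for the example.

```python
# Building the 100-channel input described above: 10 images x 10
# wavelengths flattened into one channel axis (shapes are illustrative).
import numpy as np

num_images = 10        # satellite images collected over the hour
num_wavelengths = 10   # spectral bands per image
height, width = 256, 256

# Each observation contributes one (height, width) image per wavelength.
observations = np.random.rand(num_images, num_wavelengths, height, width)

# Flatten time and wavelength into a single channel dimension.
model_input = observations.reshape(num_images * num_wavelengths, height, width)
print(model_input.shape)  # (100, 256, 256)
```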
For their initial work, the engineering team trained a model on historical observations over the U.S. from 2017 to 2019, split into four-week chunks, a portion of which they reserved for evaluation. They compared the model's performance to three baselines: the High Resolution Rapid Refresh (HRRR) numerical forecast from the National Oceanic and Atmospheric Administration (specifically its 1-hour total accumulated surface precipitation prediction), an optical flow algorithm that attempted to track moving objects through a sequence of images, and a persistence model in which each location was assumed to keep raining in the future at the same rate it was raining at present (sketched below).
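The persistence baseline is simple enough to express in a few lines. Here is a minimal sketch with illustrative array names and shapes; the researchers' exact implementation isn't described in the article.

```python
# A minimal persistence baseline: every location is predicted to keep
# raining at its currently observed rate for all future time steps.
import numpy as np

def persistence_forecast(current_rain_rate: np.ndarray, num_steps: int) -> np.ndarray:
    """Repeat the latest observed rain-rate field for each future step."""
    return np.repeat(current_rain_rate[np.newaxis, ...], num_steps, axis=0)

latest = np.random.rand(256, 256)            # rain rate observed right now
forecast = persistence_forecast(latest, 6)   # six future steps, all identical
```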
The researchers report that the quality of their system was generally superior to all three baselines, but that HRRR began to outperform it when the prediction horizon reached about 5 to 6 hours. However, they note that HRRR had a computational latency of 1 to 3 hours, far longer than their system's 5 to 10 minutes, meaning much of HRRR's near-term window has already elapsed by the time its forecast becomes available.
“The numerical model used in HRRR can make better long term predictions, in part because it uses a full 3D physical model — cloud formation is harder to observe from 2D images, and so it is harder for [machine learning] methods to learn convective processes,” explained the researchers. “It’s possible that combining these two systems, our [machine learning] model for rapid forecasts and HRRR for long-term forecasts … could produce better results overall.”
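A straightforward version of that combination would route each request by forecast horizon. The sketch below assumes a 6-hour crossover based on the results described above; the routing logic and function names are hypothetical, not Google's method.

```python
# A hypothetical hybrid: the fast ML model serves short horizons, HRRR
# serves long ones. The 6-hour crossover is an assumption drawn from the
# comparison described in the article.
import numpy as np

CROSSOVER_HOURS = 6  # horizon where HRRR begins to outperform the ML model

def hybrid_forecast(ml_nowcast, hrrr_forecast, horizon_hours):
    """Route a forecast request to whichever system is stronger at this horizon."""
    if horizon_hours < CROSSOVER_HOURS:
        return ml_nowcast(horizon_hours)    # near-instant ML prediction
    return hrrr_forecast(horizon_hours)     # physics-based numerical prediction

# Stand-in forecasters for illustration only.
ml_nowcast = lambda h: np.zeros((256, 256))
hrrr = lambda h: np.zeros((256, 256))
pred = hybrid_forecast(ml_nowcast, hrrr, horizon_hours=2)  # uses the ML model
```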
The researchers leave to future work applying machine learning directly to 3D observations.
Of course, Google isn't the only one tapping AI to predict the weather and natural events or disasters. Early last year, IBM launched a new forecasting system developed by The Weather Company, the weather forecasting and information technology company it acquired in 2016, capable of providing "high-precision" local forecasts across the globe. For their part, Facebook researchers developed a method to analyze satellite imagery and assess the extent of damage an area has suffered following disasters like fires and floods. And scientists at Stanford's Department of Geophysics experimented with a system, dubbed the CNN-RNN Earthquake Detector (CRED), that can isolate and identify a range of seismic signals in historical and continuous data.
Author: Kyle Wiggers
Source: VentureBeat