During a workshop hosted at the International Conference on Learning Representations (ICLR) 2020, held online this week, panelists discussed how AI and machine learning might be applied, and in some cases already are being applied, to agricultural challenges. As several experts pointed out, countries around the world face a food supply shortfall: an estimated 9% of the population (697 million people) are severely “food insecure,” meaning they lack reliable access to affordable, nutritious food.
Factors like labor shortages, the spread of pests and pathogens, and climate change threaten to escalate the crisis. But AI can help. IBM scientists spoke about their work in Africa with agricultural “digital twins,” virtual models of farms and crops used to forecast yields. Acadia University researchers spotlighted an algorithm that measures grape yields ostensibly more accurately than human workers can. And a team from the University of California, Davis detailed an effort to use satellite images to predict foraging conditions for livestock in Kenya.
Cultivation recommendations from digital farm ‘twins’
Software quality assurance lead Mohamed Akram detailed IBM’s work last year to digitally “clone” farms in Nigeria, which entailed collecting histories of multi-spectral images and metadata like sensor readings, weather, and soil conditions to build a simulation of farms on IBM’s cloud platform. A portion of the work was an outgrowth of a partnership between IBM and Hello Tractor, a subscription service that connects small-scale farmers to equipment and data analytics for better crop production.
Akram asserts that digital crop doubles are valuable not only to the farmers themselves but also to the distributors, governments, and banks that can use them to track market dynamics, plan and establish policies, and minimize their investment risks. He noted that the world’s population is expected to exceed 8 billion people within five years, while the amount of farmable land is expected to shrink by 20% by the end of the century.
“Tackling food security challenges will depend on making the supply chain simpler, safer, and less wasteful,” said Akram.
Akram and his team leveraged IBM’s PAIRS Geoscope, a service designed to host and manage petabytes of geospatial-temporal data like maps and drone imagery, to store satellite, weather, and ground-level sensor data about each individual farm. Another IBM service — Watson Decision Platform for Agriculture, which pairs algorithms from IBM-owned The Weather Company with internet of things data ingestion tools — allowed the engineers to obtain yield forecasts after feeding in moisture readings taken at multiple depths, soil nutrient content and fertility, farm practices and workflow information, and high-definition visual satellite imagery.
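The talk did not walk through code, but the underlying pattern is familiar: join per-farm weather, soil, and imagery-derived features into a single table and fit a forecaster against observed yields. The sketch below illustrates that pattern with generic, open tooling; the file names, column names, and gradient-boosted model are assumptions for illustration, not the PAIRS Geoscope or Watson Decision Platform APIs.

```python
# Illustrative sketch only: not IBM's PAIRS Geoscope or Watson Decision Platform.
# It shows, with hypothetical files and columns, how per-farm weather, soil, and
# satellite-derived features might be joined and fed to a yield forecaster.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical per-farm data sources, each keyed by farm_id and date.
weather = pd.read_csv("weather.csv")      # humidity, temperature, precipitation, wind_speed
soil = pd.read_csv("soil.csv")            # moisture at several depths, nutrient indices
ndvi = pd.read_csv("satellite_ndvi.csv")  # vegetation index from multi-spectral imagery
yields = pd.read_csv("yields.csv")        # observed yield per farm

# Join the daily records, then aggregate to one feature row per farm.
features = (
    weather.merge(soil, on=["farm_id", "date"])
           .merge(ndvi, on=["farm_id", "date"])
)
season_features = features.groupby("farm_id").mean(numeric_only=True)
train = season_features.join(yields.set_index("farm_id")["yield_kg_per_ha"])

X, y = train.drop(columns="yield_kg_per_ha"), train["yield_kg_per_ha"]
model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X.head()))  # forecasted yield for the first few farms
```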
One challenge to overcome was the relative scarcity of data for the farms, which were small in scale. Satellite imagery provided only a few pixels’ worth of information per farm, and not all of the farms could afford monitoring devices. The team’s solution was to model groups of farms as over 40,000 clusters in the target region. This enabled the engineers to train a recommender system to answer two key questions: (1) when should a farmer perform a specific cultivation activity, and (2) which plowing day maximizes crop yield for a small-scale farm?
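The talk did not specify which clustering method was used, but the core idea, grouping farms so that sparse per-farm observations can be pooled, is simple to sketch. The snippet below uses k-means over hypothetical location and soil features purely as an illustration of that clustering step.

```python
# Minimal sketch of grouping small farms into clusters so that training data
# can be pooled per cluster rather than per farm. The clustering method and
# feature columns here are assumptions; the article only says farms were
# modeled as tens of thousands of clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: latitude, longitude, mean soil moisture, mean NDVI (stand-ins for real data).
farm_features = rng.random((5000, 4))

scaled = StandardScaler().fit_transform(farm_features)
clusters = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(scaled)

# Each farm now carries a cluster id; observations from all farms in a cluster
# can be combined to compensate for the scarcity of per-farm data.
print(np.bincount(clusters)[:10])  # farms per cluster, first ten clusters
```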
This system comprised an ensemble learning model that recommended cultivation dates, drawing on historical states (from the digital “twins”) and forward-looking metadata: recent weather history (humidity, visibility, temperature, precipitation, and wind speed), weather and soil forecasts (including soil moisture at four different depths), multi-spectral satellite imagery, and ground-truth event information (locations and dates). In experiments, missing metadata, such as crop types and soil conditions, impeded the model’s predictions. But the researchers claim their solution outperformed a heuristics-based system by a wide margin.
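One way to picture the recommender is as an ensemble regressor that scores every candidate plowing day by its predicted yield and returns the best one. The synthetic data, feature layout, and specific ensemble below are assumptions made for illustration; the article only states that an ensemble model was trained on the inputs listed above.

```python
# Hedged sketch of recommending a plowing date: score each candidate day by
# predicted yield and pick the argmax. Features and models are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, VotingRegressor

rng = np.random.default_rng(1)
# Each row: [day_of_year, humidity, temperature, precipitation, soil_moisture, ndvi]
X_train = rng.random((2000, 6))
y_train = rng.random(2000)  # stand-in for observed yields

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
]).fit(X_train, y_train)

# Candidate plowing days for one cluster, with forecast-derived features per day.
candidate_days = np.arange(60, 120)
candidate_features = np.column_stack(
    [candidate_days / 365.0, rng.random((len(candidate_days), 5))]
)
best_day = candidate_days[np.argmax(ensemble.predict(candidate_features))]
print("Recommended plowing day of year:", best_day)
```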
Using computer vision to estimate grape yield
Daniel L. Silver and Jabun Nasa, two researchers affiliated with Acadia University’s Institute for Data Analytics, presented work on a computer vision system they developed that measures grape yield from images of grapevines. Accurate grape yield estimates are critical for planning harvests and for making wine production choices, but as Silver and Nasa point out, taking such measurements has historically been a costly and imprecise process (just 75% to 90% accurate).
To build a training set for their yield-estimating machine learning model, the researchers recruited volunteers and tasked them with snapping pictures of grapes on the vine and measuring the grapes’ weights using a digital scale. Post-collection, Silver and Nasa digitized the measurement data and cropped, normalized, and resized the photos before combining both data sets and feeding them into a convolutional neural network (a type of AI model well-suited to analyzing visual imagery).
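The talk summary does not describe the network’s exact architecture, but the task, regressing a grape weight from a cropped and normalized vine photo, maps naturally onto a small convolutional network. The PyTorch sketch below is a minimal stand-in: the 128x128 input size, layer widths, and weight labels are all assumptions.

```python
# Minimal sketch of a CNN regressor for grape yield from vine images.
# Architecture, input size, and label units are assumptions for illustration.
import torch
import torch.nn as nn

class YieldCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted grape weight per vine
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = YieldCNN()
images = torch.randn(8, 3, 128, 128)  # stand-in for cropped, normalized vine photos
weights = torch.rand(8, 1)            # stand-in for digital-scale measurements
loss = nn.functional.mse_loss(model(images), weights)
loss.backward()
print(float(loss))
```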
They report that their best-performing model was 85.15% accurate on average at predicting yield six days prior to harvest, and 82% accurate at predicting yield 16 days prior to harvest. In future work, they plan to refine the system by incorporating an automatic image cropper and long-term weather forecast data.
Predicting forage conditions with satellite imagery
Researchers from the University of California, Davis and Weights & Biases, a company that builds machine learning developer tools, described an effort to predict forage conditions for livestock in Kenya. The project was motivated by the plight of Northern Kenya’s pastoralists, who depend on livestock for food and income but are often unable to anticipate where drought will strike.
An ideal predictive model would help prevent livestock loss and hunger by analyzing publicly available data. When drought strikes, such a model could be linked to a platform that quickly transfers funds to pastoralists, which they could use to cover household expenses or livestock needs.
The researchers pursued this idea by compiling a training corpus of human-labeled, ground-level images annotated with data points like timestamps, forage quality (on a 0-3 scale, with zero indicating severe drought), plant and animal types, and distance to water. They linked this corpus with over 100,000 satellite images taken at the same places and times, with the goal of predicting forage quality from the satellite imagery alone.
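Framed this way, the benchmark becomes a four-class image classification problem: given a satellite patch, predict the ground-level forage score from 0 to 3. The toy classifier below shows that framing; the patch size and architecture are assumptions, since the models actually submitted to the leaderboard vary.

```python
# Sketch of forage-quality prediction as 4-class classification on satellite
# patches. Patch size and architecture are placeholders, not a submitted model.
import torch
import torch.nn as nn

class ForageClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, num_classes),  # logits for forage scores 0-3
        )

    def forward(self, x):
        return self.net(x)

model = ForageClassifier()
patches = torch.randn(16, 3, 64, 64)  # stand-in for satellite image patches
labels = torch.randint(0, 4, (16,))   # ground-level forage scores 0-3
logits = model(patches)
loss = nn.functional.cross_entropy(logits, labels)
accuracy = (logits.argmax(dim=1) == labels).float().mean()
print(float(loss), float(accuracy))
```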
The team published the data set on Weights and Biases’ benchmarking website, which allows contributors to submit models trained on it to a communal leaderboard. At the time of writing, the best-performing algorithm could predict drought with 77.8% accuracy, with the next-best model achieving 77.5% accuracy.
Going forward, the researchers hope to expand the scope of their work to other regions, in part by collecting ground-level and forage data with geolocations for staple crops like maize, cassava, rice, and more.
Author: Kyle Wiggers.
Source: VentureBeat