Some polls predicting the results of the 2020 U.S. presidential election appear to have missed the mark. Aggregator RealClearPolitics showed former vice president Joe Biden with a 7-point advantage over current president Donald Trump, while FiveThirtyEight suggested Biden was ahead by at least 8 points at the national level on average. In reality, the race turned out to be substantially tighter. In Florida, for example, where FiveThirtyEight showed a 2.5-point margin in favor of Biden, Trump claimed victory as he gained unexpected support in Miami-Dade County.
Polling isn’t a perfect science. Reports in the lead-up to the 2016 election showed Hillary Clinton ahead nationally, with a tighter race in states like Wisconsin, Michigan, and Pennsylvania. But Trump ultimately surpassed the 270 electoral college votes needed to win the presidency. A report from the American Association for Public Opinion Research concluded that state-level polling “underestimated Trump’s support in the Upper Midwest,” with forecasters pointing to a lack of high-quality polling data from those states.
So is there a more accurate way to project election results than traditional polling, which mostly relies on telephone calls and online panel surveys? Firms like KCore Analytics, Expert.ai, and Advanced Symbolics claim their algorithms can capture a more expansive picture of election dynamics because they draw on signals like tweets and Facebook messages. But in the aftermath of the 2020 election, it’s still unclear whether AI proved more or less accurate than the polls.
KCore Analytics predicted from social media posts that Biden would hold a strong advantage in the popular vote, about 8 or 9 points, but only a small lead in the electoral college. Italy-based Expert.ai, which found that Biden ranked higher on social media in terms of sentiment, put the Democratic candidate slightly ahead of Trump (50.2% to 47.3%). Advanced Symbolics’ Polly system, developed by scientists at the University of Ottawa, was wildly off, projecting that Biden would nab 372 electoral college votes to Trump’s 166 thanks to anticipated wins in Florida, Texas, and Ohio, all states that went to Trump.
As with polling, some of the disparity in the algorithm-driven forecasts can be attributed to methodological differences.
Expert.ai leverages a knowledge graph that identifies named entities — including people, companies, and places — and attempts to model the relationships between them. The company says its system, which attaches 84 emotional labels to hundreds of thousands of posts from Twitter and other networks, semi-automatically weeds out botlike social accounts. Expert.ai’s algorithm ranks the labels on a scale from 1 to 100 (reflecting their intensity) and multiplies this by the number of occurrences per candidate. At the same time, it classifies emotions as either “positive” or “negative” and uses this to create an index that can compare the two candidates.
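As a rough illustration of that kind of scoring, the sketch below computes a per-candidate index from emotion-labeled posts. The label names, intensity weights, and example posts are placeholders rather than Expert.ai's actual taxonomy or data.

```python
from collections import defaultdict

# Illustrative intensity weights (1-100) for a handful of emotion labels;
# Expert.ai's real system uses 84 labels and its own weighting.
LABEL_INTENSITY = {
    "joy": 70, "trust": 55, "hope": 60,        # treated as positive here
    "anger": 75, "fear": 65, "disgust": 80,    # treated as negative here
}
POSITIVE = {"joy", "trust", "hope"}

def sentiment_index(labeled_posts):
    """labeled_posts: iterable of (candidate, emotion_label) pairs,
    one pair per label occurrence detected in a post."""
    counts = defaultdict(int)
    for candidate, label in labeled_posts:
        counts[(candidate, label)] += 1

    index = defaultdict(float)
    for (candidate, label), n in counts.items():
        weight = LABEL_INTENSITY.get(label, 0)
        sign = 1 if label in POSITIVE else -1
        # intensity x number of occurrences, signed by polarity
        index[candidate] += sign * weight * n
    return dict(index)

# Made-up example inputs: Biden posts skew positive, Trump posts are mixed.
posts = [("Biden", "hope"), ("Biden", "joy"), ("Trump", "anger"), ("Trump", "trust")]
print(sentiment_index(posts))  # {'Biden': 130.0, 'Trump': -20.0}
```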
By comparison, KCore Analytics, which claims to have used over 1 billion mined tweets to guide its predictions, taps an end-to-end framework to find influencers and hashtags on networks like Twitter. Data is selected according to both content and frequency, ostensibly in real time and with bots excluded, and then analyzed for opinion classification by an AI model called AWS-LSTM, with a claimed accuracy of up to 89.5%.
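The outline below sketches what such a pipeline could look like in broad strokes: filter out bot-like accounts, then pass the remaining tweets to a small LSTM opinion classifier. The filtering heuristics and model here are assumed stand-ins, not KCore Analytics' actual system.

```python
import torch
import torch.nn as nn

def looks_like_bot(account):
    # Crude placeholder heuristics; real bot detection is far more involved.
    return account["tweets_per_day"] > 200 or account["followers"] < 5

class OpinionLSTM(nn.Module):
    """Tiny binary opinion classifier (pro-candidate vs. not)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)            # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))      # logits: (batch, 2)

# Filter, then classify (token ids would come from a real tokenizer).
accounts = [{"tweets_per_day": 12, "followers": 340, "token_ids": [5, 17, 42, 0]}]
kept = [a for a in accounts if not looks_like_bot(a)]

model = OpinionLSTM(vocab_size=10_000)
batch = torch.tensor([a["token_ids"] for a in kept])
probs = torch.softmax(model(batch), dim=-1)   # per-tweet opinion probabilities
```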
As for Polly, it gathers a randomized, controlled sample of American voters identified by their posts and conversations on social media. Prior to November 3, this total stood at 288,659 people.
One challenge in predicting election results with AI is that the algorithms must learn separate state-level models for the electoral college that remain consistent with their national-level predictions. Another is that they need to be fine-tuned to surface issues important to specific minority groups and regions; the smaller the group, the harder those signals are to detect.
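To see why the first of those challenges matters, consider that a national vote-share lead only decides the race once it is translated into state-by-state win probabilities and rolled up through the electoral college. The sketch below does that roll-up with a simple Monte Carlo simulation; the state probabilities and the "safe" electoral vote base are hypothetical, not any firm's forecast.

```python
import random

# Hypothetical per-state win probabilities for one candidate, with 2020 EV counts.
states = {
    "FL": (0.55, 29),
    "PA": (0.60, 20),
    "OH": (0.45, 18),
    "WI": (0.62, 10),
}
BASE_EV = 200  # electoral votes assumed safe for this candidate (illustrative)

def simulate_electoral_college(states, base_ev, n_sims=10_000, to_win=270):
    wins = 0
    for _ in range(n_sims):
        ev = base_ev + sum(votes for p, votes in states.values() if random.random() < p)
        wins += ev >= to_win
    return wins / n_sims

print(f"P(electoral college win) ≈ {simulate_electoral_college(states, BASE_EV):.2f}")
```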
According to Advanced Symbolics, Polly spectacularly failed in this respect. The model predicted that Biden would carry Florida with 52.6% of the state’s vote, a miss the company attributes to the system’s failure to sample Cuban Americans, who usually vote for Republican candidates, as a separate group. Instead, Polly lumped them in as “Hispanic,” along with Venezuelan Americans and Mexican Americans.
“We need to include more ethnic and regional ‘factors’ for the next election,” the Polly team conceded in a blog post this week. “Amplifying errors make them easier to uncover — finding where Polly went astray, issue by issue, state by state.”
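The fix the team describes amounts to stratification: instead of a single estimate for a lumped "Hispanic" category, each subgroup is weighted by its own share of the electorate. The comparison below illustrates the difference; the subgroup shares and support rates are hypothetical, not real polling or census figures.

```python
# Hypothetical support rates for one candidate and shares of a state's electorate.
subgroups = {
    "Cuban American":      {"share": 0.06, "support": 0.40},
    "Venezuelan American": {"share": 0.01, "support": 0.45},
    "Mexican American":    {"share": 0.02, "support": 0.70},
}

# Lumped estimate: treat everyone as one "Hispanic" bloc with an averaged rate.
lumped_support = sum(g["support"] for g in subgroups.values()) / len(subgroups)

# Stratified estimate: weight each subgroup's support by its electorate share.
total_share = sum(g["share"] for g in subgroups.values())
stratified_support = sum(g["share"] * g["support"] for g in subgroups.values()) / total_share

print(f"lumped: {lumped_support:.2f}, stratified: {stratified_support:.2f}")  # 0.52 vs 0.47
```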
Rural areas of the U.S. were also more difficult for the models to account for. That’s because a lower percentage of likely voters in these regions use Twitter, leading the models to underestimate support for, say, Trump in those regions. Moreover, fewer potential Trump voters are on Twitter overall, as the social network tends to lean liberal. This means tweets from Trump supporters are weighted more heavily in social media-based election forecasting models, but sometimes not heavily enough, as was the case with Polly.
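One common way to compensate is post-stratification-style reweighting, in which each account's signal is scaled by the ratio of its group's share of likely voters to its share of the Twitter sample. The sketch below shows the idea with made-up shares; it is not how Polly or KCore actually weight their data.

```python
# Hypothetical shares: what fraction of likely voters vs. of the Twitter sample
# each group represents. Groups underrepresented on Twitter get weights > 1.
groups = {
    "rural":    {"voter_share": 0.20, "sample_share": 0.08},
    "suburban": {"voter_share": 0.50, "sample_share": 0.47},
    "urban":    {"voter_share": 0.30, "sample_share": 0.45},
}

weights = {g: v["voter_share"] / v["sample_share"] for g, v in groups.items()}

def weighted_support(tweets, weights):
    """tweets: list of (group, supports_candidate) pairs, with support in {0, 1}."""
    num = sum(weights[g] * s for g, s in tweets)
    den = sum(weights[g] for g, _ in tweets)
    return num / den

tweets = [("rural", 1), ("urban", 0), ("urban", 0), ("suburban", 1)]
print(f"weighted support: {weighted_support(tweets, weights):.2f}")
```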
Trump received more than 68.6 million votes in this year’s election, compared with 62.8 million in 2016. And in counties like Miami-Dade, which was expected to “go blue,” Republicans had turned out to vote at a somewhat higher rate than Democrats as of October 30 (63% of the county’s registered Republicans, compared with 56% of its registered Democrats).
Firms like KCore Analytics claim their AI models are superior to traditional polling because they can be scaled up to massive groups of potential voters and adjusted to account for sampling biases (like underrepresented minorities) and other limitations. They correctly predicted the U.K.’s 2016 vote to leave the European Union, as well as about 80% of the winners in Taiwan’s parliamentary elections and close regional races in India and Pakistan.
But they aren’t infallible. And as Fortune notes, none of these models takes into account the way legal challenges, faithless electors (members of the electoral college who don’t vote for the candidate they’d pledged to), or other confounders might affect the outcome of a race. And with Polly as a case study, these approaches — like traditional polls — appear to have underestimated voter enthusiasm for Trump in 2020, particularly among Black and Latinx voters and members of the LGBTQ community.
Andrew Gelman, a professor of statistics and political science at Columbia University, makes the case that forecasting models tuned to fundamental variables in a given election year are likely to be closer to the mark than guesses derived from polling averages. “Political scientists have developed models that do a good job of forecasting the national vote based on so-called ‘fundamentals’: key variables such as economic growth, presidential approval, and incumbency,” he wrote in an op-ed for Wired. “If we’d taken one of these models and adjusted it based on the parties’ vote shares from 2016 (as opposed to using recent polling data), we would have projected a narrow Biden win.”
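The "fundamentals" approach Gelman describes boils down to a small regression: the incumbent party's vote share modeled as a function of variables like economic growth, approval, and incumbency, fit on past elections. A minimal sketch of that kind of model is below, assuming a hypothetical elections.csv of historical values; the file and column names are placeholders.

```python
import numpy as np
import pandas as pd

# Hypothetical columns: year, gdp_growth, net_approval, incumbent_running, incumbent_vote
df = pd.read_csv("elections.csv")

# Ordinary least squares fit of incumbent two-party vote share on the fundamentals.
X = np.column_stack([
    np.ones(len(df)),
    df["gdp_growth"],
    df["net_approval"],
    df["incumbent_running"],
])
y = df["incumbent_vote"].to_numpy()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast 2020 from that year's fundamentals (placeholder values; fill in real ones).
x_2020 = np.array([1.0, 0.0, 0.0, 1.0])
print("predicted incumbent two-party vote share:", x_2020 @ coef)
```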
Author: Kyle Wiggers
Source: VentureBeat