
Researchers develop AI that distinguishes between satire and fake news

How can you distinguish between satire and fake news? It usually comes down to semantic and linguistic differences, but the nuances can be tough to spot. That’s why researchers at George Washington University, Amazon AWS AI, and startup AdVerifai investigated a machine learning approach to classifying misleading speech. They say the AI model they developed, which outperformed the baseline, lays the groundwork for the study of additional linguistic features.

Their work follows that of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which earlier this year developed an AI model that could determine whether a source is accurate or politically biased. In subsequent work, MIT CSAIL used one of the world’s largest fact-checking data sets to build automated systems that could detect false statements.

The paper’s coauthors note that efforts to reduce the spread of misinformation have occasionally resulted in the flagging of legitimate satire, particularly on social media. Complicating matters, some fake news purveyors have begun masquerading as satire sites. These developments threaten not only the business of legitimate publishers, which might struggle to monetize their satire, but also the experience of consumers, who could miss out on miscategorized content.

The researchers hypothesized that metrics of text coherence might be useful in capturing semantic relatedness between sentences of a story. To this end, they used a set of indices related to text statistics implemented by Coh-Metrix, a tool for producing linguistic and discourse representations. There were 108 in total, including (but not limited to) the number of words and sentences; referential cohesion, which refers to overlap in content words between sentences; various text readability formulas; and different types of connective words.
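
To give a flavor of these indices, here is a minimal sketch, not the authors’ Coh-Metrix pipeline, that approximates a few of them in plain Python: sentence counts, referential cohesion as content-word overlap between adjacent sentences, and the classic Flesch reading-ease formula. The helper names and the stopword list are illustrative:

```python
# Rough approximations of a few Coh-Metrix-style indices (illustrative only).
import re

STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in", "is", "was"}

def sentences(text):
    # Naive sentence splitter; Coh-Metrix uses far more robust parsing.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def content_words(sentence):
    words = re.findall(r"[a-z']+", sentence.lower())
    return {w for w in words if w not in STOPWORDS}

def referential_cohesion(text):
    """Mean content-word overlap between adjacent sentence pairs."""
    sents = [content_words(s) for s in sentences(text)]
    if len(sents) < 2:
        return 0.0
    overlaps = [len(a & b) / max(len(a | b), 1) for a, b in zip(sents, sents[1:])]
    return sum(overlaps) / len(overlaps)

def flesch_reading_ease(text):
    """Classic Flesch formula with a rough vowel-group syllable count."""
    words = re.findall(r"[a-zA-Z']+", text)
    n_sents = max(len(sentences(text)), 1)
    n_words = max(len(words), 1)
    n_syll = sum(max(len(re.findall(r"[aeiouy]+", w.lower())), 1) for w in words)
    return 206.835 - 1.015 * (n_words / n_sents) - 84.6 * (n_syll / n_words)

story = "The senator denied the report. The report cited the senator's aides."
print(referential_cohesion(story), flesch_reading_ease(story))
```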

The researchers used a statistical technique called principal component analysis to convert the potentially correlated metrics into uncorrelated variables (or principal components), which they fed into two logistic regression models (functions that model the probability of certain classes) with the fake and satire labels as the dependent variables. They then evaluated the models’ performance on a corpus containing 283 fake news stories and 203 satirical stories that had been verified by hand.
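
In code, that pipeline might look like the following scikit-learn sketch, with synthetic placeholder data standing in for the paper’s 108 Coh-Metrix indices and hand-verified labels; the authors’ exact preprocessing and choice of components may differ:

```python
# Hedged sketch: standardize features, reduce with PCA, classify with
# logistic regression. X and y are synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(486, 108))    # 486 stories (283 fake + 203 satire), 108 indices
y = rng.integers(0, 2, size=486)   # 1 = fake, 0 = satire (synthetic labels)

model = make_pipeline(
    StandardScaler(),                 # PCA assumes comparable feature scales
    PCA(n_components=20),             # uncorrelated principal components
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print("mean F1:", scores.mean())
```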

The team reports that a classifier trained on the “significant” indices outperformed the baseline F1 score, the harmonic mean of precision and recall, which balances false positives against false negatives. The top-performing model achieved a score of 0.78, where 1 is perfect, and the analysis revealed that satirical articles tended to be more sophisticated (and harder to read) than fake news articles.
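
For reference, F1 can be computed directly from true positives, false positives, and false negatives; the toy counts below are purely illustrative, not the paper’s confusion matrix:

```python
# F1 = harmonic mean of precision and recall (toy counts, not the paper's).
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1(tp=160, fp=40, fn=50))  # ~0.78, a score on par with the paper's best model
```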

In future work, the researchers plan to study linguistic cues such as absurdity, incongruity, and other humor-related features.

“Overall, our contributions, with the improved classification accuracy and toward the understanding of nuances between fake news and satire, carry great implications with regard to the delicate balance of fighting misinformation while protecting free speech,” they wrote.


Author: Kyle Wiggers
Source: VentureBeat
