In the latest example of the need for further human moderation, YouTube’s automated system took down several videos after mistaking robot fighting matches for animal cruelty. Those affected, including some BattleBots contestants, received a message stating, “Content that displays the deliberate infliction of animal suffering or the forcing of animals to fight is not allowed on YouTube.” Was this a mere glitch, or are the robots gaining empathy for their brethren?
“With the massive volume of videos on our site, sometimes we make the wrong call,” a YouTube spokesperson told Engadget. “When it’s brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it. We also offer uploaders the ability to appeal removals and we will re-review the content.” The spokesperson clarified that YouTube does not have any policies that prohibit footage of robots fighting and that the affected videos were quickly reinstated.
The issue encapsulates an ongoing problem with Google’s streaming platform: Innocuous videos are frequently removed while harmful videos go unnoticed — or sometimes ignored — by YouTube’s moderation system. With about 300 hours of footage uploaded to YouTube every minute, there’s no way humans could moderate all that content, but some sort of middle ground seems warranted. Human moderators could review videos that have been flagged for more serious problems such as animal cruelty, while leaving copyright issues up to the robots and the appeals system. It will likely take an incident far more egregious than mistakenly flagging some robot battles for YouTube to take any steps in that direction, though.
Author: Marc DeAngelis
Tags: AI, automation, av, BattleBots, entertainment, google, internet, moderation, robots, YouTube