In what they say was an effort to combat "fake news" about COVID-19, also known as the coronavirus, Facebook, Twitter, YouTube, and Google have admitted that the artificial intelligence systems they deployed — while their human content moderators transitioned to working remotely during the pandemic — erroneously removed large amounts of content flagged as "policy violations" or "against community guidelines."
The Verge reports: The issue was due to a "bug in an anti-spam system," according to Guy Rosen, Facebook's vice president of integrity. Rosen said the company began working on a fix as soon as it discovered the issue.
We're on this – this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We're in the process of fixing and bringing all these posts back. More soon.
— Guy Rosen (@guyro) March 17, 2020
Meanwhile, Reuters reported: In a blog post, Google said that to reduce the need for people to come into offices, YouTube and other business divisions are temporarily relying more on artificial intelligence and automated tools to find problematic content. It added that such software is not always as accurate as human reviewers, which leads to errors, and that "turnaround times for appeals against these decisions may be slower."
The other companies issued similar statements.