Facebook published a new report on Monday outlining how it uses a mix of artificial intelligence and human fact-checkers and moderators to enforce its Community Standards. The report, the Community Standards Enforcement Report, typically covers results and observations from the previous three to six months. This time around, it focuses heavily on AI and on how Facebook's enforcement now relies more on algorithms than on humans, despite the intense toll the job takes on human moderators.
Facebook is depending on the technology even more heavily now to help moderate its platform during the COVID-19 pandemic, which prevents the company from using its usual third-party moderation firms because those firms' employees are not permitted to view sensitive Facebook data from home computers. On Tuesday, it was reported that Facebook had agreed to a $52 million class action settlement with current and former moderators to compensate them for mental health issues, particularly post-traumatic stress disorder, developed on the job. The Verge has previously reported on the working conditions at the firms Facebook employs to moderate its site.
“This report only includes data through March 2020, so it does not reflect the full impact of the changes we’ve made during the pandemic,” writes Guy Rosen, the company’s vice president of integrity, in a blog post. Given the state of the world, the report offers additional insight into how the company is actively using its AI systems to counter coronavirus-related misinformation and other forms of platform abuse, such as price gouging on Facebook Marketplace.
Facebook Placed Warning Labels on 50 Million Coronavirus-Related Posts Last Month
“We placed warning labels on around 50 million posts related to COVID-19 on Facebook during April, based on about 7,500 articles by our independent fact-checking partners,” the company said in a separate blog post about its ongoing COVID-19 efforts, written by a group of its research scientists and software engineers.
Facebook believes the warnings work: 95 percent of the time, people who see a warning that a piece of content contains misinformation choose not to click through to view it. But applying those labels across such a vast network is proving to be a struggle. For example, Facebook is finding that a substantial amount of misinformation, as well as hate speech, now appears in photographs and videos, not just in text posts or links.
This, the company says, is a harder challenge for AI to tackle. Not only do AI models have a tougher time decoding a viral image or video, thanks to factors such as wordplay and language differences, but the systems must also be able to identify duplicates and even slightly modified copies of the content as it spreads across the platform. Facebook says this is exactly what it does with what it calls SimSearchNet, a multi-year effort spanning several teams within the company to train an AI model that recognizes copies of an original image, including near-duplicates in which perhaps only a single word of overlaid text has been changed.
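Facebook has not published SimSearchNet's internals, and the system itself is a trained neural model rather than a hash-based one. As a purely illustrative sketch of the general idea behind near-duplicate image matching, the snippet below uses perceptual hashing via the open-source imagehash library; the threshold value, function names, and file paths are assumptions chosen for the example.

```python
# Illustrative sketch of near-duplicate image matching with perceptual
# hashing (the open-source `imagehash` library). This is NOT SimSearchNet,
# which Facebook describes as a trained model; it only shows the general
# idea: reduce each image to a compact fingerprint, then flag uploads whose
# fingerprint lies within a small distance of a known fact-checked image.

from PIL import Image
import imagehash

# Hamming-distance threshold below which two hashes are treated as
# near-duplicates (value chosen arbitrarily for this example).
NEAR_DUPLICATE_THRESHOLD = 8

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that tolerates resizing, recompression,
    and small edits such as changed overlay text reasonably well."""
    return imagehash.phash(Image.open(path))

def is_near_duplicate(candidate_path: str,
                      known_hashes: list[imagehash.ImageHash]) -> bool:
    """Return True if the candidate image is close to any known
    fact-checked image; subtracting two ImageHash objects yields
    their Hamming distance."""
    candidate = fingerprint(candidate_path)
    return any(candidate - known <= NEAR_DUPLICATE_THRESHOLD
               for known in known_hashes)

# Hypothetical usage:
# known = [fingerprint("fact_checked_original.jpg")]
# print(is_near_duplicate("slightly_edited_copy.jpg", known))
```

In a real system of this kind, the threshold trades off false positives against missed copies, which is part of why Facebook pairs automated matching with human fact-checkers.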
Conclusion:
This is a significant problem on Facebook, where many politically motivated groups and organizations, or those that simply feed on ideological outrage, take photos, videos, and other images and modify them to change their context.