By Sean Jackson
On Wednesday Facebook issued its annual Community Standards Enforcement report, which highlights removals of content the company deems harmful. In the report the company said it removed close to 3.2 billion fake accounts between April and September, more than double the amount it removed during the same period last year. The increase in removals comes amid heightened scrutiny of the platform by the U.S. government.
Facebook also reported it removed 11.4 million instances of hate speech, compared to the 5.4 million it removed in the same six-month period last year. Facebook classifies hate speech as “violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics, or slurs. These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disability or disease.”
Facebook included enforcement actions on Instagram in the report for the first time, and the company said it has made strides toward detecting child nudity and sexual exploitation on the platform. Facebook states that it removed 1.2 million posts fitting those parameters in the six-month timeframe.
The company also added new categories of harmful content, removing more than 1.6 million posts related to suicide and self-injury on Instagram and 4.5 million on Facebook.
Every year Facebook updates its enforcement and community standards, which dictate what content is acceptable and what will result in users getting banned. Last year, Facebook said it was making progress taking down fake accounts and removing hate speech from the platform, along with bullying, child nudity, and sexual exploitation. This new level of transparency regarding enforcement decisions follows the 2016 U.S. presidential election, when the company faced criticism over its failure to prevent election interference, including misinformation.
As Facebook continues to adapt its community standards, lawmakers keep pushing to ensure political advertisements do not spread misinformation. Facebook has recently faced backlash for refusing to fact-check or remove political advertisements. Other platforms such as Twitter have banned political advertisements outright, but Facebook CEO Mark Zuckerberg defended the company’s decision to allow political ads as a matter of free speech.
This has been met with criticism from politicians such as Senator Elizabeth Warren, who posted intentionally false ads to highlight the potential spread of misinformation. Senator Mark Warner has also come out in favor of the Honest Ads Act, a bill introduced to require platforms to correct the record if they display false advertisements.
In an open letter to Facebook, Warner argued the company should be held to the same standards as TV broadcasters, writing, “In making strides not to continue contributing to the coarsening of our political debate, and the undermining of our public institutions, at minimum, Facebook should at least adhere to the same norms of other traditional media companies when it comes to political advertising. In defending its refusal to remove false political advertisements by candidates, Facebook has pointed to provisions of the Communications Act that prohibit broadcast licensees from rejecting or modifying candidate ads, regardless of their accuracy.”
Warner’s letter also outlines how Facebook fact-checks commercial advertisements but not political ones, saying, “While Facebook’s advertising rules have long prohibited ads with ‘deceptive, false, or misleading content’ - relying on its ‘third-party fact-checkers’ to identify violations of these policies - Facebook subsequently revised these rules to exempt political ads run by politicians through a policy that withholds such ads from third-party fact-checkers review.”