Facebook Inc will ban false information about voting requirements and fact-check fake reports of violence or long lines at polling stations ahead of next month’s U.S. midterm elections.
The new policy was disclosed to the media by Facebook’s cybersecurity policy chief, Nathaniel Gleicher, and other company executives as the latest effort to reduce voter manipulation on the service.
The world’s largest online social network, with 1.5 billion daily users, has stopped short of banning all false or misleading posts.
That is a step Facebook has shied away from, as it would likely increase the company’s expenses and leave it open to charges of censorship.
The latest move addresses a sensitive area for the company, which has come under fire for its lax approach to fake news reports and disinformation campaigns.
Many believe the laxity affected the outcome of the 2016 presidential election, won by Donald Trump.
The ban on false information about voting methods, announced on Monday, comes six weeks after Senator Ron Wyden asked Chief Operating Officer Sheryl Sandberg how Facebook would counter posts aimed at suppressing votes.
Such posts include those telling certain users they could vote by text, a hoax that has been used to reduce turnout in the past.
Information on voting methods thus becomes one of the few areas in which falsehoods are prohibited on Facebook.
The policy is enforced by what the company calls “community standards” moderators, although application of those standards has been uneven.
The ban will not stop the vast majority of untruthful posts about candidates or other election issues.
“We don’t believe we should remove things from Facebook that are shared by authentic people if they don’t violate those community standards, even if they are false,” said Tessa Lyons, a product manager for Facebook’s News Feed, the feature that shows users what friends are sharing.
Links to discouraging reports about polling places that may be inflated or misleading will be referred to fact-checkers under the new policy, Facebook said.
If then marked as false, the reports will not be removed but will be seen by fewer of the poster’s friends.
Such partial measures leave Facebook more open to manipulation by users seeking to affect the election, critics say.
Russia, and potentially other foreign parties, are already making “pervasive” efforts to interfere in upcoming U.S. elections, the leader of Trump’s national security team said in early August.
Just days before that, Facebook said it uncovered a coordinated political influence campaign to mislead its users and sow dissension among voters.
It removed 32 pages and accounts from Facebook and Instagram.
Members of Congress briefed by Facebook said the methodology suggested Russian involvement.
Trump has disputed claims that Russia has attempted to interfere in U.S. elections. Russian President Vladimir Putin has denied it.
Facebook instituted a global ban on false information about when and where to vote in 2016, but Monday’s move goes further, including posts about exaggerated identification requirements.
Facebook executives are also debating whether to follow Twitter Inc’s recent policy change to ban posts linking to hacked material, Gleicher told Reuters in an interview.
On the issue of fake news, Facebook has held off on a total ban, instead limiting the spread of articles marked as false by vetted fact-checkers.
However, that approach can leave fact-checkers overwhelmed and able to tackle only the most viral hoaxes.
“Without a clear and transparent policy to curb the deliberate spread of false information that applies across platforms, we will continue to be vulnerable,” said Graham Brookie, head of the Atlantic Council’s Digital Forensic Research Lab.
(Reuters/NAN)