
Wednesday, November 7, 2018

Facebook got through the midterms mostly unscathed, although it's still got lots of work ahead

Facebook's efforts to curtail the spread of misinformation on its services were put through their biggest test yet on Tuesday by the U.S. midterm elections.

For the most part, the company came through unscathed.

It's a major achievement for Facebook, which has spent much of the past two years facing questions over its efforts to curtail the spread of fake news, propaganda, hate speech and spam on its services. Academics and researchers criticized Facebook for not doing enough to curtail harmful content before the 2016 U.S. elections.

Most visibly, the company set up a "war room" to fight the spread of misinformation. Although Fortune and others criticized it as a publicity stunt, the war room is staffed by 20 people drawn from divisions across Facebook, including WhatsApp and Instagram, dedicated to identifying and removing fake accounts and false content trying to influence the U.S. midterm elections.

Behind the scenes, Facebook has hired thousands of staffers and contractors since 2016 to review and remove harmful content. The company now has 7,500 content moderators — up from 4,500 in May 2017 — who are part of a team of 20,000 people working on safety and security. Those moderators work in conjunction with AI systems that the company has built "that can flag content that might be problematic," Zuckerberg told analysts during the company's earnings call last week.

As Election Day approached, Facebook stepped up its work. In the two weeks leading up to Tuesday, the company removed hundreds of accounts and pages engaged in "coordinated inauthentic behavior." On Monday night, Facebook said it had removed more than 100 accounts linked to the Internet Research Agency, a Russian organization that special counsel Robert Mueller accused earlier this year of being "engaged in operations to interfere with the elections and political processes."

On Election Day, the company was actively removing content that offered inaccurate voting information, including posts and memes directing Americans to vote on the wrong day and content spreading a false claim that ICE agents were stationed at polling locations.

"Our teams worked around the clock during the midterms to reduce the spread of misinformation, thwart efforts to discourage people from voting, and deal with issues of hate on our services," a spokeswoman for Facebook told CNBC.

"There's still more work to do, as many of these bad actors are well-funded and change their tactics as we improve. It's why we continue to invest heavily in security, and are working more closely with governments, as well as other tech companies, to help prevent the spread of misinformation and outside interference in elections globally."

More generally, Facebook has improved its transparency with the public over how it handles problematic material that people post on the site. In the first quarter of 2018, Facebook said it removed 2.5 million pieces of hate speech content, 836 million pieces of spam and 583 million fake accounts, according to a new transparency report that the company shared in May.

It also introduced a Facebook ad archive and report that allow the public to monitor and review which entities are paying for political ads on the company's social networks.

These tools, for example, made it possible to determine how many people viewed an ad by President Trump's reelection campaign, which CNN and others called "racist," before it was rejected by Facebook for violating the company's policies against sensational content.

The company's active removal of misinformation and its disclosure of those actions have allowed the public to stay better informed about the types of harmful content they may come across on Facebook and to stay abreast of what the company is doing to combat it.

Although it appears that Facebook managed misinformation more effectively than in 2016, we won't know for sure until researchers, academics and cyber security experts can gauge the full impact.

That could take months. Recall that Facebook didn't start talking about Russian attempts to influence the 2016 election until September 2017.

Meanwhile, the company faces a constantly shifting set of challenges.

The company this week released a report detailing Facebook's impact on the spread of hate speech in Myanmar. The report recommended that Facebook step up enforcement of its content policies and improve its transparency by providing the public with more data.

In the U.S., a series of Vice investigations found potential vulnerabilities in Facebook's handling of political ads.

In one instance, Vice received Facebook's approval to run political ads listing any of the 100 U.S. senators as having "paid for" them, although none of the ads were actually published. Another article by Vice and ProPublica found political ads on Facebook paid for by a group not registered with the Federal Election Commission. It is unclear who is behind the group, and Facebook has not removed the ads, saying it had requested additional information from the advertiser and determined the ads were not in violation of its standards.

Beyond the main Facebook app, the spread of misinformation appears to be on the rise across the company's other services, Facebook's former Chief Security Officer Alex Stamos told CNBC.

Notably, when Facebook blocked accounts linked to Russia's Internet Research Agency, it took down 85 Instagram accounts compared to 30 Facebook accounts.

"This last cycle also demonstrated that Instagram is likely to be the largest target of Russian groups going forward," Stamos said. "Facebook needs to really invest in porting some of the back-end detection systems that catch fake accounts to this very different platform."

Reports over the past year indicate that small groups on Messenger and WhatsApp are becoming hotbeds of misinformation. In India, rumors spread on WhatsApp reportedly resulted in a group of men being lynched. The app was also reportedly used to spread misinformation ahead of the Brazilian election last month.

"The greater privacy guarantees of these platforms increases the technical challenges with stopping the spread of misinformation," Stamos said.

Additionally, Reuters this week reported that Russian agents are changing their tactics to spread divisive content across social media while staying ahead of Facebook's efforts. This includes moving away from fake news and focusing on amplifying content produced by Americans on the far right and far left.

The next tests for the company will come soon enough. On the horizon are the Indian general election next year, and the 2020 presidential election primaries.

"I anticipate that it will be about the end of next year when we feel like we're as dialed in as we would generally all like us to be," Zuckerberg said last week.

"And even at that point, we're not going to be perfect because more than 2 billion people are communicating on the service. There are going to be things that our systems miss, no matter how well-tuned we are."


