Facebook fake account takedowns nearly doubled in Q1 2019 vs. Q4 2018
Mark Zuckerberg, chief executive officer and founder of Facebook Inc., speaks during a joint hearing of the Senate Judiciary and Commerce Committees in Washington, D.C., U.S., on Tuesday, April 10, 2018.
Al Drago | Bloomberg | Getty Images
Facebook has stepped up its fight against fake accounts.
On Thursday, in its third periodic Community Standards Enforcement Report, the company said it took action on nearly twice as many suspected fake accounts in the first quarter of 2019 as it did in the fourth quarter of 2018.
The uptick was due to “automated attacks by bad actors who attempt to create large volumes of accounts at one time,” the company said.
On a call discussing the report, Facebook CEO Mark Zuckerberg responded to calls to break up his company on antitrust grounds, saying a breakup would hurt Facebook’s efforts to combat fake news and other content that violates its policies.
“The amount of our budget that goes toward our safety systems is greater than Twitter’s whole revenue this year,” Zuckerberg said. “We’re able to do things that I think are just not possible for other folks to do.”
Specifically, Facebook disabled 2.19 billion accounts in the first quarter of 2019 compared to 1.2 billion accounts in the fourth quarter of 2018.
That’s a huge number of accounts, considering Facebook reported 2.38 billion monthly active users (MAUs) in the first quarter of 2019. A Facebook spokesperson said the disabled accounts are not included in its MAU figure, since obvious fakes tend to be removed fairly quickly. Still, Facebook estimated that about 5% of the accounts counted in monthly active users are fake.
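To put those figures in rough perspective, here is a small, purely illustrative Python calculation using only the numbers reported above; the variable names are invented for this sketch, and the 5% figure is Facebook’s own estimate rather than a measured count.

# Illustrative back-of-the-envelope math using the figures reported above.
# Variable names are invented for this sketch; all inputs come from the article.
disabled_q1_2019 = 2.19e9      # fake accounts Facebook disabled in Q1 2019
disabled_q4_2018 = 1.2e9       # fake accounts disabled in Q4 2018
mau_q1_2019 = 2.38e9           # monthly active users reported for Q1 2019
estimated_fake_share = 0.05    # Facebook's estimate of fakes within MAU

growth = disabled_q1_2019 / disabled_q4_2018
fakes_still_in_mau = mau_q1_2019 * estimated_fake_share

print(f"Quarter-over-quarter increase in takedowns: {growth:.2f}x")                          # ~1.83x
print(f"Estimated fake accounts still counted in MAU: {fakes_still_in_mau / 1e6:.0f} million")  # ~119 million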
The latest report comes after Facebook in March announced a pivot to privacy that will eventually shift more of users’ communications to private, encrypted channels via the chat functions of Instagram, Messenger and WhatsApp. Zuckerberg said Thursday that this pivot will make it harder for Facebook to find and remove the type of content covered in the report.
“We’ll be fighting that battle without one of the very important tools, which is of course being able to look at the content itself,” Zuckerberg said. “It’s not clear on a lot of these fronts that we’re going to be able to do as good of a job on identifying harmful content as we can today.”
Facebook launched the first edition of the report in May 2018 on the heels of the Cambridge Analytica scandal that rocked users’ and investors’ confidence in the company’s ability to enforce its policies. In an effort to promote transparency, Facebook uses the reports to share information about how it responds to false, violent and graphic information on its platform.
Facebook also shared data about illicit sales of drugs and firearms on its platform for the first time in Thursday’s report.
Facebook said it proactively detected and took action on 83% of 900,000 pieces of drug sale content in the first quarter of 2019. That was up from 77% the previous quarter. (The remaining content in the total count was flagged by users.)
Similarly, Facebook said it proactively detected and took action on 69% of the 670,000 pieces of firearm sale content during the first quarter, compared to 65% the previous quarter.
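For a rough sense of the absolute volumes behind those percentages, the short Python sketch below simply multiplies the reported totals by the proactive-detection rates; it is illustrative only, and the variable names are invented.

# Illustrative conversion of the reported Q1 2019 percentages into approximate counts.
drug_posts_total = 900_000       # pieces of drug sale content acted on
drug_proactive_rate = 0.83       # share Facebook found before user reports
firearm_posts_total = 670_000    # pieces of firearm sale content acted on
firearm_proactive_rate = 0.69

print(f"Drug sale content found proactively: ~{drug_posts_total * drug_proactive_rate:,.0f}")          # ~747,000
print(f"Firearm sale content found proactively: ~{firearm_posts_total * firearm_proactive_rate:,.0f}")  # ~462,300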
The company also began including information about appeals and corrections to content removal. Facebook and other social media companies have been criticized by lawmakers, particularly on the right, for being biased against political conservatives.
In the latest edition of the report, Facebook disclosed for the first time the number of pieces of content appealed and restored across various policy areas, including spam, hate speech, nudity and terrorism. Of the 1.1 million pieces of content appealed under its hate speech policy in the first quarter of 2019, for example, Facebook said 152,000 were restored.
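As an illustrative aside, dividing those two appeal figures gives the implied restoration rate for hate speech content; the ratio is computed here for context and is not a figure Facebook published directly.

# Implied restoration rate for appealed hate speech removals, Q1 2019 (illustrative).
appealed_hate_speech = 1_100_000   # pieces of content appealed
restored_hate_speech = 152_000     # pieces restored after appeal

print(f"Share of appealed hate speech content restored: {restored_hate_speech / appealed_hate_speech:.1%}")  # ~13.8%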