Computer and Internet News Channel
Photo: Charles Platiau / Reuters
In the first three months of 2018, Facebook took action against almost 1.5 billion accounts and posts that violated the company's policies, according to the company's report.
According to the report, Facebook took action against 837 million pieces of spam and 583 million fake accounts. It also removed 21 million posts containing pornography, 2.5 million containing incitement to hatred, and 1.9 million containing terrorist propaganda.
“This is the beginning of the journey, not the end, and we are trying to be as open as possible,” said Richard Allan, Facebook’s vice president for public policy in Europe, the Middle East and Africa.
Why Facebook is introducing new restrictions for applications
Facebook also said that specially trained artificial intelligence has increased the amount of policy-violating content it finds. The AI automatically blocked such content regardless of whether users had flagged it. It worked particularly well at detecting spam and fake accounts: the company said it managed to find 98.5% of fake accounts and almost 100% of spam.
User reports worked well for detecting pornography. The most difficult area for the company was moderating content aimed at inciting hatred: it managed to find only about 38% of such posts. According to Alex Schulz, vice president of data analytics, the amount of moderated content containing scenes of violence almost tripled over the quarter. Mr. Schulz attributes this to the situation in Syria.
Facebook has recently taken steps to increase transparency. In April, the company published the documents revealing what is and is not allowed to be published on the site, a year after they were leaked online. Facebook also announced that advertisers engaged in political activity must undergo an authentication process and confirm their connection to the advertisements they place.
The Facebook report appeared a week after the publication of an open document, the Santa Clara Principles, in which the authors set out principles by which large platforms should moderate content. The document states that social networks should publicly report how much information they remove, explain why it was removed, and offer an opportunity to appeal such decisions.
How Facebook will strengthen the protection of user data
“This is an excellent first step,” said Gillian York, director for international freedom of expression at the non-profit human rights organization Electronic Frontier Foundation, “but we have no idea how many errors occur or how many appeals lead to the restoration of content. Feedback to users needs to improve, with detailed information about the reasons for blocking.”
Facebook is not the only platform taking steps toward transparency. Last month, YouTube said it had removed 8.3 million videos that violated its rules between October and December of last year. According to Ms. York, this is “a direct response to the pressure that large platforms have been experiencing from various stakeholders, including civil society groups and scientists, over the past few years.”