How to Fix Facebook, According to Facebook Employees


Facebook denied the allegation. “At the heart of these stories is a premise which is false,” spokesperson Kevin McAlister said in an email. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie.”

On the other hand, the company has recently owned up to some of the criticism raised in the 2019 documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” it said in a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAlister pointed to Live Audio Rooms, introduced this year, as an example of a product rolled out under this process.

If that’s true, it’s a good thing. Similar claims Facebook has made over the years, however, have not always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI Can’t Solve Everything

On Facebook and Instagram, the value of a given post, group, or page is determined primarily by how likely you are to look at it, Like it, comment on it, or share it. The higher that probability, the more the platform will recommend the content to you and feature it in your feed.
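To make that mechanism concrete, here is a minimal, hypothetical sketch of engagement-based ranking: each candidate post is scored by predicted engagement probabilities, and the feed is sorted by that score. The `Candidate` class, the field names, and the weights are all illustrative assumptions, not anything from Facebook’s actual systems.

```python
# Hypothetical sketch of engagement-probability ranking (not Facebook's real code).
# Each candidate post is scored by the predicted probability that the viewer will
# engage with it (look, Like, comment, share); higher scores are shown first.

from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_look: float      # predicted probability the user dwells on the post
    p_like: float      # predicted probability of a Like
    p_comment: float   # predicted probability of a comment
    p_share: float     # predicted probability of a share

def engagement_score(c: Candidate) -> float:
    """Combine predicted engagement probabilities into one ranking value.
    The weights are illustrative placeholders, not real platform values."""
    weights = {"look": 0.1, "like": 1.0, "comment": 3.0, "share": 5.0}
    return (weights["look"] * c.p_look
            + weights["like"] * c.p_like
            + weights["comment"] * c.p_comment
            + weights["share"] * c.p_share)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidate posts so the highest predicted engagement comes first."""
    return sorted(candidates, key=engagement_score, reverse=True)

# Example: a post with higher predicted comment/share probability outranks a calmer one.
feed = rank_feed([
    Candidate("calm_news", p_look=0.6, p_like=0.2, p_comment=0.05, p_share=0.01),
    Candidate("outrage_bait", p_look=0.7, p_like=0.25, p_comment=0.2, p_share=0.15),
])
print([c.post_id for c in feed])  # ['outrage_bait', 'calm_news']
```

The point of the toy example is only that whatever best predicts engagement rises to the top, regardless of what drives that engagement.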

But what captures people’s attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity Pages” get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account, named Carol, and had it follow Donald Trump and a few conservative publishers. Within days the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to violating the platform’s rules. But rather than reconsidering the wisdom of optimizing for engagement, Facebook’s answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; a February blog post by chief technology officer Mike Schroepfer claimed that, for the last three months of 2020, “97% of hate speech taken down from Facebook was detected by our automated systems before any human flagged it.”
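That enforcement approach can be pictured as a score-and-threshold pipeline: an automated classifier scores each post, and the score decides whether the post is removed, demoted, or routed to a human reviewer. The sketch below is a hypothetical illustration only; the toy classifier, labels, and thresholds are invented, and Facebook’s real systems and cut-offs are not public.

```python
# Hypothetical sketch of an automated-plus-human moderation pipeline
# (illustrative only; the classifier, labels, and thresholds are invented).

from enum import Enum

class Action(Enum):
    REMOVE = "remove"
    DEMOTE = "demote"
    HUMAN_REVIEW = "human_review"
    LEAVE_UP = "leave_up"

def classify_hate_speech(text: str) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    trigger_words = {"slur", "attack"}  # toy heuristic, not a real model
    hits = sum(word in text.lower() for word in trigger_words)
    return min(1.0, 0.4 * hits)

def moderate(text: str) -> Action:
    """Map a classifier score to an enforcement action via invented thresholds."""
    score = classify_hate_speech(text)
    if score >= 0.8:       # high confidence: remove automatically
        return Action.REMOVE
    if score >= 0.5:       # medium confidence: keep up but rank it lower
        return Action.DEMOTE
    if score >= 0.3:       # uncertain: send to a human reviewer
        return Action.HUMAN_REVIEW
    return Action.LEAVE_UP

print(moderate("a post containing a slur and an attack"))  # Action.REMOVE
print(moderate("ordinary vacation photos"))                # Action.LEAVE_UP
```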

The internal documents, however, paint a grimmer picture. A presentation from April 2020 noted that Facebook removals were reducing the overall prevalence of graphic violence by only about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by the Wall Street Journal, is even more pessimistic. In it, the company’s researchers estimate that “we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”


