How Facebook Hides How Bad It Is at Removing Hate Speech

In public, Facebook claims that it removes more than 90 percent of the hate speech on its platform, but in private internal communications the company says the number is a staggeringly low 3 to 5 percent. Facebook wants us to believe that almost all hate speech is removed, when in fact almost all of it stays on the platform.

This obscene hypocrisy was revealed in one of the many complaints, backed by thousands of pages of leaked documents, that Facebook whistleblower Frances Haugen and her legal team filed with the SEC earlier this month. While public attention to these leaks has focused on Instagram's impact on teen mental health (hardly the smoking gun it has been portrayed as) and the News Feed algorithm's role in amplifying misinformation (hardly a revelation), Facebook's utter failure to limit hate speech, and the simple deceptive trick it has relied on to hide this failure, deserve just as much attention. They reveal how much Facebook relies on AI for content moderation, how ineffective that AI is, and the need to force Facebook to come clean.

In testimony to the U.S. Senate in October 2020, Mark Zuckerberg pointed to the company's transparency reports, which he said showed that "we proactively identified, I think, about 94 percent of the hate speech we ended up taking down." In testimony to the House a few months later, Zuckerberg likewise answered questions about hate speech by citing a transparency report: "We also removed about 12 million pieces of content in Groups for violating our policies on hate speech, 87 percent of which we found proactively." In almost every quarterly transparency report, Facebook proclaims hate speech moderation percentages in the 80s and 90s. Yet a leaked internal document from March 2021 says, "We may action as little as 3-5% of hate … on Facebook."

Did Facebook really tell such a whopping lie? Yes and no. Technically, both numbers are correct; they just measure different things. The measure that matters is the one Facebook is hiding. The measure it reports publicly is largely irrelevant. It's a bit like if, every time a police officer pulled you over and asked how fast you were going, you ignored the question and instead bragged about your car's gas mileage.

There are two ways that hate speech can be flagged for review and possible removal: users can report it, or AI algorithms can try to detect it automatically. Algorithmic detection matters not only because it is more efficient, but also because it can be done proactively, before any user flags the hate speech.

The 94 percent number that Facebook publicizes is the "proactive rate": the number of hate speech items taken down that Facebook's AI detected proactively, divided by the total number of hate speech items taken down. Facebook might want you to think this number conveys how much hate speech is taken down before it has a chance to cause harm, but all this metric really measures is how big a role detection algorithms play in removing hate speech from the platform.

What matters to society is the prevalence of hate speech that is not removed from the platform. The best way to capture this is the number of hate speech takedowns divided by the total number of hate speech instances. This "takedown rate" measures how much of the hate speech on Facebook is actually taken down, and it's the number Facebook has tried to keep secret.
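The gap between the two metrics can be made concrete with a quick sketch. The counts below are hypothetical, chosen only so the resulting percentages match the 94 percent proactive rate and the low single-digit takedown rate discussed here:

```python
# Hypothetical counts: suppose 10,000 pieces of hate speech are posted
# and only 400 are ever taken down, most of them flagged by AI.
total_hate_speech = 10_000    # all hate speech posted (hypothetical)
takedowns = 400               # items actually removed (hypothetical)
proactive_takedowns = 376     # removed items the AI found before any user report

# The number Facebook publicizes: share of *removals* the AI found proactively.
proactive_rate = proactive_takedowns / takedowns      # 0.94

# The number that matters: share of *all hate speech* actually removed.
takedown_rate = takedowns / total_hate_speech         # 0.04

print(f"proactive rate: {proactive_rate:.0%}")  # prints "proactive rate: 94%"
print(f"takedown rate:  {takedown_rate:.0%}")   # prints "takedown rate:  4%"
```

Note that the proactive rate can be driven arbitrarily close to 100 percent without removing any additional hate speech; only its numerator and denominator both count removals.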

Thanks to Haugen, we finally know the takedown rate, and it is dismal. According to the internal documents, more than 95 percent of the hate speech shared on Facebook stays on Facebook. Zuckerberg boasted to Congress that Facebook took down 12 million pieces of hate speech in Groups, but based on the leaked estimate, we now know that around 250 million pieces of hate speech were probably left up. This is staggering, and it shows how little progress has been made since the early days of unregulated internet forums, despite the extensive investments Facebook has made in AI content moderation over the years.
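The roughly 250 million figure follows from simple arithmetic: if 12 million removals represent only 3 to 5 percent of the hate speech in Groups, the implied total is 240 to 400 million pieces. A sketch of that back-of-envelope estimate (the 3-5 percent range comes from the leaked document; the calculation itself is my own illustration):

```python
# Back-of-envelope estimate implied by the leaked 3-5% takedown rate.
removed = 12_000_000  # pieces of Groups content Zuckerberg cited as removed

for rate in (0.03, 0.05):
    implied_total = removed / rate       # total hate speech implied by this rate
    left_up = implied_total - removed    # pieces never taken down
    print(f"at {rate:.0%} takedown rate: ~{left_up / 1e6:.0f} million left up")
```

At a 5 percent takedown rate this yields about 228 million pieces left up, and at 3 percent about 388 million, bracketing the "around 250 million" figure.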
