Facebook Quietly Makes a Big Admission


Back in February, Facebook announced a small experiment. It would reduce the amount of political content shown to a subset of users in a few countries, including the U.S., and then ask them about the experience. “Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting each person’s appetite for it at the top of their News Feed,” explained Aastha Gupta, a product management director, in a blog post.

On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users appreciate seeing political content less often in their feeds. Now Facebook intends to repeat the experiment in more countries and is teasing “further expansions in the coming months.” Depoliticizing people’s feeds makes sense for a company that is perpetually in hot water over its alleged political impact. The move, after all, was first announced just a month after Donald Trump supporters stormed the U.S. Capitol, an episode that some people, including elected officials, sought to blame on Facebook. The change could end up having major ripple effects for political groups and media organizations that have grown used to relying on Facebook for distribution.

The most important part of the Facebook announcement, however, has nothing to do with politics.

The basic premise of any AI-powered social media feed (think Facebook, Instagram, Twitter, TikTok, YouTube) is that you don’t need to tell it what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more things like that.
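To make that mechanic concrete, here is a minimal sketch of what engagement-based ranking amounts to. Every name and weight in it is an illustrative assumption, not Facebook’s actual system; production feeds use learned models over thousands of signals, but the optimization target is the same idea: predicted engagement.

```python
# A toy engagement-ranked feed. Signal names, weights, and the linear
# scoring rule are all assumptions made up for illustration.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Hypothetical per-user predictions, e.g. from a trained model.
    p_like: float         # predicted probability the user likes the post
    p_comment: float      # predicted probability the user comments
    p_share: float        # predicted probability the user shares
    dwell_seconds: float  # predicted time the user lingers on the post


# Assumed weights: heavier actions (shares, comments) count more than likes.
WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_share": 8.0, "dwell_seconds": 0.1}


def engagement_score(post: Post) -> float:
    """Combine predicted engagement signals into one ranking score."""
    return (WEIGHTS["p_like"] * post.p_like
            + WEIGHTS["p_comment"] * post.p_comment
            + WEIGHTS["p_share"] * post.p_share
            + WEIGHTS["dwell_seconds"] * post.dwell_seconds)


def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the most engaging appear first."""
    return sorted(candidates, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("calm-news", p_like=0.20, p_comment=0.02, p_share=0.01, dwell_seconds=5),
        Post("outrage-bait", p_like=0.15, p_comment=0.30, p_share=0.12, dwell_seconds=20),
    ])
    for post in feed:
        print(post.post_id, round(engagement_score(post), 2))
```

Note what the toy example already shows: the post predicted to provoke comments and shares outranks the quieter one, even if fewer people would say they “like” it. That gap between what engages and what users say they want is exactly where the criticism below comes in.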

In one sense, this design gives social media companies and their apologists a convenient defense against criticism: If certain stuff is going big on a platform, that’s because it’s what users want. If you have a problem with that, maybe your problem is with the users.

At the same time, though, optimizing for engagement is at the heart of many of the criticisms of social platforms. An algorithm that is too focused on engagement can push users toward content that is highly engaging but of little social value. It can feed them a diet of posts that are ever more engaging because they are ever more extreme. And it can encourage the viral spread of material that is false or harmful, because the system selects first for what will trigger engagement, rather than for what ought to be seen. That list of engagement-driven ills helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit during a congressional hearing in March that the platforms under their control are built that way at all. Zuckerberg insisted that “meaningful social interactions” are Facebook’s true goal. “Engagement,” he said, “is only a sign that if we deliver that value, then it will be natural that people use our services more.”

In a different context, however, Zuckerberg has acknowledged that things may not be so simple. In a 2018 post explaining why Facebook demotes “borderline” posts that push right up to the edge of the platform’s rules without breaking them, he wrote that no matter where the lines for what is allowed are drawn, as a piece of content gets close to that line, “people will engage with it more on average, even when they tell us afterwards they don’t like the content.” Yet that observation seems to have been confined to the question of how to enforce Facebook’s policies on prohibited content, rather than to rethinking the design of the ranking algorithm more broadly.
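Read as a ranking problem, the 2018 post describes a countermeasure rather than a redesign: keep optimizing for engagement, but multiply in a penalty for content near the policy line. The sketch below is a toy model of that idea under invented curves; Facebook has not published the actual functions, so both the engagement pattern and the demotion rule here are assumptions.

```python
# Toy model of "borderline demotion". The engagement curve and the penalty
# function are illustrative assumptions, not Facebook's real formulas.


def predicted_engagement(proximity_to_line: float) -> float:
    """Assumed pattern from the 2018 post: engagement climbs as content
    nears the policy line. `proximity_to_line` runs from 0.0 (clearly
    benign) to 1.0 (right at the line)."""
    return 1.0 + 3.0 * proximity_to_line  # made-up curve for illustration


def demotion_multiplier(proximity_to_line: float, threshold: float = 0.8) -> float:
    """Hypothetical penalty: leave most content alone, but scale scores
    down toward zero once content crosses an assumed threshold."""
    if proximity_to_line < threshold:
        return 1.0
    return max(0.0, 1.0 - (proximity_to_line - threshold) / (1.0 - threshold))


def final_score(proximity_to_line: float) -> float:
    return predicted_engagement(proximity_to_line) * demotion_multiplier(proximity_to_line)


for p in (0.0, 0.5, 0.8, 0.95, 1.0):
    print(f"proximity={p:.2f}  engagement={predicted_engagement(p):.2f}  "
          f"final={final_score(p):.2f}")
```

In this toy version, the raw engagement prediction keeps rising all the way to the line, while the demoted score falls off near it, which is the narrow fix the 2018 post describes. The engagement objective itself, the thing the rest of the feed is still optimizing, is left untouched.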


