Can the Wisdom of Crowds Fix Social Media’s Trust Problem?


The researchers found that with a group of just eight laypeople, there was no statistically significant difference between the crowd’s performance and that of a given fact checker. Once the groups reached 22 people, they actually began to significantly outperform the fact checkers. (These numbers describe the results when the laypeople were told the source of the article; when they did not know the source, the crowd did somewhat worse.) Perhaps most important, the lay crowds outperformed the fact checkers most dramatically on stories categorized as “political,” because those are the stories where fact checkers are most likely to disagree with one another. Political fact-checking is genuinely hard.
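One way to picture that kind of comparison is to resample crowds of different sizes and see how closely their average rating tracks the fact checkers’. The sketch below is purely illustrative: the data is simulated, and the rating scale, rater pool, and correlation measure are assumptions, not the paper’s actual analysis.

```python
# Purely illustrative: simulated ratings, not the study's data or analysis.
import random
import statistics

random.seed(1)

N_ARTICLES = 200
# Hypothetical fact-checker scores (1 = clearly false ... 7 = clearly true)
# and a pool of 30 noisy lay ratings per article.
fact_checker_score = [random.uniform(1, 7) for _ in range(N_ARTICLES)]
lay_ratings = [
    [min(7.0, max(1.0, random.gauss(score, 2.0))) for _ in range(30)]
    for score in fact_checker_score
]

def pearson_r(xs, ys):
    """Plain Pearson correlation, written out to avoid extra dependencies."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

for crowd_size in (1, 8, 22):
    # Draw a crowd of this size for each article and average its ratings.
    crowd_mean = [statistics.mean(random.sample(pool, crowd_size)) for pool in lay_ratings]
    print(f"crowd of {crowd_size:2d}: correlation with fact checkers = "
          f"{pearson_r(crowd_mean, fact_checker_score):.2f}")
```

In a toy setup like this, the correlation climbs steadily as the crowd grows, which is the shape of the result the paper reports.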

It might seem implausible that a random group of people could outperform the work of trained fact checkers – especially when all they have to go on is the headline, first sentence, and publication. But that’s the whole idea behind the wisdom of crowds: gather enough people, have them act independently, and their combined judgment will beat the experts’.

“What we think is going on is that people are reading this and asking themselves, ‘How well does this line up with everything else I know?’” Rand said. “This is where the wisdom of crowds comes in. You don’t need every single person to know what’s up. By averaging the ratings, the noise cancels out and you get a much higher-resolution signal than you would from any one person.”
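That averaging argument is easy to demonstrate. The sketch below is a toy simulation, not data from the study: each simulated rater gives an independent, noisy accuracy score for one story, and the error of the crowd’s average shrinks as the group grows.

```python
# Toy simulation only: invented numbers, not data from the study.
import random
import statistics

random.seed(0)

TRUE_ACCURACY = 0.3      # hypothetical "true" score for one story (0 = false, 1 = true)
RATER_NOISE_SD = 0.25    # assumed spread of an individual rater's judgment

def one_rating(true_score: float) -> float:
    """A single layperson's independent, noisy rating, clamped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(true_score, RATER_NOISE_SD)))

for crowd_size in (1, 8, 22, 100):
    # Simulate many crowds of this size and measure how far their average
    # typically lands from the true score.
    errors = [
        abs(statistics.mean(one_rating(TRUE_ACCURACY) for _ in range(crowd_size)) - TRUE_ACCURACY)
        for _ in range(2000)
    ]
    print(f"crowd of {crowd_size:3d}: typical error of the crowd average = {statistics.mean(errors):.3f}")
```

Because the individual errors are independent, the noise in the average falls off roughly with the square root of the crowd size, which is why even modest groups start to look reliable.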

It’s not the same thing as a Reddit-style system of upvotes and downvotes, nor is it the Wikipedia model of citizen editors. In those cases, small, nonrepresentative subsets of users self-select to curate the material, and each can see what the others are doing. The wisdom of crowds only materializes when groups are diverse and individuals make their judgments independently. And relying on randomly selected, politically balanced crowds, rather than a corps of volunteers, makes the researchers’ approach much harder to game. (It also explains why the experimental approach differs from Twitter’s Birdwatch, a pilot program that enlists users to write notes explaining why a given tweet is misleading.)

The paper’s main conclusion is straightforward: Social media platforms such as Facebook and Twitter could use a crowd-based system to dramatically and inexpensively expand their fact-checking operations without sacrificing accuracy. (The laypeople in the study were paid $9 per hour, which worked out to a cost of about $0.90 per article.) The crowdsourced approach, the researchers argue, would also help increase trust in the process, because it is easy to assemble politically balanced groups of laypeople, which makes the results harder to dismiss as partisan. (According to a 2019 Pew survey, Republicans overwhelmingly believe that fact checkers “tend to favor one side.”) Facebook is already debuting something similar, paying groups of users to “work as researchers to find information that can contradict the most obvious online hoaxes or corroborate other claims.” But that effort is designed to inform the work of its official fact-checking partners, not to add to it.

Expanded fact-checking is one thing. The more interesting question is how platforms should use it. Should stories flagged as false be banned? What about stories that may not contain any intentionally false claims, but that are nonetheless misleading or manipulative?

The researchers argue that platforms should move away from both the true/false binary and the leave-it-alone/flag-it binary. Instead, they suggest that platforms incorporate “continuous crowdsourced accuracy ratings” into their ranking algorithms. Rather than setting a single true/false cutoff and treating everything above it one way and everything below it another, platforms should factor the crowd-assigned score in proportionally when deciding how prominently a given link appears in user feeds. In other words, the less accurate the crowd judges a story to be, the more the algorithm downranks it.
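Concretely, a proportional scheme might look something like the sketch below. This is an assumption-laden illustration, not any platform’s real ranking code: the engagement score, the normalized crowd-accuracy field, and the penalty weight are all invented for the example.

```python
# Hypothetical sketch of proportional downranking; not any platform's actual system.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement_score: float   # whatever signal the platform already computes
    crowd_accuracy: float     # mean crowd rating, normalized to 0.0-1.0

def ranking_score(story: Story, accuracy_weight: float = 1.0) -> float:
    """Scale the existing score in proportion to crowd-judged accuracy:
    the lower the accuracy, the larger the penalty."""
    penalty = 1.0 - accuracy_weight * (1.0 - story.crowd_accuracy)
    return story.engagement_score * max(penalty, 0.0)

feed = [
    Story("Viral but dubious claim", engagement_score=95.0, crowd_accuracy=0.2),
    Story("Well-sourced report", engagement_score=60.0, crowd_accuracy=0.9),
]
for story in sorted(feed, key=ranking_score, reverse=True):
    print(f"{ranking_score(story):6.1f}  {story.title}")
```

The point of the continuous approach is visible even in this toy feed: the dubious story is not removed, it simply loses prominence in proportion to how inaccurate the crowd judges it to be.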


