As flames tore through large parts of Los Angeles this month, so did fake news. Social media posts touted wild conspiracies about the fire, with users sharing misleading videos and misidentifying innocent people as looters. It brought into sharp focus a question that has plagued the social media age: what is the best way to contain and correct potentially incendiary sparks of misinformation?
It is a debate that Meta’s CEO, Mark Zuckerberg, has been at the centre of. Shortly after the January 6th Capitol riots in 2021, which were fuelled by false claims of a rigged US presidential election, Zuckerberg gave testimony to Congress. The billionaire boasted about Meta’s “industry-leading fact checking program”, which drew on 80 “independent third-party fact checkers” to curb misinformation on Facebook and Instagram. Four years on, that system is no longer something to brag about.
“Fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US,” Zuckerberg said earlier in January. Taking their place, he said, would be a system inspired by X’s “community notes”, where users rather than experts adjudicate on accuracy.
Many experts and fact checkers questioned Zuckerberg’s motives. “Mark Zuckerberg was clearly pandering to the incoming administration and to Elon Musk,” said Alexios Mantzarlis, the director of the Security, Trust and Safety Initiative at Cornell Tech. But like many experts, he also made another point that has perhaps been lost in the firestorm of criticism Meta faces: that, in principle, community-notes-style systems can be part of the solution to misinformation.