Fake coronavirus cures, conspiracy theories and other harmful misinformation are flooding social media in what the World Health Organization (WHO) calls an “infodemic” — an overabundance of information, some accurate, some not.
To combat falsehoods, tech companies such as Facebook, Twitter and Google (which owns YouTube) are flagging or removing misinformation and redirecting users to credible sources of information about the virus.
These companies have faced criticism in the past for not doing enough to curb misinformation on their platforms, especially around the 2016 presidential election.
Now, with the health of billions of people at stake, Facebook, for example, is taking a more aggressive approach to material posted and shared about the coronavirus. Last week, the company said it will begin alerting users who have shared, clicked on or commented on information proven to be false, redirecting them to a WHO web page on coronavirus myths.
Facebook also has increased the number of third-party fact checkers who examine posts flagged by users as suspicious or captured by its algorithms, and it is removing harmful posts such as fake cures. One post even told people to drink a solution of bleach, a potentially fatal poison.
“Through this crisis, one of my top priorities is making sure that you see accurate and authoritative information across all of our apps,” Mark Zuckerberg, the founder and chief executive of Facebook, wrote in a post about the new policies.
The company, which also owns photo-sharing network Instagram and messaging service WhatsApp, has directed Facebook and Instagram users to its Coronavirus (COVID-19) Information Center and used educational pop-ups that Zuckerberg said more than 350 million people had clicked on by mid-April.
Google also is working to direct people to credible health information. Searching “coronavirus cures” on its site brings up an information box that reads, “To date, there are no specific vaccines or medicines for COVID-19,” and includes a link to the WHO.
The moves to prioritize accuracy and downgrade or remove misinformation are a major change from how the tech giants previously approached the free flow of information on their platforms.
“It was definitely, within the companies, a shift,” Andy Pattison, the WHO's manager of digital solutions, told the Associated Press.
Now Pattison and his team flag misleading information on the virus and sometimes lobby for misinformation to be removed from Facebook, Google and YouTube.
Falsehoods long a problem
Misinformation and disinformation (false information deliberately spread to sway opinion or obscure the truth) have long lurked on social media. But now the stakes are higher.
“There's really a wide range of different kinds of harms associated with misinformation generally, and specifically in this case of coronavirus,” says Dipayan Ghosh, co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School. These range from people making uninformed political decisions to death, Ghosh says. And because of the grave risks, the platforms could potentially be considered responsible if something terrible happens, he says.
Angie Drobnic Holan, editor in chief of PolitiFact, a nonpartisan fact-checking organization in Washington, says people create and spread misinformation for reasons including profit or political ideology or because they think — mistakenly — the information is correct.
PolitiFact is one of the fact-checking groups partnering with Facebook. In recent weeks, all of the content PolitiFact has checked has been related to the virus, including the claim about drinking bleach. Its site, PolitiFact.com, also investigates coronavirus claims and rates them on a Truth-O-Meter with colors ranging from red (false) to green (true).
“It's overwhelmed the national conversation,” Holan says.
It's also energized those who educate the public so facts win out over fiction.
“People in the information community are really activated like never before,” Holan says. “There are all kinds of conversations happening among fact checkers, journalists, the platforms, philanthropies about how to make sure people have good, solid, factual information.”
Talks with platforms
The Centers for Disease Control and Prevention (CDC) has provided health information to a number of platforms, including Facebook and Google, says Benjamin Haynes, an agency spokesman. The discussions have included how linking to CDC content in a variety of ways could help users track down credible content and stave off misinformation.
The responses by different companies have been both proactive and reactive, says Ghosh. Google, for example, is proactively providing information on the number of coronavirus cases in a user's county, state and country and giving links to the WHO. It also includes COVID-19 alerts and lists of “Common questions” and “Common searches” when a user searches for virus-related material.
Reactive responses include using content-moderation algorithms to identify posts that contain misinformation and having users flag potential misinformation, Ghosh says.
The problem with reactive responses is that the misinformation is out there before it is taken down. YouTube, for example, removed videos that said 5G wireless networks caused the virus; some of those videos had hundreds of thousands of views before removal, according to the AP.
Even with the stepped-up efforts, it is impossible for the social media platforms to catch all the misinformation online, according to Holan, who urges people to be wary about what they forward or share.
Everyone — from tech companies to journalists to governments to average people — has a role to play, she says, since “misinformation is not a problem that only one group is going to solve.”