June Wilson, PhD
On December 4, 2016, shortly before 3 PM, a 28-year-old man armed with an assault rifle walked into the Comet Ping Pong restaurant in Washington, DC, pointed the gun at an employee, and fired three shots, hitting the floor and a door. Edgar Welch, from North Carolina, charged by police with assault with a dangerous weapon, stated he had planned to “self-investigate” the trafficking of child sex slaves (Goldman, 2016).
Before this shooting, the Comet Pizza owner, his family, and staff had received an onslaught of threatening emails because of false and malicious stories alleging that Hillary Clinton and John Podesta were running a child trafficking and pornography ring out of the back of Comet Pizza. The stories spread rapidly through 4chan, Reddit, and right-wing blogs (Kang, 2016).
Alex Jones, host of the alt-right site Infowars, repeatedly stated that Hillary Clinton was involved with child abuse, running a pedophile ring, and murdering children, and that John Podesta was running a pedophilia ring and satanic cults. A YouTube video that broadcast these false accusations was viewed over 427,000 times (Fisher, Cox, & Hermann, 2016). Although debunked by law enforcement and all reputable media, the story spread on social media at an alarming rate (hashtag #pizzagate). Many who receive their news from social media still believe it. Indeed, a Pew Research Center study revealed that 62% of Americans receive their news from social media, with the top three sites noted as Reddit, Facebook, and Twitter (Gottfried & Shearer, 2016).
There are increasing concerns that content, tone, uncivil behavior, and manipulation on social media will persist and worsen (Rainie, Anderson, Albright, & Page, 2016), especially when news is fabricated. Perceived realism is stronger when consumers are exposed to more fake news. Hard news can act as a moderator for fake news (Balmas, 2012), but consumers who continue to rely on social media sites that merely reinforce their beliefs are starved for hard news.
A Pew Research Center study found that 64% of respondents reported that fabricated news causes a “great deal” of confusion about the issues. However, only 39% of respondents reported feeling “very confident” in recognizing fabricated stories, while 45% reported being “somewhat confident” in their ability to do so (Barthel, Mitchell, & Holcomb, 2016).
Rumors tend to spread quickly on social media. Zubiaga, Liakata, Procter, Wong Sak Hoi, and Tolmie (2016) investigated thousands of tweets and found that unverified rumors are retweeted far more frequently than verified rumors. However, interest in spreading a rumor decreases once it has been either verified or debunked. While it takes only about 2 hours to verify a true rumor, debunking a false rumor requires almost 14 hours, and unverified rumors are shared early. Consumers seem more interested in spreading new information, regardless of its veracity.
Once released, the information is difficult to correct and may have lasting effects even when discredited. Furthermore, some media consumers are unwilling to revise their beliefs regardless of the corrective information presented (Nyhan & Reifler, 2015). When asked who is responsible for stopping fabricated news, 45% of respondents named government, elected officials, and politicians; 43% named the public; and 42% identified social networking sites and search engines (Barthel et al., 2016).
Bots are often used to mislead consumers and manipulate public opinion through misinformation. There is evidence that bots were used to disrupt the 2016 U.S. presidential election and interfered with the democratic process. Social bot detection algorithms have estimated that approximately one-fifth of the users discussing the election on social media were not human (Bessi & Ferrara, 2016). If we value an informed populace, we need powerful bot detection tools.
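Detection services such as the Bot or Not tool listed in the resources below score accounts on behavioral features. The sketch below is only a toy illustration of that idea, not the actual algorithm (real detectors use supervised machine learning over hundreds of features); the feature names and thresholds are invented for this example.

```python
# Toy illustration of feature-based bot scoring. Features and thresholds
# are invented for illustration; real tools such as Bot or Not use
# supervised machine learning over many more signals.

def bot_score(account: dict) -> float:
    """Return a crude 0-1 score; higher suggests automated behavior."""
    score = 0.0
    # Extremely high posting rates are a classic automation signal.
    if account["tweets_per_day"] > 100:
        score += 0.4
    # Default-looking profiles (no bio, default image) raise suspicion.
    if not account["has_bio"]:
        score += 0.2
    if account["default_profile_image"]:
        score += 0.2
    # Following far more accounts than follow back is another weak signal.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.2
    return min(score, 1.0)

suspect = {"tweets_per_day": 450, "has_bio": False,
           "default_profile_image": True, "following": 5000, "followers": 12}
print(bot_score(suspect))  # high score: this profile looks automated
```

A single weak signal proves nothing; it is the combination of signals that pushes the score up, which is why a human reviewing an account should weigh several cues the same way.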
Mantzarlis (2016) made the following suggestions for preventing and debunking fakes: (a) do no harm – do not share false information; (b) use custom searches – use sites you trust to verify your work; and (c) learn basic photo checking – reverse image search tools such as Google's (available by right-clicking an image in Chrome) or TinEye can help validate photos. Reid (2016) suggests using Who Is to determine when a website was created, paying attention to the domain name, and checking the story on more than one reputable media source.
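Reid's domain-name advice can be sketched programmatically. The outlet list, lookalike pattern, and six-month threshold below are assumptions chosen for illustration, not an authoritative rule, and a real check would query Who Is for the creation date rather than take it as an argument.

```python
from datetime import date

# Illustrative checks only; the outlet list and age threshold are
# assumptions made for this example.
KNOWN_OUTLETS = {"nytimes.com", "washingtonpost.com", "bbc.com"}

def domain_warnings(domain: str, created: date, today: date) -> list:
    warnings = []
    # Lookalike domains append an extra suffix to a trusted name,
    # e.g. "nytimes.com.co" imitating "nytimes.com".
    for outlet in KNOWN_OUTLETS:
        if domain != outlet and domain.startswith(outlet + "."):
            warnings.append(f"lookalike of {outlet}")
    # A Who Is lookup reveals the creation date; very young domains
    # publishing "breaking" national news deserve extra scrutiny.
    if (today - created).days < 180:
        warnings.append("domain registered less than 6 months ago")
    return warnings

print(domain_warnings("nytimes.com.co", date(2016, 10, 1), date(2016, 12, 4)))
```

Either warning alone is merely a prompt to check the story against a second reputable source, which remains the decisive step.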
Facebook (2017) recently released 10 tips for identifying false news and provided a link to report suspicious stories. Facebook’s media literacy tips include asking the user to be skeptical of headlines, investigating the source, and checking the evidence. While Facebook appears to be taking false news seriously, the steps consumers must undertake to verify the accuracy of news appear burdensome.
Unfortunately, false, misleading, and fake news is here to stay. Fake news can be forwarded for financial gain (Shaik, 2017), re-posted unknowingly, or maliciously posted by a bot (Bessi & Ferrara, 2016). Thus, we can no longer be passive consumers; we must be savvy, active consumers, especially when using social media for news. The problem is twofold: how can we protect ourselves from fake news while educating unknowing consumers not to re-post potentially fake news?
The effects of fake news are global. Germany passed a bill making it illegal to post fake news, and Facebook recently ran full-page print ads in both Germany and France informing readers how to identify fake news. Additionally, Facebook recently closed 30,000 accounts in France. Shabnam Shaik (2017), a Facebook technical program manager, stated, “Our priority, of course, is to remove the accounts with the largest footprint, with a high amount of activity and a broad reach.”
Google and Facebook have stated they will restrict sites that publish fake news from using their online advertising networks (Schmidt, 2016), but a recent Google search on the now-debunked pizzagate revealed that sites related to the false conspiracy theory come up first. So much for effectiveness!
A colleague from the APA leadership team stated to me: “this is the year of Division 46,” as she referred to the prevalence of misinformation in the media. I agree. This is the year for Division 46 to do everything within its power to resist fake news and its devastating effects. Let us promote media literacy and prevent another pizzagate.
- Bot or Not – useful for identifying bots – https://truthy.indiana.edu/botornot/
- First Draft Partner Network – resources for media literacy – https://firstdraftnews.com
- News Literacy lessons for digital citizens – https://www.coursera.org/learn/news-literacy
- The News Literacy Project – news literacy skills – http://www.thenewsliteracyproject.org
- TinEye – useful for reverse image search – https://www.tineye.com
- Who Is – useful for identifying when a site was created – https://whois.icann.org/en
Balmas, M. (2012). When fake news becomes real. Communication Research, 41, 430-454.
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96-104. doi:10.1145/2818717
Nyhan, B., & Reifler, J. (2015). Displacing misinformation about events: An experimental test of causal corrections. Journal of Experimental Political Science, 2(1), 81-93. doi:10.1017/XPS.2014.22
Zubiaga, A., Liakata, M., Procter, R., Wong Sak Hoi, G., & Tolmie, P. (2016). Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3), e0150989. doi:10.1371/journal.pone.0150989