President’s Column: The Quest for Truth amid the Perils and Pitfalls of Political Disinformation

Susan J. Eddington, APR, PhD
seddington@email.fielding.edu

Campaigning for the 2024 elections is in full swing. The 2023–2024 election cycle is projected to be the most expensive in history, with political ad spending expected to reach $10.2 billion across broadcast, cable, radio, satellite, digital, and connected television (AdImpact, n.d.). Throughout this year-long political season, many campaign messages will be fueled by disinformation.

The role of disinformation in political campaigns was thrust into national news when, after uncovering a scheme to interfere in the 2016 election, U.S. Justice Department prosecutors charged 13 Russians and three companies with conducting a disinformation campaign to influence the presidential election (Frenkel & Benner, 2018). Prosecutors found that the Russians’ preferred social media platforms were Facebook and Instagram. Using the stolen identities of Americans, they created Facebook groups focused on hot-button issues like immigration and religion to gain a following, then distributed divisive ads and posted inflammatory images. Their intent was to sow discord among the American public. Facebook chief executive officer Mark Zuckerberg later admitted that 150 million Americans had been exposed to the Russian propaganda on the social networks (Frenkel & Benner, 2018). Although information pollution, the spread of false, misleading, manipulated, and otherwise harmful information (Mejia et al., 2018), is not a new phenomenon, it has proliferated globally since 2016, fueling truth decay, a growing disagreement about objective facts (Kavanagh & Rich, 2018, 2022).

Historically, news organizations were relied on as trusted sources to inform the public about current events and critical issues that could affect their lives. Trust in most media organizations as sources of accurate and reliable information has since declined. A Knight Foundation study found that most Americans believe media outlets prioritize their own business interests over serving the best interests of the public (Knight Foundation [Knight], 2023). Perhaps more importantly, the study found an emotional distrust of news organizations, rooted in the belief that media organizations “intend to mislead them and are indifferent to the social and political impact of their reporting” (Knight, 2023). Americans also have an increasing distrust of government (Gallup, 2020) and politicians (Pew Research Center [Pew Research], 2023), undermining the security of the democracy.

Truthfulness in political advertising is not required because such advertising is protected as free speech under the First Amendment; thus, candidates can make any claim, regardless of its veracity. Following the January 6, 2021, attack on the U.S. Capitol, there were repeated calls for technology companies to play an active role in combating election misinformation. Facebook placed a temporary ban on political advertising after the November 2020 elections but has since rescinded it. YouTube and X, formerly known as Twitter, have also announced that they will resume allowing political ads, with caveats about how they will, or will not, police disinformation (Isaac, 2021).

Political consultants celebrate the ways that AI can decrease their workload and allow them to do more with less. AI is a useful tool for generating draft news releases, social media content, and other collateral materials essential to a communications effort. Alongside the promise of greater efficiency, however, is an urgent concern about the role of artificial intelligence in amplifying disinformation. Numerous incidents prove the concern is not mere speculation. An image of black smoke rising near an unidentified building, shared with media outlets under the claim that the Pentagon had been bombed, briefly rattled the stock market before officials confirmed it was an AI-generated fake: there had been no blast at the Pentagon (Marcelo, 2023). Within minutes of President Joe Biden announcing his campaign for re-election, the Republican National Committee released an AI-generated ad, posted to YouTube, suggesting that doom and gloom would engulf the nation if President Biden and Vice President Kamala Harris were re-elected. The video had an ominous tone and featured images of migrants flooding the U.S. southern border, police officers in tactical gear on the streets of San Francisco, and simulated explosions in Taiwan (Forbes Breaking News, 2023).

Deepfakes are now considered the greatest threat to defending truth and the security of the nation. A deepfake is synthetic media generated by machine learning techniques in which a person in an image, video, or audio recording is seen doing things they did not do or heard saying things they did not say (Somers, 2020; Villasenor, 2019). Until recently, deepfakes were most often used to create pornographic images of women; however, as the technology becomes more sophisticated, easier to use, and more accessible (Lanxon, 2023), there are increasing examples of deepfakes being used for political purposes (Farmer, 2021). This challenge leads to the question: how does one discern what is true when one can see or hear a person making statements or engaging in an act, yet not know whether the recording was artificially created? The increasing realism of synthetic creations raises the possibility that individuals who have been recorded saying or doing things they later choose to deny might claim that the supposed evidence is a deepfake (Farmer, 2021), and proving something is not a deepfake grows more difficult as the technology improves. This denial of accountability has been labeled the “liar’s dividend” (Chesney & Citron, 2018).

AI holds great promise for advancing what humans can achieve. Nonetheless, governments recognize the potential threat deepfakes pose to society, and government officials and technology companies are attempting to establish guidelines for the use of generative AI. President Biden recently issued an Executive Order “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI)” (The White House, 2023). The order establishes new standards for AI safety and security, aims to protect Americans’ privacy, and advances equity and civil rights. The National Institute of Standards and Technology issued an AI Risk Management Framework, crafted “to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI)” (National Institute of Standards and Technology [NIST], 2021). Meta announced a new policy requiring political ads that run on Facebook and Instagram to disclose whether they were created using AI (Klepper, 2023).

Evidence has shown that disinformation and misinformation travel much faster than accurate information and, enabled by the internet, travel much farther (Vosoughi et al., 2018). The battle for information integrity is a global one that we cannot afford to lose. Media psychologists are well poised to support this battle and could make a meaningful impact by conducting media literacy training, conducting research, educating the public about the issues and challenges we face, or promoting inoculation against information pollution. How you choose to get engaged matters less than that you do.

References

AdImpact. (n.d.). 2024 political spending projections report from AdImpact. https://adimpact.com/2024-political-spending-projections-report/

Chesney, R., & Citron, D. (2018). Deep fakes: A looming challenge for privacy, democracy, and national security. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3213954

Farmer, B. M. (2021, October 10). The impact of deepfakes: How do you know when a video is real? CBS News. https://www.cbsnews.com/news/deepfakes-real-fake-videos-60-minutes-2021-10-10/

Forbes Breaking News. (2023, April 25). Republicans launch eerie AI-generated attack ad on Biden [Video]. YouTube. https://www.youtube.com/watch?v=wYWpw4E5T9g

Frenkel, S., & Benner, K. (2018, February 17). To stir discord in 2016, Russians turned most often to Facebook. The New York Times. https://www.nytimes.com/2018/02/17/technology/indictment-russian-tech-facebook.html

Gallup. (2020, August 4). American views 2020: Trust, media and democracy. Knight Foundation. https://knightfoundation.org/reports/american-views-2020-trust-media-and-democracy/

Gallup, Inc. (2021, October 7). In U.S., trust in politicians, voters continues to ebb. Gallup.com. https://news.gallup.com/poll/355430/trust-politicians-voters-continues-ebb.aspx

Isaac, M. (2021, March 3). Facebook ends ban on political advertising. The New York Times. https://www.nytimes.com/2021/03/03/technology/facebook-ends-ban-on-political-advertising.html

Kavanagh, J., & Rich, M. D. (2018). Truth decay: An initial exploration of the diminishing role of facts and analysis in American public life. RAND Corporation.

Kavanagh, J., & Rich, M. D. (2022, March 29). Truth decay: An initial exploration of the diminishing role of facts and analysis in American public life. RAND Corporation. https://www.rand.org/pubs/research_reports/RR2314.html

Klepper, D. (2023, November 8). To help 2024 voters, Meta says it will begin labeling political ads that use AI-generated imagery. AP News. https://apnews.com/article/meta-facebook-instagram-political-ads-deepfakes-2024-c4aec653d5043a09b1c78b4fb5dcd79b

Knight Foundation. (2023, February 15). American views 2022: Part 2 trust, media and democracy. https://knightfoundation.org/reports/american-views-2023-part-2/

Lanxon, N. (2023, September 20). How making deepfake videos got so easy and why it’s a threat. Washington Post. https://www.washingtonpost.com/business/2023/09/20/deepfakes-what-are-fake-ai-video-dangers-and-how-to-spot-them/6512c87c-57e2-11ee-bf64-cd88fe7adc71_story.html

Marcelo, P. (2023, May 23). Fact focus: Fake image of Pentagon explosion briefly sends jitters through stock market. AP News. https://apnews.com/article/pentagon-explosion-misinformation-stock-market-ai-96f534c790872fde67012ee81b5ed6a4

Mejia, R., Beckermann, K., & Sullivan, C. (2018). White lies: A racial history of the (post)truth. Communication and Critical/Cultural Studies, 15(2), 109–126. https://doi.org/10.1080/14791420.2018.1456668

National Institute of Standards and Technology. (2021, July 12). AI risk management framework. NIST. https://www.nist.gov/itl/ai-risk-management-framework

Pew Research Center. (2023, September 19). Americans’ dismal views of the nation’s politics. https://www.pewresearch.org/politics/2023/09/19/americans-dismal-views-of-the-nations-politics/

Somers, M. (2020, July 21). Deepfakes, explained. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained

The White House. (2023, October 30). Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence [Fact Sheet]. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Villasenor, J. (2019, February 14). Artificial intelligence, deepfakes, and the uncertain future of truth. Brookings. https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559