Investigating how hate speech has changed over time with the internet and social media

The emergence of social media can be traced back to the early days of the Internet. In the 1990s, online communities such as Bulletin Board Systems (BBS) allowed people to connect with one another by dialing into a BBS with a modem (Driscoll, 2016). These systems let users post messages, share files, and engage in discussions, which is why the BBS is widely considered the earliest form of social media.


The growth of modern social media can be attributed to Web 2.0 technologies, a term that refers to a shift in how websites and applications are designed and used. Instead of static web pages, Web 2.0 introduced web applications that allow users to interact with each other and share content in real time (Kenton, 2023). Launched in 2003, Myspace was the first social media platform to achieve widespread popularity, allowing users to create profiles, share photos, and connect with friends. However, Facebook soon overtook Myspace and became the dominant social media platform (Satish, 2021). Following the rise of Facebook, social media continued to expand: platforms such as LinkedIn, Twitter, YouTube, and Instagram emerged and attracted large numbers of users on the foundation Web 2.0 provided.

Social media has revolutionized how we connect and communicate. Among the many platforms that have emerged, Facebook has grown into one of the largest in the world, with billions of users (Satish, 2021).

During the early days of the Internet, online forums and chat rooms emerged as methods for disseminating hateful rhetoric. One of the internet’s oldest hate sites is Stormfront, a White supremacist website with more than 300,000 users (Reeves, 2017). The rise of social media platforms such as Facebook and Twitter also fostered online communities, some of which express their prejudices through hate speech rooted in systemic inequalities such as racism, sexism, homophobia, and xenophobia. Online communities can strengthen and reinforce these prejudices and inequalities, normalizing and spreading hateful messages (Laub, 2019).

The prevalence of hate speech and racism online is a growing problem. There is clear evidence that certain groups, such as people of color, are disproportionately targeted by online harassment (ADL, 2019). This rise in online hate speech can be attributed to several factors: the anonymity of the internet, the ability to reach a wider audience, and easy access to online communities that promote hate speech. Online racism is also embedded in platform design and administration through microaggressions (see Matamoros-Fernández & Farkas, 2021, for a review). Examples include Snapchat and Instagram launching user-created filters that encouraged White consumers to engage in “digital Blackface,” as well as Facebook allowing users and marketers to intentionally exclude Black and Hispanic consumers.

The victims of internet hate speech are diverse, with people of color and immigrants among the most frequently targeted. Internet anonymity has contributed to the growth of online hate groups because it removes the fear of consequences. This rise in online hate is concerning because exposure to like-minded individuals encourages users to share their own prejudiced views. Hate speech can profoundly affect individuals and communities, making it imperative to develop effective approaches to combat it.

Efforts to detect hate speech on social media gained momentum in 2015, when Facebook and Twitter were pushed to define online policies and remove toxic language. On February 11, 2021, Instagram and Facebook implemented detection methods, and on July 15, 2021, Instagram posted an official update discussing how the company would handle abuse on the platform. It informed users that the majority of abuse and hate speech occurs in direct messages (DMs), which are private conversations. Facebook denounced attacks on people based on their protected characteristics. Instagram developed new filtering measures, including removing hateful accounts and adding new controls to help reduce abuse in DMs, and has also worked with law enforcement to try to reduce online abuse. Tools have been created that allow users to block offensive words, phrases, or emojis (Instagram, 2021). Representatives monitor text and emoticon use on Facebook and may remove any content deemed “inappropriate.” By March 2021, more than 25 million pieces of hate speech content had been removed from the platform. Because Facebook and Instagram are now linked, they use similar methods to help prevent racism across both platforms (Meta, 2021).
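Instagram has not published the internals of these keyword-blocking controls, but the general idea of a user-configured blocklist for DMs can be sketched briefly. The Python sketch below is purely illustrative and assumes a simple normalize-then-match design; the function names, term list, and matching rules are hypothetical and do not describe Instagram’s actual implementation.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Lowercase and strip diacritics so trivial spelling variants still match."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return text.lower()

def build_blocklist_pattern(terms):
    """Compile a user-defined list of words, phrases, or emojis into one regex."""
    escaped = [re.escape(normalize(t)) for t in terms if t.strip()]
    # Use word boundaries for plain words; phrases and emojis match anywhere.
    parts = [rf"\b{t}\b" if t.isalnum() else t for t in escaped]
    return re.compile("|".join(parts)) if parts else None

def should_hide_message(message: str, pattern) -> bool:
    """Return True if an incoming DM contains any term the user has blocked."""
    return bool(pattern and pattern.search(normalize(message)))

# Hypothetical example: a user blocks a stand-in offensive word, a phrase, and an emoji.
pattern = build_blocklist_pattern(["offensiveword", "go back to", "🐍"])
print(should_hide_message("You should GO BACK TO where you came from", pattern))  # True
print(should_hide_message("Have a nice day", pattern))  # False
```

A filter like this only hides content on the recipient’s side; the platform-level measures described above, such as removing hateful accounts, operate separately.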

While some (Mozafari et al., 2020) argue that offensive speech has been successfully mitigated by training classifiers to debias these internet platforms, others (Li et al., 2023) propose that biased content can bypass moderators and still lead to significant psychological distress. Although mitigation methods can detect racism and hate speech, online racism has evolved over time and across digital platforms, and current detection methods are generally outdated (Keum & Miller, 2017). People sharing racist content online devise ever-evolving, creative ways to avoid detection by the methods social media platforms currently use, such as coded language, acronyms for racial slurs, and ambiguous writing. For moderation and removal efforts to be effective, they should include a measure of online racism that reflects how online racism is presently manifested, shared, and transmitted across digital platforms.
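As a concrete illustration of why static detection methods fall behind, the short Python sketch below uses a harmless placeholder term in place of a real slur and shows how simple character substitutions and spacing tricks slip past a verbatim keyword filter, while a basic normalization step recovers only some variants. It is a toy example under those stated assumptions, not a reproduction of any platform’s or cited study’s method.

```python
import re

# Placeholder stand-in for a slur, used purely for illustration.
BLOCKED_TERMS = {"slurword"}

# Undo common leetspeak substitutions before matching.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def naive_match(text: str) -> bool:
    """Flag text only when a blocked term appears verbatim as a word."""
    words = re.findall(r"\w+", text.lower())
    return any(w in BLOCKED_TERMS for w in words)

def normalized_match(text: str) -> bool:
    """Strip spacing tricks and leetspeak before checking the lexicon."""
    cleaned = text.lower().translate(LEET_MAP)
    cleaned = re.sub(r"[\s.\-*_]+", "", cleaned)
    return any(term in cleaned for term in BLOCKED_TERMS)

for msg in ["you are a slurword", "you are a s.l.u.r.w.o.r.d", "you are a $lurw0rd"]:
    print(f"{msg!r} | naive: {naive_match(msg)} | normalized: {normalized_match(msg)}")
# The naive filter misses both obfuscated variants; normalization catches them,
# but neither approach handles genuinely coded language, acronyms, or irony,
# which is why lexicon-based detection becomes outdated so quickly.
```

The same evasion-and-catch-up dynamic applies to trained classifiers: models learn the surface forms present in their training data, so newly coined codes and euphemisms tend to go undetected until the training data, or the underlying measure of online racism, is updated.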

References

ADL. (2019, November 2). Online hate and harassment: The American experience. ADL. https://www.adl.org/resources/report/online-hate-and-harassment-american-experience

Driscoll, K. (2016, October 24). Social media’s dial-up ancestor: The Bulletin Board System. IEEE Spectrum. https://spectrum.ieee.org/social-medias-dialup-ancestor-the-bulletin-board-system

Instagram. (2021, February 11). Tackling abuse and hate speech on Instagram. Instagram Blog. https://about.instagram.com/blog/announcements/an-update-on-our-work-to-tackle-abuse-on-instagram

Kenton, W. (2023, July 30). What is Web 2.0? Definition, impact, and examples. Investopedia. https://www.investopedia.com/terms/w/web-20.asp

Keum, B. T., & Miller, M. J. (2017). Racism in digital era: Development and initial validation of the Perceived Online Racism Scale (PORS v1.0). Journal of Counseling Psychology, 64(3), 310–324. https://doi.org/10.1037/cou0000205

Laub, Z. (2019, June 7). Hate speech on social media: Global comparisons. Council on Foreign Relations. https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons

Li, Y., Kim, M., Dong, F., & Zhang, X. (2023). Racial discrimination, coping, and suicidal ideation in Chinese immigrants. Cultural Diversity and Ethnic Minority Psychology. https://doi.org/10.1037/cdp0000588

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230

Meta. (2021, July 15). What we’re doing to tackle online hate. Meta. https://www.facebook.com/business/news/what-were-doing-to-tackle-online-hate

Mozafari, M., Farahbakhsh, R., & Crespi, N. (2020). Hate speech detection and racial bias mitigation in social media based on BERT model. PLOS ONE, 15(8), e0237861. https://doi.org/10.1371/journal.pone.0237861

Reeves, J. (2017, August 28). Oldest white supremacist site, stormfront.org, shut down. USA Today. https://www.usatoday.com/story/tech/news/2017/08/28/oldest-white-supremacist-site-shut-down/608981001/

Satish, R. (2021, September 30). The fall of Myspace and rise of Facebook! Medium. https://medium.com/@rjn.mark01/the-fall-of-myspace-and-rise-of-facebook-fffee182cb18