The internet and online communities, including social media, have made it easier for racist groups to share their ideologies with others. The more time someone spends online, the more likely they are to encounter online racism. Online racism is more prevalent than offline racism: racist posts are continually produced and spread by users, increasing the likelihood that such posts will go viral on social media. Online racism may include justifying harms directed at racial minorities (e.g., police brutality), sharing misinformation intended to portray racial minorities as inferior to the majority group, and passively supporting racist content (e.g., resharing someone else’s post). Passive support allows a person to participate in online racism without taking personal accountability for the content. Historically, online racism has targeted, and continues to target, African Americans more than other racial groups (e.g., White, Asian American, Latino individuals). Given that 20.8% of African Americans are daily primary users of social networking sites, this unjust racial targeting deserves attention. Social media users are generally able to detect and identify online racism, regardless of their own racial background. However, the detection methods social media platforms currently use to identify and remove online racism fall short.
The internet enables rapid communication, connects like-minded individuals, and allows anonymity in communications. Online anonymity is one of the main reasons online racism is so common and persistent. Anonymity makes people feel invisible and lends a self-confidence in online spaces that may be absent in person. This relates directly to the theory of online disinhibition, which holds that increased feelings of anonymity are associated with disinhibited behaviors that would not otherwise occur. While online racism is common, moderation of such content has faced backlash grounded in the idea of digital free speech. Although free speech is often invoked in defense of online racism, research has found that explicit racial prejudice reliably predicted who used the free speech argument to defend racist behavior or speech, except when that speech was directed at coworkers or law enforcement. The free speech argument also tends to be deployed strategically, when convenient. Adopting an anonymous persona that cannot be traced back to an actual person further fuels this appeal to free speech: responsibility diffuses when racist posts carry no consequences.
Numerous studies have demonstrated the impact of online racism on consumers and users of online spaces. Among racial minority groups, exposure to online racism predicts poorer mental health, including increased depression, generalized anxiety, psychological distress, and chronic stress, as well as unhealthy coping behaviors and reduced academic performance. As children access the internet at increasingly younger ages, they are exposed to both overt and covert forms of racism in larger quantities, which is especially detrimental for children and young adults of color.
Since 2015, social media platforms have been pressured to develop specific policies regarding online racism and other harmful content. Despite monumental moderation and removal efforts, such content can remain a permanent presence on the internet. Many social media platforms (e.g., Reddit, Instagram, Facebook) have detection methods designed to reduce online racism, but online racism has evolved over time, leaving those detection methods generally outdated.
It may be easier for social media platforms to develop new detection programs than to maintain and upgrade dated ones. Those who share racist content online continually devise creative ways to evade the platforms' current detection methods, such as coded language, acronyms standing in for racial slurs, and deliberately ambiguous phrasing. For moderation and removal efforts to be effective, they should rely on a measure of online racism that reflects how it is presently manifested, shared, and transmitted across digital platforms. Many detection methods designed to monitor online racism have fallen short due to unintended bias, leading to racial minority groups being unfairly targeted by the very systems designed to protect them.
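To illustrate why coded language defeats simple detection, the following is a minimal sketch (not any platform's actual system) of a naive keyword filter and a slightly hardened variant. The blocked terms and the character-substitution map are neutral placeholders invented for the example, not real moderation data.

```python
import re

# Placeholder blocklist; a real system would use curated, evolving lists.
BLOCKED_TERMS = {"badword", "slur"}

def naive_filter(post: str) -> bool:
    """Flag a post if any blocked term appears as an exact lowercase token."""
    tokens = re.findall(r"[a-z]+", post.lower())
    return any(tok in BLOCKED_TERMS for tok in tokens)

# Common character substitutions ("leetspeak") used to dodge exact matching:
# 0->o, 1->l, 3->e, 4->a, 5->s, 7->t, $->s
LEET_MAP = str.maketrans("013457$", "oleasts")

def normalized_filter(post: str) -> bool:
    """Normalize obvious substitutions before matching, catching simple evasion."""
    normalized = post.lower().translate(LEET_MAP)
    tokens = re.findall(r"[a-z]+", normalized)
    return any(tok in BLOCKED_TERMS for tok in tokens)

print(naive_filter("that badword again"))       # True: exact match caught
print(naive_filter("that b4dw0rd again"))       # False: obfuscation slips through
print(normalized_filter("that b4dw0rd again"))  # True: normalization recovers it
```

Even the hardened variant only handles one evasion tactic; novel spellings, acronyms, and context-dependent coded phrases still pass, which is why the text argues that measures must track how online racism is presently expressed.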
Social media platforms like Instagram and Facebook have announced their own methods for preventing hateful and racist content on their platforms. Instagram explained that most online racism occurs through direct messages (DMs), which are harder to monitor because they are private conversations. However, Instagram claims to have introduced new steps to prevent this behavior, such as removing the accounts of people who send abusive messages and new controls that help reduce the abuse people see in their DMs. Instagram's rules do not tolerate attacks on people based on protected characteristics such as race. The platform has introduced strict penalties for people who send abusive messages, is working with law enforcement to reduce online abuse, and has created tools that let users block offensive words, phrases, or emojis they do not want to see. Since Facebook and Instagram are owned by the same company, they use similar tactics to prevent racism. Facebook gives users the option to filter messages and block any words and emojis they find offensive and do not wish to receive. In Messenger, Facebook's messaging service, users can ignore a conversation and automatically move it out of their inbox without blocking the sender. The platform also launched a tool that lets users decide who can comment on their posts.
Online racism is a highly nuanced and complex phenomenon. It constantly changes its face and reinvents its language, which makes it difficult to target. Every day, more people either publish racist content or stumble across it online; more people are influenced by such content and spread it to others; and an increasing number of people, including children and teens, are negatively affected by it. In an era when technology is the norm in daily life, it is imperative that social media companies take more responsibility for their role in enabling such content to spread. Dedicating permanent task forces to studying racist semantics and prioritizing user reports that call out racism is crucial. Holding focus groups with younger members of targeted communities to discuss hate speech they have seen online would also help spot evolving racist rhetoric and language. Finally, the recently proposed Blueprint for an AI Bill of Rights may be a first step toward making automated systems and online spaces work for all people.