Dr. Scott Debb
Norfolk State University
smdebb@nsu.edu

“Humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes them to use power unwisely” (Harari, 2024, p. 6).

We are living in a world where communication depends heavily on digital technologies such as social media. These platforms create ease of connectivity with little upfront cost and without the need for in-person interaction. Social media also persuades people to share their perspectives within echo chambers of like-minded individuals (Cinelli et al., 2021), which erodes the boundaries between public and private life (Brenner, 2013; Jaidka, 2022). Moral outrage has a new outlet (Crockett, 2017), and with it comes the cost of amplification and intensified polarization between culturally and politically divergent groups, contributing to destructive and at times seemingly intractable conflict.

Cyberpsychology is the study of how digital technologies intersect with human behavior (Debb, 2021). Historically focused on comparing online with offline behavior (Suler, 2016), the field has grown to explore how digital tools mediate (i.e., influence the process) and moderate (i.e., affect the intensity of) how we view, value, and maintain relationships. The uses and gratifications framework helps explain why people gravitate toward the various functions social media provides (Orchard, 2019), and cyberpsychology helps explain the declining reliance on physical space for civil discourse (Hamilton, 2021). This shift has unfolded over the past two decades, roughly in tandem with increases in incendiary, user-generated content flooding social media feeds with unprecedented frequency, speed, and intensity.

A significant focus of cyberpsychological research is how online environments affect self-presentation (Fullwood, 2019). Social media encourages selective self-representation, for example, among influencers, who tailor their personas to maximize reach and engagement. The hyperpersonal model of communication (Walther, 1996) suggests that online interactions can become more intimate or even exaggerated due to carefully curated content. However, this curation can compromise influencers’ ability to remain unbiased, which may not be easily discernible to users who lack the digital literacy needed to identify content that is misinformation (unintentional sharing of false or misleading information), disinformation (false information deliberately shared to deceive), or malinformation (objectively true information that is used out of context with harmful intent to mislead).

Social media-based AI technologies have been purposefully engineered to observe user behavior and amplify emotionally charged content (Omar & Ondimu, 2024), regardless of its reliability or validity. The result is an environment optimized for constant engagement above accuracy. AI elevates provocative voices to users’ feeds, attempting to trigger emotional arousal, often at the expense of ethical, moral and social responsibilities (Laidlaw, 2015; Wynn-Williams, 2025). Over time, this algorithmic design socially engineers attention patterns, reinforcing ideological silos and facilitating widespread polarization.

AI essentially “trains us to train it better” (Varoufakis, 2023, p. 68). By shaping emotions and self-perceptions, social media directly exploits the core psychological needs outlined by self-determination theory (SDT): autonomy, competence, and relatedness (Ryan & Deci, 2000). SDT is a common framework in cyberpsychological research that conceptualizes why people are motivated to engage in behaviors that seem to help meet these core psychological needs. In essence, social media has learned to weaponize our desire to feel good about ourselves.

Accordingly, when user-generated content started to become more influential than traditional news media outlets (Manosevitch & Tenenboim, 2016), fear became a reliable tool for exploiting users’ attention. As Hong (2020) observes, “Technology is asked only to continue to instrumentalize the world, to help us to do anything and everything faster, cheaper and at larger scales (but) genuine moral reflection gets squeezed out” (p. 186), and that squeezing out invites the type of polarization we are experiencing today. Although social media can empower users to express themselves more genuinely, the absence of face-to-face accountability often leads to overtly aggressive and antagonistic discourse.

This behavior is closely tied to fear of missing out (FoMO), fueled by constant connectivity and the compulsive consumption of inflammatory and polarizing content. In such digital spaces, repeated exposure reinforces extreme views and ideological rigidity (Holt et al., 2022). Technology companies frequently rationalize their lack of oversight by invoking Section 230 of the U.S. Communications Decency Act (1996), which shields them from legal liability for user-generated content while also allowing broad discretion to remove or restrict content (i.e., censorship). This regulatory gap leaves the public vulnerable to AI-curated information, often without meaningful recourse, despite the significant societal harm caused by unmoderated, polarizing rhetoric.

Anonymity and identity fluidity are central to these dynamics. While social media offers social capital and connectivity, it also facilitates behaviors that would be less common in face-to-face contexts. Increased anonymity may contribute to an erosion of privacy (Marwick, 2023) as well as encourage users to engage in morally questionable behaviors. The online disinhibition effect describes how individuals shed social restraints in digital settings, sometimes leading to deindividuation and antisocial conduct (Barton & Laffan, 2024; Connolly, 2024). This disinhibition can escalate into moral disengagement and trolling, especially when users attribute their behavior to perceived authority figures or prevailing group norms (Bandura, 2016; Wu et al., 2023). This can be thought of in the context of social influence theory, where online herding leads people to conform to otherwise irrational attitudes and opinions, as well as to deleterious stereotyping of others they observe in digital environments (Muchnik et al., 2013; Noble, 2018).

The broader structural forces driving these trends are rooted in the commodification of digital life. Our online activity has been monetized through the privatization of the internet, where large tech companies are incentivized—both socially and financially—to extract value from our personal data. Although users technically grant consent to access their data, this permission is often obtained through manipulative interface design known as “dark patterns” (Feldman, 2018; Nouwens et al., 2020). These patterns are intentionally engineered to nudge users toward “accepting all” terms or to overlook opt-out options buried deep in the interface.

This dynamic exemplifies technocapitalism—the fusion of technological innovation with profit maximization—which thrives within corporate-controlled digital systems (Kellner, 2021; Suarez-Villa, 2009). In such environments, people have effectively become the product, and algorithms increasingly determine what we see, believe, and ultimately share with others.

“Zeus would have no choice…but to one day destroy a humanity incapable of restraining its own, technologically induced, power” (Varoufakis, 2023, p. 10).

It has become increasingly difficult to distinguish genuine connection from AI-generated and socially engineered manipulation. Social media is designed to exploit our basic human emotions and psychological vulnerabilities, amplifying extremist voices that reinforce the perceived safety of retreating to our digital silos. Overcoming this reality demands that we choose critical thinking over conformity, empathy over aggression, and dialogue over detachment.

To meet this challenge, we must actively promote digital literacy—equipping people to recognize both overt and subtle forms of bias embedded in user-generated social media content, which gets artificially elevated by algorithms in pursuit of corporate profit over social responsibility. Just as importantly, we need to reimagine how we each confront our blind spots and challenge ourselves about assumptions and worldviews we hold to be objective truths rather than subjective, albeit not necessarily malicious, opinions and perspectives. This is especially urgent now, given the ubiquity of social media and the opportunities it offers to engage with diverse voices and cultures across the globe. If we harness these tools purposefully and thoughtfully, we can begin to transform our current digital captivity into a platform for collective growth, leveraging technologies like AI to expand human potential rather than diminish it.

References

Bandura, A. (2016). Moral disengagement: How people do harm and live with themselves. Worth Publishers.

Barton, H., & Laffan, D. A. (2024). The dark side of the internet. In G. Kirwan, I. Connolly, H. Barton, & M. Palmer (Eds.), An introduction to cyberpsychology (2nd ed., pp. 75–90). Routledge.

Brenner, J. (2013). Glass houses: Privacy, secrecy, and cyber insecurity in a transparent world. Penguin.

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118

Communications Decency Act, 47 U.S.C. § 230 (1996).

Connolly, I. (2024). Young people and the internet. In G. Kirwan, I. Connolly, H. Barton, & M. Palmer (Eds.), An introduction to cyberpsychology (2nd ed., pp. 271–285). Routledge.

Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1(11), 769–771. https://doi.org/10.1038/s41562-017-0213-3

Debb, S.M. (2021). Enhancing online graduate education through interactive asynchronous instruction: Utilizing cyberpsychology to teach cyberpsychology. In Southeastern Teaching of Psychology Conference. Society for the Teaching of Psychology.

Debb, S. M., Haschke, K. J., & McClellan, M. K. (2022). Validation of the fear of missing out scale for use with African Americans in the United States. Cyberpsychology, Behavior, and Social Networking, 25(7), 439–449. https://doi.org/10.1089/cyber.2021.0151

Feldman, B. (2018, June 28). How big tech manipulates users into sacrificing privacy. New York Magazine. https://nymag.com/intelligencer/2018/06/how-big-tech-manipulates-users-into-sacrificing-privacy.html

Fullwood, C. (2019). Impression management and self-presentation online. In A. Attrill-Smith, C. Fullwood, M. Keep, & D. J. Kuss (Eds.), The Oxford Handbook of Cyberpsychology (pp. 37–52). Oxford University Press.

Hadnagy, C. (2011). Social engineering: The art of human hacking. Wiley.

Hamilton, R. J. (2021). Governing the global public square. Harvard International Law Journal, 62, 117–174. https://journals.law.harvard.edu/ilj/2021/04/governing-the-global-public-square/

Harari, Y.N. (2018). 21 lessons for the 21st century. Penguin Random House.

Harari, Y.N. (2024). Nexus: A brief history of information networks from the stone age to AI. Penguin Random House.

Holt, T., Bossler, A., & Seigfried-Spellar, K. (2022). Cybercrime and digital forensics: An introduction (3rd ed.). Routledge.

Hong, S.H. (2020). Technologies of speculation: The limits of knowledge in a data-driven society. New York University Press.

Jaidka, K., Zhou, A., Lelkes, Y., Egelhofer, J., & Lecheler, S. (2022). Beyond anonymity: Network affordances, under deindividuation, improve social media discussion quality. Journal of Computer-Mediated Communication, 27(1), zmab019. https://doi.org/10.1093/jcmc/zmab019

Kellner, D. (2021). Technology and democracy: Toward a critical theory of digital technologies, technopolitics, and technocapitalism. Springer.

Laidlaw, E.B. (2015). Regulating speech in cyberspace: Gatekeepers, human rights and corporate responsibility. Cambridge University Press.

Manosevitch, I., & Tenenboim, O. (2016). The multifaceted role of user-generated content in news websites: An analytical framework. Digital Journalism, 5(6), 731-752. https://doi.org/10.1080/21670811.2016.1189840

Marwick, A.E. (2023). The private is political: Networked privacy and social media. Yale University Press.

Muchnik, L., Aral, S., & Taylor, S. J. (2013). Social influence bias: A randomized experiment. Science, 341(6146), 647–651. https://doi.org/10.1126/science.1240466

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Nouwens, M., Liccardi, I., Veale, M., Karger, D. R., & Kagal, L. (2020). Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376321

Omar, A.S., & Ondimu, K.O. (2024). The impact of social media on society: A systematic literature review. The International Journal of Engineering & Science, 13(6), 96-106. https://doi.org/10.9790/1813-130696106

Orchard, L. J. (2019). Uses and gratifications of social media: Who uses it and why? In A. Attrill-Smith, C. Fullwood, M. Keep, & D. J. Kuss (Eds.), The Oxford Handbook of Cyberpsychology (pp. 389–406). Oxford University Press.

Ryan, R.M., & Deci, E.L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68  

Suarez-Villa, L. (2009). Technocapitalism: A critical perspective on technological innovation and corporatism. Temple University Press.

Suler, J.R. (2016). Psychology of the digital age: Humans become electric. Cambridge University Press.

Varoufakis, Y. (2023). Technofeudalism: What killed capitalism. Melville House.

Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. https://doi.org/10.1177/009365096023001001

Wu, B., Xiao, Y., Zhou, L., Li, F., & Liu, M. (2023). Why individuals with psychopathy and moral disengagement are more likely to engage in online trolling? The online disinhibition effect. Journal of Psychopathology and Behavioral Assessment, 45(2), 322–332. https://doi.org/10.1007/s10862-023-10028-w

Wynn-Williams, S. (2025). Careless people: A cautionary tale of power, greed, and lost idealism. Flatiron Books.