An important theme at the intersection of media and psychology is how academic media influences the content of psychological research. Academic publishers’ interests may differ from those of psychologists. Because researchers’ published work is evaluated under institutional policies and by grant agencies, researchers may pursue topics that fit the interests of academic journals and set aside topics that interest them personally or that matter to the communities they serve.
A discussion of how academic media influences research needs to consider whether the researchers come from “large” or “small” cultures. A large culture (in terms of the chances of being cited) is one in which a large number of active researchers have sufficient knowledge to understand the terminology related to the culture’s phenomena or issues. Such understanding may be acquired by being raised in the culture or by frequently reading about its phenomena in textbooks and media. Either way, researchers familiar with a large culture can read a publication quickly, or glean the necessary information from the abstract, without spending much time deciphering the terminology and the cultural context surrounding the issue under investigation. American and Chinese mainstream cultures, in which large numbers of researchers are trained, are examples of large cultures. Conversely, a small culture, such as Bhutan or a small Native American nation, is one whose culture-specific terminology and contexts are relatively unknown to most active researchers in large cultures. Another example of a small culture, in this sense, might be that of a country like Korea, which has a large population of researchers but is still small relative to the overall international research community. Publications that focus on the specific issues of small cultures are difficult for most researchers in the larger mainstream cultures to understand, and those researchers are unlikely to spend the time needed to educate themselves about a small culture’s context in order to cite the research.
Researchers therefore face a conflict of interest when they want to investigate themes specific to small cultures. In many countries, academic publishers and researchers are evaluated by the number of citations their publications receive. Because institutions and grant agencies regard more citations as better, researchers and journals adopt strategies to increase citation counts. When it comes to publishing small-culture-specific research, such as research on the shimcheong emotion in Korea (Choi & Han, 2008), these strategies create a conflict of interest for both researchers and journals.
Mainstream research is often based on phenomena that occur in large cultures (e.g., the fundamental attribution error; Harvey, Town, & Yarkin, 1981). A phenomenon in a small culture, however, like the shimcheong emotion in Korea (Choi & Han, 2008), may have no close equivalent in large cultures. For such a culturally specific phenomenon, it is important that authors explain its cultural context so that readers in other cultures can better understand its significance. Unfortunately, because researchers often rely on abstracts to decide whether an article is worth reading further, and because abstracts rarely include such culturally contextual details, researchers are highly unlikely to read the entire article, appreciate its relevance to their own work, and cite it. Even readers willing to spend time learning about a culturally specific phenomenon may conclude incorrectly, from the abstract alone, that the phenomenon or issue discussed is irrelevant to their own culture and decide not to cite the article.
Researchers from small cultures in countries that use citation-based metrics to evaluate research face a conflict of interest when they study culturally specific phenomena that are important to their communities: such research may receive few citations, which in turn may lead to an unfavorable evaluation of their work. If, instead, small-culture researchers choose to study how a phenomenon documented in a large culture plays out in their own, they will receive many more citations and more favorable evaluations. Such research, however, is unlikely to enrich the diversity of international social science, and its application to serving their communities is limited.
A common topic in media research is the reciprocal relationship between media content and audience behavior: audience behavior, measured, for example, by the number of clicks or views, influences what content is published and read. This relationship is also clear in academic media, where the cultural content or relatedness of a study can affect its number of citations, which in turn influences where researchers and readers focus their attention. It is therefore important to study how authors’ and journals’ behavior might be changed to increase the cultural diversity of media content.
The reciprocal connection between media content and audience behavior is mediated by the citation metrics used to evaluate work and by the revenues those metrics generate for authors and journals. Many countries that use citation metrics and impact factors base their evaluations on databases such as Web of Science and Scopus. To break the link between academic media evaluation and the low cultural diversity of published content, this system of evaluation needs to change. Because it is unrealistic to expect institutions to stop using these databases, we might instead try to change the databases’ content.
Current database metrics tend to be based on citation counts. They could, however, be enriched with an automatically computed, diversity-based metric that allows cultural diversity to be evaluated. Both Web of Science and Scopus contain only one field related to cultural diversity: the language of the publication. It would therefore be beneficial to add to these databases a scientometric measure of diversity that can be computed from information about publication language.
The Linguistic Diversity Index (LDI) is an example of such a metric. The LDI rates the number and rarity of the languages of the sources cited in a publication’s reference list: the more languages cited, and the rarer they are, the higher the LDI. Because the LDI is based on language information already present in the databases, their providers need only add a new algorithm to compute it. More details about this measure appear in Linkov, O’Doherty, Choi, and Han (2021), and a petition for its adoption can be signed at http://chng.it/rWBXCvC6.
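The formula for the LDI is not reproduced here, so the following is only a toy sketch of how a language-rarity-weighted index of this kind might be computed, assuming rarity is the inverse of a language’s share among indexed publications; the published measure (Linkov et al., 2021) may define and weight rarity differently. The function names, the language codes, and the corpus figures are all hypothetical.

```python
from collections import Counter

def language_rarity(corpus_languages):
    """Hypothetical rarity weight per language: the inverse of its
    share among all indexed publications (rarer language -> larger weight)."""
    counts = Counter(corpus_languages)
    total = sum(counts.values())
    return {lang: total / n for lang, n in counts.items()}

def ldi(reference_languages, rarity):
    """Toy diversity score for one publication's reference list:
    the sum of rarity weights over the *distinct* languages cited.
    Citing more languages, and rarer ones, yields a higher score."""
    return sum(rarity.get(lang, 0.0) for lang in set(reference_languages))

# Hypothetical corpus of 100 indexed publications: English dominates,
# while Korean (ko) and Dzongkha (dz) sources are rare.
corpus = ["en"] * 90 + ["zh"] * 6 + ["ko"] * 3 + ["dz"] * 1
rarity = language_rarity(corpus)

# A reference list citing only English sources scores far lower than one
# that also cites Korean- and Dzongkha-language sources.
print(ldi(["en", "en", "en"], rarity))
print(ldi(["en", "ko", "dz"], rarity))
```

Under this sketch, the index rewards exactly the behavior the text describes: a reference list that reaches beyond the dominant publication language receives a higher score, because each additional distinct language contributes a weight proportional to how rarely it appears in the indexed corpus.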
Adding LDI or other diversity-based metrics to Web of Science, Scopus, and other scientometric databases may help increase the space devoted to culturally diverse research in academic media. This is necessary to help researchers, institutions, and funding agencies value the inclusion of culturally diverse research in their work and make the work reported in rarely cited investigations more visible.
Choi, S.-Ch., & Han, G. (2008). Shimcheong psychology: A case of an emotional state for cultural psychology. International Journal for Dialogical Science, 3, 205–224.
Harvey, J. H., Town, J. P., & Yarkin, K. L. (1981). How fundamental is “the fundamental attribution error”? Journal of Personality and Social Psychology, 40(2), 346–349.
Linkov, V., O’Doherty, K., Choi, E., & Han, G. (2021). Linguistic Diversity Index: A scientometric measure to enhance the relevance of small and minority group languages. SAGE Open. https://doi.org/10.1177/21582440211009191
(Co-Editors’ Note: Václav Linkov received the 2021 Distinguished Early Career Contributions Award to Media Psychology and Technology from the APA Society for Media Psychology and Technology)