Cyber security and privacy are areas of concern not only to organizations but also to individual end users. Computer scientists can devise algorithms and security measures that help protect organizations and systems from attack, but the success of these measures ultimately depends on the users. In 2012, the first author and his graduate student at the time, Jing Chen, now Assistant Professor at Old Dominion University, began a collaboration with Ninghui Li of Computer Science at Purdue and his graduate student, Chris Gates, now Technical Director at Symantec. The project on which we collaborated turned out to be the first of many in the area of cyber security and privacy. Various students in psychology and computer science have passed through the research group in the ensuing years, both contributing to the research and gaining experience on an interdisciplinary team, and another Computer Science faculty member, Jeremiah Blocki, joined in 2016. Our research program has been productive because of this blending of computer science and psychology.
Malware and Phishing Attacks
The first project we conducted (Gates, Li, Chen, & Proctor, 2012) involved implementation and usability evaluation of CodeShield, a personalized application-whitelisting system designed to block malware attacks. Rather than merely warning users when a potentially dangerous action occurs and allowing them to continue, CodeShield blocks the action and requires the user to make a more effortful decision on a separate installation interface to accept it. For the user study, participants ran CodeShield on their laptop computers for 6 weeks. Results showed that they generally understood and accepted the model implemented in the software.
In more recent work we have focused specifically on phishing attacks, in which spoofed online communications (e.g., emails or instant messages) are used to lure users into providing personal information to an illegitimate online party. Such attacks have had, and continue to have, negative impacts on individuals, organizations, and society. Examination of the human information processing of the adversary and the user suggests that there is an asymmetry of information that is crucial to the success of phishing scams (Xiong, Proctor, & Li, 2020). The attackers are experts who keep up-to-date on the latest security vulnerabilities to take advantage of users’ response tendencies and heuristics. When security software classifies a webpage as suspicious, a warning is typically displayed. But users tend to ignore such warnings, as we found in a field experiment (Yang, Xiong, Chen, Proctor, & Li, 2017). In that study, we simulated a phishing campaign and presented a phishing warning to participants through a Chrome extension installed on their laptop computers. About half of the participants who received only the warning went on to enter their username and password, whereas none of the participants who had also received prior training about phishing fell for the attack. Consequently, we have advocated an approach in which training is embedded in the warnings themselves, by highlighting and explaining the cues to which the user needs to attend, and we have provided evidence that such training-embedded warnings improve users’ detection of phishing webpages (Xiong, Proctor, Yang, & Li, 2019).
Risk Communication for Mobile App Selection
Much of our research over several years has centered on how to get mobile device users to take privacy risk/safety into account when choosing which of several applications (apps) with similar functionality to install on their devices. The main information provided for Android devices is a list of permissions requested by an app. These permissions are difficult to comprehend, and most users do not consider them. We have taken an approach of presenting risk information at three levels of granularity. The lowest level is a summary risk index, displayed as the number of filled circles out of five (similar to the user ratings of one to five stars), based on an estimate of the total privacy risk associated with an app. The intermediate level displays the risk for three major categories, and the highest level is that of the permissions themselves.
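As a hypothetical illustration of the lowest level of granularity, the display logic can be sketched as a mapping from a normalized score onto five filled/empty circles. The scoring itself is a placeholder here, not the actual risk estimate used in the published studies, and the "safety" orientation (higher is safer) reflects the framing discussed below:

```python
# Hypothetical sketch: render an app's estimated safety score
# (0.0 = least safe, 1.0 = safest) as five filled/empty circles,
# analogous to one-to-five star user ratings. The score itself is
# a placeholder input, not the authors' actual risk model.
import math

def safety_index(score: float, levels: int = 5) -> str:
    """Render a normalized safety score in [0, 1] as filled/empty circles."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    filled = max(1, math.ceil(score * levels))  # show at least one filled circle
    return "●" * filled + "○" * (levels - filled)

print(safety_index(0.72))  # -> ●●●●○
```

In practice, such a summary index would sit above the intermediate category display, letting users drill down only when they want more detail.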
Most of our efforts in app selection have been devoted to the summary risk index. In several studies, we have demonstrated that providing summary risk information influences participants’ app choices, though less than user ratings do (Chen, Gates, Li, & Proctor, 2015; Gates, Chen, Li, & Proctor, 2014). A consistent finding in those studies is that the impact of such summary information is greater when it is framed as the amount of safety rather than the amount of risk. This may in part be because safety is a more unidimensional concept than risk. Another finding is that priming participants prior to the selection task, with either ratings of agreement/disagreement with self-relevant privacy statements or presentation of factual information about app permission requests, also causes the participants to weight safety more heavily in their app-selection decisions (Chong, Ge, Li, & Proctor, 2018; Rajivan & Camp, 2016). Perhaps most significantly, Chong et al. (2018) showed that even a single item from either prime category is sufficient to increase the weighting of the summary privacy index in the decision.
For the middle level of granularity, we determined that most risks associated with mobile device use fall into three categories: personal privacy, monetary loss, and device stability (Jorgensen et al., 2015). In subsequent experiments, participants performed app risk-rating and app-selection tasks with various graphical formats for displaying the risks associated with the three categories (Chen et al., 2018). In general, the intermediate-level risk displays, particularly horizontal bar graphs, were effective at conveying the risk-category information. Moreover, individual participants’ ratings of the importance of the three risk categories correlated positively with the influence of the respective categories on their decisions. With regard to permissions, we have compared the effectiveness of alternative interfaces employed in versions 5 and 6 of Android (Moore, Ge, Li, & Proctor, 2019). The former informs users of all permissions requested at the time the app is downloaded, whereas the latter informs users of each permission on its first use by the downloaded app. Participants indicated a preference for the Android 6 interface, but their performance was no better with that interface than with Android 5.
Password Generation and Retrieval
A third research area on which we have concentrated is improving users’ password behaviors through training and interface design. We have assessed multiple methods of training users to create effective passwords, including, but not limited to, mnemonic strategies that result in memorable but secure passwords (Yang, Li, Chowdhury, Xiong, & Proctor, 2016) and stringing together person, action, and object words (the PAO strategy; Blocki, Komanduri, Cranor, & Datta, 2015). In currently unpublished work, the PAO strategy has been found to be particularly useful in promoting later recall. Although mnemonic strategies yielded better security, participants’ long-term recall was not ideal (Yang et al., 2016). Thus, we have also started investigating the memory errors made by participants to gain insights into how to improve their mnemonic strategies.
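The core of the PAO idea can be sketched in a few lines. The word lists and function below are illustrative placeholders; the published scheme (Blocki et al., 2015) additionally pairs each person-action-object story with image cues and uses spaced-repetition rehearsal, which this sketch omits:

```python
# Minimal sketch of the Person-Action-Object (PAO) strategy: sample one
# word from each category and concatenate them into a password built on
# a memorable story (e.g., a person performing an action on an object).
# Word lists here are illustrative placeholders, not the published lists.
import secrets

PEOPLE = ["Einstein", "Beyonce", "Gandalf", "Frida"]
ACTIONS = ["juggling", "painting", "riding", "baking"]
OBJECTS = ["cactus", "trumpet", "submarine", "pancake"]

def pao_password() -> str:
    """Generate a password from a randomly sampled PAO story."""
    person = secrets.choice(PEOPLE)   # who
    action = secrets.choice(ACTIONS)  # does what
    obj = secrets.choice(OBJECTS)     # to what
    return person + action + obj

print(pao_password())  # e.g., "Gandalfjugglingpancake"
```

The memorability benefit comes from the story structure: recalling the vivid scene reconstructs the password, rather than the password having to be memorized as an arbitrary string.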
We have also begun to investigate how password interfaces might be altered to encourage more secure password generation by introducing privacy priming similar to that studied by Chong et al. (2018) in the context of app selection. Specifically, we have begun to investigate whether priming is more effective when it is introduced as a pop-up between sites or incorporated within the account-generation process. Our team is also investigating how password cues presented during generation can be reintroduced during recall to aid the user’s memory retrieval. Lastly, with respect to password training, one of the biggest issues is the outdated beliefs people hold about what exactly makes a password secure. We have collected data on this matter and hope to use this information to correct commonly held false beliefs.
Cyber security is an area of crucial concern for almost all applications of computer technology and media, and one to which basic and applied psychologists can contribute. The joint efforts of our team have demonstrated that analysis of human-centered interventions, informed by our distinct backgrounds in cognitive psychology, human factors, human-computer interaction, and computer science, provides effective ways to address a range of cyber security issues.
Acknowledgment: The research of our team has been supported in part by NSF awards no. 1314688 and 1704587 and by National Security Agency grants as part of a Science of Security lablet through North Carolina State University.
References
Blocki, J., Komanduri, S., Cranor, L., & Datta, A. (2015). Spaced repetition and mnemonics enable recall of multiple strong passwords. In 22nd Annual Network and Distributed System Security Symposium, NDSS 2015, San Diego, California, February 8-11, 2015.
Chen, J., Gates, C. S., Li, N., & Proctor, R. W. (2015). Influence of risk/safety information framing on Android app-installation decisions. Journal of Cognitive Engineering & Decision Making, 9, 149-168.
Chen, J., Ge, H., Moore, S., Yang, W., Li, N., & Proctor, R. W. (2018). Display of major risk categories for Android apps. Journal of Experimental Psychology: Applied, 24, 306-330.
Chong, I., Ge, H., Li, N., & Proctor, R. W. (2018). Influence of privacy priming and security framing on mobile app selection. Computers & Security, 78, 143-154.
Gates, C. S., Chen, J., Li, N., & Proctor, R. W. (2014). Effective risk communication for Android apps. IEEE Transactions on Dependable and Secure Computing, 11, 252-265.
Gates, C., Li, N., Chen, J., & Proctor, R. W. (2012). CodeShield: Towards personalized application whitelisting. In Proceedings of the 28th Annual Computer Security Applications Conference (ACSAC) 2012 (pp. 279-288). New York: ACM.
Jorgensen, Z., Chen, J., Gates, C. S., Li, N., Proctor, R. W., & Yu, T. (2015). Dimensions of risk in mobile applications: A user study. In Proceedings of the Fifth ACM Conference on Data and Application Security and Privacy (pp. 49-60). New York: ACM.
Moore, S. R., Ge, H., Li, N., & Proctor, R. W. (2019). Cybersecurity for Android applications: Permissions in Android 5 and 6. International Journal of Human-Computer Interaction, 35, 630-640.
Rajivan, P., & Camp, J. (2016). Influence of privacy attitude and privacy cue framing on Android app choices. In Proceedings of the Twelfth Symposium on Usable Privacy and Security (SOUPS). Berkeley, CA: USENIX Association.
Xiong, A., Proctor, R. W., & Li, N. (2020). Evolution of phishing attacks: Challenges and opportunities for humans to adapt to the ubiquitous connected world. In M. Mouloua & P. A. Hancock (Eds.), Human performance in automated and autonomous systems: Emerging issues and practical perspectives (pp. 237-257). Boca Raton, FL: CRC Press.
Xiong, A., Proctor, R. W., Yang, W., & Li, N. (2019). Effects of embedding anti-phishing training within cybersecurity warnings. Human Factors, 61, 577-595.
Yang, W., Li, N., Chowdhury, O., Xiong, A., & Proctor, R. W. (2016). An empirical study of password generation strategies. In CCS ’16 Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 1216-1229). New York: ACM.
Yang, W., Xiong, A., Chen, J., Proctor, R. W., & Li, N. (2017). Use of phishing training to improve security warning compliance: Evidence from a field experiment. In HotSoS Proceedings of the Hot Topics in Science of Security: Symposium and Bootcamp (pp. 52-61). New York: ACM.