The Internet and New Media (INM) fundamentally alter the landscape of influence and persuasion in three major ways.
First, the ability to influence is now democratized: any individual or group can communicate with and influence large numbers of others online in a way that would have been prohibitively expensive in the pre-Internet era. It is also significantly more quantifiable, in that data from the INM can be used to measure the response of crowds to influence efforts and the impact of those operations on the structure of the social graph. Finally, influence is far more concealable, in that users may be influenced by information provided by anonymous strangers, or even by the design of a web interface itself.
While much time is spent discussing the technical security of computer networks, less well examined is the extent to which persistent computer networks also create threats to the security of social systems. Operations through the INM can be used to influence, misinform, manipulate, and erode trust among targeted communities and within the public at large. Alternatively, they can be used to build trust among members, inhibit susceptibility to false information, and enhance awareness of malicious third parties trying to exert influence. In short, the INM enables the large-scale "hacking" of group behavior, for good or for ill.
This symposium will convene a diverse group of experts relevant to the broad area of cognitive security (“CogSec”) that includes the development of methods that (1) detect and analyze cognitive vulnerabilities and (2) block efforts that exploit cognitive vulnerabilities to influence collective action at multiple scales.
CogSec embraces traditional messaging efforts, but also includes more sophisticated tactics such as the distribution of INM tools to encourage (or inhibit) certain types of social behavior, the manipulation of distributed group social dynamics, and the long-term, targeted shaping of social network topology to produce certain outcomes.
Our format will diverge from the traditional AAAI symposium structure. We envision the first day focusing on talks, with the remaining two days devoted to free-form working groups developing short papers relevant to the following topics:
- A Statement of the Field of Cognitive Security
- Methods of Cognitive Vulnerability Analysis and Modeling
- A Defense Doctrine of Cognitive Security
- Design Principles for Effective Network Shaping
- A Code of Ethics of Social Shaping and Social Hacking
Opening Day Schedule (March 24)
We intend for the first day of talks to be highly conversational, and feature a substantial amount of group discussion. At the moment, we have confirmed the following series of speakers for the first day of the symposium:
9:00 - 9:30 -- Welcome
- Tim Hwang, Pacific Social Architecting Corporation
9:30 - 11:00 -- Network Theory and Cognitive Security
The last decade has seen profound advances in the understanding of social networks and group dynamics. Will the study of the shaping of social phenomena transition toward an applied, quantitative science of influence going forward? This session brings together researchers working to understand issues of misinformation and resilience to discuss the current and likely future state of the research.
- Anil Vullikanti, Network Dynamics and Simulation Science Laboratory, Virginia Tech Bioinformatics Institute
- Huan Liu, School of Computing, Informatics and Decision Systems Engineering, ASU
11:00 - 12:30 -- Case Study: Bots
Swarms of automated identities that look and behave like real users on social platforms represent a potent potential tool for would-be cognitive hackers. This session describes current research on the ability of bots to shape behavior online, and the implications for privacy and data security.
12:30 - 2:00 -- Lunch
2:00 - 3:30 -- Applications and Scenarios
How much of a threat do these “cognitive vulnerabilities” actually present in practice? Who might use these methods to forward their interests? Whom might they use them against? This session describes actual efforts to use these methods to shape the behavior of populations, and investigates the potential scale and scope of similar efforts going forward.
- Piotr Sapiezynski, SensibleDTU
- Paulo Shakarian, US Military Academy, West Point Network Science Center
4:00 - 5:30 -- Cognitively Secure Systems
How might we design systems that are robust against the potential attacks and vulnerabilities identified throughout the day? This session explores this fundamental design problem by examining some examples that have succeeded in producing more robust and secure communities to date, and opens the discussion of how we might respond to the scenarios examined in the previous session.
- L. Jean Camp, Indiana University - School of Informatics and Computing
- Andrés Hernandez-Monroy, FUSE Labs - Microsoft Research
Second day (March 25)
9:00 - 9:30 -- Predicting Susceptibility to Social Bots on Twitter
- Chris Sumner, The Online Privacy Foundation
Are some Twitter users more naturally predisposed to interacting with social bots, and can social bot creators exploit this knowledge to increase the odds of getting a response?
Social bots are growing more intelligent, moving beyond simple reposts of boilerplate ad content to attempt to engage with users and then exploit this trust to promote a product or agenda. While much research has focused on identifying such bots as part of spam detection, less research has looked at the other side of the question—detecting users likely to be fooled by bots. This talk provides a summary of research and developments in the social-bot arms race before sharing results of our experiment examining user susceptibility.
We find that a user’s Klout score, friends count, and followers count are most predictive of whether a user will interact with a bot, and that the Random Forest algorithm produces the best classifier when used in conjunction with appropriate feature ranking algorithms. With this knowledge, social bot creators could significantly reduce the chance of targeting users who are unlikely to interact.
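The approach described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the feature names (Klout score, friends count, followers count) come from the abstract, but the synthetic data, thresholds, and parameters are hypothetical, and the feature ranking here uses the Random Forest's own impurity-based importances as a simple stand-in for whatever ranking method the researchers employed.

```python
# Hypothetical sketch: predicting whether a user will interact with a
# social bot, using a Random Forest over profile features, then ranking
# those features by importance. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Illustrative profile features (scales are made up):
X = np.column_stack([
    rng.uniform(0, 100, n),   # klout_score
    rng.poisson(300, n),      # friends_count
    rng.poisson(400, n),      # followers_count
])
names = ["klout_score", "friends_count", "followers_count"]

# Toy label: higher-scoring users "interact" more often (synthetic rule).
y = (X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 20, n) > 70).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features from most to least predictive.
ranking = np.argsort(clf.feature_importances_)[::-1]
print([names[i] for i in ranking])
```

A bot operator could apply such a classifier to score candidate accounts and skip those predicted unlikely to respond; conversely, a defender could use the same scores to target awareness training at the most susceptible users.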
Users displaying higher levels of extraversion were more likely to interact with our social bots. This may have implications for eLearning-based awareness training, as users higher in extraversion have been shown to perform better when they have greater control of the learning environment.
9:30 - 10:00 -- Information Extraction and Manipulation Threats to Crowd-Powered Systems
- Walter S. Lasecki, Carnegie Mellon University and University of Rochester
10:30 - 5:00 -- Working Groups
For further information and other inquiries, please contact the symposium organizers Tim Hwang (tim[at]pacsocial.com) and Xiaowei Wang (xiaowei[at]pacsocial.com).
The organizers would like to thank the Microsoft Technology Policy Group for their generous support of the symposium.