Instructional Awareness: A User-Centred Approach for Risk Communication in Social Network Sites


Users of Social Network Sites (SNSs) like Facebook or Twitter often find it hard to foresee the negative consequences of sharing private information on the Internet. Hence, many users suffer unwanted incidents such as identity theft, reputation damage, or harassment after their private information reaches an unintended audience. Many efforts have been made to develop preventative technologies (PTs) with the purpose of raising the level of privacy awareness among the users of SNSs. Basically, these technologies generate interventions (e.g. warning messages) when users attempt to disclose private or sensitive information inside these platforms. However, users do not fully engage with PTs because they often perceive their interventions as too invasive or annoying. This happens largely because users have different privacy concerns and attitudes that should be considered when generating such interventions. In other words, some users are less concerned about their privacy than others and, consequently, are more willing to disclose private information without caring much about the consequences. Therefore, PTs should incorporate adaptivity principles into their design in order to successfully nudge users towards better privacy practices.
This thesis focuses on the development of an adaptive approach for generating privacy awareness in SNSs, particularly on the elaboration of software artefacts for communicating the privacy risks that may arise when disclosing private information in SNSs. Overall, this covers two main aspects: knowledge extraction and knowledge application. Artefacts for knowledge extraction include the data structures and methods necessary to represent and elicit risky self-disclosure scenarios in SNSs. In this work, privacy heuristics (PHs) are introduced as an alternative for representing such scenarios and as fundamental instruments for the generation of adaptive privacy awareness. Alongside, the artefacts corresponding to knowledge application comprise the methods and algorithms that leverage the information contained in PHs to shape the corresponding interventions. This includes methods for estimating the risk impact of a self-disclosure act and mechanisms for regulating the content and frequency of warning messages. All of these artefacts interact within a conceptual framework that this thesis calls Instructional Awareness.

University of Duisburg-Essen, Department of Computer Science and Applied Cognitive Science,