Carnegie Mellon University

Jaspreet Bhatia

Address
WEH 4208
5000 Forbes Avenue
Pittsburgh, PA 15213

Bio

I am a third-year Ph.D. student at the Institute for Software Research at Carnegie Mellon University. I am fortunate to be advised by Dr. Travis Breaux.

I am broadly interested in applying natural language processing techniques to solve software engineering problems. More specifically, my research focuses on developing techniques to extract and analyze privacy requirements using crowdsourcing, natural language processing, and user studies.

During my Ph.D. so far, I have developed a hybridized task re-composition framework that semi-automatically extracts privacy goals describing a website's data practices. I have also worked on developing a theory of vagueness and privacy risk perception with our collaborators, Prof. Joel Reidenberg and Dr. Thomas Norton.

I have also been working on developing an empirically validated framework for understanding and measuring perceived privacy risk.

I am currently also working on a framework to extract and analyze the data practices described in privacy policies using frames. These frames store the semantic information extracted from each data practice in a structured format, enable question answering about the privacy policy, and help users and regulators better understand a website's data practices.

Projects

Privacy Goal Mining 

Privacy policies describe high-level goals for corporate data practices. We have developed a semi-automated framework that combines crowdworker annotations, typed dependency parses, and a reusable lexicon to improve the coverage, precision, and recall of goal extraction. Our results show that no single element of the framework is sufficient on its own to extract goals; however, the framework as a whole compensates for each element's limitations. Human annotators are highly adaptive at discovering annotations in new texts, but their annotations can be inconsistent and incomplete; dependency parsers lack sophisticated, tacit knowledge, but they can exhaustively search text for prospective requirements indicators; and while the lexicon may never completely saturate, its terms can be reliably used to improve recall.
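
As a rough illustration of how typed dependency parses and a lexicon can work together, the sketch below uses spaCy to flag verb-object pairs whose verb matches a small, hypothetical lexicon of data-practice actions. It is only a sketch of the idea under those assumptions; the framework, crowdworker annotations, and lexicon described in the paper are more elaborate.

# Minimal sketch: flag candidate privacy-goal statements by matching
# verb-object dependencies against a small action lexicon (hypothetical
# terms; the paper's lexicon is larger and empirically grown).
import spacy

nlp = spacy.load("en_core_web_sm")

ACTION_LEXICON = {"collect", "share", "use", "disclose", "store", "retain"}

def candidate_goals(policy_text):
    """Return (action, information-type head noun) pairs found via dependency parses."""
    doc = nlp(policy_text)
    candidates = []
    for token in doc:
        # Direct objects of lexicon verbs are prospective information types.
        if token.dep_ == "dobj" and token.head.lemma_.lower() in ACTION_LEXICON:
            candidates.append((token.head.lemma_, token.text))
    return candidates

print(candidate_goals("We may collect your email address and share your location with partners."))
# e.g. [('collect', 'address'), ('share', 'location')]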

[Paper]

This work is supported by NSF Frontier Award #1330596. For more details about this project, please visit our Usable Privacy Project Website.

Vagueness in Privacy Policies 

Vagueness undermines the ability of organizations to align their privacy policies with their data practices, which can confuse or mislead users and thereby increase privacy risk. We have developed a theory of vagueness for privacy policy statements based on a taxonomy of vague terms derived from an empirical content analysis of privacy policies. The taxonomy was evaluated in a paired-comparison experiment, and the results were analyzed using the Bradley-Terry model to yield a rank order of vague terms both in isolation and in composition. The theory predicts how vague modifiers to information actions and information types can be composed to increase or decrease overall vagueness. We further provide empirical evidence, based on factorial vignette surveys, that increases in vagueness decrease users' acceptance of privacy risk and thus their willingness to share personal information.
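
For readers unfamiliar with the Bradley-Terry model: given paired-comparison counts (how often term i was judged vaguer than term j), it estimates a latent score per term, and the maximum-likelihood scores can be found with a simple fixed-point iteration. The sketch below uses placeholder terms and made-up win counts purely for illustration; it is not the analysis code or data from the paper.

# Minimal Bradley-Terry sketch: rank terms from paired-comparison wins.
# The terms and win counts below are placeholders, not data from the study.
import numpy as np

terms = ["generally", "may", "as needed", "periodically"]
# wins[i, j] = number of times terms[i] was judged vaguer than terms[j]
wins = np.array([[0, 7, 9, 8],
                 [3, 0, 6, 5],
                 [1, 4, 0, 4],
                 [2, 5, 6, 0]], dtype=float)

p = np.ones(len(terms))            # latent vagueness scores
for _ in range(100):               # standard fixed-point (MM) updates
    for i in range(len(terms)):
        num = wins[i].sum()
        den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                  for j in range(len(terms)) if j != i)
        p[i] = num / den
    p /= p.sum()                   # normalize for identifiability

for term, score in sorted(zip(terms, p), key=lambda t: -t[1]):
    print(f"{term}: {score:.3f}")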

[Paper]

This work is supported by NSF Award #1330596, NSF Award #1330214, and NSA Award #141333.

Understanding and Measuring Perceived Privacy Risk 

Personal data is increasingly collected and used by companies to tailor services to users and to make financial, employment, and health-related decisions about individuals. When personal data is inappropriately collected or misused, however, individuals may experience violations of their privacy. Despite the recent shift toward a risk-managed approach to privacy, there are, to our knowledge, no empirical methods to determine which personal data is most at risk. We conducted a series of experiments to measure perceived privacy risk, which is based on expressed preferences and which we define as an individual's willingness to share their personal data with others given the likelihood of a potential privacy harm. These experiments control for one or more of six factors affecting an individual's willingness to share their information: data type, discomfort associated with the data type, data purpose, privacy harm, harm likelihood, and individual demographic factors such as age range, gender, education level, ethnicity, and household income. To measure likelihood, we adapt Construal Level Theory from psychology to frame individual attitudes about risk likelihood based on social and physical distance to the privacy harm. The findings include predictions about the extent to which the above factors correspond to risk acceptance, including that perceived risk is lower for induced-disclosure harms than for surveillance and insecurity harms, as defined in Solove's Taxonomy of Privacy. In addition, we found that likelihood was not a multiplicative factor in perceived privacy risk, which challenges conventional concepts of privacy risk in the privacy and security community.
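
The conventional model that this finding challenges treats risk as the product of likelihood and impact (risk = likelihood x impact), so responses would be expected to scale with a likelihood-by-harm interaction. As a hedged illustration of that kind of test (not the study's actual analysis or data), the sketch below fits an additive model and an interaction model to synthetic vignette responses and compares them.

# Illustrative only: synthetic vignette responses, not data from the study.
# Compares an additive model with one including a likelihood x harm
# interaction; a negligible interaction term argues against a purely
# multiplicative account of perceived privacy risk.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "likelihood": rng.choice([0.25, 0.5, 0.75, 1.0], size=n),  # e.g. framed via construal distance
    "harm": rng.choice([1, 2, 3], size=n),                     # e.g. coded harm severity
})
# Placeholder response: willingness to share, generated with additive structure plus noise.
df["willingness"] = 5 - 1.5 * df["likelihood"] - 0.8 * df["harm"] + rng.normal(0, 1, n)

additive = smf.ols("willingness ~ likelihood + harm", data=df).fit()
interaction = smf.ols("willingness ~ likelihood * harm", data=df).fit()

print(additive.aic, interaction.aic)               # compare model fit
print(interaction.params["likelihood:harm"])       # interaction estimate (near zero for this synthetic data)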

[Paper]

This work is supported by NSF Award #1330596.