Bio
I am a third-year Ph.D. student at the Institute for Software Research at Carnegie Mellon University. I am fortunate to be advised by Dr. Travis Breaux.
I am broadly interested in applying natural language processing techniques to solve software engineering problems. More specifically, my research focuses on developing techniques to extract and analyze privacy requirements using crowdsourcing, natural language processing, and user studies.
During my Ph.D. so far, I have developed a hybridized task re-composition framework that semi-automatically extracts privacy goals describing a website's data practices. I have also worked with our collaborators, Prof. Joel Reidenberg and Dr. Thomas Norton, on developing a theory of vagueness and privacy risk perception.
I have been working on developing an empirically validated framework for understanding and measuring perceived privacy risk.
Privacy Goal Mining
Privacy policies describe high-level goals for corporate data practices. We have developed a semi-automated framework that combines crowdworker annotations, natural language typed dependency parses, and a reusable lexicon to improve goal-extraction coverage, precision, and recall. Our results show that no single framework element alone is sufficient to extract goals; however, the overall framework compensates for elemental limitations. Human annotators are highly adaptive at discovering annotations in new texts, but those annotations can be inconsistent and incomplete; dependency parsers lack sophisticated, tacit knowledge, but they can perform exhaustive text search for prospective requirements indicators; and while the lexicon may never completely saturate, the lexicon terms can be reliably used to improve recall.
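To make the combination of lexicon and dependency parses concrete, here is a minimal illustrative sketch (not the authors' implementation): a hypothetical lexicon of data-practice verbs is matched against typed dependency parses to flag prospective goal statements. The parse is hard-coded for illustration; in practice it would come from a dependency parser.

```python
# Hypothetical lexicon of verbs that often signal data practices.
# (These terms are assumptions for illustration, not the project's lexicon.)
PRACTICE_VERBS = {"collect", "share", "use", "disclose", "retain"}

def extract_candidate_goals(parse):
    """Scan (token, lemma, dep, head_index) tuples for a lexicon verb
    with a direct object, yielding (action, object) candidates."""
    candidates = []
    for token, lemma, dep, head in parse:
        # A "dobj" arc whose head verb is in the lexicon suggests a
        # prospective data-practice statement worth human review.
        if dep == "dobj" and parse[head][1] in PRACTICE_VERBS:
            candidates.append((parse[head][1], lemma))
    return candidates

# Toy typed dependency parse of "We collect your email address."
parse = [
    ("We", "we", "nsubj", 1),
    ("collect", "collect", "ROOT", 1),
    ("your", "your", "poss", 4),
    ("email", "email", "compound", 4),
    ("address", "address", "dobj", 1),
]

print(extract_candidate_goals(parse))  # [('collect', 'address')]
```

The exhaustive, rule-based scan mirrors the role the framework assigns to parsers: high-coverage search for indicators, with human annotators resolving what the automated pass cannot.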
This work is supported by NSF Frontier Award #1330596. For more details about this project, please visit our Usable Privacy Project Website.
Vagueness in Privacy Policies
This work is supported by NSF Award #1330596, NSF Award #1330214 and NSA Award #141333.
Understanding and Measuring Perceived Privacy Risk
Personal data is increasingly collected and used by companies to tailor services to users, and to make financial, employment and health-related decisions about individuals. When personal data is inappropriately collected or misused, however, individuals may experience violations of their privacy. Despite the recent shift toward a risk-managed approach for privacy, there are to our knowledge no empirical methods to determine which personal data is most at risk. We conducted a series of experiments to measure perceived privacy risk, which is based on expressed preferences and which we define as an individual's willingness to share their personal data with others given the likelihood of a potential privacy harm. These experiments control for one or more of the six factors affecting an individual's willingness to share their information: data type, discomfort associated with the data type, data purpose, privacy harm, harm likelihood, and individual demographic factors such as age range, gender, education level, ethnicity and household income. To measure likelihood, we adapt Construal Level Theory from psychology to frame individual attitudes about risk likelihood based on social and physical distances to the privacy harm. The findings include predictions about the extent to which the above factors correspond to risk acceptance, including that perceived risk is lower for induced disclosure harms when compared to surveillance and insecurity harms as defined in Solove's Taxonomy of Privacy. In addition, we found that likelihood was not a multiplicative factor in computing privacy risk perception, which challenges conventional concepts of privacy risk in the privacy and security community.
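The conventional model treats risk as likelihood multiplied by impact; the finding above suggests perceived privacy risk does not behave this way. The sketch below contrasts the multiplicative model with an additive alternative using hypothetical numbers and arbitrary weights (neither the data nor the weights come from the study):

```python
def multiplicative_risk(likelihood, impact):
    # Conventional model: risk = likelihood x impact.
    return likelihood * impact

def additive_risk(likelihood, impact, w_l=0.5, w_i=0.5):
    # Alternative in which likelihood contributes additively;
    # the weights w_l and w_i are arbitrary, for illustration only.
    return w_l * likelihood + w_i * impact

# Under the multiplicative model, a near-zero likelihood drives risk
# toward zero no matter how severe the harm...
print(round(multiplicative_risk(0.01, 0.9), 3))  # 0.009
# ...whereas an additive model remains sensitive to harm severity.
print(round(additive_risk(0.01, 0.9), 3))        # 0.455
```

The point of the contrast is qualitative: if likelihood is not multiplicative, then low-probability but severe harms may be perceived as riskier than the conventional model predicts.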
This work is supported by NSF Award #1330596.