The California Consumer Privacy Act (CCPA) provides California residents with a range of enhanced privacy protections and rights. Our research investigated the extent to which Android app developers comply with the provisions of the CCPA that require them to provide consumers with accurate privacy notices and respond to "verifiable consumer requests" (VCRs) by disclosing personal information that they have collected, used, or shared about consumers for a business or commercial purpose. We compared the actual network traffic of 109 apps that we believe must comply with the CCPA to the data that apps state they collect in their privacy policies and the data contained in responses to "right to know" requests that we submitted to the apps’ developers. Of the 69 app developers who substantively replied to our requests, all but one provided specific pieces of personal data (as opposed to only categorical information). However, a significant percentage of apps collected information that was not disclosed, including identifiers (55 apps, 80%), geolocation data (21 apps, 30%), and sensory data (18 apps, 26%), among other categories. We discuss improvements to the CCPA that could help app developers comply with "right to know" requests and other related regulations.
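As an illustration of the kind of consistency check this study relies on, the minimal Python sketch below flags data categories observed in an app's network traffic that never appear in its disclosures. The app identifier, category labels, and data are hypothetical placeholders, not the paper's dataset or pipeline.

```python
# Hypothetical sketch: flag data categories observed in an app's network
# traffic but absent from its privacy-policy or "right to know" disclosures.
# The app identifier and category labels below are illustrative placeholders.

OBSERVED = {
    "com.example.app": {"identifiers", "geolocation", "sensory"},
}
DISCLOSED = {
    "com.example.app": {"identifiers"},
}

def undisclosed_categories(app_id):
    """Categories seen on the wire but never disclosed for this app."""
    return OBSERVED.get(app_id, set()) - DISCLOSED.get(app_id, set())

for app in OBSERVED:
    missing = undisclosed_categories(app)
    if missing:
        print(f"{app}: potential non-disclosure of {sorted(missing)}")
```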
2020
Empirical Measurement of Systemic 2FA Usability
Joshua Reynolds, Nikita Samarin, Joseph Barnes, and 4 more authors
Two-Factor Authentication (2FA) hardens an organization against user account compromise, but adds an extra step to organizations’ mission-critical tasks. We investigate to what extent quantitative analysis of operational logs of 2FA systems both supports and challenges recent results from user studies and surveys identifying usability challenges in 2FA systems. Using tens of millions of logs and records kept at two public universities, we quantify the at-scale impact on organizations and their employees during a mandatory 2FA implementation. We show the multiplicative effects of device remembrance, fragmented login services, and authentication timeouts on user burden. We find that user burden does not deviate far from other compliance and risk management time requirements already common to large organizations. We investigate the cause of more than one in twenty 2FA ceremonies being aborted or failing, and the variance in user experience across users. We hope our analysis will empower more organizations to protect themselves with 2FA.
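For a sense of how such log-based measurements might be computed, here is a hedged Python sketch that summarizes ceremony outcomes and durations from an authentication log. The CSV layout and column names are assumptions made for this example, not the format of the universities' actual records.

```python
# Illustrative sketch (not the paper's pipeline): summarize 2FA ceremony
# outcomes and durations from an authentication log. The CSV layout and
# column names (user, outcome, seconds) are assumptions for this example.
import csv
import statistics

def summarize(log_path):
    durations = []
    outcomes = {"success": 0, "failure": 0, "abandoned": 0}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            outcomes[row["outcome"]] = outcomes.get(row["outcome"], 0) + 1
            if row["outcome"] == "success":
                durations.append(float(row["seconds"]))
    total = sum(outcomes.values())
    not_completed = outcomes["failure"] + outcomes["abandoned"]
    print(f"ceremonies: {total}")
    print(f"failed or abandoned: {not_completed / total:.1%}")
    if durations:
        print(f"median successful ceremony: {statistics.median(durations):.1f}s")

# summarize("2fa_auth_log.csv")  # hypothetical file name
```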
2019
Pilot: Password and PIN Information Leakage from Obfuscated Typing Videos
Kiran Balagani, Matteo Cardaioli, Mauro Conti, and 8 more authors
This paper studies leakage of user passwords and PINs based on observations of typing feedback on screens or from projectors in the form of masked characters (∗ or ∙) that indicate keystrokes. To this end, we developed an attack called Password and PIN Information Leakage from Obfuscated Typing Videos (PILOT). Our attack extracts inter-keystroke timing information from videos of password masking characters displayed when users type their password on a computer, or their PIN at an ATM. We conducted several experiments in various attack scenarios. Results indicate that, while in some cases leakage is minor, it is quite substantial in others. By leveraging inter-keystroke timings, PILOT recovers 8-character alphanumeric passwords in as little as 19 attempts. When guessing PINs, PILOT significantly improved on both random guessing and the attack strategy adopted in our prior work (European Symposium on Research in Computer Security (ESORICS), 2018, pp. 263–280, Springer). In particular, we were able to guess about 3% of the PINs within 10 attempts. This corresponds to a 26-fold improvement compared to random guessing. Our results strongly indicate that secure password masking GUIs must consider the information leakage identified in this paper.
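The guessing side of such an attack can be illustrated with a simple, assumed model: rank candidate PINs by how well keypad-distance-based expected timings explain the observed inter-keystroke gaps. The Gaussian timing model, its parameters, and the observed gaps below are invented for illustration and are not the approach evaluated in the paper.

```python
# Hedged illustration (not the authors' code): rank candidate 4-digit PINs by
# how well their expected key-pair timings explain observed inter-keystroke
# gaps, using an assumed linear model of keypad travel distance.
from itertools import product

KEYPAD = {str(d): divmod(i, 3) for i, d in enumerate([1, 2, 3, 4, 5, 6, 7, 8, 9])}
KEYPAD["0"] = (3, 1)

def expected_gap_ms(a, b):
    (r1, c1), (r2, c2) = KEYPAD[a], KEYPAD[b]
    return 180.0 + 60.0 * (abs(r1 - r2) + abs(c1 - c2))   # assumed timing model

def score(pin, observed, sigma=40.0):
    # Higher score = observed gaps better explained by this PIN's key pairs.
    return -sum((expected_gap_ms(a, b) - o) ** 2 / (2 * sigma ** 2)
                for (a, b), o in zip(zip(pin, pin[1:]), observed))

observed_gaps = [310.0, 190.0, 250.0]                      # made-up observation
ranked = sorted(("".join(p) for p in product("0123456789", repeat=4)),
                key=lambda pin: score(pin, observed_gaps), reverse=True)
print(ranked[:10])                                         # top-10 guesses
```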
2018
SILK-TV: Secret Information Leakage from Keystroke Timing Videos
Kiran S. Balagani, Mauro Conti, Paolo Gasti, and 8 more authors
In European Symposium on Research in Computer Security (ESORICS), 2018
Shoulder surfing attacks are an unfortunate consequence of entering passwords or PINs into computers, smartphones, PoS terminals, and ATMs. Such attacks generally involve observing the victim’s input device. This paper studies leakage of user secrets (passwords and PINs) based on observations of output devices (screens or projectors) that provide “helpful” feedback to users in the form of masking characters, each corresponding to a keystroke. To this end, we developed a new attack called Secret Information Leakage from Keystroke Timing Videos (SILK-TV). Our attack extracts inter-keystroke timing information from videos of password masking characters displayed when users type their password on a computer, or their PIN at an ATM or PoS. We conducted several studies in various envisaged attack scenarios. Results indicate that, while in some cases leakage is minor, it is quite substantial in others. By leveraging inter-keystroke timings, SILK-TV recovers 8-character alphanumeric passwords in as little as 19 attempts. However, when guessing PINs, SILK-TV yields no substantial speedup compared to brute force. Our results strongly indicate that secure password masking GUIs must consider the information leakage identified in this paper.
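To make the timing-extraction step concrete, the toy Python sketch below converts the video frame indices at which successive masking characters appear into inter-keystroke intervals. The frame rate and frame numbers are invented placeholders, not measurements from the study.

```python
# Toy sketch of the timing-extraction step: given the frame indices at which
# each new masking character (∗) appears on screen, recover inter-keystroke
# intervals in milliseconds. The frame rate and indices below are made up.
FPS = 30.0
mask_frames = [12, 19, 31, 40, 55, 61, 74, 88]   # one entry per keystroke

def inter_keystroke_ms(frames, fps=FPS):
    """Milliseconds elapsed between consecutive masking characters."""
    return [(b - a) * 1000.0 / fps for a, b in zip(frames, frames[1:])]

print(inter_keystroke_ms(mask_frames))
```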
Refereed Preprints
2023
Understanding How Third-Party Libraries in Mobile Apps Affect Responses to Subject Access Requests
Nikita Samarin, and Primal Wijesekera
Workshop on Technology and Consumer Protection (ConPro), 2023
Recent events have placed a renewed focus on the issue of racial justice in the United States and other countries. One dimension of this issue that has received considerable attention is the security and privacy threats and vulnerabilities faced by communities of color. Our study focuses on community-level advocates who organize workshops, clinics, and other initiatives that inform Black communities about existing digital safety and privacy threats and ways to mitigate them. Additionally, we aim to understand the online security and privacy needs and attitudes of participants who partake in these initiatives. We hope that by understanding how advocates work in different contexts and what teaching methods are effective, we can help other digital safety experts and activists become advocates within their communities.
2020
Surveying Vulnerable Populations: A Case Study of Civil Society Organizations
Nikita Samarin, Alisa Frik, Sean Brooks, and 2 more authors
Workshop on Inclusive Privacy and Security (WIPS), 2020
Compared to organizations in other sectors, civil society organizations (CSOs) are particularly vulnerable to security and privacy threats, as they lack adequate resources and expertise to defend themselves. At the same time, their security needs and practices have not gained much attention among researchers, and existing solutions designed for the average user do not consider the contexts in which CSO employees operate. As part of our preliminary work, we conducted an anonymous online survey with 102 CSO employees to collect information about their perceived risks of different security and privacy threats, and their self-reported mitigation strategies. The design of our preliminary survey accounted for the unique requirements of our target population by establishing trust with respondents, using anonymity-preserving incentive strategies, and distributing the survey with the help of a trusted intermediary. However, by carefully examining our methods and the feedback received from respondents, we uncovered several issues with our methodology, including the length of the survey, the framing of the questions, and the design of the recruitment email. We hope that the discussion presented in this paper will inform and assist researchers and practitioners working on understanding and improving the security and privacy of CSOs.
2019
A Key to Your Heart: Biometric Authentication Based on ECG Signals
Nikita Samarin, and Donald Sannella
Who Are You?! Adventures in Authentication Workshop (WAY), 2019
In recent years, there has been a shift of interest towards the field of biometric authentication, which proves the identity of the user using their biological characteristics. We explore a novel biometric based on the electrical activity of the human heart in the form of electrocardiogram (ECG) signals. In order to explore the stability of ECG as a biometric, we collect data from 55 participants over two sessions with a period of 4 months in between. We also use a consumer-grade ECG monitor that is more affordable and usable than a medical-grade counterpart. Using a standard approach to evaluate our classifier, we obtain error rates of 2.4% for data collected within one session and 9.7% for data collected across two sessions. The experimental results suggest that ECG signals collected using a consumer-grade monitor can be successfully used for user authentication.
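As a rough illustration of the evaluation setup (train on one session, test on another), the following Python sketch uses scikit-learn with synthetic feature vectors standing in for real ECG-derived features. It is not the paper's pipeline, and the number it prints is meaningless beyond demonstrating the procedure.

```python
# Minimal sketch of a cross-session evaluation, not the paper's pipeline:
# train a classifier on per-heartbeat feature vectors from one session and
# test on a second session. Random features stand in for real ECG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_users, beats, dim = 10, 40, 16
session1 = rng.normal(np.arange(n_users)[:, None, None], 1.0, (n_users, beats, dim))
session2 = session1 + rng.normal(0.0, 0.5, session1.shape)   # drift between sessions

X1, y1 = session1.reshape(-1, dim), np.repeat(np.arange(n_users), beats)
X2, y2 = session2.reshape(-1, dim), np.repeat(np.arange(n_users), beats)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X1, y1)
print("cross-session error:", 1.0 - accuracy_score(y2, clf.predict(X2)))
```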
On the Ridiculousness of Notice and Consent: Contradictions in App Privacy Policies
Ehimare Okoyomon, Nikita Samarin, Primal Wijesekera, and 6 more authors
Workshop on Technology and Consumer Protection (ConPro), 2019
The dominant privacy framework of the information age relies on notions of “notice and consent.” That is, service providers will disclose, often through privacy policies, their data collection practices, and users can then consent to their terms. However, it is unlikely that most users comprehend these disclosures, which is due in no small part to ambiguous, deceptive, and misleading statements. By comparing actual collection and sharing practices to disclosures in privacy policies, we demonstrate the scope of the problem. Through analysis of 68,051 apps from the Google Play Store, their corresponding privacy policies, and observed data transmissions, we investigated the potential misrepresentations of apps in the Designed For Families (DFF) program, inconsistencies in disclosures regarding third-party data sharing, as well as contradictory disclosures about secure data transmissions. We find that of the 8,030 DFF apps (i.e., apps directed at children), 9.1% claim that their apps are not directed at children, while 30.6% claim to have no knowledge that the received data comes from children. In addition, we observe that 10.5% of 68,051 apps share personal identifiers with third-party service providers, yet do not declare any in their privacy policies, and only 22.2% of the apps explicitly name third parties. This ultimately makes it not only difficult, but in most cases impossible, for users to establish where their personal data is being processed. Furthermore, we find that 9,424 apps do not use TLS when transmitting personal identifiers, yet 28.4% of these apps claim to take measures to secure data transfer. Ultimately, these divergences between disclosures and actual app behaviors illustrate the ridiculousness of the notice and consent framework.
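One of the consistency checks described above can be sketched in a few lines of Python: flag apps whose observed transmissions of personal identifiers are not protected by TLS even though their policies claim secure data transfer. The records and policy claims below are invented placeholders, not data from the study.

```python
# Hypothetical sketch of a policy-vs-behavior contradiction check: an app
# sends personal identifiers without TLS while its policy claims to secure
# data transfer. All records below are invented placeholders.
transmissions = [
    {"app": "com.example.game", "field": "advertising_id", "tls": False},
    {"app": "com.example.game", "field": "email", "tls": True},
]
policy_claims_encryption = {"com.example.game": True}

def contradictions(records, claims):
    flagged = set()
    for r in records:
        if not r["tls"] and claims.get(r["app"], False):
            flagged.add((r["app"], r["field"]))
    return flagged

print(contradictions(transmissions, policy_claims_encryption))
```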
2018
Evading Classifiers in Discrete Domains with Provable Optimality Guarantees
Bogdan Kulynych, Jamie Hayes, Nikita Samarin, and 1 more author
Workshop on Security in Machine Learning at NeurIPS, 2018
Machine-learning models for security-critical applications such as bot, malware, or spam detection operate in constrained discrete domains. These applications would benefit from having provable guarantees against adversarial examples. The existing literature on provable adversarial robustness of models, however, exclusively focuses on robustness to gradient-based attacks in domains such as images. These attacks model the adversarial cost, e.g., the amount of distortion applied to an image, as a p-norm. We argue that this approach is not well-suited to model adversarial costs in constrained domains where not all examples are feasible. We introduce a graphical framework that (1) generalizes existing attacks in discrete domains, (2) can accommodate complex cost functions beyond p-norms, including the financial cost incurred when attacking a classifier, and (3) efficiently produces valid adversarial examples with guarantees of minimal adversarial cost. These guarantees directly translate into a notion of adversarial robustness that takes into account domain constraints and the adversary’s capabilities. We show how our framework can be used to evaluate security by crafting adversarial examples that evade a Twitter-bot detection classifier with a provably minimal number of changes, and to build privacy defenses by crafting adversarial examples that evade a privacy-invasive website-fingerprinting classifier.
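The core idea of searching a graph of discrete transformations for a minimal-cost evasion can be sketched as a uniform-cost (Dijkstra-style) search. The toy "detector" and unit-cost transformations below are stand-ins that greatly simplify the framework described in the paper.

```python
# Assumed, simplified sketch: uniform-cost search over a graph of discrete
# transformations, returning the cheapest sequence of changes that flips a
# classifier's decision. The toy detector and moves are placeholders for a
# real bot/spam detector and its domain-specific, feasible transformations.
import heapq

def is_flagged(x):
    return sum(x) >= 5                      # stand-in "detector"

def neighbors(x):
    for i, v in enumerate(x):               # each unit decrement costs 1
        if v > 0:
            yield 1.0, x[:i] + (v - 1,) + x[i + 1:]

def minimal_cost_evasion(x0):
    frontier, seen = [(0.0, x0)], {x0}
    while frontier:
        cost, x = heapq.heappop(frontier)
        if not is_flagged(x):
            return cost, x                  # minimal under these unit costs
        for step, nxt in neighbors(x):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + step, nxt))
    return None

print(minimal_cost_evasion((3, 2, 1)))
```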