Please note: This tentative program is subject to change. All times are US Pacific Daylight Time (PDT, UTC-7).
The legal landscape around age-based restrictions (age gates) for online services is changing rapidly. To comply with existing and proposed regulations, online services must determine whether users are above or below mandated age thresholds. How these age gates are implemented matters to consumer protection advocates, given the risks of user circumvention and of chilling effects. We therefore propose a study measuring the prevalence and variety of age-gating mechanisms across the Internet. We begin with a case study of the e-cigarette industry, finding that nearly all site-arrival age gates merely require users to self-attest that they are older than an age threshold. We plan to expand this study to additional industries, website interaction points, and automated classification techniques to produce a comprehensive assessment of online age-gating practices. (extended PDF)
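As a hedged illustration of how such self-attestation gates might be flagged in crawled pages (the paper's actual classification method is in the extended PDF), the toy heuristic below matches a few assumed click-through prompts; the patterns are illustrative, not the authors':

```python
import re

# Illustrative patterns for click-through self-attestation age gates;
# the authors' real classifier is not described here, so these are assumptions.
SELF_ATTEST_PATTERNS = [
    re.compile(r"are you (1[89]|2[01])( years)?( of age)?( or older)?", re.I),
    re.compile(r"i am (1[89]|2[01]) years (old|of age)", re.I),
    re.compile(r"(confirm|verify) (that )?you(r age| are of legal age)", re.I),
]

def looks_like_self_attestation_gate(html_text: str) -> bool:
    """Heuristic check: does the landing page merely ask the visitor to
    click through an age claim, with no ID upload or document check?"""
    return any(p.search(html_text) for p in SELF_ATTEST_PATTERNS)

# Example: a typical e-cigarette landing-page prompt
print(looks_like_self_attestation_gate(
    "WARNING: Are you 21 or older? [Yes] [No]"))  # True
```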
To enhance children's safety online, emerging legislation requires social media platforms to verify users' ages. Current verification measures, such as submitting ID copies or presenting digital IDs, raise significant privacy concerns because they greatly increase the uploading, transmission, collection, and sharing of sensitive identity documents and information. Verifiable Credentials (VCs), a tamper-evident and cryptographically verifiable mechanism for asserting claims, offer a promising privacy-preserving solution to this challenge. However, existing VC implementations require issuers to run a special protocol to issue and sign new VCs and often rely on distributed ledgers, introducing inefficiencies for some issuers.
This work proposes a novel privacy-preserving framework with three key features: (1) compatibility with existing ID systems, minimizing issuers' operational costs; (2) disclosure of only the verification result, without revealing any additional information (see the sketch after this abstract); and (3) optional blockchain support, without reliance on it.
The main contributions of this work are: (1) an efficient and privacy-preserving VC framework that integrates seamlessly with existing ID systems (e.g., U.S. driver's licenses); (2) a foundation for future research into deploying VCs for other verification tasks that rely on government ID; and (3) a step toward resolving the tension between protecting children online and safeguarding user privacy. (extended PDF)
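The protocol details are in the extended PDF; as a rough, assumed illustration of feature (2), the sketch below has an issuer sign a birth-year claim with Ed25519 and a checker output only an over/under boolean. The credential format and field names are hypothetical, and a real deployment would use selective disclosure or zero-knowledge proofs so the raw birth year is never revealed to the relying party at all:

```python
# Minimal sketch (an assumption, not the paper's protocol): an issuer signs
# a birth-year claim, and a checker verifies the signature and emits only
# the over/under result. Real VC systems would use selective disclosure or
# zero-knowledge proofs instead of handling the raw birth year like this.
import json
from datetime import date
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign a canonicalized claim (field names are hypothetical).
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"sub": "holder-123", "birth_year": 2004},
                   sort_keys=True).encode()
signature = issuer_key.sign(claim)

# Checker side: authenticate the claim, then output only a boolean.
def is_over(claim_bytes: bytes, sig: bytes, issuer_pub, threshold: int) -> bool:
    issuer_pub.verify(sig, claim_bytes)  # raises InvalidSignature if tampered
    birth_year = json.loads(claim_bytes)["birth_year"]
    return date.today().year - birth_year >= threshold  # coarse age check

print(is_over(claim, signature, issuer_key.public_key(), 18))
```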
Roblox is a popular gaming and metaverse platform played predominantly by children. Anecdotal accounts have identified potentially problematic strategies that popular Roblox worlds employ to increase the chances of their young users spending money. In this work, we empirically investigate the prevalence of different monetization strategies in the Roblox ecosystem by first crawling dozens of highly played worlds and then analyzing their content both quantitatively and qualitatively. We find that Roblox monetization strategies range from undisclosed sponsored ad content to deceptive or coercive design, among others. The vast majority of the worlds we investigated include at least one monetization strategy, and a majority include manipulative or deceptive ones. We propose future work to understand how young users interact with these strategies in the wild, and to develop more robust and scalable measurement tools. (extended PDF)
The mutual influence between LLM assistants and humans makes it challenging to align LLMs with humans after deployment. Most alignment research focuses on LLM development; we argue that research supporting humans in engaging critically and safely with LLMs is essential to ensuring that LLMs align with, rather than shift, human intent. (extended PDF)
As public discourse around trust, safety, and bias in AI systems intensifies, and as AI systems increasingly impact consumers' daily lives, there is a growing need for empirical research that measures the psychological constructs underlying the human-AI relationship. In reviewing the literature, we identified a gap in the availability of validated instruments: researchers tend to adapt, reuse, or develop measures ad hoc, without systematic validation. Through piloting different instruments, we identified limitations with this approach, as well as with existing validated instruments. To enable more robust and impactful research on user perceptions of AI systems, we advocate for a community-driven initiative to discuss, exchange, and develop validated, meaningful scales and metrics for human-centered AI research. (extended PDF)
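As one concrete example of the systematic validation the authors call for, internal-consistency reliability (Cronbach's alpha) is a standard first check on a multi-item scale; the toy computation below uses made-up Likert responses, not data from this work:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a multi-item scale.
    `items` is a (respondents x items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering a hypothetical 3-item trust-in-AI scale.
responses = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]])
print(round(cronbach_alpha(responses), 2))  # 0.96 for this made-up sample
```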
Users increasingly query LLM-enabled web chatbots for help with scam defense. The Consumer Financial Protection Bureau's (CFPB) complaint database is a rich data source for evaluating LLM performance on user scam queries, but the corpus does not currently distinguish scams from other fraud. We are developing an LLM ensemble approach to distinguishing scam from non-scam fraud complaints in the CFPB data, and we describe our methodology, current performance, and observations on the strengths and weaknesses of LLMs in the scam defense context. (extended PDF)
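As a rough sketch of what an LLM ensemble for this labeling task could look like (the authors' actual methodology is in the extended PDF), the snippet below takes a majority vote across models; `query_llm`, the prompt wording, and the label set are hypothetical stand-ins:

```python
from collections import Counter

def query_llm(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a model API call; replace with the
    actual client for each model in the ensemble."""
    raise NotImplementedError

def classify_complaint(text: str, models: list[str]) -> str:
    """Majority vote across models on a binary scam / non-scam-fraud label."""
    prompt = (
        "Label the following CFPB complaint as SCAM (the victim was deceived "
        "into authorizing the transaction) or FRAUD (unauthorized activity). "
        f"Complaint: {text}\nAnswer with one word."
    )
    votes = [query_llm(m, prompt).strip().upper() for m in models]
    label, _count = Counter(votes).most_common(1)[0]
    return label
```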
Transparency is a foundation of consumer product safety. Users of mobile apps deserve accurate safety labels that identify the unavoidable risks of using a technology as intended. How do we generate accurate, up-to-date safety labels for mobile apps at scale? The authors describe a pilot project called Safetypedia: a crowdsourcing software platform and a worldwide community of certified "safety inspectors" who collect mobile app behavior data, from which robust, freely available app safety labels are semi-automatically generated. Several challenges surround this approach, such as: (1) how to ensure high-quality data collection from the crowdsourced community; (2) how to automate more of the data collection and assessment process; (3) how to "taxonomize everything"; (4) whether AI can be valuable in the process; and (5) whether the crowdsourcing approach will work. (extended PDF)
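As a hedged illustration of the kind of machine-readable record such a pipeline might produce (the field names below are hypothetical, not Safetypedia's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class AppSafetyLabel:
    """Hypothetical record for one crowdsourced inspection; the fields
    are illustrative, not Safetypedia's actual schema."""
    app_id: str                     # e.g., Android package name
    version: str
    inspector_id: str               # certified inspector who collected the data
    observed_behaviors: list[str] = field(default_factory=list)
    data_shared_with: list[str] = field(default_factory=list)  # third parties
    needs_review: bool = True       # quality-control flag before publication

label = AppSafetyLabel(
    app_id="com.example.flashlight",
    version="3.2.1",
    inspector_id="inspector-0042",
    observed_behaviors=["requests location at launch"],
    data_shared_with=["ads.example-network.com"],
)
```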
In this research proposal, I present journaling about privacy harms as a potential mechanism for transforming people's attitudes and feelings about their privacy. I outline plans for a diary study asking people to engage in a daily guided browsing activity and to record their feelings about personalized content on Instagram. Using this diary study instrument as a cultural probe, I explore how actively documenting and labeling privacy-harming experiences might motivate people to reflect on their agency over their privacy. (extended PDF)