January 23, 2020


Consensual software: Prioritizing user trust and safety


Online safety has become a pressing problem in an era of oversharing. Real-name policies, automatic geolocation tracking, and photo tagging boost user adoption rates, but these same features can be quickly abused by bad actors. Danielle Leong explains how to apply a "consent filter" to product decisions to create a safer user experience and help protect your most vulnerable users from harm.

Talk Title: Consensual software: Prioritizing user trust and safety
Speakers: Danielle Leong (GitHub)
Conference: O'Reilly Security Conference
Conference Tag: Build better defenses
Location: New York, New York
Date: October 30-November 1, 2017

Getting consent is as simple as making someone a cup of tea. Consensual software means getting an explicit "yes" from users before interacting with them or their data. In doing so, we ensure that the features we build aren't used to annoy, harass, or endanger people. Assuming that a user has implicitly consented to a feature creates vulnerabilities and loopholes that can be exploited to harass others.

As an engineer on GitHub's community and safety team, it's Danielle Leong's job to close abuse vectors and build anti-harassment tools that improve collaboration on open source projects. Danielle explores the concept of consensual software, the cost of ignoring harassment on your platform, and how GitHub's community and safety team builds consensual software and reviews other teams' features for abuse and harassment vulnerabilities. Along the way, you'll learn how to apply a "consent filter" to product decisions to create a safer user experience and help protect your most vulnerable users from harm.
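The "explicit yes" principle above can be sketched as a simple consent check. This is a minimal illustration, not GitHub's actual implementation; the `User`, `grant_consent`, and `tag_in_photo` names are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class User:
    """Hypothetical user record. Consent is opt-in: nothing is assumed."""
    name: str
    consents: set = field(default_factory=set)  # features explicitly opted into

    def grant_consent(self, feature: str) -> None:
        """Record an explicit 'yes' for a specific feature."""
        self.consents.add(feature)

    def has_consented(self, feature: str) -> bool:
        return feature in self.consents


def tag_in_photo(tagger: User, target: User) -> bool:
    """Consent filter: only tag the target if they explicitly opted in."""
    if not target.has_consented("photo_tagging"):
        return False  # no explicit "yes" means no interaction
    # ... perform the actual tagging here ...
    return True


alice = User("alice")
bob = User("bob")
print(tag_in_photo(alice, bob))   # False: consent is off by default
bob.grant_consent("photo_tagging")
print(tag_in_photo(alice, bob))   # True: bob explicitly said yes
```

The key design choice is the default: the action is refused unless consent was explicitly granted, rather than allowed unless the user opted out.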
