January 7, 2020


On the accountability of black boxes: How we can control what we can't exactly measure

Black box algorithmic systems make decisions that have a great impact on our lives, so the need for their accountability and transparency is growing. Code4Thought created an evaluation model reflecting the state of practice in several organizations. Yiannis Kanellopoulos explores this model and shares lessons learned from its application at a financial corporation.

Talk Title On the accountability of black boxes: How we can control what we can't exactly measure
Speakers Yiannis Kanellopoulos (Code4Thought)
Conference Strata Data Conference
Conf Tag Making Data Work
Location London, United Kingdom
Date April 30-May 2, 2019
URL Talk Page
Slides Talk Slides
Video

There’s little doubt that algorithmic systems are making decisions that have a great impact on our daily lives. Authority is increasingly expressed algorithmically, and decisions that used to be based on human intuition and reflection are now automated. Transparency about how these systems work therefore matters not as an end in itself but as a means of accountability.

Rendering an algorithmic system accountable means addressing a series of challenges. For instance, how do you balance high precision with transparency? Deep learning models are well known for the former, not the latter. Organizations also tend to keep their algorithms secret, claiming they want to preserve valuable intellectual property or avoid the risk of their systems being gamed (e.g., in the case of credit scoring algorithms). Finally, there’s no widely accepted industry standard that defines how an algorithmic system should be evaluated in terms of transparency and accountability.

Yiannis Kanellopoulos shares an evaluation framework that reflects the state of practice as applied in several organizations. The framework is based on the thesis that data is socially constructed, so it covers both the algorithms themselves and the organizations that utilize them and need to cater for their accountability. It’s domain agnostic, so it can be operationalized at any type of organization, business domain, and type of algorithm, and it’s not intrusive, as it consists of a set of questions that require experts’ input. Yiannis then highlights lessons learned from the framework’s actual operationalization at a multibillion-dollar high-tech corporation.
