HAI Seminar<br><br>Title: The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems<br>Speaker: Kathleen Creel<br>Date: February 24<br>Time: 10:00am - 11:00am<br>RSVP Here: https://stanford.zoom.us/webinar/register/WN_4OgWtuxsQX2YSlxIXgq-6w<br><p>Abstract: Algorithmic decision-making systems implemented in public life are typically highly standardized. One algorithmic decision-making system can replace or influence thousands of human deciders. Each of the humans so replaced had their own decision-making criteria: some good, some bad, and some merely arbitrary. Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and illegal in others, such as criminal sentencing. Where no other right guarantees non-arbitrary decision-making, is arbitrariness of moral concern?</p><p>An isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms produced by the same companies are uniformly applied across wide swathes of a public sphere, be that hiring or lending, the same people could be consistently excluded from employment, loans, or other sectors of civil society. This harm persists even when the automated decision-making systems are “fair” on standard metrics of fairness. We argue that arbitrariness at scale is morally, and should be legally, problematic. The heart of this moral issue lies in domination and in a lack of sufficient opportunity for autonomy, and it relates in interesting ways to the moral wrong of discrimination. We propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm we identify.</p><br>Bio: <br>Kathleen Creel is the Embedded EthiCS Fellow at HAI and EIS. Her research explores the ethics of automated decision-making and the epistemology of machine learning in science.