Why Fairness Cannot Be Automated

Published: April 20, 2020, 5:09 p.m.

Fairness and discrimination in algorithmic systems are globally recognized as topics of critical importance. To date, the majority of work in this area starts from an American regulatory perspective defined by the notions of 'disparate treatment' and 'disparate impact.' But European legal notions of discrimination are not equivalent.

In this talk, Sandra Wachter, Visiting Professor at Harvard Law School and Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, Robotics and Internet Regulation at the Oxford Internet Institute (OII) at the University of Oxford, examines EU law and the jurisprudence of the European Court of Justice concerning non-discrimination, and identifies a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness.

Wachter discusses the evidential requirements for bringing a claim under EU non-discrimination law and proposes a statistical test as a baseline to identify and assess potential cases of algorithmic discrimination in Europe.
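The abstract does not spell out the proposed test, but baselines of this kind typically compare how adverse outcomes are distributed across a protected group and a comparator group within each relevant stratum of a decision context. The Python sketch below is purely illustrative of that general idea, not Wachter's actual proposal; the function name, the example data, and the "review" threshold are all assumptions introduced here.

```python
# Illustrative sketch only: the talk abstract does not specify the test.
# Within each stratum (e.g. income band), compare the protected group's share
# of adverse decisions against its share of all decisions in that stratum.
from collections import defaultdict


def group_disparity_by_stratum(records, group_key, outcome_key, stratum_key):
    """Return, per stratum, (protected share of adverse outcomes,
    protected share of all decisions)."""
    by_stratum = defaultdict(list)
    for r in records:
        by_stratum[r[stratum_key]].append(r)

    results = {}
    for stratum, rows in by_stratum.items():
        protected = [r for r in rows if r[group_key]]
        adverse = [r for r in rows if r[outcome_key] == "rejected"]
        adverse_protected = [r for r in adverse if r[group_key]]
        share_of_adverse = len(adverse_protected) / len(adverse) if adverse else 0.0
        share_of_total = len(protected) / len(rows) if rows else 0.0
        results[stratum] = (share_of_adverse, share_of_total)
    return results


# Hypothetical loan decisions, stratified by income band (invented data).
decisions = [
    {"protected": True,  "outcome": "rejected", "band": "low"},
    {"protected": True,  "outcome": "approved", "band": "low"},
    {"protected": False, "outcome": "approved", "band": "low"},
    {"protected": False, "outcome": "rejected", "band": "high"},
    {"protected": True,  "outcome": "approved", "band": "high"},
    {"protected": False, "outcome": "approved", "band": "high"},
]

for band, (adverse_share, base_share) in group_disparity_by_stratum(
        decisions, "protected", "outcome", "band").items():
    flag = "review" if adverse_share > base_share else "ok"
    print(f"{band}: adverse share {adverse_share:.2f} "
          f"vs base share {base_share:.2f} -> {flag}")
```

A check like this does not itself establish discrimination under EU law; as the talk argues, it can only serve as a first statistical signal that a potential case merits legal and contextual assessment.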