"The AI community has been focusing on developing fixes for harmful bias and discrimination, through so-called ‘debiasing algorithms’ that either try to fix data for known or expected biases, or constrain the outcomes of a given predictive model to produce ‘fair’ outcomes. We argue that creating more AI solutions to fix harmful biases in data is not the only solution we should be pursuing. A fundamental question we are facing as researchers and practitioners, is not how to fix harmful bias in AI with new algorithms, but rather; if we should be designing and deploying such potentially biased systems in the first place”
The conversation was held on 26 February 2021.
Guest speakers:
Sennay Ghebreab and Hinda Haned (Civic AI Lab)
Moderator:
Sarah Eskens (VU Amsterdam)
RPA Human(e) AI:
https://humane-ai.nl/