Pipeline-aware Approaches to Algorithmic Harms

Speaker: Emily Black

Location: 370 Jay Street, Room 825

Date: Friday, March 1, 2024

In this talk, I will discuss methods for understanding and mitigating algorithmic harm by interrogating the entire AI model creation pipeline. This approach contrasts with much of the AI fairness literature, which studies model outcomes alone. First, I will show how considering a model's end-to-end creation pipeline can expand our understanding of what constitutes unfair behavior, as in my work demonstrating how model instability can lead to unfairness when important decisions rest on arbitrary modeling choices (e.g., whether a person is granted a loan by a decision-making model may depend on whether some unrelated person happened to be in the training set). Second, I will discuss how studying the AI creation pipeline can help us find bias mitigation techniques that lessen tradeoffs between performance and fairness, with a case study from my collaboration with Stanford and the US Internal Revenue Service investigating tax auditing practices. Finally, I will close by discussing how pipeline-aware approaches to algorithmic harm can ease some of the legal tensions that arise when implementing responsible AI techniques in practice.
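
To make the instability point concrete, here is a minimal sketch, my own illustration rather than code from the talk, of how one might measure such arbitrariness. It retrains a model with one training example removed at a time and counts how many test-time decisions flip; the synthetic dataset, the choice of a decision tree, and the leave-one-out protocol are all illustrative assumptions, not the speaker's actual method.

```python
# Sketch: measuring decision arbitrariness under leave-one-out retraining.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a loan-approval dataset (illustrative only).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=100, random_state=0)

# Baseline model trained on the full training set.
base = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
base_pred = base.predict(X_test)

# Retrain after removing one training example at a time; a test-time
# decision that flips is arbitrary in the sense above: it hinges on
# whether some unrelated person happened to be in the training data.
ever_flipped = np.zeros(len(X_test), dtype=bool)
for i in range(len(X_train)):
    X_i = np.delete(X_train, i, axis=0)
    y_i = np.delete(y_train, i)
    model = DecisionTreeClassifier(random_state=0).fit(X_i, y_i)
    ever_flipped |= model.predict(X_test) != base_pred

print(f"{ever_flipped.sum()} of {len(X_test)} decisions flip under "
      f"some leave-one-out retraining ({ever_flipped.mean():.0%})")
```

A high-variance model class such as an unpruned decision tree makes the effect easy to see; an output-only fairness audit of the baseline model alone would miss it entirely, which is the pipeline-aware point.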