Events
CILVR Seminar: Responsible AI on the Ground: Tensions, Opportunities, and Lessons from the Field
Speaker: Emily Black
Location: 60 Fifth Avenue, 7th Floor Open Space
Date: Wednesday, October 8, 2025
AI and GenAI models are now ubiquitous in decision-making in high-stakes domains from healthcare to employment. Unfortunately, these systems have displayed bias on the basis of race, gender, income, and other attributes. In certain domains, particularly credit, housing, and employment, this bias is often illegal. While AI governance frameworks are rapidly changing, some of the strongest tools we have to combat this kind of discrimination in the United States are civil rights laws dating back to the 1960s. Crucially, however, there is considerable debate and misunderstanding, in both directions, about how these laws apply to debiasing techniques. In this talk, I'll (1) discuss and clear up some of the friction between civil rights laws and AI debiasing methods, and point out technical insights into how to sidestep it, (2) demonstrate how this perceived tension has influenced responsible AI practice in real-world high-stakes decision-making contexts such as fair lending, and (3) discuss further, unique tensions and opportunities around GenAI regulation.

Bio: Emily Black is an Assistant Professor of Computer Science and Engineering at New York University. Her research concerns fairness and accountability in AI systems: she creates methods to determine whether AI systems will cause harm to the public, studies the equity impacts of AI systems in high-stakes settings such as government, and connects her own and related research to the legal and policy worlds to help better regulate AI systems. Professor Black's work is interdisciplinary, as she aims to prevent harm from AI systems used in a variety of contexts: she works with lawyers, accountants, civil society advocates, and others to prevent algorithmic harm in practice.