Dec 9

Computer Science Seminar: Emily Black (Carnegie Mellon University)

913 Milstein or via Zoom
December 9, 2021, 11:00 AM–12:00 PM (Eastern Time)

Speaker: Emily Black, Carnegie Mellon University
Title: Considering Process in Algorithmic Bias Detection and Mitigation 

The seminar will be available via Zoom. Members of the Barnard/Columbia community who have authorized access to Barnard indoor spaces may attend in person if desired. To receive the Zoom link, please register using the “Register” button above.

Artificial Intelligence (AI) systems now affect important decisions in people's lives, from the news articles they read to whether they receive a loan. While AI can bring accuracy and efficiency to these decisions, recent news and research reports have shown that AI models can act unfairly, from gender bias in hiring models to racial bias in recidivism prediction systems.

In this talk, I’ll discuss methods for understanding fairness issues in AI by considering the process through which models arrive at their decisions. This approach contrasts with much of the AI fairness literature, which studies model outcomes alone. Specifically, I will show how considering a model’s end-to-end decision process expands our understanding of unfair behavior, such as in my work demonstrating how model instability can lead to unfairness when important decisions rest on arbitrary modeling choices (e.g., whether a person is granted a loan by a decision-making model may depend on whether some unrelated person happened to be in the training set). Second, I will discuss how considering process can help us find bias mitigation techniques that avoid a tradeoff between predictive utility and fairness, with case studies from my collaborations with Stanford RegLab and the Internal Revenue Service (IRS) investigating tax auditing practices, and with Cornell, Microsoft Research, Upturn, and others examining the role of criminal risk assessment models in racial disparities in pre-trial detention.


Emily Black is a PhD candidate in the Accountable Systems Lab at Carnegie Mellon University, advised by Matt Fredrikson. Her research centers on understanding the societal impacts of machine learning and deep learning models. In particular, she focuses on showing ways in which commonly used machine learning models may act unfairly; pinpointing when models are behaving harmfully in practice; developing ways to mitigate harmful behavior when possible; and translating technical insights into technology policy recommendations. She is currently supported by an Amazon Graduate Research Fellowship. (For more information, please see https://www.cs.cmu.edu/~emilybla/.)