Dec 4

Computer Science Seminar: Tim Rudner (New York University)

Milstein 402 & Zoom

Speaker: Tim Rudner (New York University)

Title: Probabilistic Methods for Robust and Transparent Machine Learning

The seminar will be available for in-person and Zoom participation. To participate online, please email inquiry-cs@barnard.edu to receive the Zoom link.

Machine learning models, while effective in controlled environments, can fail catastrophically when exposed to unexpected conditions upon deployment. This lack of robustness, well-documented even in state-of-the-art models, can lead to severe harm in high-stakes, safety-critical application domains such as healthcare. This shortcoming raises a central question: How can we develop machine learning models we can trust?

In this talk, I will approach this question from a probabilistic perspective and address deficiencies in the trustworthiness of neural network models using Bayesian principles. Specifically, I will show how to improve the reliability and fairness of neural networks with data-driven, domain-informed prior distributions over model parameters. To do so, I will first demonstrate how to train neural networks with such priors using a simple learning objective with a regularizer that reflects the constraints implicitly encoded in the prior. I will then show how to construct and use domain-informed, data-driven priors to improve uncertainty quantification and group robustness in neural network models for selected application domains.
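For readers unfamiliar with the general idea, the sketch below illustrates (in generic terms, not as the speaker's actual method) how a prior over network parameters can enter training as a regularizer: the loss is the usual task loss plus the negative log-density of a Gaussian prior over the weights. The prior mean and precision here are placeholder assumptions, not the data-driven, domain-informed priors described in the abstract.

```python
# Minimal, hypothetical sketch: MAP-style training where a Gaussian prior over
# the parameters appears as a regularizer added to the task loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))

# Assumed prior: independent Gaussians centered at zero with a fixed precision.
# A domain-informed, data-driven prior would instead set these from data/domain knowledge.
prior_mean = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
prior_precision = 1.0  # placeholder value

def neg_log_prior(model):
    """Negative log-density of the Gaussian prior over parameters (up to a constant)."""
    return 0.5 * prior_precision * sum(
        ((p - prior_mean[name]) ** 2).sum() for name, p in model.named_parameters()
    )

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # toy batch
for _ in range(100):
    optimizer.zero_grad()
    # Task loss plus prior-induced regularizer: the prior's constraints shape training.
    loss = criterion(model(x), y) + neg_log_prior(model)
    loss.backward()
    optimizer.step()
```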


Tim G. J. Rudner is a Faculty Fellow at New York University’s Center for Data Science and an AI Fellow at Georgetown University’s Center for Security and Emerging Technology. He conducted PhD research on probabilistic machine learning in the Departments of Computer Science and Statistics at the University of Oxford, where he was advised by Yee Whye Teh and Yarin Gal. The goal of his research is to create robust and transparent machine learning models by developing methods and theoretical insights that improve the reliability, safety, transparency, and fairness of machine learning systems deployed in safety-critical settings. Tim holds a master’s degree in statistics from the University of Oxford and an undergraduate degree in applied mathematics and economics from Yale University. He was selected as a 2024 Rising Star in Generative AI and is a Qualcomm Innovation Fellow and a Rhodes Scholar.