[2019-05-30] Prof. Zachary Chase Lipton, Carnegie Mellon University, "Deep Learning Under Distribution Shift"

Post date: 2019-05-07
Title: Deep Learning Under Distribution Shift
Date: 2019-05-30 2:20pm-3:30pm
Location: R111, CSIE
Speaker: Prof. Zachary Chase Lipton, Carnegie Mellon University
Hosted by: Prof. Vivian Chen


We might hope that when faced with unexpected inputs, well-designed software systems would fire off warnings. However, ML systems, which depend strongly on properties of their inputs (e.g., the i.i.d. assumption), tend to fail silently. Faced with distribution shift, we wish to detect and quantify the shift, and to correct our classifiers when possible, even without observing test set labels. This talk will describe several approaches for tackling distribution shift. In one case, motivated by medical diagnosis, where diseases (targets) cause symptoms (observations), we focus on label shift, where the label marginal p(y) changes but the conditional p(x|y) does not. Our method exploits arbitrary black-box predictors to reduce dimensionality, detecting and correcting shift without having to maneuver in the ambient dimension. In other work, we extend this research, examining shift detection more broadly and focusing on cases including structured outputs and noisy inputs.
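The label-shift correction sketched in the abstract can be illustrated with a small synthetic example. The sketch below is a minimal, hypothetical rendering of the black-box idea (not the speaker's exact implementation): a fixed classifier's confusion matrix on labeled source data, together with its prediction marginal on unlabeled target data, yields a linear system whose solution gives the importance weights q(y)/p(y). The noise rate, sample sizes, and marginals are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed source label marginal p(y) and shifted target marginal q(y);
# p(x|y) is held fixed, so only the label frequencies change.
p_y = np.array([0.5, 0.5])
q_y = np.array([0.8, 0.2])

def simulate(marginal, n):
    """Draw labels from a marginal and noisy black-box predictions
    with a fixed, label-conditional 10% error rate."""
    y = rng.choice(2, size=n, p=marginal)
    flip = rng.random(n) < 0.1
    y_hat = np.where(flip, 1 - y, y)
    return y, y_hat

# Labeled source validation set and unlabeled target set.
y_src, yhat_src = simulate(p_y, 100_000)
_,     yhat_tgt = simulate(q_y, 100_000)

# Joint confusion matrix on the source: C[i, j] = p(y_hat = i, y = j).
C = np.zeros((2, 2))
np.add.at(C, (yhat_src, y_src), 1)
C /= len(y_src)

# Prediction marginal on the target: mu[i] = q(y_hat = i).
mu = np.bincount(yhat_tgt, minlength=2) / len(yhat_tgt)

# Solve C w = mu for importance weights w[j] = q(y=j) / p(y=j),
# then recover the estimated target label marginal.
w = np.linalg.solve(C, mu)
q_y_hat = w * p_y

print(np.round(q_y_hat, 2))  # close to the true q(y) = [0.8, 0.2]
```

Note that everything here operates on the classifier's one-dimensional predictions rather than the raw inputs, which is the point of the dimensionality reduction mentioned in the abstract.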
Zachary Chase Lipton is an assistant professor at Carnegie Mellon University. His research spans both core machine learning methods and their social impact, addressing diverse application areas including medical diagnosis, dialogue systems, and product recommendation. He is the founding editor of the Approximately Correct blog and an author of Dive Into Deep Learning, an interactive open-source book teaching deep learning through Jupyter notebooks. Find him on Twitter (@zacharylipton) or GitHub (@zackchase).

Last modified: 2019-05-07 1:27 PM
