
[2021-05-14] Mr. Guang-He Lee, Massachusetts Institute of Technology (MIT), "Transparency from Neural Models and Beyond"

Seminar Announcement
Post date: 2021-03-23
Title: Transparency from Neural Models and Beyond
Date: 2021-05-14, 2:20 pm - 3:30 pm
Location: R103, CSIE
Speaker: Mr. Guang-He Lee, Massachusetts Institute of Technology (MIT)
Hosted by: Prof. Vivian Chen
 
 

Abstract:

 
Transparency has become a key desideratum of machine learning. Transparent properties such as interpretability or robustness are indispensable when model predictions are relied on in contentious or mission-critical applications (e.g., social, legal, financial, or security applications). It is also highly beneficial if we can learn from superhuman models to advance human knowledge (e.g., in the medical or natural sciences). While the desired notion of transparency can vary widely across scenarios, it is often simply unavailable for modern predictors such as deep networks, primarily due to their inherent complexity. In this thesis, we focus on a set of formal properties of transparency and design a series of algorithms to obtain these properties from such modern predictors. In particular, these include
1) the model class (of oblique decision trees), effectively represented and trained via a new family of neural models.
2) local model classes (e.g., piecewise-linear models), induced from and estimated jointly with a black-box predictor, possibly over structured objects.
3) local certificates of robustness, derived for ensembles of any black-box predictors in continuous or discrete spaces.
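To make item 1 concrete: an oblique decision tree replaces the axis-aligned thresholds of an ordinary tree (x[i] > t) with linear splits over all features (w·x + b > 0). The following minimal sketch illustrates inference in such a tree; the class and function names are illustrative assumptions, not the speaker's actual formulation or training method.

```python
import numpy as np

class ObliqueNode:
    """A node of an oblique decision tree: internal nodes hold a split
    hyperplane (w, b); leaves hold a prediction value."""
    def __init__(self, w=None, b=0.0, left=None, right=None, value=None):
        self.w, self.b = w, b            # split hyperplane parameters
        self.left, self.right = left, right
        self.value = value               # leaf prediction (None for internal nodes)

def predict(node, x):
    """Route x down the tree by the sign of each oblique split w.x + b."""
    while node.value is None:
        node = node.right if np.dot(node.w, x) + node.b > 0 else node.left
    return node.value

# A depth-1 tree that splits on the oblique rule x0 + x1 > 1
tree = ObliqueNode(w=np.array([1.0, 1.0]), b=-1.0,
                   left=ObliqueNode(value=0), right=ObliqueNode(value=1))
print(predict(tree, np.array([0.9, 0.5])))  # 0.9 + 0.5 - 1 > 0, so prints 1
```

Because each split is a thresholded linear unit, a tree of this form can be written as a small neural network, which is what makes gradient-based training of the whole model class conceivable.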
In an ongoing effort, we are trying to visualize the manifold structure of data, based on noisy representations with correct label information.
 
The contributions are mainly methodological and theoretical. We also emphasize scalability in large-scale settings. Compared to a human-centric approach to interpretability, our methods are particularly suited to scenarios that involve factual verification, or where it is challenging for humans to judge explanations subjectively (e.g., for superhuman models).
 
 
Biography:
 
Guang-He Lee is a final-year Ph.D. student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). His research interests are in developing transparent and interpretable machine learning methods that can ultimately help people trust and learn from modern machine learning models. He focuses on large-scale applications with combinatorial or high-dimensional data such as natural language, molecules, or images. He has received several awards, including the Microsoft-IEEE Young Fellowship, Best Thesis Awards, and honorary membership in the Phi Tau Phi Scholastic Honor Society.
 
 
 
Last modified: 2021-03-23, 4:24 PM
