[2018-12-28] Prof. Cho-Jui Hsieh, UCLA, "Robustness of Deep Neural Network to Adversarial Examples"

Title: Robustness of Deep Neural Network to Adversarial Examples
Date: 2018-12-28 2:20pm-3:30pm
Location: R103, CSIE
Speaker: Prof. Cho-Jui Hsieh, UCLA
Hosted by: Prof. Chih-Jen Lin


Abstract:
It has been observed recently that machine learning algorithms, especially deep neural networks, are vulnerable to adversarial examples.
In this talk, I'll briefly go through our recent work on attack, verification, and defense. In terms of attack, I will talk about recent algorithms for constructing adversarial examples in restricted black-box settings. In terms of verification, I will introduce an efficient verification framework for estimating the robustness of neural networks.
Finally, I will discuss several interesting directions to improve the robustness of neural networks against adversarial examples.
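For readers unfamiliar with adversarial examples, a minimal numpy sketch of the classic fast gradient sign method (FGSM; an illustrative white-box attack, not necessarily one of the algorithms covered in the talk) shows the core idea: perturb the input by a small step in the direction of the sign of the loss gradient. The toy linear classifier below is purely for illustration.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Shift x by epsilon along the sign of the loss gradient w.r.t. x."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier: score = w @ x, true label y = +1.
# For a margin-style loss, the gradient of the loss w.r.t. x is -y * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
y = 1.0
grad_x = -y * w                           # gradient of loss w.r.t. input
x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)

# The perturbed input lowers the correct-class score,
# even though each coordinate moved by only 0.1.
assert w @ x_adv < w @ x
```

Black-box attacks of the kind mentioned in the abstract face the harder setting where this gradient is not directly available and must be estimated from queries.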
Bio:
Cho-Jui Hsieh is an assistant professor in the Computer Science department at UCLA. His research focuses on developing new algorithms and optimization techniques for large-scale machine learning problems. Cho-Jui obtained his master's degree in 2009 from National Taiwan University (advisor: Chih-Jen Lin) and his Ph.D. from the University of Texas at Austin in 2015 (advisor: Inderjit S. Dhillon). He received IBM Ph.D. fellowships in 2013-2015, best paper awards at KDD 2010, ICDM 2012, and ICPP 2018, and was a best paper finalist at AISec 2017.
Last modified: 2018-10-18 8:56 AM
