| Date | Topic | Readings & Presentations | Deliverables |
|------|-------|---------------------------|--------------|
| 2/22 | Course introduction<br>Adversarial attacks | | |
| 2/29 | Empirical defenses to evasion attacks | Towards Deep Learning Models Resistant to Adversarial Attacks<br>Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | |
| 3/7 | Theoretical analysis of adversarial examples | Adversarially Robust Generalization Requires More Data<br>Robustness May Be at Odds with Accuracy<br>Adversarial Examples Are Not Bugs, They Are Features<br>Adversarial examples from computational constraints | |
| 3/14 | Certified Defenses | Evaluating robustness of neural networks with mixed integer programming<br>Certified Defenses against Adversarial Examples<br>Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability<br>Student presentation 1: Transferability | |
| 3/21 | Certified Defenses | MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius<br>Denoised Smoothing: A Provable Defense for Pretrained Classifiers<br>LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness<br>Globally-Robust Neural Networks<br>Student presentation 2: Adversarial attacks beyond L_p constraints | |
| 3/28 | Poisoning attacks & defenses | Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks<br>Certified Defenses for Data Poisoning Attacks<br>Spectral Signatures in Backdoor Attacks<br>Student presentation 3: Detection of adversarial examples | |
| 4/4 | Holiday; no class | | |
| 4/11 | Confidentiality of ML models | Towards Data-Free Model Stealing in a Hard Label Setting<br>Increasing the Cost of Model Extraction with Calibrated Proof of Work<br>Stealing Part of a Production Language Model<br>Student presentation 4: Attack / defense in 3D-based models | |
| 4/18 | Differential Privacy I | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network<br>Bilateral Dependency Optimization: Defending Against Model-inversion Attacks<br>NetGuard: Protecting Commercial Web APIs from Model Inversion Attacks using GAN-generated Fake Samples<br>Student presentation 5: Robust overfitting | |
| 4/25 | Differential Privacy II | Deep Learning with Differential Privacy<br>Scalable Private Learning with PATE<br>Exploring the Benefits of Visual Prompting in Differential Privacy<br>Student presentation 6: Attack / defense in Graph Neural Networks | Final project proposal due |
| 5/2 | Fairness | Differentially Private Database Release via Kernel Mean Embeddings<br>Differentially Private Diffusion Models<br>Differentially Private Fine-tuning of Language Models<br>Student presentation 7: Adversarial ML in LLMs | |
| 5/9 | ICLR 2024 | Student presentation 9: Robustness and privacy in distributed learning<br>Student presentation 10: Privacy-robustness tradeoffs<br>Student presentation 11: Privacy in foundation models | |
| 5/16 | Fairness | Student presentation 12: Connection between adversarial robustness and fairness | |
| 5/23 | Fairness | Rethinking Model Ensemble in Transfer-based Adversarial Attacks<br>Adversarial Training Should Be Cast as a Non-Zero-Sum Game<br>Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning<br>Student presentation 13: Adversarial ML in VLMs | |
| 5/30 | Final project presentation | | |
| 6/6 | Final project presentation | | |
| 6/13 | Summer vacation starts! | | Final project report due |