
[2022-11-25] Prof. Jason Lee, Princeton University, "Beyond kernel methods: Feature Learning with gradient descent"

Seminar Talk Announcement
Poster: HSIN-YI SUNG
Post date: 2022-11-18
Title: Beyond kernel methods: Feature Learning with gradient descent 
Date: 2022-11-25 14:20-15:30
Location: R103, CSIE
Speaker: Prof. Jason Lee, Princeton University
Hosted by: Prof. Yen-Huan Li  

Abstract:

Significant theoretical work has established that in specific regimes, neural networks trained by gradient descent behave like kernel methods. However, in practice, it is known that neural networks strongly outperform their associated kernels. In this work, we explain this gap by demonstrating that there is a large class of functions which cannot be efficiently learned by kernel methods but can be easily learned with gradient descent on a two-layer neural network outside the kernel regime, by learning representations that are relevant to the target task. We also demonstrate that these representations allow for efficient transfer learning, which is impossible in the kernel regime.
Specifically, we consider the problem of learning polynomials which depend on only a few relevant directions, i.e., of the form $f(x) = g(Ux)$ where $U: \R^d \to \R^r$ with $d \gg r$. When the degree of $f$ is $p$, it is known that $n \asymp d^p$ samples are necessary to learn $f$ in the kernel regime. Our primary result is that gradient descent learns a representation of the data which depends only on the directions relevant to $f$. This results in an improved sample complexity of $n \asymp d^2 r + d r^p$. Furthermore, in a transfer learning setup where the data distributions in the source and target domains share the same representation $U$ but have different polynomial heads, we show that a popular heuristic for transfer learning has a target sample complexity independent of $d$.
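For a concrete picture of the setting described above, the toy sketch below (not code from the talk; the dimensions, the polynomial head g, the initialization scales, and the optimization hyperparameters are all arbitrary illustrative choices) draws data from a multi-index target $f(x) = g(Ux)$ with $r \ll d$ and fits it with a two-layer ReLU network trained by plain full-batch gradient descent.

```python
# Toy illustration only (assumed setup, not the talk's code): data from a
# multi-index target f(x) = g(Ux) with r << d, fit by a two-layer ReLU
# network trained with full-batch gradient descent.
import jax
import jax.numpy as jnp

d, r, n, width = 50, 2, 2000, 256     # ambient dim, relevant dim, samples, hidden width
key = jax.random.PRNGKey(0)
k_u, k_x, k_w, k_a = jax.random.split(key, 4)

# U has orthonormal rows spanning the r relevant directions.
U = jnp.linalg.qr(jax.random.normal(k_u, (d, r)))[0].T        # shape (r, d)
g = lambda z: (z ** 2).sum(axis=-1) - z.prod(axis=-1)         # arbitrary low-degree polynomial head

X = jax.random.normal(k_x, (n, d))
y = g(X @ U.T)                         # labels depend only on the r directions in U

params = {
    "W": jax.random.normal(k_w, (width, d)) / jnp.sqrt(d),     # first-layer weights
    "b": jnp.zeros(width),
    "a": jax.random.normal(k_a, (width,)) / jnp.sqrt(width),   # second-layer weights
}

def forward(p, x):
    # Two-layer ReLU network: x -> ReLU(Wx + b) -> a^T hidden
    return jnp.maximum(x @ p["W"].T + p["b"], 0.0) @ p["a"]

def loss(p):
    return jnp.mean((forward(p, X) - y) ** 2)

@jax.jit
def step(p, lr=1e-3):
    grads = jax.grad(loss)(p)
    return jax.tree_util.tree_map(lambda w, dw: w - lr * dw, p, grads)

for t in range(5001):
    params = step(params)
    if t % 1000 == 0:
        print(f"step {t:5d}  train mse {float(loss(params)):.4f}")
```

The sketch only illustrates the data model and the training loop, not the analysis: the abstract's claim is that in this setting kernel methods need on the order of $d^p$ samples, while gradient descent on the two-layer network needs roughly $d^2 r + d r^p$ because the first layer learns the relevant directions in $U$.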

This is joint work with Alex Damian and Mahdi Soltanolkotabi.

 

Biography:

Jason Lee is an associate professor in Electrical Engineering and Computer Science (secondary) at Princeton University. Prior to that, he was in the Data Science and Operations department at the University of Southern California and a postdoctoral researcher at UC Berkeley working with Michael I. Jordan. Jason received his PhD at Stanford University, advised by Trevor Hastie and Jonathan Taylor. His research interests are in the theory of machine learning, optimization, and statistics. Lately, he has worked on the foundations of deep learning, representation learning, and reinforcement learning. He has received an NSF CAREER Award, an ONR Young Investigator Award in Mathematical Data Science, a Sloan Research Fellowship, and a NeurIPS Best Student Paper Award, and was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization.

Last modification time: 2022-11-18 12:04 PM
