[2019-01-09] Prof. Song Han, Massachusetts Institute of Technology (MIT), "Hardware-Centric AutoML: Design Automation for Efficient Deep Learning Computing"

Post date: 2018-12-27
Title: Hardware-Centric AutoML: Design Automation for Efficient Deep Learning Computing
Date: 2019-01-09 2:00pm-3:30pm
Location: R103, CSIE
Speaker: Prof. Song Han, Massachusetts Institute of Technology (MIT)
Hosted by: Prof. Tei-Wei Kuo


In the post-Moore's Law era, the amount of computation per unit cost and power is no longer increasing at its historic rate. In the post-ImageNet era, researchers are solving more complicated AI problems using larger datasets, which drives the demand for more computation. This mismatch between the supply of and demand for computation highlights the need to co-design efficient machine learning algorithms and domain-specific hardware architectures. We introduce our recent work on using machine learning to optimize the machine learning system (hardware-centric AutoML): learning the optimal pruning strategy (AMC) and quantization strategy (HAQ) on the target hardware; learning the optimal neural network architecture specialized for a target hardware architecture (ProxylessNAS); and learning to optimize analog circuit parameters, rather than relying on experienced analog engineers to tune those transistors (L2DC). For hardware-friendly machine learning algorithms, I'll introduce the temporal shift module (TSM) for efficient video understanding, which offers 8x lower latency and 12x higher throughput than 3D convolution-based methods while ranking first on both the Something-Something V1 and V2 leaderboards. On the hardware side, I'll describe efficient deep learning accelerators that can take advantage of these efficient algorithms, including both FPGA and ASIC designs for emerging deep learning architectures. I'll conclude the talk with an outlook on design automation for efficient deep learning computing.
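The core idea behind TSM is that temporal modeling can be obtained nearly for free by shifting a fraction of the feature channels forward and backward along the time axis, instead of running costly 3D convolutions. The following is a minimal NumPy sketch of that shift operation; the function name and the `shift_div` parameter are illustrative, and the real module is embedded inside the residual blocks of a 2D CNN backbone.

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels along the temporal dimension.

    x: array of shape (N, T, C, H, W) -- batch, time, channels, spatial.
    1/shift_div of the channels shift one step toward the past, another
    1/shift_div shift one step toward the future, and the rest stay put.
    The shift itself involves no multiplications, so it adds essentially
    zero FLOPs on top of the 2D convolutions around it.
    """
    n, t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]               # shift toward the past
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift toward the future
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]          # remaining channels unchanged
    return out
```

Because the shifted positions at the sequence boundaries are zero-filled, each frame's features mix information from its temporal neighbors while the tensor shape is preserved, so the operation drops into an existing 2D network unchanged.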
Song Han is an assistant professor in the EECS Department at the Massachusetts Institute of Technology (MIT) and PI of HAN Lab: Hardware, AI and Neural-nets. Dr. Han's research focuses on energy-efficient deep learning and domain-specific architectures. He proposed "Deep Compression," which has widely impacted the industry, and was the co-founder and chief scientist of DeePhi Tech, a company based on his PhD thesis. Prior to joining MIT, Song Han received his PhD from Stanford University.
Last modified: 2018-12-28 9:22 AM
