
Pacific Northwest National Laboratory

Unfolding Statistical Inference Algorithms into Better Deep Networks

Thursday, March 2, 2017
Scott Wisdom
Ph.D. student
University of Washington
The core idea of my research is "deep unfolding," a technique for constructing and explaining deep network architectures by unrolling inference algorithms from statistical models. Deep unfolding yields principled initializations for training deep networks, provides insight into why these networks are effective, and assists with interpreting what they learn. For example, my most recent work shows that recurrent neural networks with rectified linear units and residual connections are a particular deep unfolding of the iterative shrinkage-thresholding algorithm (ISTA), a simple and classic method for solving the Lasso problem (i.e., L1-regularized least squares). This unfolding makes the recurrent network's weights interpretable and leads to faster training and better performance.
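As a rough illustration of the unfolding idea (a minimal sketch, not the speaker's implementation), the NumPy code below runs ISTA on a Lasso problem and treats each iteration as one "layer" of a network: a shared affine map followed by a soft-threshold nonlinearity, which can itself be written with two ReLUs. The function names (`unfolded_ista`, `soft_threshold`), problem sizes, and parameter values are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding expressed with two ReLUs:
    # soft(x, t) = relu(x - t) - relu(-x - t)
    relu = lambda z: np.maximum(z, 0.0)
    return relu(x - t) - relu(-x - t)

def unfolded_ista(A, y, lam, num_layers=100):
    """ISTA for the Lasso min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    viewed as a feed-forward network: every layer applies the same
    affine map followed by a ReLU-expressible nonlinearity."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    W = np.eye(A.shape[1]) - A.T @ A / L  # shared "recurrent" weight: I - A^T A / L
    b = A.T @ y / L                       # input-injection term: A^T y / L
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):           # unfolding: one iteration = one layer
        x = soft_threshold(W @ x + b, lam / L)
    return x

# Toy usage: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
x_hat = unfolded_ista(A, y, lam=0.1)
```

In deep unfolding proper, the shared `W` and `b` would be untied across layers and trained as free parameters, with the ISTA-derived values above serving as the kind of principled initialization the abstract describes.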
Speaker Bio

Scott Wisdom is a 5th-year Ph.D. student in the Department of Electrical Engineering at the University of Washington, advised by Les Atlas and James Pitton. He earned his M.S. in Electrical Engineering from the University of Washington in March 2014 with a thesis titled "Improved Statistical Signal Processing for Nonstationary Random Processes Using Time-Warping." He has interned at Microsoft Research and Mitsubishi Electric Research Labs, and was a graduate student member of the Far-Field Speech team at the summer 2015 Jelinek Workshop on Speech and Language Technology. His research interests include machine learning and statistical signal processing for detection, enhancement, and classification of nonstationary time series, especially audio signals such as speech.
