Pacific Northwest National Laboratory

Beyond Fine Tuning: Adding Capacity to Leverage Few Labels

Publish Date: Thursday, December 28, 2017
In this paper we present a technique for training neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or using domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, and we combine pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results obtained with standard fine-tuning transfer learning approaches, and we significantly increase performance over such approaches when using smaller amounts of data.
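The general idea described in the abstract can be illustrated with a minimal PyTorch sketch: freeze a pre-trained module so its representations are preserved, train a small untrained module alongside it, and let a classifier combine both sets of features. The choice of ResNet-18 as the pre-trained module, the size of the new branch, and concatenation as the combination scheme are illustrative assumptions here, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ModularNet(nn.Module):
    """Sketch: add new capacity next to a frozen pre-trained module."""

    def __init__(self, num_classes):
        super().__init__()
        # Pre-trained module: a frozen ResNet-18 feature extractor
        # (final fully connected layer removed).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.pretrained = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.pretrained.parameters():
            p.requires_grad = False  # keep existing representations intact

        # Untrained module: a small convolutional branch trained from
        # scratch on the target data (hypothetical capacity choice).
        self.new_module = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

        # The classifier sees both the frozen and the new representations.
        self.classifier = nn.Linear(512 + 64, num_classes)

    def forward(self, x):
        frozen_feats = self.pretrained(x).flatten(1)  # 512-d, not updated
        new_feats = self.new_module(x).flatten(1)     # 64-d, learned
        return self.classifier(torch.cat([frozen_feats, new_feats], dim=1))
```

In this sketch only the new branch and the classifier receive gradients, so the few available labels are spent on adding representations rather than on overwriting the pre-trained ones, which is the distinction the abstract draws against standard fine-tuning.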
Hodas N.O., K.J. Shaffer, A. Yankov, C.D. Corley, and A.L. Anderson. 2017. "Beyond Fine Tuning: Adding Capacity to Leverage Few Labels." In Learning with Limited Labeled Data (LLD Workshop 2017), December 9, 2017, Long Beach, California. La Jolla, California: Neural Information Processing Systems Foundation, Inc. PNNL-SA-122155.