Reducing Data Motion to Accelerate the Training of Deep Neural Networks

Abstract

The use of Deep Neural Networks (DNNs) is becoming ubiquitous in many areas due to their exceptional pattern detection capabilities. For example, deep learning solutions are coupled with large-scale scientific simulations to increase the accuracy of pattern classification problems and thus improve the quality of scientific computational models. Despite this success, deep learning methods still suffer from several important limitations: the DNN topology must be set through an empirical and time-consuming process, the training phase is very costly, and the latency of the inference phase is a serious limitation in emerging areas like autonomous driving. This paper reduces the cost of DNN training by decreasing the amount of data movement across heterogeneous architectures composed of several GPUs and multicore CPU devices. In particular, this paper proposes an algorithm to dynamically adapt the data representation format of network weights during training. This algorithm drives a compression procedure that reduces the size of the data before they are sent over the parallel system. We run an extensive evaluation campaign considering several up-to-date deep neural network models and two high-end parallel architectures composed of multiple GPUs and multicore CPU chips. Our solution achieves average performance improvements from 6.18% up to 11.91%.
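The paper's adaptation algorithm is not reproduced here; as a rough illustration of the underlying idea, the NumPy sketch below casts a weight tensor to the narrowest floating-point format whose round-trip error stays within a tolerance, so that the smaller buffer is what a worker would send over the interconnect. All names (`choose_format`, `compress_for_transfer`, `error_budget`) are hypothetical and not from the paper.

```python
# Illustrative sketch only: shows the general idea of dynamically picking a
# narrower data representation for weight tensors before exchanging them
# between devices. Not the paper's actual algorithm.
import numpy as np

def choose_format(weights: np.ndarray, error_budget: float = 1e-3) -> np.dtype:
    """Return the narrowest float format whose round-trip error, normalized
    by the tensor's largest magnitude, stays within `error_budget`."""
    scale = np.max(np.abs(weights)) + 1e-12        # avoid division by zero
    for dtype in (np.float16, np.float32):
        roundtrip = weights.astype(dtype).astype(weights.dtype)
        if np.max(np.abs(roundtrip - weights)) / scale <= error_budget:
            return np.dtype(dtype)
    return np.dtype(weights.dtype)                 # fall back to full width

def compress_for_transfer(weights: np.ndarray, error_budget: float = 1e-3):
    """Cast the weights to the chosen format; the smaller buffer is what
    would travel over the interconnect, halving traffic when float16 is
    accurate enough for the current tensor."""
    fmt = choose_format(weights, error_budget)
    return weights.astype(fmt), fmt

if __name__ == "__main__":
    w = np.random.randn(1024, 1024).astype(np.float32)  # stand-in layer weights
    payload, fmt = compress_for_transfer(w)
    print(f"chosen format: {fmt}, payload: {payload.nbytes} B vs {w.nbytes} B")
```

In a real training run, a decision of this kind would be re-evaluated as weight distributions evolve across iterations, trading a small, bounded representation error for less data motion between GPUs and CPU multicores.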

Publication: arXiv
