PDL ABSTRACT

Managed Communication and Consistency for Fast
Data-Parallel Iterative Analytics

Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-15-105. April 2015.

Jinliang Wei, Wei Dai, Aurick Qiao, Qirong Ho*, Henggang Cui, Gregory R. Ganger,
Phillip B. Gibbons, Garth A. Gibson, Eric P. Xing

Carnegie Mellon University
* Institute for Infocomm Research, A*STAR

http://www.pdl.cmu.edu/

At the core of Machine Learning (ML) analytics applied to Big Data is often an expert-suggested model, whose parameters are refined by iteratively processing a training dataset until convergence. The completion time (i.e., convergence time) and the quality of the learned model depend not only on the rate at which refinements are generated but also on the quality of each refinement. While data-parallel ML applications often employ a loose consistency model when updating shared model parameters to maximize parallelism, the accumulated error may seriously degrade the quality of refinements and thus delay completion, a problem that usually worsens with scale. More immediate propagation of updates reduces the accumulated error, but this strategy is limited by physical network bandwidth. Moreover, the performance of the widely used stochastic gradient descent (SGD) algorithm is sensitive to the initial step size; simply increasing communication without adjusting the step size accordingly fails to achieve optimal performance.
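The sensitivity of SGD to step size mentioned above can be illustrated with a minimal sketch (not code from the paper): gradient descent on the toy loss f(w) = 0.5·w², where a well-chosen step converges quickly and an overly large step diverges.

```python
# Illustrative sketch only (not Bosen code): fixed-step SGD on the toy
# quadratic loss f(w) = 0.5 * w^2, whose gradient is simply w.
def sgd(step_size, steps=100, w0=1.0):
    w = w0
    for _ in range(steps):
        grad = w          # gradient of 0.5 * w^2 at the current point
        w -= step_size * grad
    return w

# A modest step size drives w toward the optimum at 0 ...
good = abs(sgd(0.1))
# ... while a step size that is too large makes the iterates diverge,
# since each update multiplies w by (1 - 2.5) = -1.5.
bad = abs(sgd(2.5))
```

The same fixed-step algorithm thus behaves very differently depending on one tuning constant, which is why the paper pairs increased communication with adaptive step-size revision rather than tuning a fixed schedule by hand.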

This paper presents Bösen, a system that maximizes network communication efficiency under a given inter-machine network bandwidth budget to minimize accumulated error, while ensuring theoretical convergence guarantees for large-scale data-parallel ML applications. Furthermore, Bösen prioritizes the messages that are most significant to algorithm convergence, further speeding it up. Finally, Bösen is the first distributed implementation of the recently presented adaptive revision algorithm, which provides orders-of-magnitude improvements over a carefully tuned fixed schedule of step size refinements. Experiments on two clusters with up to 1024 cores show that our mechanism significantly improves upon static communication schedules.
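The prioritization idea can be sketched as follows. This is a hypothetical illustration, not Bösen's actual API: given a per-round budget of messages, send the accumulated parameter updates with the largest magnitude first, on the intuition that they matter most to convergence.

```python
# Illustrative sketch (names and policy are assumptions, not Bosen's API):
# under a per-round budget of `budget` messages, choose which accumulated
# parameter updates to propagate by ranking them on magnitude.
def pick_updates(accumulated, budget):
    """accumulated: dict mapping param_id -> pending delta.
    Returns the param_ids whose pending updates should be sent this round."""
    ranked = sorted(accumulated, key=lambda k: abs(accumulated[k]), reverse=True)
    return ranked[:budget]

# Example: with a budget of 2 messages, the two largest-magnitude
# pending deltas (w1 and w2) are selected for propagation.
pending = {"w0": 0.01, "w1": -3.2, "w2": 0.5, "w3": -0.04}
chosen = pick_updates(pending, budget=2)
```

Magnitude-based ranking is one simple significance measure; the point of the sketch is only that, under a fixed bandwidth budget, choosing which updates to send becomes a prioritization problem.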

KEYWORDS: distributed systems, machine learning, parameter server

© 2017. Last updated 3 May, 2015