Distributed Primal-Dual Optimization for Non-uniformly Distributed Data
Minhao Cheng, Cho-Jui Hsieh
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2028-2034.
https://doi.org/10.24963/ijcai.2018/280
Distributed primal-dual optimization has received much attention in the past few years. In this framework, training samples are stored on multiple machines. At each round, every machine performs a sequence of updates based on its local data, and the local updates are then synchronized and merged to obtain the update to the global model. All previous approaches merge the local updates by averaging them with a uniform weight. However, in many real-world applications data are not uniformly distributed across machines, so a uniform weight is inadequate to capture the heterogeneity of the local updates. To resolve this issue, we propose a better way to merge local updates in the primal-dual optimization framework. Instead of using a single weight for all local updates, we develop a computationally efficient algorithm that automatically chooses the optimal weight for each machine. Furthermore, we propose an efficient way to estimate the duality gap of the merged update by exploiting the structure of the objective function, which leads to an efficient line-search algorithm based on the reduction of the duality gap. Combining these two ideas, our algorithm is much faster and more scalable than existing methods on real-world problems.
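The merging step described above can be sketched as follows. This is a minimal conceptual illustration, not the authors' implementation: the function names and the example weights are assumptions, and a real system would choose the per-machine weights by the paper's optimization/line-search procedure rather than fixing them by hand.

```python
def merge_uniform(local_updates):
    """Baseline merge: average all K local update vectors with weight 1/K."""
    k = len(local_updates)
    dim = len(local_updates[0])
    return [sum(u[i] for u in local_updates) / k for i in range(dim)]

def merge_weighted(local_updates, weights):
    """Heterogeneity-aware merge: combine local updates with per-machine
    weights (in the paper, chosen automatically to maximize progress)."""
    dim = len(local_updates[0])
    return [sum(w * u[i] for w, u in zip(weights, local_updates))
            for i in range(dim)]

# Two machines with different local updates (e.g., due to skewed data).
updates = [[1.0, 2.0], [3.0, 4.0]]
print(merge_uniform(updates))                 # [2.0, 3.0]
print(merge_weighted(updates, [0.25, 0.75]))  # [2.5, 3.5]
```

With uniform weights both machines contribute equally; the weighted merge lets a machine whose local update is more informative (e.g., one holding more or harder data) contribute proportionally more.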
Keywords:
Machine Learning: Machine Learning
Machine Learning Applications: Big data; Scalability