Lectures & Forums
Liyuan Mathematics Forum: Communication Compression in Distributed Learning
Posted: 2022-04-06 11:30:36


Speaker: Prof. Ming Yan, Michigan State University

Time: April 8, 2022, 16:00–17:00

Tencent Meeting ID: 378-706-269

Meeting link: https://meeting.tencent.com/dm/Rw0wGT5NEJjj

Abstract:

Large-scale machine learning models are trained with parallel (stochastic) gradient descent algorithms on distributed systems. As the number of computing nodes and the model's dimension scale up, the communication required for gradient aggregation and model synchronization becomes the major obstacle to efficient learning. In this talk, I will introduce several ways to compress the transferred data and reduce the overall communication so that this obstacle can be greatly mitigated. More specifically, I will introduce methods that reduce or eliminate the compression error without additional communication, for both deterministic and stochastic algorithms.
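As background for the abstract, one widely used family of such compression schemes combines gradient sparsification with error feedback: each worker transmits only a few coordinates of its gradient and carries the untransmitted remainder into the next step. The sketch below is illustrative only (the talk's specific methods are not described here); the top-k operator and the `ErrorFeedbackWorker` class are hypothetical names for this standard idea.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

class ErrorFeedbackWorker:
    """One worker in a distributed setup: compresses gradients before
    communication and accumulates the compression error locally."""

    def __init__(self, dim, k):
        self.k = k
        self.residual = np.zeros(dim)  # error not yet transmitted

    def compress(self, grad):
        corrected = grad + self.residual            # add back past error
        compressed = top_k_compress(corrected, self.k)
        self.residual = corrected - compressed      # store the new error
        return compressed                           # this is what gets sent
```

By construction, `compressed + residual` always equals the error-corrected gradient, so no information is permanently lost: coordinates that are dropped in one round accumulate in the residual and are eventually transmitted in later rounds, which is what lets such schemes avoid the accuracy loss of naive compression.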


About the Speaker:

Ming Yan is an associate professor in the Department of Computational Mathematics, Science and Engineering (CMSE) and the Department of Mathematics at Michigan State University. His research interests lie in computational optimization and its applications to image processing, machine learning, and other data-science problems. He received his B.S. and M.S. in mathematics from the University of Science and Technology of China in 2005 and 2008, respectively, and his Ph.D. in mathematics from the University of California, Los Angeles in 2012. After completing his Ph.D., he was a Postdoctoral Fellow in the Department of Computational and Applied Mathematics at Rice University from July 2012 to June 2013, and then moved to the University of California, Los Angeles as a Postdoctoral Scholar and Assistant Adjunct Professor from July 2013 to June 2015. He received a Facebook Faculty Award in 2020.