| Field | Value |
| --- | --- |
| Title | Optimization Algorithms for Distributed Machine Learning |
| Editor | Gauri Joshi |
| Overview | Discusses state-of-the-art algorithms that are at the core of the field of federated learning. Analyzes each algorithm based on its error-versus-iterations convergence and the runtime spent per iteration. |
| Series | Synthesis Lectures on Learning, Networks, and Algorithms |
| Description | This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime. |
| Publication date | Book, 2023 |
| Keywords | Distributed Machine Learning; Distributed Optimization; Optimization Algorithms; Stochastic Gradient Descent |
| Edition | 1 |
| DOI | https://doi.org/10.1007/978-3-031-19067-4 |
| ISBN (softcover) | 978-3-031-19069-8 |
| ISBN (eBook) | 978-3-031-19067-4 |
| Series ISSN | 2690-4306 |
| Series E-ISSN | 2690-4314 |
| Copyright | The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG |
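
The description's summary of synchronous SGD (each worker computes a gradient on its own data shard, and the averaged gradient drives a single parameter update) can be illustrated with a minimal sketch. The least-squares objective, worker count, step size, and iteration count below are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel SGD on a least-squares problem.
# Assumptions (illustrative only): 4 workers, a synthetic dataset split evenly
# into shards, a constant step size, and full-shard gradients per iteration.

rng = np.random.default_rng(0)
n_samples, dim, n_workers = 400, 10, 4
X = rng.normal(size=(n_samples, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

# Each worker holds one shard of the data.
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def worker_gradient(w, X_k, y_k):
    """Gradient of the local least-squares loss on worker k's shard."""
    return X_k.T @ (X_k @ w - y_k) / len(y_k)

w = np.zeros(dim)
step_size = 0.05
for _ in range(200):
    # Synchronous step: every worker computes its local gradient,
    # then the server averages them and applies one update.
    grads = [worker_gradient(w, X_k, y_k) for X_k, y_k in shards]
    w -= step_size * np.mean(grads, axis=0)

print("distance to true parameters:", np.linalg.norm(w - w_true))
```

The per-iteration synchronization barrier in this sketch (waiting for all workers before averaging) is exactly the cost that the algorithms surveyed in the book, such as asynchronous, local-update, quantized, sparsified, and decentralized SGD, try to reduce, at the price of the error-runtime trade-off the description mentions.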