Book Details
Format
Hardback or Cased Book
ISBN-10
3031190661
ISBN-13
9783031190667
Edition
2023 ed.
Publisher
Springer International Publishing AG
Imprint
Springer International Publishing AG
Country of Manufacture
GB
Country of Publication
GB
Publication Date
Nov 26th, 2022
Print length
127 Pages
Price
Ksh 7,200.00
Werezi Extended Catalogue
Delivery in 28 days
This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
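To give a flavor of the synchronous SGD setting described above, here is a minimal Python sketch of one synchronous step: each of K simulated workers computes a gradient on its shard of the minibatch, and the averaged gradient is applied in a single update. This is an illustrative toy (the least-squares loss, data, worker count, and the shard_gradient helper are all assumptions for demonstration), not code from the book.

    # Minimal sketch of synchronous data-parallel SGD (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 10))     # toy features
    y = X @ rng.normal(size=10)         # toy linear-regression targets
    w = np.zeros(10)                    # model parameters
    K, lr = 4, 0.1                      # number of workers, learning rate

    def shard_gradient(Xs, ys, w):
        # Least-squares gradient computed on one worker's data shard.
        return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

    for step in range(100):
        idx = rng.choice(len(X), size=256, replace=False)  # sample a minibatch
        shards = np.array_split(idx, K)                    # divide it across workers
        grads = [shard_gradient(X[s], y[s], w) for s in shards]
        w -= lr * np.mean(grads, axis=0)  # synchronous step: average, then update

In a real deployment the per-shard gradients would be computed on separate nodes and combined with an all-reduce or a parameter server; the loop above only simulates that synchronization barrier, which is exactly the delay the book's asynchronous, local-update, quantized, and decentralized variants aim to reduce.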
Get Optimization Algorithms for Distributed Machine Learning by Gauri Joshi at the best price, with quality guaranteed, only at Werezi, Africa's largest book ecommerce store. The book was published by Springer International Publishing AG and has 127 pages.