Seller: Ria Christie Collections, Uxbridge, United Kingdom
Seller rating: 5 out of 5 stars. AbeBooks seller since March 25, 2015.
Condition: New. Seller Inventory # ria9783031190698_new
This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
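As a rough illustration of the synchronous SGD setup described above (a minimal sketch of the general technique, not code from the book), the following Python snippet simulates several worker nodes that each compute a mini-batch gradient, which a central server then averages before taking one SGD step. All names and values here (num_workers, stochastic_grad, the learning rate, the synthetic least-squares problem) are illustrative assumptions, not the book's notation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic least-squares problem: minimize f(w) = (1/2n) * ||Xw - y||^2.
    n, d = 2000, 10
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def stochastic_grad(w, batch_idx):
        # Mini-batch gradient of the least-squares loss.
        Xb, yb = X[batch_idx], y[batch_idx]
        return Xb.T @ (Xb @ w - yb) / len(batch_idx)

    num_workers, batch_size, lr = 4, 32, 0.05  # illustrative values
    w = np.zeros(d)
    for it in range(200):
        # Each worker computes a gradient on its own mini-batch...
        grads = [stochastic_grad(w, rng.choice(n, batch_size, replace=False))
                 for _ in range(num_workers)]
        # ...and the server waits for all workers (the synchronization
        # barrier), averages their gradients, and takes one SGD step.
        w -= lr * np.mean(grads, axis=0)

    print("distance to optimum:", np.linalg.norm(w - w_true))

Relaxing the barrier (asynchronous SGD) or averaging less frequently (local-update SGD) reduces the runtime spent per iteration, but generally at the cost of higher error per iteration, which is the error-runtime trade-off the book analyzes.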
About the Author:
Gauri Joshi, Ph.D., is an Associate Professor in the ECE department at Carnegie Mellon University. Dr. Joshi completed her Ph.D. at MIT EECS. Her current research focuses on designing algorithms for federated learning, distributed optimization, and parallel computing. Her awards and honors include being named one of MIT Technology Review's 35 Innovators Under 35 (2022), the NSF CAREER Award (2021), the ACM SIGMETRICS Best Paper Award (2020), the Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).
Title: Optimization Algorithms for Distributed Machine Learning
Publisher: Springer
Publication Date: 2023
Binding: Soft cover
Condition: New
Seller: moluna, Greven, Germany
Paperback. Condition: New. This is a print-on-demand item and will be printed for you after you order. Seller Inventory # 1215161127
Quantity: Over 20 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; printed after ordering. Seller Inventory # 9783031190698
Quantity: 1 available
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English. Seller Inventory # 9783031190698
Quantity: 2 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; allow 3-4 extra days. 144 pp. English. Seller Inventory # 9783031190698
Quantity: 2 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9783031190698
Quantity: Over 20 available
Seller: Books Puddle, New York, NY, U.S.A.
Condition: New. 144 pp. Seller Inventory # 26398551570
Quantity: 4 available
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 140 pages. 9.45x6.61x0.33 inches. In Stock. Seller Inventory # x-3031190696
Quantity: 2 available
Seller: Biblios, Frankfurt am main, HESSE, Germany
Condition: New. Print on demand. 144 pp. Seller Inventory # 18398551576
Quantity: 4 available