This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. As such, implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that can speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm to train a distributed machine learning model: parallel strategy, data compression and synchronization protocol.
Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data and database management.
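The synopsis names three techniques - parallel strategy, data compression and synchronization protocol - without detailing them. The sketch below is a purely illustrative Python example (not taken from the book) of how these ideas can fit together for a toy linear-regression objective: data is partitioned across simulated workers, local gradients are sparsified with top-k selection, and updates are averaged under a bulk-synchronous step. All names and choices here (topk_compress, worker_gradient, the hyperparameters) are hypothetical assumptions, not the book's algorithms.

# Illustrative sketch only: synchronous distributed SGD on linear regression,
# combining a parallel strategy (data partitioning), data compression
# (top-k gradient sparsification) and a synchronization protocol
# (bulk-synchronous averaging). All names are hypothetical.
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of the gradient."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def worker_gradient(w, X, y):
    """Local mean-squared-error gradient on one worker's data partition."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
n, d, workers, k, lr = 1000, 20, 4, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Parallel strategy: split the rows of the dataset across workers.
parts = np.array_split(np.arange(n), workers)
w = np.zeros(d)

for step in range(200):
    # Each worker computes and compresses its local gradient.
    grads = [topk_compress(worker_gradient(w, X[p], y[p]), k) for p in parts]
    # Synchronization protocol: a bulk-synchronous barrier, then averaging.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))

In practice the same structure appears with asynchronous or stale-synchronous protocols and with quantization instead of sparsification; this sketch only mirrors the vocabulary the synopsis introduces.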
"synopsis" may belong to another edition of this title.
Jiawei Jiang obtained his PhD from Peking University in 2018, advised by Prof. Bin Cui. His research interests include distributed machine learning, gradient optimization and automated machine learning. He has served as a program committee member or reviewer for various international conferences and journals, including SIGMOD, VLDB, ICDE, KDD, AAAI and TKDE. He was awarded the CCF Outstanding Doctoral Dissertation Award (2019) and the ACM China Doctoral Dissertation Award (2018).
Bin Cui is a Professor in the School of EECS and Director of the Institute of Network Computing and Information Systems at Peking University. His research interests include database system architectures, query and index techniques, and big data management and mining. He has published over 200 refereed papers at international conferences and in journals. Dr. Cui has served on the technical program committees of various international conferences, including SIGMOD, VLDB, ICDE and KDD, and as Vice PC Chair of ICDE 2011, Demo Co-Chair of ICDE 2014, Area Chair of VLDB 2014, and PC Co-Chair of APWeb 2015 and WAIM 2016. He is currently a member of the trustee board of the VLDB Endowment, serves on the editorial boards of the VLDB Journal, the Distributed and Parallel Databases Journal, and Information Systems, and was formerly an associate editor of IEEE Transactions on Knowledge and Data Engineering (TKDE, 2009-2013). He received the Microsoft Young Professorship Award (MSRA 2008), the CCF Young Scientist Award (2009), and the Second Prize of the Natural Science Award of MOE China (2014), and was appointed a Cheung Kong Distinguished Professor by the MOE in 2016.
Seller: Basi6 International, Irving, TX, U.S.A.
Condition: Brand New. New. US edition. Expedited shipping for all USA and Europe orders excluding PO Box. Excellent Customer Service. Seller Inventory # ABEJUNE24-314915
Quantity: 1 available
Seller: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st ed. 2022 edition NO-PA16APR2015-KAP. Seller Inventory # 26388127495
Quantity: 1 available
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Seller Inventory # 391505112
Quantity: 1 available
Seller: Biblios, Frankfurt am Main, HESSE, Germany
Condition: New. Seller Inventory # 18388127501
Quantity: 1 available
Seller: Lucky's Textbooks, Dallas, TX, U.S.A.
Condition: New. Seller Inventory # ABLIING23Apr0412070092249
Quantity: Over 20 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Seller Inventory # ria9789811634192_new
Quantity: Over 20 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand, so shipping takes 3-4 days longer. New stock. 184 pp. English. Seller Inventory # 9789811634192
Quantity: 2 available
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order. Seller Inventory # 473138653
Quantity: Over 20 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9789811634192
Quantity: Over 20 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand, new stock - printed after ordering. Seller Inventory # 9789811634192
Quantity: 1 available