This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and require billions of computations, which necessitates extensive data movement between memory and on-chip processing engines. The cost of this data movement now surpasses the cost of the computation itself; DNN accelerators therefore require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
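The synopsis's central claim, that moving data costs more than computing on it, can be made concrete with a back-of-the-envelope energy model. The sketch below is illustrative and not taken from the book: the per-access energies are assumed round numbers in the spirit of commonly cited 45 nm estimates, and the 100x operand-reuse factor is a hypothetical stand-in for what a good dataflow and buffer hierarchy achieve.

# Hypothetical per-operation energies in picojoules (assumed,
# order-of-magnitude values; real costs depend on technology node
# and operand bit width).
E_MAC  = 1.0     # one multiply-accumulate in the PE array
E_SRAM = 5.0     # one on-chip buffer (SRAM) access
E_DRAM = 640.0   # one off-chip DRAM access

def layer_energy_pj(macs, dram_accesses, sram_accesses):
    """Total energy in pJ for a layer, given raw operation counts."""
    return macs * E_MAC + dram_accesses * E_DRAM + sram_accesses * E_SRAM

macs = 1_000_000  # toy layer: one million MACs

# No reuse: every MAC pulls three operands from DRAM and writes one back.
no_reuse = layer_energy_pj(macs, dram_accesses=4 * macs, sram_accesses=0)

# With an on-chip buffer and a dataflow that reuses each operand ~100x,
# DRAM traffic shrinks 100x while SRAM absorbs the remaining accesses.
with_reuse = layer_energy_pj(macs, dram_accesses=4 * macs // 100,
                             sram_accesses=4 * macs)

print(f"no on-chip reuse: {no_reuse / 1e6:8.1f} uJ")   # ~2561 uJ, DRAM-dominated
print(f"with 100x reuse:  {with_reuse / 1e6:8.1f} uJ") # ~47 uJ

Even after charging every operand access to SRAM, the assumed 100x cut in DRAM traffic reduces total energy by roughly 55x in this toy model, which is why the dataflow and buffer-hierarchy choices the book covers dominate accelerator design.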
"synopsis" may belong to another edition of this title.
Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014, an M.S.E. in Electrical Engineering from Princeton University in 2009, and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2007. Before joining Georgia Tech in 2015, he worked as a researcher in the VSSAD Group at Intel in Massachusetts. Dr. Krishna's research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro's Top Picks, one more received an honorable mention, and three have won best paper awards. He received the National Science Foundation (NSF) CRII award in 2018, and both a Google Faculty Award and a Facebook Faculty Award in 2019.
"About this title" may belong to another edition of this title.
Seller: Lucky's Textbooks, Dallas, TX, U.S.A.
Condition: New. Seller Inventory # ABLIING23Mar3113020034897
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9783031006395
Seller: Chiron Media, Wallingford, United Kingdom
PF. Condition: New. Seller Inventory # 6666-IUK-9783031006395
Quantity: 10 available
Seller: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st edition NO-PA16APR2015-KAP. Seller Inventory # 26395061358
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on Demand. Seller Inventory # 402364337
Quantity: 4 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. 168 pp. English. Seller Inventory # 9783031006395
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. PRINT ON DEMAND. Seller Inventory # 18395061348
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after ordering. Seller Inventory # 608129053
Quantity: Over 20 available
Seller: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 168 pp. English. Seller Inventory # 9783031006395
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; new stock printed after ordering. Seller Inventory # 9783031006395