This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence, with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.
"synopsis" may belong to another edition of this title.
Daniel A. Roberts was cofounder and CTO of Diffeo, an AI company acquired by Salesforce; a research scientist at Facebook AI Research; and a member of the School of Natural Sciences at the Institute for Advanced Study in Princeton, NJ. He was a Hertz Fellow, earning a PhD from MIT in theoretical physics, and was also a Marshall Scholar at Cambridge and Oxford Universities.
Sho Yaida is a research scientist at Meta AI. Prior to joining Meta AI, he obtained his PhD in physics at Stanford University and held postdoctoral positions at MIT and at Duke University. At Meta AI, he uses tools from theoretical physics to understand neural networks, the topic of this book.
Boris Hanin is an Assistant Professor at Princeton University in the Operations Research and Financial Engineering Department. Prior to joining Princeton in 2020, Boris was an Assistant Professor at Texas A&M in the Math Department and an NSF postdoc at MIT. He has taught graduate courses on the theory and practice of deep learning at both Texas A&M and Princeton.
"About this title" may belong to another edition of this title.
FREE shipping within U.S.A.
Seller: Better World Books: West, Reno, NV, U.S.A.
Condition: Good. Used book that is in clean, average condition without any missing pages. Seller Inventory # 52014756-75
Quantity: 1 available
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Seller Inventory # 44117012-n
Quantity: Over 20 available
Seller: Lucky's Textbooks, Dallas, TX, U.S.A.
Condition: New. Seller Inventory # ABLIING23Mar2411530051428
Quantity: Over 20 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9781316519332
Quantity: Over 20 available
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: As New. Unread book in perfect condition. Seller Inventory # 44117012
Quantity: Over 20 available
Seller: Grand Eagle Retail, Fairfield, OH, U.S.A.
Hardcover. Condition: new. This is the first book focused entirely on deep learning theory. Tools from theoretical physics are borrowed and adapted to explain, from first principles, how realistic deep neural networks work, benefiting practitioners looking to build better AI models and theorists looking for a unifying framework for understanding intelligence. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller Inventory # 9781316519332
Quantity: 1 available
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. pp. 472 This item is printed on demand. Seller Inventory # 389471918
Quantity: 3 available
Seller: Revaluation Books, Exeter, United Kingdom
Hardcover. Condition: Brand New. 390 pages. 10.00x7.00x1.00 inches. In Stock. This item is printed on demand. Seller Inventory # __1316519333
Quantity: 1 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Seller Inventory # ria9781316519332_new
Quantity: Over 20 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Seller Inventory # 44117012-n
Quantity: Over 20 available