Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. Its excellent generalizability, however, hangs like a “cloud” over conventional complexity-based learning theory: the over-parameterization of deep networks renders almost all existing tools vacuous. This gap considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.
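To see why over-parameterization defeats the classical tools, consider a standard complexity-based generalization bound. The sketch below is illustrative only; the symbols (population risk $R$, empirical risk $\hat{R}$ over $n$ samples, parameter count $d$) are our notation, not necessarily the book's.

```latex
% With probability at least 1 - \delta, uniform convergence gives
%   R(f) \le \hat{R}(f) + 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\log(1/\delta)/(2n)},
% where \mathfrak{R}_n(\mathcal{F}) is the Rademacher complexity of the
% hypothesis class. For VC-type classes it scales as O(\sqrt{d/n}):
\[
  R(f) \;\le\; \hat{R}(f) \;+\; O\!\left(\sqrt{\frac{d}{n}}\right)
  \;+\; \sqrt{\frac{\log(1/\delta)}{2n}} .
\]
% A modern network with d \gg n pushes the O(\sqrt{d/n}) term above 1,
% making the bound vacuous; size-independent measures aim to replace it
% with a quantity that does not grow with the parameter count d.
```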
The efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the “effective” hypothesis complexity that can actually be learned, rather than that of the whole hypothesis space; and (2) modelling the hypothesis learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions (sketched below). Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning centre on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies good privacy preservation, while more robust algorithms may generalize worse.
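As an illustration of the second path, the discrete SGD update is commonly approximated by a continuous-time stochastic differential equation. The formulation below is a standard one from the SGD-as-SDE literature, stated in our own notation (loss $\hat{L}$, learning rate $\eta$, gradient-noise covariance $\Sigma$); it is a sketch, not necessarily the book's exact model.

```latex
% Discrete SGD step with learning rate \eta and mean-zero gradient noise \xi_k:
%   \theta_{k+1} = \theta_k - \eta \left( \nabla \hat{L}(\theta_k) + \xi_k \right),
%   \qquad \operatorname{Cov}(\xi_k) = \Sigma(\theta_k).
% In the small-\eta limit this is modelled by the SDE
\[
  \mathrm{d}\theta_t \;=\; -\nabla \hat{L}(\theta_t)\,\mathrm{d}t
  \;+\; \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,\mathrm{d}W_t ,
\]
% where W_t is a standard Wiener process. The stationary behaviour of this
% diffusion ties the loss-landscape geometry (e.g., flat minima) to the
% hypotheses SGD tends to select.
```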
We expect readers to gain a big picture of the current knowledge in deep learning theory, to understand how deep learning theory can guide the design of new algorithms, and to identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.
"synopsis" may belong to another edition of this title.
Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning is like a “cloud” to conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This irreconciliation considerably undermines the confidence of deploying deep learning to security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explaining the excellent generalizability, including generalization analysis via the size-independent complexity measures, the role of optimization in understanding the generalizability, and the relationship between generalizability and ethical/security issues.
The efforts to understand the excellent generalizability are following two major paths: (1) developing size-independent complexity measures, which can evaluate the “effective” hypothesis complexity that can be learned, instead of the whole hypothesis space; and (2) modelling the learned hypothesis through stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential functions and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly bring many good properties to the loss functions. Rising concerns of deep learning are seen on the ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between them and generalizability: a good generalizability usually means a good privacy-preserving ability; and more robust algorithms might have a worse generalizability.
We expect readers can have a big picture of the current knowledge in deep learning theory, understand how the deep learning theory can guide new algorithm designing, and identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.
"About this title" may belong to another edition of this title.