Principal Component Neural Networks: Theory and Applications - Hardcover

Diamantaras, K. I.; Kung, S. Y.

 
9780471054368: Principal Component Neural Networks: Theory and Applications

Synopsis

Systematically explores the relationship between principal component analysis (PCA) and neural networks. Provides a synergistic examination of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks. Using a unified formulation, the authors present neural models that perform PCA, derived both from the Hebbian learning rule and from least squares learning rules such as back-propagation. Reviews the principles of biological perceptual systems that motivate these models. Each chapter contains a selected list of application examples from diverse areas.
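
The Hebbian route to PCA mentioned in the synopsis can be illustrated with Oja's single-neuron learning rule, the classic example of a network that converges to the first principal component. The sketch below is not taken from the book; the synthetic data, learning rate, and update loop are illustrative assumptions.

# Illustrative sketch (assumptions noted above): Oja's Hebbian rule for a
# single linear neuron, which converges to the first principal component
# of zero-mean input data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with one clearly dominant direction of variance.
X = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 1.0]])
X -= X.mean(axis=0)

w = rng.normal(size=2)          # weight vector of the single neuron
eta = 1e-3                      # small constant learning rate (assumed)

for x in X:
    y = w @ x                   # neuron output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term with weight decay

# Compare against the leading eigenvector of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned direction:", w / np.linalg.norm(w))
print("true first PC    :", pc1)

With a sufficiently small learning rate the weight vector settles, up to sign, on the dominant eigenvector of the input covariance, which is what the comparison at the end checks.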

"synopsis" may belong to another edition of this title.

About the Author

K. I. Diamantaras is a research scientist at Aristotle University in Thessaloniki, Greece. He received his PhD from Princeton University and was formerly a research scientist at Siemens Corporate Research.

S. Y. Kung is Professor of Electrical Engineering at Princeton University and received his PhD from Stanford University. He was formerly a professor of electrical engineering at the University of Southern California.

From the Back Cover

Principal Component Neural Networks: Theory and Applications

Understanding the underlying principles of biological perceptual systems is of vital importance not only to neuroscientists, but, increasingly, to engineers and computer scientists who wish to develop artificial perceptual systems. In this original and groundbreaking work, the authors systematically examine the relationship between the powerful technique of Principal Component Analysis (PCA) and neural networks. Principal Component Neural Networks focuses on issues pertaining to both neural network models (i.e., network structures and algorithms) and theoretical extensions of PCA. In addition, it provides basic review material in mathematics and neurobiology. This book presents neural models originating from both the Hebbian learning rule and least squares learning rules, such as back-propagation. Its ultimate objective is to provide a synergistic exploration of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks. Especially valuable to researchers and advanced students in neural network theory and signal processing, this book offers application examples from a variety of areas, including high-resolution spectral estimation, system identification, image compression, and pattern recognition.
