Delve into neural networks, implement deep learning algorithms, and explore layers of data abstraction with the help of TensorFlow.
Key Features
- Learn how to implement advanced techniques in deep learning with Google's brainchild, TensorFlow
- Explore deep neural networks and layers of data abstraction with the help of this comprehensive guide
- Gain real-world context by working through deep learning problems drawn from both research and applications
Book Description
Deep learning is a branch of machine learning based on learning multiple levels of data abstraction. Neural networks, which are at the core of deep learning, are used in predictive analytics, computer vision, natural language processing, time series forecasting, and a myriad of other complex tasks.
This book is intended for developers, data analysts, machine learning practitioners, and deep learning enthusiasts who want to build robust, accurate predictive models with TensorFlow, combined with other open source Python libraries.
Throughout the book, you'll learn how to develop deep learning applications for machine learning systems using Feedforward Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Autoencoders, and Factorization Machines. You'll also discover how to run your deep learning programs on GPUs in a distributed fashion.
You'll come away with an in-depth knowledge of machine learning techniques and the skills to apply them to real-world projects.
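To give a flavor of the style of code the book works through, here is a minimal sketch of a feedforward classifier built with TensorFlow's Keras API. The dataset, layer sizes, and training settings are illustrative assumptions, not an example taken from the book itself.

```python
# Illustrative sketch (not from the book): a small feedforward network
# trained on the MNIST digits with TensorFlow's Keras API.
import tensorflow as tf

# Load and normalize the MNIST dataset (28x28 grayscale images, labels 0-9).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A simple feedforward (dense) classifier; the layer sizes are arbitrary choices.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly and evaluate on the held-out test set.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```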
What you will learn
- Apply deep machine intelligence and GPU computing with TensorFlow
- Access public datasets and use TensorFlow to load, process, and transform the data
- Discover how to use the high-level TensorFlow API to build more powerful applications
- Use deep learning for scalable object detection and mobile computing
- Train machines quickly to learn from data by exploring reinforcement learning techniques
- Explore active areas of deep learning research and applications
About the Authors
Giancarlo Zaccone has more than ten years of experience in managing research projects in both scientific and industrial settings. He worked as a researcher at the C.N.R. (the National Research Council), where he was involved in projects on parallel computing and scientific visualization. He is currently a system and software engineer at a consulting company, developing and maintaining software systems for space and defense applications. He is the author of the Packt titles Python Parallel Programming Cookbook and Getting Started with TensorFlow.
Md. Rezaul Karim has more than 8 years of experience in research and development, with a solid knowledge of algorithms and data structures. He focuses on C/C++, Java, Scala, R, and Python, and on big data technologies such as Spark, Kafka, DC/OS, Docker, Mesos, Hadoop, and MapReduce. His research interests include machine learning, deep learning, the Semantic Web, big data, and bioinformatics. He is the author of Large-Scale Machine Learning with Spark, also published by Packt. He is a Software Engineer and Researcher at the Insight Centre for Data Analytics, Ireland, and a Ph.D. candidate at the National University of Ireland, Galway. He holds BS and MS degrees in Computer Engineering. Before joining the Insight Centre for Data Analytics, he worked as a Lead Software Engineer with Samsung Electronics, collaborating with Samsung's distributed R&D centers across the world, including Korea, India, Vietnam, Turkey, and Bangladesh. Before that, he was a Research Assistant in the Database Lab at Kyung Hee University, Korea, an R&D Engineer with BMTech21 Worldwide, Korea, and, earlier still, a Software Engineer with i2SoftTechnology, Dhaka, Bangladesh.