Synopsis:
Dodge costly and time-consuming infrastructure tasks, and rapidly bring your machine learning models to production with MLOps and pre-built serverless tools!
In MLOps Engineering at Scale you will learn:
Extracting, transforming, and loading datasets
Querying datasets with SQL
Understanding automatic differentiation in PyTorch
Deploying model training pipelines as a service endpoint
Monitoring and managing your pipeline’s life cycle
Measuring performance improvements
MLOps Engineering at Scale shows you how to put machine learning into production efficiently by using pre-built services from AWS and other cloud vendors. You’ll learn how to rapidly create flexible and scalable machine learning systems without laboring over time-consuming operational tasks or taking on the costly overhead of physical hardware. Following a real-world use case for calculating taxi fares, you will engineer an MLOps pipeline for a PyTorch model using AWS serverless capabilities.
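As a taste of the automatic differentiation topic covered in part 2, here is a minimal PyTorch sketch (illustrative only, not code from the book; the toy distance-to-fare numbers below are invented for this example) showing how autograd computes the gradients behind a simple fare-versus-distance model:

import torch

# Hypothetical toy data: trip distance (miles) -> fare (dollars).
# A stand-in for the real taxi-fare dataset used in the book.
distance = torch.tensor([[1.0], [3.0], [5.5], [8.0]])
fare = torch.tensor([[6.0], [11.0], [17.5], [24.0]])

# Parameters of a linear model; requires_grad=True asks autograd to
# track operations on them so gradients can be computed automatically.
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(2000):
    pred = distance * w + b               # forward pass
    loss = ((pred - fare) ** 2).mean()    # mean squared error
    loss.backward()                       # autograd fills w.grad and b.grad
    with torch.no_grad():                 # plain gradient-descent update
        w -= 0.01 * w.grad
        b -= 0.01 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(f"fare = {w.item():.2f} * distance + {b.item():.2f}")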
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
A production-ready machine learning system includes efficient data pipelines, integrated monitoring, and means to scale up and down based on demand. Using cloud-based services to implement ML infrastructure reduces development time and lowers hosting costs. Serverless MLOps eliminates the need to build and maintain custom infrastructure, so you can concentrate on your data, models, and algorithms.
About the book
MLOps Engineering at Scale teaches you how to implement efficient machine learning systems using pre-built services from AWS and other cloud vendors. This easy-to-follow book guides you step-by-step as you set up your serverless ML infrastructure, even if you’ve never used a cloud platform before. You’ll also explore tools like PyTorch Lightning, Optuna, and MLFlow that make it easy to build pipelines and scale your deep learning models in production.
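For readers curious how PyTorch Lightning fits in, the following is a minimal sketch in the spirit of the Lightning chapter (an illustration under assumed names, not an excerpt from the book; the FareRegressor class and the synthetic data are invented here). Lightning gathers the model, the training step, and the optimizer into one class, and its Trainer runs the training loop:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class FareRegressor(pl.LightningModule):
    # Toy linear model predicting fare from trip distance.
    def __init__(self, lr=0.01):
        super().__init__()
        self.model = nn.Linear(1, 1)
        self.lr = lr

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)   # logged metrics can feed trackers such as MLFlow
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)

# Synthetic stand-in for the taxi-fare features explored in part 1.
x = torch.rand(256, 1) * 10
y = 2.5 + 1.8 * x + 0.1 * torch.randn(256, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

pl.Trainer(max_epochs=5).fit(FareRegressor(), loader)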
What's inside
Reduce or eliminate ML infrastructure management
Learn state-of-the-art MLOps tools like PyTorch Lightning and MLFlow
Deploy training pipelines as a service endpoint
Monitor and manage your pipeline’s life cycle
Measure performance improvements
About the reader
Readers need to know Python, SQL, and the basics of machine learning. No cloud experience required.
About the author
Carl Osipov implemented his first neural net in 2000 and has worked on deep learning and machine learning at Google and IBM.
Table of Contents
PART 1 - MASTERING THE DATA SET
1 Introduction to serverless machine learning
2 Getting started with the data set
3 Exploring and preparing the data set
4 More exploratory data analysis and data preparation
PART 2 - PYTORCH FOR SERVERLESS MACHINE LEARNING
5 Introducing PyTorch: Tensor basics
6 Core PyTorch: Autograd, optimizers, and utilities
7 Serverless machine learning at scale
8 Scaling out with distributed training
PART 3 - SERVERLESS MACHINE LEARNING PIPELINE
9 Feature selection
10 Adopting PyTorch Lightning
11 Hyperparameter optimization
12 Machine learning pipeline
About the Author:
Carl Osipov has been working in the information technology industry since 2001, focusing on big data analytics and machine learning for multi-core, distributed systems such as service-oriented architectures and cloud computing platforms. While at IBM, Carl helped the IBM Software Group shape its strategy around Docker and other container-based technologies for serverless cloud computing on IBM Cloud and Amazon Web Services. At Google, he learned from the world’s foremost experts in machine learning and helped manage the company’s efforts to democratize artificial intelligence with Google Cloud and TensorFlow. Carl has authored over 20 articles in professional, trade, and academic journals; is an inventor with six patents at the USPTO; and holds three corporate technology awards from IBM.
"About this title" may belong to another edition of this title.