AI Systems Performance Engineering: Optimizing Model Training and Inference Workloads with GPUs, CUDA, and PyTorch - Softcover

Fregly, Chris

 
ISBN 9798341627789

Synopsis

Elevate your AI system performance capabilities with this definitive guide to maximizing efficiency across every layer of your AI infrastructure. In today's era of ever-growing generative models, AI Systems Performance Engineering provides engineers, researchers, and developers with a hands-on set of actionable optimization strategies. Learn to co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems that excel in both training and inference. Authored by Chris Fregly, a performance-focused engineering and product leader, this resource transforms complex AI systems into streamlined, high-impact AI solutions.

Inside, you'll discover step-by-step methodologies for fine-tuning GPU CUDA kernels, PyTorch-based algorithms, and multinode training and inference systems. You'll also master the art of scaling GPU clusters for high-performance distributed model training jobs and inference servers. The book concludes with a checklist of more than 175 proven, ready-to-use optimizations.

  • Codesign and optimize hardware, software, and algorithms to achieve maximum throughput and cost savings
  • Implement cutting-edge inference strategies that reduce latency and boost throughput in real-world settings
  • Utilize industry-leading scalability tools and frameworks
  • Profile, diagnose, and eliminate performance bottlenecks across complex AI pipelines
  • Integrate full-stack optimization techniques for robust, reliable AI system performance

"synopsis" may belong to another edition of this title.

About the Author

Chris Fregly is a performance engineer and AI product leader who has driven innovations at Netflix, Databricks, Amazon Web Services (AWS), and multiple startups. He has led performance-focused engineering teams that built AI/ML products, scaled go-to-market initiatives, and reduced cost for large-scale generative-AI and analytics workloads. Chris is coauthor of the O’Reilly books Data Science on AWS and Generative AI on AWS, and creator of the O’Reilly course "High-Performance AI in Production with NVIDIA GPUs." His work spans kernel-level tuning, compiler-driven acceleration, distributed training, and high-throughput inference. Chris is the organizer of the global AI Performance Engineering meetup with over 100,000 members worldwide.

From the Back Cover

Elevate your AI system performance capabilities with this definitive guide to maximizing efficiency across every layer of your AI infrastructure. In today's era of ever-growing generative models, AI Systems Performance Engineering provides engineers, researchers, and developers with a hands-on set of actionable optimization strategies. Learn to co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems that excel in both training and inference. Authored by Chris Fregly, a performance-focused engineering and product leader, this resource transforms complex AI systems into streamlined, high-impact AI solutions.

Inside, you'll discover step-by-step methodologies for fine-tuning GPU CUDA kernels, PyTorch-based algorithms, and multinode training and inference systems. You'll also master the art of scaling GPU clusters for high-performance distributed model training jobs and inference servers. The book concludes with a checklist of more than 175 proven, ready-to-use optimizations.
* Codesign and optimize hardware, software, and algorithms to achieve maximum throughput and cost savings
* Implement cutting-edge inference strategies that reduce latency and boost throughput in real-world settings
* Utilize industry-leading scalability tools and frameworks
* Profile, diagnose, and eliminate performance bottlenecks across complex AI pipelines
* Integrate full-stack optimization techniques for robust, reliable AI system performance

From the Inside Flap

"AI systems are layered and fast‑moving. Chris breaks the complexity down into a reference that will set the standard for years." —Chris Lattner, CEO at Modular

"CUDA kernels, distributed training, compilers, disaggregated inference—finally in one place. An encyclopedia of ML systems." —Mark Saroufim, PyTorch at Meta and Founder of GPU MODE Community

"Squeezing the most performance out of your AI system is what separates the good from the great. This is the missing manual." —Sebastian Raschka, ML/AI Researcher

"An essential guide to modern ML systems—grounded in vLLM and distributed systems—with deep insight into inference optimization and open source." —Michael Goin, vLLM Maintainer and Principal Engineer at Red Hat

"A definitive field guide that connects silicon to application, giving AI engineers the full‑stack wisdom to turn raw compute into high‑performance models."—Harsh Banwait, Director of Product at Coreweave

"A tour‑de‑force essential for engineers working on today's AI‑driven systems." —Adrian Cockcroft, Systems Performance Expert and Thought Leader

"The master key for engineers who won't accept default performance—surgical, system‑level tools for CUDA tuning, LLM inference, and multi‑GPU orchestration." —Arpitha Srinivas, AI Systems Performance Engineer

"The most comprehensive, up‑to‑date guide to building modern AI systems—a must‑read for every AI/ML practitioner." —Chaim Rand, AI/ML Algorithm and Performance Engineer

"Bridges GPU architecture and AI workload optimization—from memory bandwidth and KV‑cache to batching, Nsight profiling, and distributed scaling—with production‑tested insights." —Amer Ather, Cloud & ML Performance Engineering, Netflix

"My go‑to AI‑performance reference—packed with practical fixes for tuning workloads and getting AI into production." —Antje Barth, Member of Technical Staff at Amazon AGI

"Unmatched in the field—fresh, digestible chapters that each deliver deep, standalone expertise." —Suman Debnath, ML Systems Engineer at Anyscale

"This is the book I was waiting for. It ties together the scattered, vast and fast-moving world of AI systems performance engineering into one clear, modern resource." --Madison Kanna, AI Engineer at Baseten

"AI is computing's foundation; accelerator mastery is a strategic imperative. Chris turns depth into clarity for hyperscale leaders." --Omer Zaki, VP of AI Infrastructure at Oracle

"About this title" may belong to another edition of this title.