This book introduces the evolving area of simulation-based optimization. Ever since it became possible to analyze random systems using computers, scientists and engineers have sought means of optimizing systems via simulation models. Only recently, however, has this goal met with success in practice. Cutting-edge work in computational operations research, including dynamic programming, e.g., Reinforcement Learning (RL) and Approximate Dynamic Programming (ADP), and static optimization via Stochastic Adaptive Search, e.g., Simultaneous Perturbation and Meta-Heuristics, has made it possible to use simulation in conjunction with optimization techniques. Some special features of the book are:
- An Accessible Introduction to Reinforcement Learning Techniques for Solving Markov Decision Processes (MDPs)
- A Step-by-Step Description of Stochastic Adaptive Search Algorithms, e.g., Simultaneous Perturbation, Simulated Annealing, Tabu Search, and Genetic Algorithms, for Static Simulation-Based Optimization
- A Clear and Simple Introduction to the Methodology of Neural Networks
- A Gentle Introduction to Convergence Analysis of a Subset of Methods Enumerated Above
- A Clear Discussion on Dynamic Programming for Solving MDPs and Semi-MDPs (SMDPs)
This book is written for students and researchers in the fields of engineering (industrial, electrical, and computer), computer science, operations research, management science, and applied mathematics. An attractive feature of this book is its accessibility to readers new to the topic.
Dr. Abhijit Gosavi holds a Ph.D. in Industrial Engineering and an M.S. and a B.S. in Mechanical Engineering. He currently teaches in the Department of Engineering Management and Systems Engineering at Missouri University of Science and Technology (formerly the University of Missouri-Rolla). He is a member of INFORMS, IIE, POMS, IEEE, and ASEE, and has written numerous research articles on simulation-based optimization and Markov decision processes.