A path-breaking account of Markov decision processes: theory and computation
This book's clear presentation of theory, numerous chapter-end problems, and development of a unified method for the computation of optimal policies in both discrete and continuous time make it an excellent course text for graduate students and advanced undergraduates. Its comprehensive coverage of important recent advances in stochastic dynamic programming makes it a valuable working resource for operations research professionals, management scientists, engineers, and others.
Stochastic Dynamic Programming and the Control of Queueing Systems presents the theory of optimization under the finite horizon, infinite horizon discounted, and average cost criteria. It then shows how optimal rules of operation (policies) for each criterion may be numerically determined. A great wealth of examples from the application area of the control of queueing systems is presented. Nine numerical programs for the computation of optimal policies are fully explicated.
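The book's nine programs are written in Pascal and are not reproduced here, but the flavor of "numerically determining an optimal policy" can be suggested with a small sketch. The following is a hypothetical illustration (not the author's code): value iteration under the infinite horizon discounted cost criterion for a toy slotted admission-control queue, with all parameters (buffer size, arrival and service probabilities, holding and rejection costs) invented for the example.

```python
# Hypothetical sketch: discounted-cost value iteration for a toy
# admission-control queue. States are queue lengths 0..N; in each slot the
# controller either admits (a=1) or rejects (a=0) a potential arrival.
# All parameters below are invented for illustration.

N = 10        # buffer size: states 0..N
beta = 0.9    # discount factor
p = 0.4       # arrival probability per slot
q = 0.5       # service-completion probability per slot (when queue nonempty)
hold = 1.0    # holding cost per customer per slot
rej = 5.0     # expected penalty for turning arrivals away

def step_cost(x, a):
    # holding cost on the current queue, plus rejection penalty if we
    # refuse arrivals this slot
    return hold * x + (rej * p if a == 0 else 0.0)

def transitions(x, a):
    # return [(next_state, probability)] under the chosen action
    arr = p if (a == 1 and x < N) else 0.0   # effective admission prob.
    srv = q if x > 0 else 0.0                # service only if queue nonempty
    out = {}
    for da, pa in ((1, arr), (0, 1.0 - arr)):
        for ds, ps in ((1, srv), (0, 1.0 - srv)):
            y = min(N, max(0, x + da - ds))
            out[y] = out.get(y, 0.0) + pa * ps
    return list(out.items())

# Value iteration: repeatedly apply the optimality (Bellman) operator
V = [0.0] * (N + 1)
for _ in range(1000):
    V_new = [
        min(
            step_cost(x, a) + beta * sum(pr * V[y] for y, pr in transitions(x, a))
            for a in (0, 1)
        )
        for x in range(N + 1)
    ]
    if max(abs(u - v) for u, v in zip(V, V_new)) < 1e-9:
        V = V_new
        break
    V = V_new

# Extract a (near-)optimal stationary policy: best action in each state
policy = [
    min(
        (0, 1),
        key=lambda a: step_cost(x, a)
        + beta * sum(pr * V[y] for y, pr in transitions(x, a)),
    )
    for x in range(N + 1)
]
```

Under the discounted criterion the Bellman operator is a contraction, so the iteration converges geometrically; the book develops the analogous computations for the finite horizon and average cost criteria as well.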
The Pascal source code for the programs is available for viewing and downloading on the Wiley Web site at www.wiley.com/products/subject/mathematics. The site contains a link to the author's own Web site and is also a place where readers may discuss developments on the programs or other aspects of the material. The source files are also available via ftp at ftp://ftp.wiley.com/public/sci_tech_med/stochastic
Stochastic Dynamic Programming and the Control of Queueing Systems features:
* Path-breaking advances in Markov decision process techniques, brought together for the first time in book form
* A theorem/proof format (proofs may be omitted without loss of continuity)
* Development of a unified method for the computation of optimal rules of system operation
* Numerous examples drawn mainly from the control of queueing systems
* Detailed discussions of nine numerical programs
* Helpful chapter-end problems
* Appendices with complete treatment of background material
"synopsis" may belong to another edition of this title.
Linn I. Sennott, PhD, is Professor of Mathematics at Illinois State University.
"About this title" may belong to another edition of this title.