About this title:
What do chess-playing computer programs, biological evolution, competitive sports, gambling, alternative voting systems, public auctions, corporate globalization, and class warfare have in common? All are manifestations of a new paradigm in scientific thinking, one that the author calls “the emerging science of competition.” Drawing in part on the pioneering work of mathematicians such as John von Neumann, John Nash (of A Beautiful Mind fame), and Robert Axelrod, James Case explores the common game-theoretical strands that tie these seemingly unrelated fields together, showing how each can be better understood in the shared light of the others. Not since James Gleick’s bestselling book Chaos brought widespread public attention to the new sciences of chaos and complexity has a general-interest science book served such an eye-opening purpose. Competition will appeal to a wide range of readers, from policy wonks and futurologists to former jocks and other ordinary citizens seeking to make sense of a host of novel—and frequently controversial—issues.
About the Author:
James Case holds a PhD from the University of Michigan, has taught at a number of universities and worked in private industry, and is the regular book reviewer for SIAM News of the Society for Industrial and Applied Mathematics.
Excerpt. © Reprinted by permission. All rights reserved.:
CHAPTER 1

Man Versus Machine

On May 11, 1997, an IBM computer named Deep Blue defeated the reigning world chess champion in a six-game match. It was the first loss of Garry Kasparov's professional career and, with the possible exception of Bobby Fischer's victory over Boris Spassky in 1972, the most widely reported chess match in history. A pre-match press conference was attended by more than two hundred reporters from around the world, including TV crews from ABC, CBS, NBC, CNN, Fox, and several foreign networks. As the match progressed, and the likelihood of a Deep Blue victory increased, the number of reporters in attendance grew. Daily results were reported on the evening news, analyzed in the morning papers, and satirized by late-night TV comics. Not even the launch of the System/360 line of computers in 1964--by far the biggest public-relations event in IBM history--generated as much publicity. While pundits differed as to the exact significance of Deep Blue's triumph, few doubted that an important milestone had been passed.

This book will explore the implications of Deep Blue's achievement, and a growing list of others like it, as they contribute to man's understanding of competition, games, and the general welfare. For Deep Blue's triumph was by no means an isolated event. It followed hard on the heels of the 1994 defeat of Marion Tinsley--by all accounts the greatest checkers player of all time--at the hands of a computer program named Chinook, and it preceded by only three months the overwhelming (six games to none) victory of a program named Logistello over Takeshi Murakami, the then-reigning world champion player of Othello. Chess, checkers, and Othello are but a few of the games at which computers now rival the strongest human players. Some of the others, like poker, bridge, backgammon, Monopoly, and Scrabble, will be familiar to most readers.
Yet others, like awari, Kalah, dakon, draughts, Lines of Action, Nine Men's Morris, Qubic, shogi, Go-Moku, and Hex, may be less so.1

Of particular interest is the ancient Asian game of Go, at which even the best computer programs are still no match for quite ordinary human players. Why do computers find Go so difficult? The short answer to that question is complexity. A chess player has, on average, only twenty to thirty legal moves to choose from on any given turn, while checkers players seldom have more than a dozen. Go players, in contrast, have a hundred or more. This turns out to mean that brute-force searching--in which one simply tries every alternative--is virtually useless in Go, although it is an important component of every successful computer program for playing chess, checkers, and most of the other games at which computers currently excel. For any number of reasons, including the fact that at least two different measures of complexity have proven relevant for the analysis of computer games, a more complete explanation of Go's intractability will have to be deferred until later in the book. The reader should bear in mind that an entire science of complexity has emerged in recent years, to take its place beside the almost equally novel science of chaos. Both are at least peripherally related to the study of games and competition, and will recur in what follows.

Deep Blue's triumph was exceedingly hard won. After absorbing the lessons learned by previous students of computer chess, a three-man team still required twenty-eight man-years (beginning in or about 1985) to complete the project. Hardware designer Feng-hsiung Hsu alone devoted twelve years to the project. This lead threesome was assisted by an ever-changing cast of human chess experts, IBM project managers, and faculty members at Carnegie Mellon University, where the project began.
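The branching-factor comparison above can be made concrete with a back-of-the-envelope count of the positions a brute-force search must examine. The per-game branching factors are the rough averages quoted in the text; the six-ply search depth is an arbitrary choice for illustration:

```python
# Rough size of an exhaustive game-tree search after a fixed number of
# plies (half-moves). Branching factors are the approximate averages
# from the text: about a dozen legal moves in checkers, twenty to
# thirty in chess, a hundred or more in Go.
def tree_size(branching_factor: int, plies: int) -> int:
    """Leaf positions examined by brute force: branching_factor ** plies."""
    return branching_factor ** plies

for game, b in [("checkers", 12), ("chess", 30), ("Go", 100)]:
    print(f"{game:9s} {tree_size(b, 6):>18,} positions at six plies")
```

Checkers comes in under three million positions, chess at several hundred million, and Go at a trillion, which is why the same exhaustive search that is routine in the first two games is hopeless in the third.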
Chinook was developed at the University of Alberta, under the direction of Professor Jonathan Schaeffer, beginning in 1989, and achieved its triumph within a "mere" five years. Logistello was built at the NEC Research Institute in Princeton, New Jersey, by a team led by Michael Buro. Hsu and Schaeffer have written books recounting the inside stories of their memorable campaigns.2

Computers exhibiting the ability to mimic aspects of human thought are said to possess artificial intelligence, or AI. A simple electronic calculator doesn't have AI, but a machine that can learn from its mistakes, or that exhibits the ability to reason effectively, undoubtedly does. As computers have grown more powerful, the dividing line between AI and "routine applications" has receded to the point where deeds once deemed indicative of AI are no longer so interpreted. Wags now suggest that a capability constitutes AI only as long as it retains the gee-whiz ability to delight and amaze. That said, computers able to play board games like chess and checkers at the highest human level, to compose arresting melodies, or to provide original proofs for known mathematical theorems have always been acknowledged to possess AI, and probably always will be. All this being the case, it is hardly surprising that most world-class game-playing computer programs have been designed by AI experts.

Arthur Samuel, the first man to program a computer to play checkers, once offered the following explanation for the scientific community's continuing interest in game-playing computers:

Programming computers to play games is but one stage in the development of an understanding of the methods which must be employed for the machine simulation of intellectual behavior. As we progress in this understanding it seems reasonable to assume that these newer techniques will be applied to real-life situations with increasing frequency, and the effort devoted to games ... will decrease.
Perhaps we have not yet reached this turning point, and we may still have much left to learn from the study of games.3

Samuel's explanation rings almost as true today as it did when he first offered it. The "effort devoted to games" has actually increased in recent years, due in part to the advent of extraordinarily powerful desktop computers--a development not foreseen in 1960--and to the recent successes of Deep Blue, Chinook, Logistello, and other game-playing computer systems. His "turning point" is not yet in sight, and mankind still seems to have much to learn about competition and games. Nevertheless, his "newer techniques" have materialized and are being "applied to real-life situations with increasing frequency," not to mention admirable effect.

Nowhere are the effects more apparent than in the area of expert systems, also known as knowledge-based or rule-based systems. They consist of five component parts: an asset, a human manager, a book of rules, a compendium of facts, and an "inference engine." The latter three constitute a support system meant to help the human manager extract a high rate of return from the asset in question. Commercial inventory and revenue management systems are typical examples.

The facts stored in an airline's revenue management system concern the number of seats in each category that remain unsold for each scheduled flight, along with the economic data needed to estimate the expected demand for such seats. If an inordinate number of tickets for a given flight remain unsold as departure time approaches, the system will recommend that management begin discounting the more expensive ones. The inference engine will apply the rules contained in the system to the available facts to determine an effective schedule of discounts.
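The facts/rules/inference-engine anatomy just described can be sketched in a few lines. The thresholds, recommendations, and field names below are invented for illustration, not drawn from any real revenue management system:

```python
from dataclasses import dataclass

# Toy version of the expert-system pattern from the text: a compendium
# of facts, a book of rules, and an inference engine that applies the
# rules to the facts. All numbers are illustrative.
@dataclass
class Facts:
    unsold: int        # seats (or rooms) still unsold for the date
    capacity: int      # total inventory
    days_to_date: int  # days remaining before departure (or arrival)

def unsold_fraction(f: Facts) -> float:
    return f.unsold / f.capacity

# Each rule inspects the facts and either fires (returns advice) or not.
RULES = [
    lambda f: "discount heavily" if f.days_to_date <= 3 and unsold_fraction(f) > 0.5 else None,
    lambda f: "discount" if f.days_to_date <= 7 and unsold_fraction(f) > 0.3 else None,
    lambda f: "encumber remaining inventory" if unsold_fraction(f) < 0.1 else None,
]

def infer(facts: Facts) -> list[str]:
    """The inference engine: apply every rule to the facts, keep what fires."""
    return [advice for rule in RULES if (advice := rule(facts)) is not None]

print(infer(Facts(unsold=60, capacity=100, days_to_date=2)))
```

Real systems attach economic models and forecasts to the facts, but the division of labor is the same: the rules and facts are data, and the inference engine is a generic procedure for combining them.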
The facts stored in a motel chain's revenue management system concern the number of rooms of each size that remain unsold for each night in the coming year, along with the economic data required to predict unusual levels of demand. If an inordinate number of rooms remain unsold for a given date, the system will recommend that the manager begin discounting them, in order to stimulate sales. Alternatively, if unexpectedly many rooms have been sold, the system may recommend that the manager begin to encumber the remaining ones. If, for instance, Tuesday is the busiest night at a particularly business-oriented motel, the manager may refuse on Saturday, Sunday, and Monday to accept additional reservations for the coming Tuesday night, except from customers prepared to stay either Monday or Wednesday night as well. Again, the inference engine will apply the rules built into the system to the available facts to identify an effective program of discounts and/or encumbrances.

Complicated though commercial revenue management systems may sometimes be, their complexity pales by comparison with the knowledge-based systems contained in many game-playing programs. When Chinook reaches the stage at which eight or fewer pieces remain on the board--meaning that the pieces are deployed in one of exactly 443,748,401,247 possible configurations--it transfers command to its "endgame manager," an expert system of truly epic complexity. Game-playing computers stand at the forefront of expert system design.

The facts embedded in Chinook's endgame manager specify, for each possible configuration of eight or fewer pieces, whether it is a winning position for White, a winning position for Black, or a draw, as well as the maximum number of moves required to obtain the promised result. If, for instance, the system describes the current position as a W-43, meaning that White can win in forty-three moves, the inference engine will search the rule book for directions to a position designated W-42.
Should Chinook's opponent play skillfully, the board will then be in a position designated W-41 when Chinook's turn comes to play again. But if the opponent makes a mistake, Chinook's next move may be made from a position designated W-40 or fewer. Only if Chinook's endgame manager contains a hitherto undetected programming error will it ever find itself in a non-winning position after having occupied a winning one with eight or fewer pieces left on the board.

The lessons learned about the care and feeding of expert systems by the builders of Deep Blue, Chinook, Logistello, and other world-class game-playing programs have all been published in the open scientific literature, for the benefit of all. It would be surprising indeed if none of those lessons should prove applicable to future management systems. Samuel's "study of games" has proven more subtle, surprising, and fruitful than even he expected.

Fruitful though that study has been, it has failed to answer the most intriguing questions. It can be proven, for instance, as a mathematical theorem applicable to chess, checkers, and a host of other two-player games, that one of three mutually exclusive conditions prevails at any time during the course of play: either the player whose turn it is to move can win the game, or the opponent can win, or both can avoid losing, in which case the game will presumably end in a draw. Yet nobody knows which of those three alternatives applies to the opening move in chess, checkers, Othello, and other familiar board games currently ruled by machines. The game plans executed by Deep Blue, Chinook, Logistello, and the like can all be beaten, and occasionally are. They are by no means the unbeatable game plans whose existence the above-cited theorem guarantees. Unbeatable game plans are known only for simpler--and less frequently played--games such as nim, tic-tac-toe, Qubic, Nine Men's Morris, mancala, Go-Moku, and Connect Four.
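Both ideas above, the win-or-lose-or-draw theorem and the W-43-to-W-42 bookkeeping of Chinook's endgame manager, can be seen in miniature by labeling every position of a solved game. The sketch below does this for single-pile nim (take one, two, or three stones; the player who takes the last stone wins), one of the games the text lists as solved. The labels and move chooser are illustrative, not Chinook's actual code, and this tiny game has no draws:

```python
# Retrograde labeling of single-pile nim, in the spirit of an endgame
# database. Each position gets ("W", n) if the player to move can win
# within n moves, or ("L", n) if the opponent can force a win in at
# most n moves. The move chooser then steps from a W-n position to an
# L-(n-1) one, exactly as described for Chinook's inference engine.
from functools import lru_cache

MOVES = (1, 2, 3)  # legal takes on each turn

@lru_cache(maxsize=None)
def label(stones: int):
    if stones == 0:
        return ("L", 0)  # no stones left: the player to move has already lost
    children = [label(stones - take) for take in MOVES if take <= stones]
    losses = [n for value, n in children if value == "L"]
    if losses:  # some move leaves the opponent in a losing position
        return ("W", min(losses) + 1)
    return ("L", max(n for _, n in children) + 1)  # best one can do: delay

def best_move(stones: int) -> int:
    """From a winning position, pick the take leading to L-(n-1)."""
    value, n = label(stones)
    assert value == "W", "no winning move exists from a losing position"
    for take in MOVES:
        if take <= stones and label(stones - take) == ("L", n - 1):
            return take
    raise AssertionError("database inconsistency")

print(label(9), best_move(9))  # 9 stones: a win; take 1, leaving a multiple of 4
```

Piles whose size is a multiple of four are losses for the player to move; from every other pile the winning move restores that multiple-of-four property, just as Chinook's endgame manager always steps down the W-n ladder.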
Games for which unbeatable game plans are known are often described as "solved," "cracked," or "cooked." People soon lose interest in playing such games unless the unbeatable game plans are so complicated that only a computer can carry them out.

The trade-off between strategic complexity and effectiveness first became an issue during the 1960s, after Edward Thorp discovered that twenty-one (a.k.a. blackjack)--as then played in leading casinos--could be beaten by certain "card-counting" strategies.4 Thorp, then a math instructor at MIT, used the school's state-of-the-art computer to demonstrate that the blackjack strategy that commits the house to "take a hit" on sixteen or less, but to "stand" on seventeen or more, is far from unbeatable.5 He then published three increasingly sophisticated card-counting strategies and used the strongest to win thousands of dollars in casino play. As a result, he was allegedly asked to leave a number of such establishments and threatened with bodily harm should he return.

The simplest of Thorp's published strategies was not strong enough to beat the house strategy on a consistent basis. It merely allowed weekend warriors to get in on the action without undue risk. The second was a more or less break-even proposition requiring more skill on the part of the player. The third was a reliable cash cow but required users to practice several hours a day in order to realize its full potential.

To defend their incomes from the army of card counters spawned by Thorp's book and its eventual emulators, the casinos soon replaced the single deck with which blackjack had traditionally been played with a "shoe" containing no fewer than four identical decks, thereby increasing the number of cards to be counted seemingly beyond the limits of human mental arithmetic.
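The mechanics of a running count are simple enough to sketch. The point values below follow the now-standard "hi-lo" convention rather than any of Thorp's three published systems: small cards count +1 when they leave the deck, tens and aces count -1, and a positive count means the cards still to come favor the player:

```python
# A minimal running count in the spirit of card counting. The hi-lo
# point values used here are a later convention, shown only to
# illustrate the bookkeeping a counter performs in his head.
HI_LO = ({rank: +1 for rank in "23456"}
         | {rank: 0 for rank in "789"}
         | {rank: -1 for rank in ("10", "J", "Q", "K", "A")})

def running_count(cards_seen):
    """Sum the point values of every card dealt so far."""
    return sum(HI_LO[card] for card in cards_seen)

print(running_count(["2", "5", "K", "A", "9", "6"]))  # prints 1
```

The arithmetic is trivial per card; the difficulty the casinos exploited by moving to four-deck shoes is keeping the count accurate, hand after hand, over hundreds of cards.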
After satisfying himself that he could clear perhaps $300,000 a year playing blackjack, Thorp abandoned casino gambling in favor of Wall Street--the biggest game of all--where he and his computer-generated strategies have prospered ever since.6

Chess, checkers, and Othello are games of "perfect information," meaning that anything either player might care to know about the current state of a game in progress is there on the game board for all to see. Because the players of Scrabble lack knowledge of the tiles held by their opponents, and the players of bridge and poker lack knowledge of the cards held by the other players, the players of those games are said to possess only "imperfect information." Games of imperfect information are conceptually more complex than those of perfect information.

Backgammon represents an intermediate level of complexity in that, although players possess perfect information, the randomness introduced by frequent rolls of the dice represents a definite complicating factor. It means, among other things, that the same two strategies, game plans, or computer programs for playing backgammon may have to oppose each other many times over before a clear winner can be determined.

The strongest backgammon-playing computer programs now perform at a superhuman level.7 They differ from Deep Blue in that they employ machine-learning techniques to constantly improve their games. The idea that a computer program could teach itself to play board games by playing them over and over again against its own clone goes back to Arthur Samuel, as does so much of AI.8 Samuel lacked only the computing power and machine-learning techniques to carry out his plan. The first program to do so successfully, called TD-Gammon, was constructed by Gerald Tesauro at IBM during the 1990s. The project has an interesting history.
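The self-teaching scheme just described rests on temporal-difference learning, the "TD" in TD-Gammon: after each move, the program nudges its estimate of the old position's value toward the value of the position that actually followed. A minimal sketch with invented states, reward, and step size; TD-Gammon applied the same rule to neural-network weights rather than to a lookup table:

```python
# A bare-bones temporal-difference (TD(0)) value update, the learning
# rule family that TD-Gammon popularized. A plain table of state
# values stands in for TD-Gammon's neural network.
def td0_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Nudge V[state] toward the bootstrapped target reward + gamma * V[next_state]."""
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])
    return V[state]

V = {"s0": 0.0, "s1": 0.5}           # value estimates before the update
td0_update(V, "s0", "s1", reward=0.0)
print(V["s0"])                        # moved a tenth of the way toward V["s1"]
```

Because each update needs only the position just left and the position just reached, a program can generate unlimited training data by playing against its own clone, which is exactly how Tesauro trained TD-Gammon.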
In 1987 Tesauro, then at the Institute for Advanced Study in Princeton, New Jersey, collaborated with Terry Sejnowski, then at the Johns Hopkins University, to train a "neural network" to play backgammon. Neurons, the fundamental building blocks of the human nervous system, have been known to medical science for more than a century. During World War II, pioneers in AI invented an artificial (electronic) neuron, assembled small networks of them, and predicted that larger and speedier networks would one day emulate human intelligence. It took a long time to realize that prediction, but by the mid...
"About this title" may belong to another edition of this title.