Deep Blue (chess computer)

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware from supercomputers to smartphones. Standalone chess-playing machines are also available. Stockfish, Leela Chess Zero, GNU Chess, Fruit, and other free open source applications are available for various platforms.

80-469: Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Development began in 1985 at Carnegie Mellon University under the name ChipTest. It then moved to IBM, where it was first renamed Deep Thought, then again in 1989 to Deep Blue. It first played world champion Garry Kasparov in

160-556: A bug in Deep Blue's code led it to enter an unintentional loop, which it exited by taking a randomly selected valid move. Kasparov did not take this possibility into account, and misattributed the seemingly pointless move to "superior intelligence". Kasparov's performance declined in the following game, though he denied that this was due to anxiety in the wake of Deep Blue's inscrutable move. After his loss, Kasparov said that he sometimes saw unusual creativity in

240-583: A command-line interface which calculates which moves are strongest in a position) or a graphical user interface (GUI) which provides the player with a chessboard they can see, and pieces that can be moved. Engines communicate their moves to the GUI using a protocol such as the Chess Engine Communication Protocol (CECP) or Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only
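As a rough illustration of this engine–GUI split, the sketch below drives a UCI engine over standard input/output from a small Python script. It assumes a UCI-speaking binary such as Stockfish is installed and on the system PATH, and the command sequence shown (uci, isready, position, go) is a minimal happy path rather than a complete GUI.

```python
# Minimal sketch of the GUI side of a UCI conversation, assuming a UCI engine
# binary (e.g. "stockfish") is available on the system PATH.
import subprocess

def uci_bestmove(engine_path="stockfish", moves="e2e4 e7e5", depth=12):
    engine = subprocess.Popen([engine_path], stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, text=True)

    def send(cmd):
        engine.stdin.write(cmd + "\n")
        engine.stdin.flush()

    send("uci")            # handshake: engine replies with id/options, then "uciok"
    while engine.stdout.readline().strip() != "uciok":
        pass
    send("isready")        # wait until the engine is ready
    while engine.stdout.readline().strip() != "readyok":
        pass
    send("position startpos moves " + moves)   # set up the position
    send("go depth %d" % depth)                # start the search
    bestmove = None
    for line in engine.stdout:                 # engine streams "info ..." lines
        if line.startswith("bestmove"):
            bestmove = line.split()[1]
            break
    send("quit")
    return bestmove

if __name__ == "__main__":
    print(uci_bestmove())
```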

320-550: A parallel search algorithm as calculations on the GPU are inherently parallel. The minimax and alpha-beta pruning algorithms used in computer chess are inherently serial algorithms, so would not work well with batching on the GPU. On the other hand, MCTS is a good alternative, because the random sampling used in Monte Carlo tree search lends itself well to parallel computing, and is why nearly all engines which support calculations on

400-415: A six-game match in 1996, where it lost four games to two. It was upgraded in 1997 and in a six-game re-match, it defeated Kasparov by winning two games and drawing three. Deep Blue's victory is considered a milestone in the history of artificial intelligence and has been the subject of several books and films. While a doctoral student at Carnegie Mellon University , Feng-hsiung Hsu began development of

480-416: A November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik , the program ran on a computer system containing a dual-core Intel Xeon 5160 CPU , capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies (half-moves) in the middlegame thanks to heuristics ; it won 4–2. Deep Blue's evaluation function was initially written in

560-468: A built-in mechanism for reducing the Elo rating of the engine (via the UCI options UCI_LimitStrength and UCI_Elo). Some versions of Fritz have a Handicap and Fun mode for limiting the current engine or changing the percentage of mistakes it makes or changing its style. Fritz also has a Friend Mode where during the game it tries to match the level of the player. Chess databases allow users to search through

640-435: A certain maximum search depth or the program determines that a final "leaf" position has been reached (e.g. checkmate). One particular type of search algorithm used in computer chess is the minimax search algorithm, in which at each ply the "best" move by the player is selected; one player is trying to maximize the score, the other to minimize it. By this alternating process, one particular terminal node whose evaluation represents
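A minimal sketch of the minimax idea described above, over an abstract position object; the methods legal_moves(), make(), unmake(), is_terminal() and evaluate() are placeholders standing in for a real engine's move generator and evaluation function.

```python
# Plain minimax over an abstract game tree: maximize at one ply, minimize at the next.
def minimax(position, depth, maximizing):
    if depth == 0 or position.is_terminal():
        return position.evaluate()          # static score of the leaf
    if maximizing:
        best = -float("inf")
        for move in position.legal_moves():
            position.make(move)
            best = max(best, minimax(position, depth - 1, False))
            position.unmake(move)
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            position.make(move)
            best = min(best, minimax(position, depth - 1, True))
            position.unmake(move)
        return best
```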

720-606: A chess engine connected to a GUI, such as Winboard or Chessbase . Playing strength, time controls, and other performance-related settings are adjustable from the GUI. Most GUIs also allow the player to set up and to edit positions, to reverse moves, to offer and to accept draws (and resign), to request and to receive move recommendations, and to show the engine's analysis as the game progresses. There are thousands of chess engines such as Sargon , IPPOLIT , Stockfish , Crafty , Fruit , Leela Chess Zero and GNU Chess which can be downloaded (or source code otherwise obtained) from

800-565: A chess-playing supercomputer under the name ChipTest . The machine won the North American Computer Chess Championship in 1987 and Hsu and his team followed up with a successor, Deep Thought , in 1988. After receiving his doctorate in 1989, Hsu and Murray Campbell joined IBM Research to continue their project to build a machine that could defeat a world chess champion. Their colleague Thomas Anantharaman briefly joined them at IBM before leaving for

880-587: A chess-playing computer system must decide on a number of fundamental implementation issues, such as board representation, search techniques, and leaf evaluation. Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience. This enables them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor. More evidence for this being

960-419: A computer must examine a quadrillion possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would require more than 30 years. The earliest attempts at procedural representations of playing chess predated the digital electronic age, but it was the stored program digital computer that gave scope to calculating such complexity. Claude Shannon, in 1949, laid out
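The arithmetic behind the estimate above can be checked directly: a quadrillion (10^15) positions examined at one million positions per second works out to roughly 31.7 years.

```python
# Back-of-envelope check of the figures above: a quadrillion positions
# examined at one million positions per second.
positions = 10**15
rate = 10**6                       # positions per second
seconds = positions / rate         # 10**9 seconds
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))             # roughly 31.7 years
```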

1040-545: A current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder". In the late 1970s chess programs suddenly began defeating highly skilled human players. The year of Hearst's statement, Northwestern University 's Chess 4.5 at the Paul Masson American Chess Championship's Class B level became

1120-465: A general purpose computer and allocate move generation, parallel search, or evaluation to dedicated processors or specialized co-processors. The first paper on chess search was by Claude Shannon in 1950. He predicted the two main possible search strategies which would be used, which he labeled "Type A" and "Type B", before anyone had programmed a computer to play chess.

1200-435: A generalized form, with many to-be-determined parameters (e.g., how important is a safe king position compared to a space advantage in the center, etc.). Values for these parameters were determined by analyzing thousands of master games. The evaluation function was then split into 8,000 parts, many of them designed for special positions. The opening book encapsulated more than 4,000 positions and 700,000 grandmaster games, while
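The sketch below illustrates the general idea of such a parameterized evaluation function as a weighted sum of position features; the feature names and weights are purely illustrative placeholders, not Deep Blue's actual terms or values.

```python
# Sketch of a tunable evaluation function: a weighted sum of position features.
# Feature names and weights here are illustrative only.
WEIGHTS = {
    "material":     1.00,   # material balance in pawn units
    "king_safety":  0.35,
    "center_space": 0.15,
    "mobility":     0.10,
}

def evaluate(features):
    """features: dict mapping feature name -> signed value from White's point of view."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Example: up one pawn, slightly unsafe king, small space and mobility edge.
print(evaluate({"material": 1.0, "king_safety": -0.5,
                "center_space": 0.4, "mobility": 0.2}))
```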

1280-499: A large library of historical games, analyze them, check statistics, and formulate an opening repertoire. Chessbase (for PC) is a common program for these purposes amongst professional players, but there are alternatives such as Shane's Chess Information Database (Scid) for Windows, Mac or Linux, Chess Assistant for PC, Gerhard Kalab's Chess PGN Master for Android or Giordano Vicoli's Chess-Studio for iOS. Programs such as Playchess allow players to play against one another over

1360-441: A large number of training apps such as CT-ART and its Chess King line based on tutorials by GM Alexander Kalinin and Maxim Blokh. There is also software for handling chess problems . After discovering refutation screening—the application of alpha–beta pruning to optimizing move evaluation—in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967. It did not anticipate

1440-420: A list ("piece list"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage. Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until
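A minimal bitboard sketch: each piece set is held in a 64-bit integer with one bit per square, so membership tests and updates reduce to bitwise operations. The square numbering used here (a1 = 0 through h8 = 63) is one common convention, not the only one.

```python
# Bitboard sketch: one integer per piece set, bit i = square i (a1 = 0 ... h8 = 63).
# Shown here for the white pawns of the initial position.
def square(file, rank):            # file and rank are 0-based (a=0 ... h=7)
    return rank * 8 + file

white_pawns = 0
for f in range(8):
    white_pawns |= 1 << square(f, 1)      # pawns start on the second rank

def occupied(bitboard, file, rank):
    return (bitboard >> square(file, rank)) & 1 == 1

print(hex(white_pawns))                   # 0xff00
print(occupied(white_pawns, 4, 1))        # True: pawn on e2
```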

1520-470: A long-term advantage the computer is not able to see in its game tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame , tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers who are at their strongest defending against such attacks. True to form, Fritz found

1600-452: A match under standard chess tournament time controls. The version of Deep Blue that defeated Kasparov in 1997 typically searched to a depth of six to eight moves, and twenty or more moves in some situations. David Levy and Monty Newborn estimate that each additional ply (half-move) of forward insight increases the playing strength between 50 and 70 Elo points. In the 44th move of the first game of their second match, unknown to Kasparov,

1680-399: A mate in one ), and drew the next four. In the final game, in an attempt to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed. There was speculation that interest in human–computer chess competition would plummet as a result of the 2006 Kramnik-Deep Fritz match. According to Newborn, for example, "the science is done". Human–computer chess matches showed

1760-409: A microprocessor running a software chess program, but sometimes as a specialized hardware machine), software programs running on standard PCs, web sites, and apps for mobile devices. Programs run on everything from super-computers to smartphones. Hardware requirements for programs are minimal; the apps are no larger than a few megabytes on disk, use a few megabytes of memory (but can use much more, if it

1840-510: A new game, Arimaa , which was intended to be very simple for humans but very difficult for computers to master; however, in 2015, computers proved capable of defeating strong Arimaa players. Since Deep Blue's victory, computer scientists have developed software for other complex board games with competitive communities. The AlphaGo series ( AlphaGo , AlphaGo Zero , AlphaZero ) defeated top Go players in 2016–2017. Computer scientists such as Deep Blue developer Campbell believed that playing chess

1920-462: A number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth–Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation. Starting in the late 1990s, programmers began to develop engines separately (with
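As an illustration of the FEN format mentioned above, the sketch below expands the piece-placement field of a FEN string into an 8×8 array; it handles only that first field and ignores the side to move, castling rights and move counters.

```python
# Read the piece-placement field of a FEN string into an 8x8 array of
# characters ("." marks an empty square).
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def fen_to_board(fen):
    placement = fen.split()[0]            # first FEN field: piece placement
    board = []
    for rank in placement.split("/"):     # ranks are listed from 8 down to 1
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend(["."] * int(ch))   # digit = run of empty squares
            else:
                row.append(ch)                # letter = piece (uppercase = White)
        board.append(row)
    return board

for row in fen_to_board(START_FEN):
    print(" ".join(row))
```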

2000-545: A player's rating is determined – the advantage was not so clear. In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik . In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics – play conservatively for

2080-455: A prize created by computer science professor Edward Fredkin in 1980 for the first computer program to beat a reigning world chess champion. Kasparov initially called Deep Blue an "alien opponent" but later belittled it, stating that it was "as intelligent as your alarm clock". According to Martin Amis , two grandmasters who played Deep Blue agreed that it was "like a wall coming at you". Hsu had

2160-542: A return match. A documentary mainly about the confrontation was made in 2003, titled Game Over: Kasparov and the Machine . With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top-flight players. In 1998, Rebel 10 defeated Viswanathan Anand , who at the time was ranked second in the world, by a score of 5–3. However, most of those games were not played at normal time controls. Out of

2240-418: A six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series). In November–December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2–4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game ( overlooking

2320-647: A system of defining upper and lower bounds on possible search results and searching until the bounds coincided, is typically used to reduce the search space of the program. In addition, various selective search heuristics, such as quiescence search, forward pruning, search extensions and search reductions, are also used. These heuristics are triggered based on certain conditions in an attempt to weed out obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). These selective search heuristics have to be used very carefully, however. Overextend and
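A compact sketch of alpha–beta pruning in its negamax form, with a quiescence routine standing in at the horizon so the static evaluation is not taken in the middle of a tactical exchange; as before, the position object's methods are placeholders rather than any particular engine's API.

```python
# Alpha-beta (negamax form) with a simple quiescence search at depth 0.
def alphabeta(position, depth, alpha, beta):
    if depth == 0:
        return quiescence(position, alpha, beta)
    for move in position.legal_moves():
        position.make(move)
        score = -alphabeta(position, depth - 1, -beta, -alpha)  # swap and negate bounds
        position.unmake(move)
        if score >= beta:
            return beta            # fail-high: the opponent will avoid this line
        alpha = max(alpha, score)  # raise the lower bound
    return alpha

def quiescence(position, alpha, beta):
    # Search only "noisy" moves (captures) past the nominal horizon.
    stand_pat = position.evaluate()
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in position.capture_moves():
        position.make(move)
        score = -quiescence(position, -beta, -alpha)
        position.unmake(move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```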

2400-466: A watertight defense and Kramnik's attack petered out leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win and Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik

2480-588: Is available), and any processor 300 MHz or faster is sufficient. Performance will vary modestly with processor speed, but sufficient memory to hold a large transposition table (up to several gigabytes or more) is more important to playing strength than processor speed. Most available commercial chess programs and machines can play at super-grandmaster strength (Elo 2700 or more), and take advantage of multi-core and hyperthreaded computer CPU architectures. Top programs such as Stockfish have surpassed even world champion caliber players. Most chess programs comprise

2560-726: Is held by the National Museum of American History , having previously been displayed in an exhibit about the Information Age , while the other rack was acquired by the Computer History Museum in 1997, and is displayed in the Revolution exhibit's "Artificial Intelligence and Robotics" gallery. Several books were written about Deep Blue, among them Behind Deep Blue: Building the Computer that Defeated

2640-732: Is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second. Advanced Chess is a form of chess developed in 1998 by Kasparov where a human plays against another human, and both have access to computers to enhance their strength. The resulting "advanced" player was argued by Kasparov to be stronger than a human or computer alone. This has been demonstrated on numerous occasions, such as at Freestyle Chess events. Players today are inclined to treat chess engines as analysis tools rather than opponents. Chess grandmaster Andrew Soltis stated in 2016 "The computers are just much too good" and that world champion Magnus Carlsen won't play computer chess because "he just loses all

2720-470: Is likely to refute another). The drawback is that transposition tables at deep ply depths can get quite large – tens to hundreds of millions of entries. IBM's Deep Blue transposition table in 1996, for example, was 500 million entries. Transposition tables that are too small can result in spending more time searching for non-existent entries due to thrashing than the time saved by entries found. Many chess engines use pondering, searching to deeper levels on
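A transposition table can be sketched as a dictionary keyed by a position hash (for example a Zobrist key), with each entry recording the depth searched, the score, and whether that score is exact or only a bound; the flag names below are a common convention, not a fixed standard.

```python
# Transposition-table sketch: reuse a stored result only if it was searched
# at least as deeply as required and its bound type is compatible.
EXACT, LOWER, UPPER = 0, 1, 2
table = {}

def tt_store(key, depth, score, flag):
    table[key] = (depth, score, flag)

def tt_probe(key, depth, alpha, beta):
    entry = table.get(key)
    if entry is None:
        return None
    e_depth, e_score, e_flag = entry
    if e_depth < depth:
        return None                      # stored search was too shallow
    if e_flag == EXACT:
        return e_score
    if e_flag == LOWER and e_score >= beta:
        return e_score                   # stored lower bound already fails high
    if e_flag == UPPER and e_score <= alpha:
        return e_score                   # stored upper bound already fails low
    return None
```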

2800-435: Is not currently possible for modern computers due to the game's extremely large number of possible variations . Computer chess was once considered the " Drosophila of AI", the edge of knowledge engineering . The field is now considered a scientifically completed paradigm, and playing chess is a mundane computing activity. Chess machines/programs are available in several different forms: stand-alone chess machines (usually

2880-529: The Internet free of charge. Perhaps the most common type of chess software are programs that simply play chess. A human player makes a move on the board, the AI calculates and plays a subsequent move, and the human and AI alternate turns until the game ends. The chess engine , which calculates the moves, and the graphical user interface (GUI) are sometimes separate programs. Different engines can be connected to

2960-770: The Swedish Chess Computer Association rated computer program Komodo at 3361. Chess engines continue to improve. By 2009, chess engines running on slower hardware had reached the grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13 running inside Pocket Fritz 4 on the mobile phone HTC Touch HD won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second. This

3040-458: The TOP500 list, achieving 11.38 GFLOPS on the parallel high performance LINPACK benchmark. Computer chess Computer chess applications, whether implemented in hardware or software, use different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position and attempt to execute

3120-491: The vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on super-computers or specialized hardware were capable of defeating even the best human players . By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monty Newborn , Professor of Computer Science at McGill University , declared: "the science has been done". Nevertheless, solving chess

3200-428: The GPU use MCTS instead of alpha-beta. Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have been previously evaluated, to save recalculation of them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position

3280-453: The GUI, permitting play against different styles of opponent. Engines often have a simple text command-line interface, while GUIs may offer a variety of piece sets, board styles, or even 3D or animated pieces. Because recent engines are so capable, engines or GUIs may offer some way of handicapping the engine's ability, to improve the odds for a win by the human player. Universal Chess Interface (UCI) engines such as Fritz or Rybka may have

3360-505: The Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen. The most widely held opinion, however, stated that it would occur around the year 2000. In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the reigning world champion, Garry Kasparov , demonstrated in two strong wins in 1989. It

3440-503: The World Chess Champion by Deep Blue developer Feng-hsiung Hsu. Subsequent to its predecessor Deep Thought's 1989 loss to Garry Kasparov , Deep Blue played Kasparov twice more. In the first game of the first match, which took place from 10 to 17 February 1996, Deep Blue became the first machine to win a chess game against a reigning world champion under regular time controls . However, Kasparov won three and drew two of

3520-491: The average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B). At the 1982 North American Computer Chess Championship , Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years;

3600-606: The best computer systems overtaking human chess champions in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 points per year in the Elo rating while the best humans only gained roughly 2 points per year. The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988; FIDE no longer accepts human–computer results in their rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, are not directly comparable. In 2016,

3680-511: The best such sequence during play. Such trees are typically quite large, thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, make such an approach effective. The first chess machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in

3760-419: The case is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, rather than completely random arrangements of the same pieces. In contrast, poor players have the same level of recall for both. The equivalent of this in computer chess are evaluation functions for leaf evaluation, which correspond to

3840-462: The computer's play that were revealed during the course of the match. Kasparov requested printouts of the machine's log files, but IBM refused, although the company later published the logs on the Internet. The 1997 tournament awarded a $700,000 first prize to the Deep Blue team and a $400,000 second prize to Kasparov. Carnegie Mellon University awarded an additional $100,000 to the Deep Blue team,

3920-589: The difficulty of determining the right order to evaluate moves. Researchers worked to improve programs' ability to identify killer heuristics , unusually high-scoring moves to reexamine when evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at a Master level. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years, and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way

4000-451: The eight games, four were blitz games (five minutes plus five seconds Fischer delay for each move); these Rebel won 3–1. Two were semi-blitz games (fifteen minutes for each side) that Rebel won as well (1½–½). Finally, two games were played as regular tournament games (forty moves in two hours, one hour sudden death); here it was Anand who won ½–1½. In fast games, computers played better than humans, but at classical time controls – at which

4080-460: The endgame database contained many six-piece endgames and all five and fewer piece endgames. An additional database named the "extended book" summarizes entire games played by Grandmasters. The system combines its searching ability of 200 million chess positions per second with summary information in the extended book to select opening moves. Before the second match, the program's rules were fine-tuned by grandmaster Joel Benjamin . The opening library

4160-495: The finance industry and being replaced by programmer Arthur Joseph Hoane. Jerry Brody, a long-time employee of IBM Research, subsequently joined the team in 1990. After Deep Thought's two-game 1989 loss to Kasparov, IBM held a contest to rename the chess machine: the winning name, "Deep Blue", submitted by Peter Fitzhugh Brown, was a play on IBM's nickname, "Big Blue". After a scaled-down version of Deep Blue played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin

4240-400: The first move by each player, about 200,000 after two moves each, and nearly 120 million after just 3 moves each. So a limited lookahead (search) to some depth, followed by using domain-specific knowledge to evaluate the resulting terminal positions was proposed. A kind of middle-ground position, given good moves by both sides, would result, and its evaluation would inform the player about
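The move-count figures quoted above can be reproduced with a simple node count ("perft") over legal moves. The sketch below uses the third-party python-chess library for move generation (assumed installed via pip install chess); the depth-6 count is exact but slow in pure Python.

```python
# Node-count ("perft") sketch reproducing the figures above, using the
# third-party python-chess library for move generation.
import chess

def perft(board, depth):
    if depth == 0:
        return 1
    nodes = 0
    for move in board.legal_moves:
        board.push(move)
        nodes += perft(board, depth - 1)
        board.pop()
    return nodes

board = chess.Board()
print(perft(board, 2))   # 400         (one move by each player)
print(perft(board, 4))   # 197,281     (two moves each)
print(perft(board, 6))   # 119,060,324 (three moves each; slow in pure Python)
```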

4320-401: The first to win a human tournament. Levy won his bet in 1978 by beating Chess 4.7 , but it achieved the first computer victory against a Master-class player at the tournament level by winning one of the six games. In 1980, Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker. The sudden improvement without a theoretical breakthrough

4400-456: The following five games, beating Deep Blue by 4–2 at the close of the match. Deep Blue's hardware was subsequently upgraded, doubling its speed before it faced Kasparov again in May 1997, when it won the six-game rematch 3½–2½. Deep Blue won the deciding game after Kasparov failed to secure his position in the opening, thereby becoming the first computer system to defeat a reigning world champion in

4480-533: The game of chess (and other games like checkers): Using "ends-and-means" heuristics a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves necessary, but a computer must be systematic in its analysis. Most players agree that looking at least five moves ahead (ten plies ) when necessary is required to play well. Normal tournament rules give each player an average of three minutes per move. On average there are more than 30 legal moves per chess position, so

4560-442: The game of chess, because of its daunting complexity, became the " Drosophila of artificial intelligence (AI)". The procedural resolution of complexity became synonymous with thinking, and early computers, even before the chess automaton era, were popularly referred to as "electronic brains". Several different schema were devised starting in the latter half of the 20th century to represent knowledge and thinking, as applied to playing

4640-466: The goodness or badness of the moves chosen. Searching and comparing operations on the tree were well suited to computer calculation; the representation of subtle chess knowledge in the evaluation function was not. The early chess programs suffered in both areas: searching the vast tree required computational resources far beyond those available, and what chess knowledge was useful and how it was to be encoded would take decades to discover. The developers of

4720-419: The human players' pattern recognition skills, and the use of machine learning techniques in training them, such as Texel tuning, stochastic gradient descent , and reinforcement learning , which corresponds to building experience in human players. This allows modern programs to examine some lines in much greater depth than others by using forwards pruning and other selective heuristics to simply not consider moves
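A rough sketch of Texel-style tuning as mentioned above: evaluation weights are adjusted to minimize the squared error between a sigmoid of the evaluation and the actual game results (1 for a White win, ½ for a draw, 0 for a loss). The dataset, scaling constant and learning rate below are illustrative placeholders, not values from any real engine.

```python
# Texel-style tuning sketch: fit evaluation weights so that a sigmoid of the
# evaluation predicts game results from labelled positions.
import math

K = 1.0 / 400.0                     # scaling from centipawn-like scores (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-K * x))

def evaluate(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def tune(weights, data, lr=0.01, epochs=1000):
    # data: list of (feature vector, game result) pairs.
    for _ in range(epochs):
        grad = [0.0] * len(weights)
        for features, result in data:
            p = sigmoid(evaluate(weights, features))
            err = p - result
            for i, f in enumerate(features):
                grad[i] += err * p * (1 - p) * K * f   # chain rule for squared error
        weights = [w - lr * g / len(data) for w, g in zip(weights, grad)]
    return weights

# Tiny illustrative dataset: (feature vector, game result).
data = [([100, 2], 1.0), ([-50, -1], 0.0), ([0, 0], 0.5)]
print(tune([1.0, 10.0], data))
```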

4800-501: The internet. Chess training programs teach chess. Chessmaster had playthrough tutorials by IM Josh Waitzkin and GM Larry Christiansen . Stefan Meyer-Kahlen offers Shredder Chess Tutor based on the Step coursebooks of Rob Brunia and Cor Van Wijgerden. Former World Champion Magnus Carlsen 's Play Magnus company released a Magnus Trainer app for Android and iOS. Chessbase has Fritz and Chesster for children. Convekta provides

4880-435: The like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives". By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat a majority of amateur players. While only able to look ahead one or two plies more than at their debut in

4960-433: The machine's moves, suggesting that during the second game, human chess players had intervened on behalf of the machine. IBM denied this, saying the only human intervention occurred between games. Kasparov demanded a rematch, but IBM had dismantled Deep Blue after its victory and refused the rematch. The rules allowed the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in

5040-469: The mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote. While reviewing SPOC in 1984, BYTE wrote that "Computers—mainframes, minis, and micros—tend to play ugly, inelegant chess", but noted Robert Byrne 's statement that "tactically they are freer from error than

5120-442: The opponent's time, similar to human beings, to increase their playing strength. Of course, faster hardware and additional memory can improve chess program playing strength. Hyperthreaded architectures can improve performance modestly if the program is running on a single core or a small number of cores. Most modern programs are designed to take advantage of multiple cores to do parallel search. Other programs are designed to run on

5200-606: The principles of algorithmic solution of chess. In that paper, the game is represented by a "tree", or digital data structure of choices (branches) corresponding to moves. The nodes of the tree were positions on the board resulting from the choices of move. The impossibility of representing an entire game of chess by constructing a tree from first move to last was immediately apparent: there are an average of 36 moves per position in chess and an average game lasts about 35 moves to resignation (60-80 moves if played to checkmate, stalemate, or other draw). There are 400 positions possible after

5280-434: The program assume to be poor through their evaluation function, in the same way that human players do. The only fundamental difference between a computer program and a human in this sense is that a computer program can search much deeper than a human player could, allowing it to search more nodes and bypass the horizon effect to a much greater extent than is possible with human players. Computer chess programs usually support

5360-663: The program wastes too much time looking at uninteresting positions. If too much is pruned or reduced, there is a risk of cutting out interesting nodes. Monte Carlo tree search (MCTS) is a heuristic search algorithm which expands the search tree based on random sampling of the search space. A version of Monte Carlo tree search commonly used in computer chess is PUCT (Predictor and Upper Confidence bounds applied to Trees). DeepMind's AlphaZero and Leela Chess Zero use MCTS instead of minimax. Such engines use batching on graphics processing units in order to calculate their evaluation functions and policy (move selection), and therefore require
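The PUCT selection rule can be summarized in a few lines: each child node keeps a visit count N, a total value W (so Q = W/N) and a prior probability P from the policy network, and the child maximizing Q plus an exploration bonus is searched next. The exploration constant c_puct below is a tunable parameter, not a fixed value, and the dictionary-based node layout is only a sketch.

```python
# PUCT child-selection sketch for Monte Carlo tree search.
import math

def puct_score(parent_visits, child, c_puct=1.5):
    q = child["W"] / child["N"] if child["N"] > 0 else 0.0      # exploitation term
    u = c_puct * child["P"] * math.sqrt(parent_visits) / (1 + child["N"])
    return q + u                                                # exploration bonus

def select_child(children, parent_visits):
    return max(children, key=lambda c: puct_score(parent_visits, c))

# Example: two candidate moves with different priors and visit counts.
children = [{"N": 10, "W": 6.0, "P": 0.4}, {"N": 2, "W": 0.5, "P": 0.5}]
print(select_child(children, parent_visits=12))
```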

5440-420: The rights to use the Deep Blue design independently of IBM, but also independently declined Kasparov's rematch offer. In 2003, the documentary film Game Over: Kasparov and the Machine investigated Kasparov's claims that IBM had cheated. In the film, some interviewees describe IBM's investment in Deep Blue as an effort to boost its stock value. Following Deep Blue's victory, AI specialist Omar Syed designed

5520-451: The search control. The move generator is an 8×8 combinational logic circuit, a chess board in miniature. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In 1997, Deep Blue was upgraded again to become the 259th most powerful supercomputer according to

5600-420: The searched value of the position will be arrived at. Its value is backed up to the root, and that evaluation becomes the valuation of the position on the board. This search process is called minimax. A naive implementation of the minimax algorithm can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed the search for good moves. Alpha–beta pruning ,

5680-471: The stronger player in the match. In January 2003, Kasparov played Junior , another chess computer program, in New York City. The match ended 3–3. In November 2003, Kasparov played X3D Fritz . The match ended 2–2. In 2005, Hydra , a dedicated chess computer with custom hardware and sixty-four processors and also winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½–½ in

5760-404: The time and there's nothing more depressing than losing without even being in the game." Since the era of mechanical machines that played rook and king endings and electrical machines that played other games like hex in the early years of the 20th century, scientists and theoreticians have sought to develop a procedural representation of how humans learn, remember, think and apply knowledge, and

5840-474: The user interface, or only the engine, without needing to write both parts of the program. (See also chess engine .) Developers have to decide whether to connect the engine to an opening book and/or endgame tablebases or leave this to the GUI. The data structure used to represent each chess position is key to the performance of move generation and position evaluation . Methods include pieces stored in an array ("mailbox" and "0x88"), piece positions stored in
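A sketch of the "0x88" variant of the mailbox representation mentioned above: the board is a 128-entry array indexed by 16 × rank + file, so any off-board index has a bit in common with 0x88 and a single bitwise test replaces explicit range checks during move generation.

```python
# "0x88" mailbox sketch: 128-entry board array, square index = 16 * rank + file.
EMPTY = "."
board = [EMPTY] * 128

def sq(file, rank):               # 0-based file and rank
    return 16 * rank + file

def on_board(index):
    return (index & 0x88) == 0    # any off-board index trips the 0x88 mask

board[sq(4, 0)] = "K"             # white king on e1

# Generate king steps from e1; off-board targets are rejected by the 0x88 test.
KING_OFFSETS = [-17, -16, -15, -1, 1, 15, 16, 17]
moves = [t for t in (sq(4, 0) + o for o in KING_OFFSETS) if on_board(t)]
print(moves)
```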

5920-473: Was a massively parallel IBM RS/6000 SP Supercomputer with 30 PowerPC 604e processors and 480 custom 600 nm CMOS VLSI "chess chips" designed to execute the chess-playing expert system, as well as FPGAs intended to allow patching of the VLSIs (which ultimately went unused) all housed in two cabinets. The chess chip has four parts: the move generator, the smart-move stack, the evaluation function, and

6000-421: Was a good measurement for the effectiveness of artificial intelligence, and by beating a world champion chess player, IBM showed that they had made significant progress. Deep Blue is also responsible for the popularity of using games as a display medium for artificial intelligence, as in the cases of IBM Watson or AlphaGo . While Deep Blue, with its capability of evaluating 200 million positions per second,

6080-544: Was not until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls in Deep Blue versus Kasparov, 1996, game 1 . This game was, in fact, the first time a reigning world champion had lost to a computer using regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory. In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in

6160-538: Was provided by grandmasters Miguel Illescas , John Fedorowicz , and Nick de Firmian . When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused, leading Kasparov to study many popular PC chess games to familiarize himself with computer gameplay. Deep Blue used custom VLSI chips to parallelize the alpha–beta search algorithm, an example of symbolic AI . The system derived its playing strength mainly from brute force computing power. It

6240-421: Was the expert they were looking for to help develop Deep Blue's opening book , so hired him to assist with the preparations for Deep Blue's matches against Garry Kasparov. In 1995, a Deep Blue prototype played in the eighth World Computer Chess Championship , playing Wchess to a draw before ultimately losing to Fritz in round five, despite playing as White . Today, one of the two racks that made up Deep Blue

6320-472: Was the first computer to face a world chess champion in a formal match, it was a then-state-of-the-art expert system , relying upon rules and variables defined and fine-tuned by chess masters and computer scientists. In contrast, current chess engines such as Leela Chess Zero typically use reinforcement machine learning systems that train a neural network to play, developing its own internal logic rather than relying upon rules defined by human experts. In

6400-582: Was unexpected, as many did not expect that Belle's ability to examine 100,000 positions a second—about eight plies—would be sufficient. The Spracklens, creators of the successful microcomputer program Sargon , estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and
