Alpha Beta [1]
The Alpha-Beta algorithm (Alpha-Beta Pruning, Alpha-Beta Heuristic [2]) is a significant enhancement to the minimax search algorithm that eliminates the need to search large portions of the game tree by applying a branch-and-bound technique. Remarkably, it does this without any risk of overlooking a better move. If one has already found a quite good move and searches for alternatives, one refutation is enough to dismiss an alternative: there is no need to look for even stronger refutations. The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. Consider the following example...
- Quotes
- Implementation
- Enhancements
- Selected Publications
- Forum Posts
Say it is White's turn to move, and we are searching to a depth of 2 (that is, we consider all of White's moves, and all of Black's responses to each of those moves). First we pick one of White's possible moves - let's call it Possible Move #1. We consider this move and every possible response to it by Black. After this analysis, we determine that the result of making Possible Move #1 is an even position. Then we move on and consider another of White's possible moves (Possible Move #2). When we consider the first possible counter-move by Black, we discover that playing it results in Black winning a Rook! In this situation, we can safely ignore all of Black's other possible responses to Possible Move #2, because we already know that Possible Move #1 is better. We really don't care exactly how much worse Possible Move #2 is. Maybe another possible response wins a Queen, but it doesn't matter, because we know we can achieve at least an even game by playing Possible Move #1. The full analysis of Possible Move #1 gave us a lower bound: we know we can achieve at least that, so anything clearly worse can be ignored.
The situation becomes even more complicated, however, when we go to a search depth of 3 or greater, because now both players can make choices affecting the game tree. Now we have to maintain both a lower bound and an upper bound (called alpha and beta). We maintain a lower bound because if a move is too bad we don't consider it. But we also have to maintain an upper bound because if a move at depth 3 or higher leads to a continuation that is too good, the other player won't allow it, since there was a better move higher up in the game tree that he could have played to avoid this situation. One player's lower bound is the other player's upper bound.
The savings of alpha-beta can be considerable. If a standard minimax search tree has x nodes, an alpha-beta tree in a well-written program can have a node count close to the square root of x. How many nodes you can actually cut, however, depends on how well ordered your game tree is: if you always search the best possible move first, you eliminate the most nodes. Of course, we don't always know what the best move is, or we wouldn't have to search in the first place. Conversely, if we always searched worse moves before better moves, we couldn't cut any part of the tree at all! For this reason, good move ordering is very important, and is the focus of much of the effort in writing a good chess program. As pointed out by Levin in 1961, assuming a constant b moves for each node visited and search depth n, the maximal number of leaves in alpha-beta is equivalent to minimax, b^n. If the best move is always considered first, it is b^⌈n/2⌉ + b^⌊n/2⌋ - 1. The minimal number of leaves (here for b = 40) is shown in the following table, which also demonstrates the odd-even effect:
depth n | b^n | b^⌈n/2⌉ + b^⌊n/2⌋ - 1 |
---|---|---|
0 | 1 | 1 |
1 | 40 | 40 |
2 | 1,600 | 79 |
3 | 64,000 | 1,639 |
4 | 2,560,000 | 3,199 |
5 | 102,400,000 | 65,599 |
6 | 4,096,000,000 | 127,999 |
7 | 163,840,000,000 | 2,623,999 |
8 | 6,553,600,000,000 | 5,119,999 |
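The two leaf-count formulas can be checked with a few lines of C (a sketch; the function names are illustrative, and 64-bit integers are used to hold the larger values for b = 40):

```c
/* Integer power: b^e for small non-negative e. */
long long ipow(long long b, int e) {
    long long r = 1;
    while (e-- > 0) r *= b;
    return r;
}

/* Maximal number of leaves: same as plain minimax, b^n. */
long long maxLeaves(long long b, int n) { return ipow(b, n); }

/* Minimal number of leaves with perfect move ordering:
   b^ceil(n/2) + b^floor(n/2) - 1. */
long long minLeaves(long long b, int n) {
    return ipow(b, (n + 1) / 2) + ipow(b, n / 2) - 1;
}
```

For b = 40 and n = 3 this yields 1,600 + 40 - 1 = 1,639, matching the table.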
Alpha-Beta was invented independently by several researchers and pioneers in the 1950s [3], with further research until the 1980s, most notably by:
- John McCarthy proposed the idea of Alpha-Beta after the presentation of the chess program by Alex Bernstein [4] at the Dartmouth Workshop in 1956 [5]
- Allen Newell and Herbert Simon Approximation in 1958
- Arthur Samuel Approximation in 1959
- Daniel Edwards and Timothy Hart, Description in 1961 [6]
- Alexander Brudno, Description in 1963
- Samuel Fuller, John Gaschnig, James Gillogly, Analysis 1973 [7]
- Donald Knuth, Analysis in 1975
- Knuth and Moore's famous Function F2, aka AlphaBeta
- Knuth already introduced an iterative solution, see Iterative Search
- Knuth's node types
- Gérard M. Baudet, Analysis in 1978
McCarthy
Quote by John McCarthy from Human-Level AI is harder than it seemed in 1955 on the Dartmouth workshop:
Ershov and Shura-Bura
Quote from The Early Development of Programming in the USSR by Andrey Ershov and Mikhail R. Shura-Bura[8]
Knuth
Quote by Knuth [9]: It is interesting to convert this recursive procedure to an iterative (non-recursive) form by a sequence of mechanical transformations, and to apply simple optimizations which preserve program correctness. The resulting procedure is surprisingly simple, but not as easy to prove correct as the recursive form.
Max versus Min
A C-like pseudo-code implementation of the alpha-beta algorithm with distinct, indirectly recursive routines for the max- and min-player, similar to the minimax routines. Making and unmaking moves is omitted, and should be done before and after the recursive calls. So-called beta-cutoffs occur for the max-player, alpha-cutoffs for the min-player.
With this call from the Root:
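The code sample itself is missing from this copy; the two routines and the root call might be sketched as follows. This is a minimal, self-contained C sketch over a toy tree (a complete depth-3 binary tree whose leaf scores live in an array); the tree, leaf values, and function names are illustrative assumptions, and a real engine would generate and make/unmake moves and call an evaluation function instead:

```c
/* Toy stand-in for a real game tree: node 1 is the root, node n has
   children 2n and 2n+1, and the eight leaves (nodes 8..15) carry these
   hypothetical scores, all from the max-player's point of view. */
const int leafScore[8] = {3, 5, 6, 9, 1, 2, 0, -1};

int alphaBetaMin(int alpha, int beta, int node, int depth);

int alphaBetaMax(int alpha, int beta, int node, int depth) {
    if (depth == 0) return leafScore[node - 8];
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = alphaBetaMin(alpha, beta, child, depth - 1);
        if (score >= beta) return beta;   /* fail-hard beta-cutoff      */
        if (score > alpha) alpha = score; /* raise the lower bound      */
    }
    return alpha;
}

int alphaBetaMin(int alpha, int beta, int node, int depth) {
    if (depth == 0) return leafScore[node - 8];
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = alphaBetaMax(alpha, beta, child, depth - 1);
        if (score <= alpha) return alpha; /* fail-hard alpha-cutoff     */
        if (score < beta) beta = score;   /* lower the upper bound      */
    }
    return beta;
}
```

From the root, with the max-player to move, the call is then `score = alphaBetaMax(-oo, +oo, 1, 3);` with some sufficiently large value standing in for infinity.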
Alpha-beta search tree with two alpha-cuts at min nodes [10]
Negamax Framework
Inside a negamax framework the routine looks simpler, but is not necessarily easier to understand. Because the returned score of the direct recursion is negated, alpha of the min-player becomes minus beta of the max-player and vice versa, and the term alpha-cutoff (or alpha-pruning) somewhat falls out of use.
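The original sample is likewise missing here; a negamax alpha-beta routine might be sketched as follows, over the same kind of toy tree. All names and leaf values are illustrative assumptions; the stored scores are from the root player's view, and since a leaf lies an odd number of plies (three) below the root, the side to move at a leaf is the opponent, so the sign flips there:

```c
/* Toy tree: node 1 is the root, node n has children 2n and 2n+1,
   leaves are nodes 8..15 (hypothetical values). */
const int leafScore[8] = {3, 5, 6, 9, 1, 2, 0, -1};

int alphaBeta(int alpha, int beta, int node, int depth) {
    if (depth == 0)
        return -leafScore[node - 8]; /* score from the side to move's
                                        view; a real engine would call
                                        quiesce(alpha, beta) here */
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        /* the bounds swap roles and negate for the opponent */
        int score = -alphaBeta(-beta, -alpha, child, depth - 1);
        if (score >= beta) return beta;   /* fail-hard beta-cutoff */
        if (score > alpha) alpha = score;
    }
    return alpha;
}
```

On this toy tree, `alphaBeta(-oo, +oo, 1, 3)` returns the same value as the max/min pair above.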
Note #1: Notice the call to quiesce(). This performs a quiescence search, which makes the alpha-beta search much more stable.
Note #2: This function only returns the score for the position, not the best move. Normally, a different (but very similar) function is used for searching the root node. The SearchRoot function calls the alphaBeta function and returns both a score and a best move. Also, most search functions collect the Principal Variation, not only for display purposes, but also as a good guess for the leftmost path of the next iteration inside an iterative deepening framework.
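A root search that returns the move as well as the score could be sketched like this. It is a self-contained C sketch over a toy tree; the struct and all names are illustrative assumptions, and Principal Variation collection is omitted:

```c
/* Toy tree as a stand-in: root = node 1, children 2n and 2n+1,
   leaves = nodes 8..15; scores are from the root player's view, and
   the sign flips at a leaf, which lies three plies below the root. */
const int leafScore[8] = {3, 5, 6, 9, 1, 2, 0, -1};

int alphaBeta(int alpha, int beta, int node, int depth) {
    if (depth == 0) return -leafScore[node - 8];
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = -alphaBeta(-beta, -alpha, child, depth - 1);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}

typedef struct { int move; int score; } RootResult;

/* Unlike the inner search, the root keeps track of WHICH move
   produced the best score, not just the score itself. */
RootResult searchRoot(int depth) {
    RootResult best = { -1, -1000 };
    for (int child = 2; child <= 3; child++) {  /* the root's "moves" */
        int score = -alphaBeta(-1000, -best.score, child, depth - 1);
        if (score > best.score) {
            best.score = score;
            best.move  = child;
        }
    }
    return best;
}
```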
Fail hard
Since alpha and beta act as hard bounds on the return value whenever the remaining depth is greater than zero in the above code samples, this is referred to as a fail-hard framework.
Outside the Bounds
Fail-Soft Alpha-Beta [11] may return scores outside the bounds, that is either greater than beta or less than alpha. It has to keep track of the best score, which might be below alpha.
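A fail-soft variant might look like this, again as a self-contained C sketch in negamax form over a toy tree (names and leaf values are illustrative assumptions). A fail-high result is then a lower bound and a fail-low result an upper bound on the true score:

```c
/* Toy tree stand-in: root = node 1, children 2n and 2n+1,
   leaves = nodes 8..15; the sign flips at the leaves, which lie
   three plies below the root. */
const int leafScore[8] = {3, 5, 6, 9, 1, 2, 0, -1};

/* Fail-soft: the returned score may fall outside [alpha, beta]. */
int alphaBetaSoft(int alpha, int beta, int node, int depth) {
    if (depth == 0) return -leafScore[node - 8];
    int bestScore = -1000;                   /* may stay below alpha */
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = -alphaBetaSoft(-beta, -alpha, child, depth - 1);
        if (score > bestScore) {
            bestScore = score;
            if (score > alpha) alpha = score;
        }
        if (score >= beta) return bestScore; /* may exceed beta */
    }
    return bestScore;
}
```

With a full-width window the result is exact; with a window that does not bracket the true score, the returned value is only a bound, but a tighter one than fail-hard's alpha or beta.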
The alpha-beta algorithm can be further improved. These enhancements stem from the fact that restricting the window of interesting scores allows more cutoffs. Since move ordering is so important, techniques applied outside the search for this purpose include iterative deepening boosted by a transposition table, and possibly aspiration windows. Other enhancements, applied within the search function, are discussed further below.
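An aspiration window around a guess (say, the previous iteration's score) can be sketched as follows, again as self-contained C over a toy tree; the window size, fallback policy, and all names are illustrative assumptions:

```c
#define INF 1000  /* stand-in for an "infinite" score */

/* Toy tree stand-in: root = node 1, children 2n and 2n+1,
   leaves = nodes 8..15; sign flips at the leaves, three plies down. */
const int leafScore[8] = {3, 5, 6, 9, 1, 2, 0, -1};

int alphaBeta(int alpha, int beta, int node, int depth) {
    if (depth == 0) return -leafScore[node - 8];
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = -alphaBeta(-beta, -alpha, child, depth - 1);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}

/* A narrow window around the guess yields more cutoffs; if the
   search fails low or high, fall back to a full-width re-search. */
int aspirationSearch(int guess, int window, int depth) {
    int alpha = guess - window, beta = guess + window;
    int score = alphaBeta(alpha, beta, 1, depth);
    if (score <= alpha || score >= beta)        /* outside the window? */
        score = alphaBeta(-INF, INF, 1, depth); /* re-search full width */
    return score;
}
```

Real engines usually widen the window gradually on a fail instead of jumping straight to full width; the immediate fallback here keeps the sketch short.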
Obligatory
Selectivity
Scout and Friends
- Parallel Alpha-Beta in Cilk
1958 ...
- Allen Newell, Cliff Shaw, Herbert Simon (1958). Chess Playing Programs and the Problem of Complexity. IBM Journal of Research and Development, Vol. 2, No. 4, pp. 320-335. Reprinted (1963) in Computers and Thought (eds. Edward A. Feigenbaum and Julian Feldman), pp. 39-70. McGraw-Hill, New York, N.Y., pdf
- Arthur Samuel (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal July 1959
1960 ...
- Daniel Edwards, Timothy Hart (1961). The Alpha-Beta Heuristic, AIM-030, reprint available from DSpace at MIT. Retrieved on 2006-12-21.
- Alexander Brudno (1963). Bounds and valuations for shortening the search of estimates. Problemy Kibernetiki (10) 141–150 and Problems of Cybernetics (10) 225–241
- James R. Slagle (1963). Game Trees, M & N Minimaxing, and the M & N alpha-beta procedure. Artificial Intelligence Group Report 3, UCRL-4671, University of California
- James R. Slagle, John K. Dixon (1969). Experiments With Some Programs That Search Game Trees. Journal of the ACM, Vol. 16, No. 2: 189-207, pdf
1970 ...
- Samuel Fuller, John Gaschnig, James Gillogly (1973). An Analysis of the Alpha-Beta Pruning Algorithm. Technical Report, Carnegie Mellon University, pdf
- Donald Knuth, Ronald W. Moore (1975). An Analysis of Alpha-Beta Pruning. Artificial Intelligence, Vol. 6, No. 4, pp 293–326. Reprinted in Donald Knuth (2000). Selected Papers on Analysis of Algorithms. CSLI lecture notes series 102, ISBN 1-57586-212-3, pdf
- Arnold K. Griffith (1976). Empirical Exploration of the Performance of the Alpha-Beta Tree-Searching Heuristic. IEEE Transactions on Computers, Vol. C-25, No. 1
- Gérard M. Baudet (1978). An Analysis of the Full Alpha-Beta Pruning Algorithm. STOC 1978: 296-313
- Gérard M. Baudet (1978). On the branching factor of the alpha-beta pruning algorithm. Artificial Intelligence, Vol. 10
- Patrick Winston (1978). Dealing with Adversaries. Personal Computing, Vol. 2, No. 11, pp. 30, November 1978 » Alpha-Beta
- Gary Lindstrom (1979). Alpha-Beta Pruning on Evolving Game Trees. Technical Report UUCCS 79-101, University of Utah, UScholar Works
- Ward Douglas Maurer (1979). Alpha-Beta Pruning. BYTE, Vol. 4, No. 11, pp. 84-96
1980 ...
- David Levy (1980). Intelligent Games. Creative Computing, Vol. 6, No. 4, hosted by the Internet Archive
- Judea Pearl (1981). Heuristic search theory: A survey of recent results. IJCAI-81, pdf
- Igor Roizen (1981). On the Average Number of Terminal Nodes examined by Alpha-Beta. Technical Report UCLA-ENG-CSL-8108, University of California at Los Angeles, Cognitive Systems Laboratory
- Judea Pearl (1982). The Solution for the Branching Factor of the Alpha-Beta Pruning Algorithm and its Optimality. Communications of the ACM, Vol. 25, No. 8
- Peter W. Frey (1983). The Alpha-Beta Algorithm: Incremental Updating, Well-Behaved Evaluation Functions, and Non-Speculative Forward Pruning. Computer Game-Playing (ed. Max Bramer), pp. 285-289. Ellis Horwood Limited
- John Philip Fishburn (1983). Another optimization of alpha-beta search. SIGART Bulletin, Issue 84, pdf » Fail-Soft
- John Hughes (1984). Why Functional Programming Matters. 5 An Example from Artificial Intelligence, Chalmers Tekniska Högskola, Göteborg, pdf,
- Stephen F. Wheeler (1985). A performance benchmark of the alpha-beta procedure on randomly ordered non-uniform depth-first game-trees generated by a chess program. M.Sc. thesis, East Texas State University
- Toshihide Ibaraki (1986). Generalization of Alpha-Beta and SSS* Search Procedures. Artificial Intelligence, Vol. 29
- Matthew Huntbach, F. Warren Burton (1988). Alpha-beta search on virtual tree machines. Information Sciences, Vol. 44, No. 1
- Robert Hyatt, Bruce W. Suter, Harry Nelson (1989). A Parallel Alpha-Beta Tree Searching Algorithm. Parallel Computing, Vol. 10, No. 3
1990 ...
- Ingo Althöfer, Bernhard Balkenhol (1991). A Game Tree with Distinct Leaf Values which is Easy for the Alpha-Beta Algorithm. Artificial Intelligence Vol. 52, No. 2
- Alois Heinz, Christoph Hense (1993). Bootstrap learning of α-β-evaluation functions. ICCI 1993, pdf
- Van-Dat Cung (1993). Parallelizations of Game-Tree Search. PARCO 1993, pdf hosted by CiteSeerX
- Alois Heinz (1994). Efficient Neural Net α-β-Evaluators. pdf
- Yanjun Zhang (1995). On the Optimality of Randomized Alpha-Beta Search. SIAM Journal on Computing, Vol. 24, No. 1
- Ernst A. Heinz (1999). Scalable Search in Computer Chess. Morgan Kaufmann, ISBN 3-528-05732-7
2000 ...
- Matthew L. Ginsberg, Alan Jaffray (2002). Alpha-Beta Pruning Under Partial Orders. in Richard J. Nowakowski (ed.) More Games of No Chance. Cambridge University Press, pdf
- Todd W. Neller (2002). Information-Based Alpha-Beta Search and the Homicidal Chauffeur. HSCC 2002, in Claire J. Tomlin, Mark R. Greenstreet (eds.) (2002). Hybrid Systems: Computation and Control. Lecture Notes in Computer Science 2289, Springer[12]
- Jacek Mańdziuk, Daniel Osman (2004). Alpha-Beta Search Enhancements with a Real-Value Game-State Evaluation Function. ICGA Journal, Vol. 27, No. 1, pdf
- Hendrik Baier (2006). Der Alpha-Beta-Algorithmus und Erweiterungen bei Vier Gewinnt. Bachelor's thesis (German), TU Darmstadt, advisor Johannes Fürnkranz, pdf
2010 ...
- Damjan Strnad, Nikola Guid (2011). Parallel Alpha-Beta Algorithm on the GPU. CIT. Journal of Computing and Information Technology, Vol. 19, No. 4 » GPU, Parallel Search, Reversi
- Abdallah Saffidine, Hilmar Finnsson, Michael Buro (2012). Alpha-Beta Pruning for Games with Simultaneous Moves. AAAI 2012
- Daniel S. Abdi (2013). Analysis of pruned minimax trees. pdf » Late Move Reductions, Null Move Pruning
- Jr-Chang Chen, I-Chen Wu, Wen-Jie Tseng, Bo-Han Lin, Chia-Hui Chang (2015). Job-Level Alpha-Beta Search. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 7, No. 1
- Bojun Huang (2015). Pruning Game Tree by Rollouts. AAAI » MCTS, MT-SSS*, Rollout Paradigm[13]
- Naoyuki Sato, Kokolo Ikeda (2016). Three types of forward pruning techniques to apply the alpha beta algorithm to turn-based strategy games. CIG 2016
- Hendrik Baier (2017). A Rollout-Based Search Algorithm Unifying MCTS and Alpha-Beta. Computer Games » MCαβ, Monte-Carlo Tree Search
1993 ...
- computer chess by Paul W Celmer, rgc, September 10, 1993
- Re: Computer Chess (LONG) by Andy Walker, rgc, September 16, 1993
- Computer Chess and alpha-beta pruning by Rickard Westman, rgc, September 21, 1993
- Re: Computer Chess and alpha-beta pruning by Johannes Fürnkranz, rgc, September 22, 1993 » Iterative Deepening
- alpha-beta pruning != brute force by Feng-hsiung Hsu, rgc, September 22, 1993 » Brute-Force
- Re: Computer Chess and alpha-beta pruning by Robert Hyatt, rgc, September 25, 1993
- Alpha-beta inconsistencies by Chua Kong Sian, rgc, February 19, 1994
1995 ...
- Alpha-Beta explained? by Brian, rgcc, October 15, 1996
- New improvement to alpha/beta + TT? by Heiner Marxen, rgcc, January 13, 1997 » Fail-Soft
- Re: Argument against Alpha-Beta searching by Robert Hyatt, rgcc, March 19, 1997
- computer chess 'oracle' ideas... by Robert Hyatt, rgcc, April 1, 1997 » Oracle
- Re: computer chess 'oracle' ideas... by Ronald de Man, rgcc, April 3, 1997
- Re: computer chess 'oracle' ideas... by Ronald de Man, rgcc, April 7, 1997
- Basic alpha-beta question by John Scalo, CCC, January 06, 1998
- alpha-beta is silly? by Werner Inmann, CCC, June 02, 1998
- Re: Basic questions about alpha beta by Bruce Moreland, CCC, September 28, 1998
2000 ...
- Another Alpha-Beta algorithm question by Jeff Anderson, CCC, February 18, 2000
- A Question on simple Alpha-Beta versus PVS/Negascout by Andrei Fortuna, CCC, March 21, 2000 » Principal Variation Search, NegaScout
- outline for alpha beta by John Coffey, CCC, May 12, 2000
- Alpha-Beta explanation on Heinz's book? by Severi Salminen, CCC, October 05, 2000 [14]
- Who invented the Alpha-Beta-algorithm? by Rafael B. Andrist, CCC, April 09, 2001
- The Alpha-Beta search! by Sune Fischer, CCC, August 14, 2001
- An Idiot's Guide to Minimax, Alpha/Beta, etc... by Mike Carter, CCC, February 03, 2003
- Fail soft alpha-beta by Russell Reagan, CCC, September 08, 2003 » Fail-Soft
- Complexity Analyses of Alpha-Beta by Renze Steenhuisen, CCC, October 07, 2003
- Mixing alpha-beta with PN search by Tord Romstad, CCC, January 18, 2004 » Proof-Number Search
- Question for Hyatt about Alpha/Beta by Bob Durrett, CCC, February 05, 2004
2005 ...
- Iterative alpha-beta search? by Andrew Wagner, CCC, January 11, 2006 » Iterative Search
- Trivial alfa-beta question by Jouni Uski, CCC, February 18, 2006
2010 ...
- Dumb question about alpha-beta by Daylen Yang, CCC, March 04, 2014
2015 ...
- Search algorithm in it's simplest forum by Mahmoud Uthman, CCC, February 25, 2015
- Negative alpha/beta windows: Are they useful? by Thomas Dybdahl Ahle, CCC, March 06, 2015
- Stuck on Alphabeta by kuket15, OpenChess Forum, December 07, 2015
- Alpha-Beta woes, textbook-like resources, etc. by Meni Rosenfeld, CCC, January 14, 2016
- Search by Laurie Tunnicliffe, CCC, June 24, 2016
- Alpha-Beta as a rollouts algorithm by Daniel Shawul, CCC, January 25, 2018 » MCαβ, Monte-Carlo Tree Search, Scorpio
- Integer programming from Wikipedia[15]
- The alpha-beta heuristic from Alex Bell (1972). Games Playing with Computers. Allen & Unwin, ISBN-13: 978-0080212227
- Alpha-Beta Search from Bruce Moreland's Programming Topics
- Lecture notes for April 22, 1997 Alpha-Beta Search by David Eppstein
- G13GAM -- Game Theory - alpha-beta pruning by Andy Walker
- Alpha-Beta from How Rebel Plays Chess by Ed Schröder
- Alpha-Beta search from Paul Verhelst's Computer Chess Sites
- The Minimax Algorithm and Alpha-Beta Pruning from The Computer History Museum
- Alpha-Beta Slide Show in 18 steps by Mikael Bodén
- Alpha Beta - Astral Abuse, 1971, YouTube Video
- Alpha Beta are Vilma Lado, Vangelis Papathanassiou, Argyris Koulouris and Giorgio Gomelski
- ↑Alpha Beta - Astral Abuse a look at the music of Vangelis Papathanassiou
- ↑Arthur Samuel (1967). Some Studies in Machine Learning Using the Game of Checkers. II - Recent Progress. pdf, IBM Journal, November 1967. On the name Alpha-Beta Heuristic, pp. 602: So named by Prof. John McCarthy. This procedure was extensively investigated by Prof. John McCarthy and his students at M.I.T. but it has been inadequately described in the literature. It is, of course, not a heuristic at all, being a simple algorithmic procedure and actually only a special case of the more general 'branch and bound' technique which has been rediscovered many times and which is currently being exploited in integer programming research.
- ↑Jaap van den Herik (2001). Science, Competition and Business. ICGA Journal, Vol. 24, No. 4, pdf
- ↑The Dartmouth Workshop--as planned and as it happened
- ↑A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence by John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon, August 31, 1955
- ↑Daniel Edwards and Timothy Hart (1961). The Alpha-Beta Heuristic, AIM-030, reprint available from DSpace at MIT. Retrieved on 2006-12-21.
- ↑Samuel Fuller, John Gaschnig, James Gillogly (1973). An Analysis of the Alpha-Beta Pruning Algorithm. Technical Report, Carnegie Mellon University
- ↑Andrey Ershov, Mikhail R. Shura-Bura (1980). The Early Development of Programming in the USSR. in Nicholas C. Metropolis (ed.) A History of Computing in the Twentieth Century. Academic Press, preprint pp. 44
- ↑Donald Knuth, Ronald W. Moore (1975). An Analysis of Alpha-Beta Pruning. Artificial Intelligence, Vol. 6, No. 4, pp 293–326. Reprinted in Donald Knuth (2000). Selected Papers on Analysis of Algorithms. CSLI lecture notes series 102, ISBN 1-57586-212-3
- ↑McGill University, Winter 1997 Class Notes, Topic #11: Game trees. Alpha-beta search, Diagram by Pui Yee Chan
- ↑John Philip Fishburn (1983). Another optimization of alpha-beta search. SIGART Bulletin, Issue 84
- ↑Homicidal chauffeur problem - Wikipedia
- ↑Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Leela Chess Zero
- ↑Ernst A. Heinz (1999). Scalable Search in Computer Chess. Morgan Kaufmann, ISBN 3-528-05732-7
- ↑William Cook (2009). Fifty-Plus Years of Combinatorial Integer Programming. pdf
Retrieved from 'https://www.chessprogramming.org/index.php?title=Alpha-Beta&oldid=12393'
How Computers 'Think' in Chess
Over time, a number of people have raised interesting questions about computer (artificial) intelligence and chess, what chess engines really do and how, and how far that technology could reach in comparison to human intelligence.
I'd like to offer some thoughts and pointers for those of you who are interested in exploring this topic deeper or even just knowing how engines 'think'. I have graduate degrees in computer science, and I've explored AI (artificial intelligence) in particular at some depth, hence my knowledge of the topic.
So sit back and relax for a quick crash course on chess engines and how far artificial intelligence reaches today:
- Chess engines recognize a number of patterns (typical chess positions); the more patterns and the more refined those patterns are, the higher the quality of the engine. Some of those are endgame patterns (e.g., akin to the Nalimov tablebase), while others are middlegame patterns (e.g., good moves to play against isolated pawns, etc.). They also have vast opening databases, so they 'know' what has been played by strong players before, and how those games have continued and ended.
- Chess engines evaluate a position in part based on the material balance (where each piece has a preset value, which may change based on how developed and active each piece is in the specific game, which is itself a positional characteristic) and in part based on positional characteristics (which come from those patterns I talked about). The base material unit is 1 pawn, which is approximately equal to 1.00, so if you see an engine give an evaluation of +1.00, that means it deems the position of white as equivalent to one where that side has 1 extra pawn (even though all of that advantage may come from positional characteristics, not material ones). Typically, an advantage of +1.00 (or more) is sufficient for that side to win the game, if played correctly, while an advantage below that threshold indicates a likelihood for a draw. (If black has an advantage, the evaluation would be expressed in negative numbers but with the same magnitude, e.g., -0.58.) Naturally, there is no hard line between one and the other, but that's a reasonable rule of thumb to keep in mind.
- Each move has two parts -- White's move and Black's move. Each of those two is called a half-move. In slightly more technical terms, this 'half-move' term is useful in chess engines (as you will find out if you study artificial intelligence) when the engine does what is called 'alpha-beta search and pruning'. The latter is a fancy term for building a tree of possibilities and then evaluating it one step at a time as it traverses that tree from bottom (leaves) to top (root). (In computer science, trees are not like those in nature.) So, each half-move corresponds to one level of depth of your tree. At a given level, the engine attempts to maximize the evaluation function (which corresponds to picking a move that is as strong as possible for you, to increase the evaluation function the most), while at the next level it attempts to minimize that function (which corresponds to picking the opponent's best move, which from your perspective is the worst you can possibly expect to confront -- that's why it's minimizing the function there).
- Modern engines are not primitive, so they cut out some variations early and do not waste time (or memory) to explore them in depth. (This is the 'alpha-beta pruning' of the tree of possible moves.) Sometimes this pruning may result in a perfectly valid sacrifice being ignored (which is why engines tend to not be very good at sac'ing lines), but most of the time that saves a huge amount of time, which can be better spent on evaluating other potentially useful moves. The memory issue is not as severe in fact, since once the engine throws away a given possibility, it clears up and reuses that memory, while retaining only an encoding of the potentially fruitful continuations.
- Engines use heuristics (in human parlance these are rules of thumb -- principles that may or may not be true, but most of the time are true), and the more these heuristics they know about (e.g., difference between rook pawns and other pawns, difference between passed pawns and backward pawns, some strategic and positional considerations, etc.) the stronger the engines can be. This also includes knowledge of typical patterns, which is why computers are good at solving typical chess puzzles -- because puzzles are a distilled form of patterns.
- On the topic of whether engines can get better over time, the answer is 'Absolutely yes.' Both in terms of how deep they are able to think, as well as how truly intelligent they can become (i.e., ability to see and recognize more patterns beyond raw calculation). If you read books on the functioning of the human brain, you'll realize that people in that branch of science believe that even we, humans, learn and act based purely on pattern matching: recognizing patterns in our environment and acting on them, based on subconscious and conscious actions that are programmed in our psyche.
The depth of exploration of a chess engine (unless limited by a human using a specific parameter) depends on the processing speed of your computer (a combination of clock speed, memory size, bus transfer speed, disk seek speed, etc.): a more powerful computer is able to compute more in a given chunk of time. I used to be able to beat some engines at level 5 years ago; now I struggle at level 2 -- that's how far they've come (I assume I am no worse than I used to be).
- As far as whether chess engines will ever be able to express in human terms why they do one thing and not another, that's one of the Holy Grails of artificial intelligence. While with the use of expert systems one can gain some traction in the area, the difficulty comes from the need to reverse engineer their raw calculation into something resembling a thought process (in the end, engines do raw calculation more than anything else and are exceedingly good at it). Humans are for the time being only able to balance somewhat the strength of chess engines based on better intuition and knowledge of patterns (what to look for, what not to look for, etc.) -- but that has its limits, which aren't advancing nearly as fast as machines' calculating power.
Those who are interested in exploring more can look up the topic 'Turing test' (http://en.wikipedia.org/wiki/Turing_test) -- this is a litmus test of intelligence: in layman's language, it's about whether a computer could fool a human into being unable to distinguish between human-generated and computer-generated answers to a set of questions that the human poses. Clearly, we're not quite there yet, but it's getting closer with time.
- As to whether it's good or not to use chess engines:
GMs get a strong buddy for little or no money when they use an engine. It's not easy to find someone at your level when you're a very strong player and want to train, say, in the middle of the night, or prepare a novelty for tomorrow's tournament game even while you're asleep! Engines can do that sort of thing quite well if you give them enough time. Also, they can be very good at hinting possible hidden resources, which to a master-level player is sufficient (they can figure out the meaning behind a move because of their vast experience, if you only give them a slight hint, with no need for an in-depth explanation).
With casual players, however, this is often not the case, so a strong engine is much less helpful to someone who is still learning the basics of the game, because they need a real coach instead, someone who can train them to see and phrase the ideas and strategies in human-understandable language.
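The 'alpha-beta search and pruning' described in the half-move bullet above boils down to a small alternating recursion. Here is a minimal minimax sketch in C over a toy four-leaf tree; the tree, the leaf evaluations (in hundredths of a pawn, from White's view), and all names are purely illustrative:

```c
/* Toy game tree: root = node 1, node n has children 2n and 2n+1; the
   four leaves (nodes 4..7) carry hypothetical evaluations in
   "centipawns" (hundredths of a pawn) from White's point of view. */
const int leafEval[4] = {100, -200, 0, 400};

/* One level maximizes (our move), the next minimizes (the opponent's
   best reply) -- one half-move per level of depth. */
int minimax(int node, int depth, int maximizing) {
    if (depth == 0) return leafEval[node - 4];
    int best = maximizing ? -100000 : 100000;
    for (int child = 2 * node; child <= 2 * node + 1; child++) {
        int score = minimax(child, depth - 1, !maximizing);
        if (maximizing ? score > best : score < best) best = score;
    }
    return best;
}
```

Here the side to move at the root can guarantee an even position (0.00): on the branch it picks, the opponent's best reply leads to the 0 leaf rather than the +400 one. Alpha-beta pruning adds the two window bounds to exactly this recursion so that branches which cannot affect the result are skipped.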
- Chess engines evaluate a position in part based on the material balance (where each piece has a preset value, which may change based on how developed and active each piece is in the specific game, which is itself a positional characteristic) and in part based on positional characteristics (which come from those patterns I talked about). The base material unit is 1 pawn, which is approximately equal to 1.00, so if you see an engine give an evaluation of +1.00, that means it deems the position of white as equivalent to one where that side has 1 extra pawn (even though all of that advantage may come from positional characteristics, not material ones). Typically, an advantage of +1.00 (or more) is sufficient for that side to win the game, if played correctly, while an advantage below that threshold indicates a likelihood for a draw. (If black has an advantage, the evaluation would be expressed in negative numbers but with the same magnitude, e.g., -0.58.) Naturally, there is no hard line between one and the other, but that's a reasonable rule of thumb to keep in mind.
- Each move has two parts -- white's move and black's move. Each of those two is a called half-move. In slightly more technical terms, this 'half-move' term is useful in chess engines (as you will find out if you study Artificial Intelligence) when the engine does what is called 'alpha-beta search and pruning'. The latter is a fancy term for building a tree of possibilities and then evaluating it one step at a time as it traverses that tree from bottom (leaves) to top (root). (In computer science trees are not like those in nature.) So, each half-move corresponds to one level of depth of your tree. At a given level, the engine attempts to maximize the evaluation function (which corresponds to picking a move that is as strong as possible for you, to increase the evaluation function the most), while at the next level it attempts to minimize that function (which corresponds to picking the opponent's best move, which from your perspective is the worst you can possibly expect to confront -- that's why it's minimizing the function there).
- Modern engines are not primitive, so they cut out some variations early and do not waste time (or memory) to explore them in depth. (This is the 'alpha-beta pruning' of the tree of possible moves.) Sometimes this pruning may result in a perfectly valid sacrifice being ignored (which is why engines tend to not be very good at sac'ing lines), but most of the time that saves a huge amount of time, which can be better spent on evaluating other potentially useful moves. The memory issue is not as severe in fact, since once the engine throws away a given possibility, it clears up and reuses that memory, while retaining only an encoding of the potentially fruitful continuations.
- Engines use heuristics (in human parlance these are rules of thumb -- principles that may or may not be true, but most of the time are true), and the more these heuristics they know about (e.g., difference between rook pawns and other pawns, difference between passed pawns and backward pawns, some strategic and positional considerations, etc.) the stronger the engines can be. This also includes knowledge of typical patterns, which is why computers are good at solving typical chess puzzles -- because puzzles are a distilled form of patterns.
- On the topic of whether engines can get better over time, the answer is 'Absolutely yes.' Both in terms of how deep they are able to think, as well as how truly intellingent they can become (i.e., ability to see and recognize more patterns beyond raw calculation). If you read books on the functioning of the human brain, you'll realize that people in that branch of science believe that even we, humans, learn and act based purely on pattern matching: recognizing patterns in our environment and acting on them, based on subconscious and conscious actions that are programmed in our psyche.
The depth of exploration of a chess engine (unless limited by a human using a specific parameter) depends on the processing speed of your computer (a combination of clock speed, memory size, bus transfer speed, disk seek speed, etc.): a more powerful computer is able to compute more in a given chunk of time. I used to be able to beat some engines at level 5 years ago; now I struggle at level 2 -- that's how far they've come (I assume I am no worse than I used to be).
- As for whether chess engines will ever be able to express in human terms why they do one thing and not another, that is one of the Holy Grails of artificial intelligence. While expert systems offer some traction in this area, the difficulty lies in reverse engineering the engine's raw calculation into something resembling a thought process (in the end, engines do raw calculation more than anything else, and are exceedingly good at it). For the time being, humans can only partly offset the strength of chess engines through better intuition and knowledge of patterns (what to look for, what not to look for, etc.) -- but that has its limits, and those limits aren't advancing nearly as fast as machines' calculating power.
Those interested in exploring further can look up the 'Turing test' (http://en.wikipedia.org/wiki/Turing_test) -- a litmus test of intelligence: in layman's terms, it asks whether a computer could answer a human's questions in such a way that the human cannot distinguish its answers from those of another human. Clearly, we're not quite there yet, but it's getting closer with time.
- As to whether it's good or not to use chess engines:
GMs get a strong training buddy for little or no money when they use an engine. It's not easy to find someone at your level when you're a very strong player and want to train, say, in the middle of the night, or to prepare a novelty for tomorrow's tournament game even while you're asleep! Engines can do that sort of thing quite well if you give them enough time. They can also be very good at hinting at possible hidden resources, which for a master-level player is sufficient (given only a slight hint, they can figure out the meaning behind a move thanks to their vast experience, with no need for an in-depth explanation).
With casual players, however, this is often not the case, so a strong engine is much less helpful to someone who is still learning the basics of the game, because they need a real coach instead, someone who can train them to see and phrase the ideas and strategies in human-understandable language.
- To understand the similarities and differences between human chess players and modern computer engines, one can fairly assume (an assumption close to reality) that a modern chess computer/engine resembles a human chess player with all of the following characteristics:
- highly disciplined (i.e., reliably consistent in the decisions it makes);
- completely unemotional (i.e., a loss and a win are simply two states, neither of which affects the machine's ability to keep playing equally strongly and consistently; also, while a computer's strength of play depends on various parameters, none of them has anything to do with who the opponent is, in the sense of any hierarchies or personality cults);
- extremely knowledgeable (i.e., has full and unclouded knowledge of openings, endgame positions, middle game strategic heuristics, and other patterns that have been programmed into it);
- very strong (i.e., capable of beating even GMs, for the most advanced engines);
- makes almost no calculation errors, except those due to the limited window of how many moves ahead it's programmed (or allowed by practical time constraints) to look;
- reliant solely on logical inferences based on pre-existing heuristics and memory (of positions, openings, patterns), but has no substitute for human intuition ('gut feeling');
- (generally) unable to learn from its own mistakes by itself, or to derive new knowledge without explicit new programming by humans.
These last two aspects are humans' greatest enduring advantage over computer chess programs. They are the fundamental reasons the strongest GMs can still occasionally beat computers (though humans are increasingly wary of trying, for fear of what might happen to their image if they fail).
Note: a lot of additional content has been added in the Q&A section below in response to readers' specific questions. Be sure to look through that section for more interesting aspects of the ins and outs of computer engines.
I hope this article has been useful. Please ask further questions in the comments and I'll be glad to offer my perspective, based on what I know on the topic.