Chess: programs familiar to all from machines such as the Chess Challenger. A lot of progress from big programs: Belle gained US Master status in 1983; Deep Thought in 1989 achieved a US Chess Federation rating of 2500, approaching Grand Master level. See the article on Computer Chess in the Encyclopaedia of AI.
Backgammon: Hans Berliner's program defeated the world champion, Luigi Villa, in 1979. There have been many improvements since. See ``Computer Backgammon'' by Berliner, in the June 1980 Scientific American.
Go: demands more intelligence than chess programs have. The latter rely on brute-force search and fast computers: most chess programs search the tree of possible moves five or six levels deep. Eventually the branching factor will overwhelm even them - an example of combinatorial explosion. But Go has many more possible alternative moves from any position - perhaps 150-200 options, as against about 25 for a typical mid-game chess position - so Go programs can only search one level deep. This means that they must select carefully which alternative moves to consider, and evaluate each in terms of global strategy. Hence - say many Go players - a successful Go program will probably ``see'' the board in terms of eyes and other patterns, and react to these as an expert would.
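The arithmetic behind the combinatorial explosion is easy to check. A rough sketch, using the illustrative branching factors above (25 for mid-game chess, 200 for Go):

```python
def positions(branching_factor, depth):
    """Rough count of positions a full search examines: b to the power d."""
    return branching_factor ** depth

# Chess: ~25 moves per mid-game position, searched 5 levels deep.
chess = positions(25, 5)   # about 9.8 million positions
# Go: ~200 moves per position -- already worse at only 3 levels.
go = positions(200, 3)     # 8 million positions at depth 3
```

Each extra level multiplies the work by the branching factor, which is why shallow-but-selective search is forced on Go programs.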
As Berliner points out, the same applies to backgammon, and his article is a nice introduction to the subject of intelligent searching.
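The brute-force search mentioned above is usually some form of depth-limited minimax. A minimal sketch - the game interface (`legal_moves`, `apply`, `evaluate`) is hypothetical, standing in for whatever move generator and evaluation function a real program uses:

```python
def minimax(state, depth, maximizing, game):
    """Depth-limited minimax over a game tree.

    `game` supplies the (hypothetical) legal_moves, apply and
    evaluate functions; the search bottoms out at `depth` == 0.
    """
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)
    values = [minimax(game.apply(state, m), depth - 1, not maximizing, game)
              for m in moves]
    return max(values) if maximizing else min(values)
```

Intelligent searching, in Berliner's sense, is about avoiding exactly this exhaustive enumeration of moves at every level.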
So how do humans play games? This concerns AI-as-engineering because game-playing machines sell (and the methods might apply to other kinds of problem); it concerns AI-as-psychology for the intrinsic interest of knowing the mechanism.
De Groot: one of the earliest to ask this question. Chess masters and weaker players differ in performance - what is the difference in mechanism? Perhaps a difference in search methods: depth of search, or number of alternatives considered for each move? Experimental evidence suggests not: de Groot found these roughly the same for masters and weaker players.
But although masters and weaker players consider the same number of moves, masters consider better moves! Masters are also better at remembering (correct) chess positions. Instead of perceiving them as collections of individual pieces, they appear to work in larger chunks, such as pawn chains and castled-King positions. These units are probably strategically significant, and may serve to trigger suitable moves for further consideration. See the article on Human Chess Skill by Neil Charness in Chess Skill in Man and Machine, edited by Frey (2nd ed 1984; PSY KH:F 089).
Several important themes of AI arise from this:
How can an agent apply its knowledge to cut down the number of possibilities it must search through? What's the best way to store that knowledge? Can clever indexing methods reduce the time taken to find items?
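One answer suggested by the chunking evidence: index candidate moves directly under the patterns that trigger them, so retrieval is a lookup rather than a scan of every legal move. A hypothetical sketch - the pattern names and move descriptions are invented for illustration:

```python
# Hypothetical index: strategically significant patterns (chunks)
# mapped to the moves they suggest for further consideration.
pattern_index = {
    "pawn_chain_kingside": ["advance g-pawn", "reroute knight"],
    "castled_king": ["keep pawn shield", "double rooks"],
}

def candidate_moves(patterns_seen):
    """Collect moves triggered by recognised patterns.

    Unrecognised patterns simply contribute nothing, so the cost
    depends on the patterns seen, not on the size of the index.
    """
    moves = []
    for p in patterns_seen:
        moves.extend(pattern_index.get(p, []))
    return moves
```

The design choice here is the one the question points at: knowledge stored under good keys turns search through possibilities into near-constant-time retrieval.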