11

A similar question asks whether a computer can learn to play chess optimally by analyzing thousands of games.

If a machine can observe the state of the board for a few games of chess (or a few games of checkers), both at the start and after each move, can it be programmed to learn the rules of the game?

If it can, to what extent would this work (e.g., would it be able to account for castling or promotion)? Which machine learning algorithms would make this possible?

Yktula
  • 3
    The machine should be able to get to a state where it can say "I have seen this move performed, so I'll assume I can perform it under similar circumstances." Whether that constitutes "learning the rules" is almost a philosophical question. ;) – deceze Jul 12 '11 at 11:13
  • @deceze Well, you are not entirely right. By learning the rules, one means being able to make a move in a board layout the program has never encountered before. Otherwise it is not learning the rules but memorizing the moves. –  Jul 12 '11 at 11:17
  • 2
    @Max Would the computer be able to correctly make moves it has never seen before? Say, it has seen the knight move two forward and one to the side many many times, but never two back, one to the side. How could it confidently say what the rules are regarding the movement of knights? Maybe there's a special clause in the rules that states "knights can't move back, only forward" (just like pawns). Hence deducing rules with certainty seems like an impossible thing. That pretty much goes for humans as well. – deceze Jul 12 '11 at 11:27
  • 8
    @Max In fact, there *could* be an infinite number of rules the computer cannot *deduce*. `On the 8th turn the knight may not turn right.` `On a sunny day the pawns may jump over bishops.` The computer can't say *why* that pawn *didn't* move on a particular turn. Maybe there was a rule against it moving in that particular circumstance. Hence, the computer should only be able to reproduce similar (actually, exact) patterns, but never be able to *deduce the rules* with 100% confidence. – deceze Jul 12 '11 at 11:38
  • @deceze Yes it can, if you show that rule to it somehow. Just as you need to teach a human the rules (for example, give him a printout of the rules to read), you need to somehow provide those rules to the machine. The easiest way is to have ALL of the rules exercised across some number of games for the machine to learn from. The same goes for humans: a human won't know that `On a sunny day the pawns may jump over bishops.` if you have never shown that to him. –  Jul 12 '11 at 11:43
  • @deceze: I love your 'jumping pawns' way of explaining that. – sehe Jul 12 '11 at 11:44
  • @Max But that wouldn't satisfy the OP's question now, would it? The question is: Can a machine *learn the rules simply by observing patterns?* Showing it all the possible moves gets you to what I commented on first: the machine has seen this move, so it can make it. That doesn't mean it has *learned the rules*. – deceze Jul 12 '11 at 11:46
  • @sehe @deceze No, I'm just trying to say that as long as a machine was `able` to learn (read: observe) a rule, it is possible to make it learn it. The example deceze gave posits 'hidden' rules that the machine never gets to observe, which by definition makes them impossible to learn, for machines and humans alike. @deceze Can a human *learn the rules* by observing the game? –  Jul 12 '11 at 11:49
  • @sehe: It actually is feasible to make that kind of program. Answer me this: you are on an alien ship watching some weird 4D game they are playing. You watch a zillion games. Then you bring your knowledge back and show it to other humans. How will your rules look compared with the original rules the aliens play by? The same thing works here. Take a Kohonen network as a simple example. Make it learn 2 symbols, like 'A' and 'B'. Then make a program that checks which pixels have the biggest influence on deciding the answer. There you go: the rules of OCRing the symbols 'A' and 'B'. Aren't they rules? –  Jul 12 '11 at 12:00
  • I'm glad I read this question. It'd be neat to tell the program the basic mechanics of the game (these may vary; I'd say that pieces make discrete moves to squares, that occupying the same square as an opponent captures that piece, and that capturing the king wins the game, forgoing 'checkmate' and simplifying it to a capture), and then have the software attempt to play you. It won't know what to do, so you either tell it what to do first, or it asks you whether it can do something and you tell it yes or no. If it can learn what works and what doesn't, it might get as good as you. – Carson Myers Jul 12 '11 at 12:39
  • Teaching it to play is the easy part. Teaching it ["the only winning move is not to play"](http://www.youtube.com/watch?v=NHWjlCaIrQo) is the hard part. – Tester101 Jul 14 '11 at 20:20
  • @all: Quoting SyntaxT3rr0r: [`"as for a reference, the Wikipedia entry is actually quite good: en.wikipedia.org/wiki/Solving_chess Once again that chess is solvable is not up for debate. However there's some dispute as to if one day we'll be able to build a computer fast enough to solve it or not: some people claim the combinatorial explosion is too big to be solved in our universe, others thinks one day it shall be doable, around year 2250 or so."`](http://stackoverflow.com/questions/2717476/statistical-approach-to-chess/2717887#2717887), **More notable** references precede that comment. – sehe Jul 18 '11 at 08:44
  • Could a human learn chess just from observing games? My thought is that they would not learn all the specific rules of the game. For example, how would a human learn that a pawn cannot go backwards? Just because it never happens does not necessarily mean it's not possible to do so. – Matthew Jan 15 '14 at 16:00

9 Answers

10

> If a machine can observe the state of the board for a few games of chess (or a few games of checkers), both at the start and after each move, can it be programmed to learn the rules of the game?

Certainly not from a few games of chess; you'd need to analyse an incredibly large number of them to stop it from making invalid moves. How many, I don't know; this problem belongs to the areas of computational learning theory, PAC learning, and learnability in the limit.
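
For a very rough sense of scale: for a finite hypothesis class H (a finite set of candidate rule books, the true one among them), a learner that keeps only the hypotheses consistent with the observed moves (drawn i.i.d. from the distribution the error is measured under) is, with probability at least 1 − δ, within error ε of the true rules after seeing

```latex
m \;\geq\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

examples. The number of examples grows with the logarithm of the number of candidate rule books, and any hypothesis class rich enough to express castling, en passant and promotion is astronomically large.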

The classification algorithms suggested by the other posters might be able to discriminatively learn chess rules: given two board setups, they might answer "yes" or "no" to the question of whether a valid move transforms one into the other. With some effort, they might also be used to generate moves. They will then, however, either only generate moves they've seen in the games they've been trained on, or generate a combination of valid and invalid moves, each with a score stating how probable they consider the move in question, with invalid moves hopefully getting very small probabilities.

(I.e., either the program would not recognize some valid moves at its disposal; or you might be able to cheat without it noticing; or you'd have to train it for so long that you lose all interest in the game.)
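
To make the discriminative idea concrete, here is a minimal sketch in Python with scikit-learn. The `encode` helper and the two hand-made training examples are placeholders; note also that real game records only ever supply valid moves, so the negative examples have to come from somewhere else:

```python
from sklearn.linear_model import LogisticRegression

def encode(before, after):
    """Toy feature vector: the two 64-character board strings, square by square."""
    return [ord(c) for c in before + after]

# (board_before, board_after, is_valid) triples. Real training data would come
# from observed games, plus some artificial source of invalid moves, since the
# games themselves only ever show valid ones.
observations = [
    ("R" + "." * 63, "." + "R" + "." * 62, 1),      # rook slides along the rank: valid
    ("R" + "." * 63, "." * 9 + "R" + "." * 54, 0),  # rook moves diagonally: invalid
]

X = [encode(before, after) for before, after, _ in observations]
y = [label for _, _, label in observations]

clf = LogisticRegression().fit(X, y)

# predict_proba is the "score stating how probable the move is considered":
print(clf.predict_proba([encode("R" + "." * 63, "." + "R" + "." * 62)]))
```

With two hand-made examples this obviously proves nothing; the point is only the interface: a pair of positions in, a probability of validity out.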

For techniques that can learn rules, check out inductive logic programming and genetic programming. I'm not sure if anyone has ever tried to apply them to chess learning; since the rules of chess are fixed, it's much more interesting (even for academia) to build good chess playing programs rather than ones that have to learn the basic rules from scratch.

Fred Foo
4

The rules of chess are pretty complex, and some are very rarely exercised in a game.

For instance, the en passant rule: how many games would you need to observe in order to deduce that the capture is only allowed on the move immediately after the opposing pawn's two-step advance?

Another example: the b-square may be attacked during long castling. How many games do you see where this happens?

In other words, you will need many, many, many games to derive all the rules correctly.
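
If you want to measure that rarity rather than guess at it, a small sketch with the python-chess library over a PGN collection (the file name here is a placeholder) would look roughly like this:

```python
import chess.pgn

ep_captures = 0
total_moves = 0

with open("games.pgn") as pgn:
    while True:
        game = chess.pgn.read_game(pgn)
        if game is None:
            break
        board = game.board()
        for move in game.mainline_moves():
            if board.is_en_passant(move):
                ep_captures += 1
            total_moves += 1
            board.push(move)

print(f"{ep_captures} en passant captures in {total_moves} moves")
```

Even then, the count only tells you how often the capture *was* played; a position where it had just become illegal again (because an unrelated move intervened) looks exactly like a position where the player simply declined it.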

But, perhaps, Google will find a corner of their cloud for a complete archive of chess games "soon"...

  • "The b-square can be attacked in long castling" what move is this? Can you explain it or link it for me? – CaffGeek Jul 12 '11 at 14:53
  • @chad, also called castling queenside - the notation "0-0-0" is used. It is only the three squares the king "touches" that may not be under attack. –  Jul 12 '11 at 15:12
  • 2
    Also, the d-square **cannot** be attacked when castling queenside *(or the f-square when castling kingside)*; you also cannot castle after moving your king. How are you supposed to know that something *can't* be done by observing games where it *wasn't* done? As Lars mentions, the computer could assign probabilities that a move is valid, but ascertaining that specific rule by observing games is bordering on impossible. – BlueRaja - Danny Pflughoeft Jul 12 '11 at 16:46
  • Also, there have been games, even by grandmasters in a tournament setting, where the rules have been *broken*, and neither player noticed! According to tournament rules, if neither player notices within 10 moves, the game is valid and continues. *(I very clearly remember a game where a grandmaster moved his king, then moved it back, and later castled. I cannot find it now, though)* – BlueRaja - Danny Pflughoeft Jul 12 '11 at 16:55
  • Regardless of whether the castling rule you describe is valid or not, +1 for the *en passant* rule. That one's very hard to observe and generalize from; the same goes for castling, which is a complicated rule. – Fred Foo Jul 14 '11 at 12:51
  • @larsmans, it is valid. Why do you doubt that? –  Jul 14 '11 at 13:12
  • Only because I don't know the rules of chess well enough. I'm not doubting your knowledge of them. Sorry for giving that impression. – Fred Foo Jul 14 '11 at 14:46
  • @larsman also, the difficulty of the rules shows in the order they are taught. En passant usually comes very late (and was most likely created to balance the pawns-can-move-two-on-their-first-move rule, which in turn was most likely made to speed the game up). –  Feb 11 '12 at 13:36
  • 1
    en passant is rare enough but there are some forced draw rules that are *FAR* more rare. I've been playing for 40 years and I've never seen a game where you could have a forced draw from having 50 moves with no pawn moves and no captures. I think there's one more such rule that I don't even recall. In practice players are going to agree to a draw before they're invoked. (They exist to ensure that you can't "win" a drawn position by dragging it out until your opponent quits.) – Loren Pechtel Dec 27 '13 at 21:54
2

Yes and No

I'm not sure whether you're asking if it's possible to learn the rules of chess using machine learning/neural networks, or whether it's possible to train a "grandmaster"-level chess machine that way.

You can certainly teach a computer the rules, and some level of chess play, using those techniques. However, I do not think you can feasibly train it to a higher level that way. Computer science has, as far as I know, so far failed to produce a machine that can "understand" chess from a positional/intuitive point of view. All current chess computers use an extensive database, brute-force calculations, and possibly machine learning on top of that.

I guess it depends on whether you count using a reference database as cheating or not :) Also, it's hard to know whether you can actually separate knowing lots of games from being good at the game. Humans who are good at chess are good precisely because they've seen so many games, which get recalled by the part of the brain typically known for recognizing faces. From this recall, human chess players seem able to develop an "intuition" for the strength of a position.
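
For contrast with the learning approaches in the question, the brute-force side mentioned above is usually some form of negamax search with alpha-beta pruning. Here is a minimal, runnable sketch on a toy take-away game (not chess), just to show the shape of the search:

```python
def legal_moves(stones):
    """Toy game: take 1-3 stones from a pile; whoever takes the last stone wins."""
    return [n for n in (1, 2, 3) if n <= stones]

def negamax(stones, alpha=float("-inf"), beta=float("inf")):
    """Return the game value from the point of view of the side to move."""
    if stones == 0:
        return -1                      # no stones left: the side to move has lost
    best = float("-inf")
    for take in legal_moves(stones):
        score = -negamax(stones - take, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:              # prune: the opponent will avoid this line anyway
            break
    return best

print(negamax(4))   # -1: with 4 stones the side to move loses against best play
```

A chess engine replaces the toy rules with a real move generator, cuts the search off at some depth with an evaluation function, and bolts the opening database on around it.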

Homde
  • the best grandmasters memorize vast opening books and that's not considered cheating. And anyway, it happens all the time that what was once considered a good opening variation gets busted by a new idea. So the "cheat" of playing book moves may not even be the optimal way for a computer to play. – Kevin Feb 11 '12 at 02:29
2

If you think of the rules of chess as the physics engine of the game, and the laws of physics as the permanent and unbending accepted rules of the universe, then think about how few actual "Laws" there are in our natural universe. It is EXTREMELY hard, if not impossible, to establish a hard and fast RULE, but we can hold a number of tested and accepted theories as of a specific point in time.

It COULD be possible, assuming that the computer observes and logs moves but does NOT make assertions about valid moves directly.

For example, it would observe a pawn moving forward one square and form new hypotheses: one that all pieces can move forward only one square, and another that a pawn can move forward only one square. It would form as many restrictive hypotheses as possible until the next move occurs, at which point a number of them can be thrown out or made more liberal.

Eventually, after enough moves, you will have a set of solid theories, but it will remain a living dataset, with its uncertainty permanently approaching zero but never reaching it.

So the answer is that a computer could make a really good guess, but it will never know the rules for sure.
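
A minimal sketch of that observe-and-eliminate process (the hypothesis space here is a deliberately tiny, made-up set of candidate pawn rules, nothing like the space a real learner would need):

```python
# Each hypothesis says how far a pawn may move forward in one turn.
hypotheses = {
    "pawn moves exactly 1": lambda dist: dist == 1,
    "pawn moves 1 or 2":    lambda dist: dist in (1, 2),
    "pawn moves any dist":  lambda dist: dist >= 1,
}

observed_pawn_moves = [1, 1, 2, 1]   # distances seen in watched games

for dist in observed_pawn_moves:
    # Throw out every hypothesis the new observation contradicts.
    hypotheses = {name: rule for name, rule in hypotheses.items() if rule(dist)}

print(sorted(hypotheses))
# ['pawn moves 1 or 2', 'pawn moves any dist']: both survive forever, because
# no number of positive observations can separate them; only a counterexample could.
```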

maple_shaft
2

As many of the comments have said, this is almost a philosophical question about the definition of 'learn'. Most artificial intelligence programs rely on determining rational solutions. Given enough data, a learning chess AI will determine a list of moves that are rational to make in certain situations. This does not mean that it knows the rules of chess; it merely understands which moves are beneficial and which ones are not. Even if the data set includes players making illegal moves, an illegal move will cause an instant loss, so the AI will ignore it and never use that move, because it is never beneficial.

It does not matter whether neural networks, evolutionary algorithms, or any other kind of learning algorithm is used: an AI can never explicitly learn rules from watching something; it can only determine a list of rationally beneficial options.
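
As a sketch of what "a list of rationally beneficial options" looks like in practice, with made-up game records standing in for real data:

```python
from collections import defaultdict

# Placeholder records: (position_key, move_played, mover_eventually_won)
records = [
    ("start", "e2e4", True),
    ("start", "e2e4", False),
    ("start", "d2d4", True),
]

played = defaultdict(int)
won = defaultdict(int)
for position, move, mover_won in records:
    played[(position, move)] += 1
    won[(position, move)] += mover_won

for key in played:
    print(key, won[key] / played[key])   # a usefulness score per observed move, never a rule
```

Moves that never appear in the records never get a score at all; the program cannot tell whether they were illegal or merely unpopular.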

SC Ghost
1

This is a philosophical question. You might as well ask whether a person could learn to play chess solely by observing people while they play chess. In fact, it is basically the same kind of question Nelson Goodman asks in his great book *Fact, Fiction, and Forecast*: how can we move from a finite set of observations already made to a prediction of future observations? The observations already made would be the observed chess moves so far, and the future observations would be all chess moves that haven't happened yet. The question is whether there is a nomological relation between past observations and future observations (as opposed to the purely causal relation between past events and future events).

If we interpret the word nomological as meaning by a law of nature or logic, such that nothing can ever happen in violation of this law, then there is certainly no such relation, since the first person to move a castle diagonally would break that law of nature, and the universe as we know it would collapse.

But even if, by some freak accident of nature, every move any chess player in the world made from now on were valid (nobody would ever make any mistakes or try to cheat, and even people clueless about the rules of chess could start pushing chess pieces around randomly across the board, yet accidentally always according to the rules), that would not convince us that there is a law of nature (or a law of logic) that forced all of this. We'd consider it purely accidental.

Ludwig Wittgenstein covered similar ground in his *Philosophical Investigations*. He insists that any set of observations is in accordance with arbitrarily many, and even conflicting, rules. For example, if all the chess games I had observed happened in the afternoon, then my rule could be *in the afternoon the bishop may only be moved diagonally*. That the time of day is unimportant to the game is something I couldn't have observed, since I haven't observed chess games at other times of the day. Or, likewise, if I have never observed a woman playing chess, then the rule could be *the bishop may only be moved by men*. What is relevant to an observation and what is not is determined as a prerequisite to the observation and cannot be part of the observation itself.

BTW: Wittgenstein's solution to the problem is quite similar to that of Goodman. I won't spoil the surprise, though ;-)

Addendum:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied. "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.

pillmuncher
1

In theory, yes, it can. It can even become a grandmaster at chess. The answer you are looking for is neural networks. Neural networks are, in essence, the same thing that happens inside our brain. Moreover, given a perfectly (read: impossibly perfectly) designed neural network and perfect hardware, it can learn everything a human being can learn, in the same ways or even better.

Read more about it:
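
For what it's worth, a minimal sketch of a neural network picking up a movement pattern from examples (scikit-learn's `MLPClassifier` on made-up knight-move data; purely illustrative, not a chess learner):

```python
from sklearn.neural_network import MLPClassifier

# Made-up data: (file_delta, rank_delta) of observed piece displacements,
# labelled 1 if it was a knight move and 0 otherwise.
X = [(1, 2), (2, 1), (-1, 2), (-2, -1), (1, 1), (0, 3), (2, 2), (0, 1)]
y = [1, 1, 1, 1, 0, 0, 0, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# The net may or may not generalise to a displacement it has never seen;
# that is exactly the certainty problem debated in the comments on the question.
print(net.predict([(2, -1)]))
```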

bezmax
  • 2
    This "answer" is just a bunch of Wikipedia links, two of which describe the same concept, one of which is extremely broad, and one of which has nothing to do with the problem at hand. Please show how a Kohonen map would be used to effectively learn the rules of chess. – Fred Foo Jul 12 '11 at 11:48
  • 1
    ANNs and NNs are nowhere near the complexity of an actual neuron or the human brain; an ANN is at best a hideously crude approximation. – Darknight Jul 12 '11 at 15:48
  • "In theory, theory and practice are the same, but in practice they never are." Chess programs can play like grandmasters, but I don't think it's because of neural networks or machine learning. Most good programs use vast opening books, endgame tablebases, a depth-first search ("negamax with alpha-beta pruning"), and probably a very complicated evaluation function (which most likely is written with the help of grandmasters). – Kevin Feb 11 '12 at 02:24
  • Neural networks can only learn what they are trained to do. Chess has some rules which are very rarely applied but are still part of the game. Would it be legal to promote to a bishop? Would it be legal to promote to a king? –  Feb 11 '12 at 13:34
  • 2
    Ha...promoting to king makes checkmate a little harder! – Kevin Feb 11 '12 at 16:42
1

I think it can learn the moves it is allowed to make just by analyzing games, but how would it learn the moves it is not allowed to make? For example, the pawn never seems to move forward one square when an opposing piece is in front of it. How does the computer know whether this is by player choice or because it is not allowed? You can come up with some algorithm that says that if an event fails to happen 99.99% of the time or more, then you aren't allowed to do it; but it could also be that 99.99% of the time it is simply considered a bad move, while 0.01% of the time it is the move that wins the game. So my answer is no, it can't learn all the rules simply by analyzing games, but it can probably learn enough to play a game.
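
A small sketch of that threshold idea, and of how it misfires (the counts are made up):

```python
def looks_forbidden(times_possible, times_played, threshold=0.9999):
    """Guess 'illegal' when a candidate move is almost never played."""
    return times_played / times_possible < 1 - threshold

# Blocked pawn push: genuinely illegal, never played.
print(looks_forbidden(times_possible=50_000, times_played=0))   # True

# Ugly but legal sacrifice: played once in 50,000 chances.
print(looks_forbidden(times_possible=50_000, times_played=1))   # also True, wrongly flagged
```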

Dunk
-4

No

Because simply learning patterns (via whichever method) is not the same as "learning to play chess optimally".

This requires planning and strategy, which is very distinct from merely learning set patterns.

Darknight
  • 1
    You don't actually know that; besides, the question is whether it is possible (for a computer, but the same constraints would probably also apply to humans) to learn the actual rules of chess merely by analyzing example games. – tdammers Jul 12 '11 at 13:58
  • 1
    I think you misunderstood the question... – maple_shaft Jul 12 '11 at 14:29
  • After working with AI (neural networks) for my master's thesis, I'm fairly sure it can't. Unless you have evidence to say otherwise, you can't simply say "you don't know that". – Darknight Jul 12 '11 at 14:40