Mig Greengard's ChessNinja.com

Chess in the Mags

Chess seems to be everywhere lately, at least in the newspapers and magazines I read. First we had op-eds on chess in the New York Times. This month there is a long article on Kasparov (politics, not chess) in The Atlantic. Now there is a nice piece on computer chess by Tom Mueller in this week's New Yorker. (Dec. 12 issue. This article not available online, unfortunately. Go pick it up.) Not too much new information for this crowd, but some good interview clips with Hydra programmer Chrilly Donninger and the Junior team of Amir Ban and Shay Bushinsky. Even my ChessBase chum Freddy Friedel gets a nod at the end. The Adams-Hydra match gets attention, and the programmers express boredom with playing against humans and credit their machines with playing chess that is more creative than that played by humans.

That last is a bit of devil's advocacy, but I suppose it's true if creativity is defined as a lack of dogma. By definition computers don't have entrenched prejudices that prevent them from finding objectively superior moves, as often happens with humans. But no matter how well they play, I will always take issue with any claim that computer chess implies intelligence. Chess is but a sophisticated equation, and from a human perspective (i.e., beating humans) it has been almost completely broken by the brute force of today's computer technology. This is not surprising and not quite yet horrifying.

This leads to a separate entry, but I've been getting a fair amount of mail about computer chess lately thanks to a program you've never heard of. "Rybka" is all the rage with the computer chess geeks this week. You'd think it was the second coming of Deep Blue meets Petrosian. It may well be! More on that in a bit.

65 Comments

Mig,
You're being too hard on the machines. I'm sure you've heard of the Turing Test. Basically, if it waddles like a duck, and quacks like a duck, then it is a duck! So, if a machine plays intelligent moves, then it is intelligent. It doesn't matter how it is doing this.

Ciao,
Jonathan

Jonathan, I don't think you have an understanding of the Turing test. There is no Turing test for chess, or for math, or for cartography, just a Turing test for intelligence in general. When a computer bums a smoke in return for buying someone a coffee, sits down, and talks about how it was feeling while playing the game it had just played, then it would be passing the Turing test...

Chess is just not a good foundation for the Turing Test. You could equally claim that a computer is intelligent if it can multiply numbers.
In the original Turing Test, the machine has to maintain a conversation in a natural language on various topics that are unknown in advance, which is infinitely more difficult than performing a clearly defined task such as playing chess.

http://en.wikipedia.org/wiki/Turing_test

Yo, mig! Have you seen this one from the LA Times / Chicago Tribune: http://www.chicagotribune.com/features/chi-0512050206dec06,1,4345728.story

Jonathan: in any case the Turing test is not an uncontroversial or universally accepted "test for intelligence". In Turing's original article discussing it, he started off by talking about a game in which men and women converse by typing and women try to simulate men's sentences or vice versa. But if a man was successful in fooling the other participants into thinking that he was a woman, it wouldn't mean that he WAS a woman or "had femininity" - it would just mean that he was good at simulating it. By analogy, a computer that is able to simulate "human intelligence" doesn't necessarily have it.

That doesn't mean that I don't think intelligent machines are possible - I do. But I think we would need to think harder about what we mean by intelligence. In fact, I think we would be unlikely to give anything real credit for intelligence unless we first became convinced that it had an ego. Now, if Hydra starts boasting of its great moves, begins plotting to run FIDE, etc. .....!

I watched the live commentary by Shipov on chesspro.ru of the Hydra-Ponomariov game. It went something like this:

"Oh, Hydra doesn't know that in this structure white should attack on the kingside. Look, the comp played b2-b3, advancing on the wing where it should stand still."
"Oh, look, Hydra blocked its own f-pawn with Nf3. He should move the knight away and play f2-f4, why doesn't he do it, silly comp!"
"Oh, Hydra played Re1 instead of using the rook on the f-file after f2-f4-f5. How silly."
"Uhm, on second thought, the move b2-b3 was actually quite useful in stopping black's queenside advance."
"Uhm, actually Re1 and Nf3 do a great job controlling the center. White is doing well even without the f-pawn advance."
"Oops, white is winning."

Without getting into philosophical/linguistic debates on what "creativity" means, the computers are playing chess that seems, to the untrained eye (and even to the trained eye!) very original and creative.

Chess programs would be excellent for Turing tests, and in fact that's a hot topic right now... just in a different guise: "computers making human-like errors." The Turing test would not be about "how good is your computer at chess," but "can it fool someone into thinking they're playing against a human?".

The actual chess Turing test would probably be something like "play internet chess with X number of opponents, with no chat allowed, and tell us afterwards which ones were human and which ones were computers."
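
A sketch of how such a test might be scored, assuming a judge faces n blind opponents and calls each one "human" or "computer" (the numbers below are invented for illustration, not results from any real experiment):

    #include <stdio.h>
    #include <math.h>

    /* With no real signal, each human/computer call is a coin flip, so
       we compute the one-sided chance of scoring at least this well by
       pure luck (binomial tail at p = 0.5). */
    static double choose(int n, int k) {
        double c = 1.0;
        for (int i = 1; i <= k; i++)
            c = c * (n - k + i) / i;
        return c;
    }

    int main(void) {
        int n = 20;       /* blind opponents judged (illustrative)         */
        int correct = 16; /* correct calls (made-up result, not real data) */
        double tail = 0.0;
        for (int k = correct; k <= n; k++)
            tail += choose(n, k) / pow(2.0, n);
        printf("%d/%d correct: probability by pure luck = %.4f\n",
               correct, n, tail);
        return 0;
    }

A judge who cannot beat that luck threshold over many games is, in effect, failing to tell the boxes apart.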

I had the same thoughts as you, Alex, as I listened to the same type of commentary from the Chess.fm folks, Kopec and Pascal. What these guys fail to realize, I think, is that a machine like Hydra is searching so deep that it could probably open as white with 1.a4 and, against ANY black move, play 2.h4 and still beat any human. But as Kasparov quite correctly stated, IMHO anyway, all it will take is one victory over Hydra at a normal time control to prove humans are still at the top. So far I don't believe Hydra has lost a game to any human at a normal tournament time control.

You have made the ever-popular mistake of confusing Pascal (my first name) with Bill Paschall (his last name). I have not been on chess.fm for a gazillion years. And neither Bill nor I are very pleased to be mixed up with one another :-)

As a side note, thanks to those who had suggested I might pull off an upset at the FIDE championship. Unfortunately my studies have not given me a lot of prep time; I'd like to get another shot at him after the Christmas holidays!

From my point of view, a computer's play is indeed objective, as Mig says, and purely based on calculations. It is interesting, as someone pointed out, that this can seem original to the trained eye. Kind of like the discussion about the objectivity of science vs the artistic side of science. In fact the two are not as separate as people like to think.

However, I'd like to comment a little more on what Shipov was saying. The thing is, Shipov is probably objectively right. The computers tend not to make the very best moves, particularly when several "positional" moves are available, while the trained human GM can make the right choice intuitively. There is some objectivity to it, and denying it by saying the computers simply make the better moves and we feeble beings don't get it is, in my opinion, incorrect. However, the computer's moves in those cases are "good enough". Humans make their mistakes when it matters most, changing the evaluation of the position completely. The human's dream against the computer is to play a whole game where a lot of moves are available, with very little difference in evaluation. In the end, these little differences pile up and the computer will realize it too late. Humans err vastly as soon as many moves bring different evaluations.

All the best,
Pascal

Mig, why do you point to an external message board which requires registration even for reading, but not to your own message board?? It seems to me that there are a couple of computer chess experts here at the Chess Ninja boards as well. For example a guy called Permanent Brain. Don't you ever read your own message boards?

Bye.

But this is not what Shipov was saying. He did not say "oh, the computer makes a bad move and still wins". What he said was "oh, the computer makes a bad move" and several moves later he added "oh, that move was actually good".

Specific example from Hydra-Pono: locked center, white pawns c3-d4-e5, black pawns c4-d5-e6. What is the standard plan for white? f2-f4-f5, of course, and do nothing on the Qside.
What is the standard plan for black? b7-b5-b4, DO NOT exchange on c3, then run the a-pawn forward. If white allows this pawn to reach a3, the c3-pawn is blown off the board, and if white plays a3 himself, ONLY THEN exchange on c3, and penetrate into white's position via b3.

Those are the standard plans that humans employed in similar positions. Hydra?! He (it?) played b3 followed by b3xc4 b5xc4. Now, suddenly, we have the same position as if black had exchanged on c3, with a2 unmoved - which is exactly what black didn't want! At the cost of a tempo, black's standard play on the Qside was completely stifled.

Shipov criticized b3 and bxc4 harshly at the time those moves were played ... until he realized what happened.

Alex, actually Shipov didn't criticize b3 and bxc4 harshly. His exact comment for 10.b3, translated by me from Russian verbatim (which makes it sound a bit awkward in English, but I wanted to make sure that I didn't change anything), is:

"Non-human reaction. Novelty! Our brother would have played 10.f4, then the pawn at any cost must go to f5, knight to f4, etc. Actually, this very move has been tried in practice. To tell the truth, there are no well-known names among the authors of these games. The theory is not big."

And there is no comment at all to 11. bxc4. I don't see any particular criticism here.

For those of you who know Russian, all of Shipov's comments to this game can be found at http://www.chesspro.ru/online/bilbao05-3.htm

So essentially he's saying that:
a. A human GM would have played something entirely different
b. The move in question was played before, but only by weak players.

I interpret this as some measure of criticism.

Also, Stan, you are referring to the comments written after the game, while I talk about the comments which I saw in real time. Often, the real-time comments are later edited, and I don't have the memory resources to quote exactly what I saw at the time.

Finally, I stand by the gist of my statement: Several times during the game, Shipov made a comment in the spirit of "what a typical computer move, the computer doesn't understand the position", which was several moves later supplemented with "you know, it was quite a good move actually".

13.Nf4 - he makes a humorous comment about the computer not reading Nimzowitsch, and not knowing the f-pawn should advance without interference from the knight.
15.Nh5 - he finally realizes the knight is very well positioned, attacking g7, which ties black's pieces to defense (and if g7-g6 then Nh5-f6).

The machine not only out-played Ponomariov, it also out-understood Shipov. Admit it.

Alex, about the 10. b3 move Shipov says that it was not played before. The alternative, 10. f4, which is what human GMs would probably play, was played previously by bad players. The whole position didn't occur much between good players since Ponomariov made the "anti-computer" move c5-c4, which he probably knew was not very good but which closed the position.

13. Nf4 was in fact criticized sharply by Shipov; he said that the standard f2-f4-f5 plan was, without a doubt, better (and maybe he is not wrong, hard to tell without a lot of analysis). Then he admitted that Nf4 was all right, too.

So, in a way I agree that the machine "out-understood" Shipov, as long as it is clear that you are not talking about real understanding here, just the one that comes from extra-long calculations.

Well, Stan, the catch in what you say is that you are assuming up front that the kind of understanding the computer has is not "real understanding", but "only" tactical understanding that comes from long calculations. But would you think that way if you were comparing two humans? Suppose that in some position human A prefers one move because it looks better according to principles, but human B points out a different move which begins a 7-ply combo that wins a pawn. You wouldn't say that human A is the one with "real understanding" and that human B is the one without it, would you? (Remember Reti's story about how Capa wouldn't play the "principled" move in a consultation game, and instead went in for a concrete line that led to a positional plus.)

Suppose the computer could talk with you like Data on Star Trek Next Generation: "Stan, I have looked at 145 million nodes here, and I assure you, Nf4 leads to a better set of possible positions. I can go over them with you, but it would take about three weeks." Would that move you in the direction of thinking that it had "real understanding"? :-)

I am not talking about real "understanding". It's obvious that the computer doesn't "understand" chess, or anything else for that matter. In fact, the computer doesn't even play chess, it just crunches numbers. No, it doesn't crunch them, there is no actual crunching involved. It just moves pieces on the screen according to... no, wait, there are no pieces on the screen, there are cathode rays that hit the screen and create images that look like pieces. If you want to get rigorous, there will be no end to this ....

What I am saying relates to outside perception: If we have the game score + Shipov's comments as our only source of information, then, to an unbiased outside observer, it would seem that the player of the white pieces not only PLAYS chess better, but also UNDERSTANDS chess better than his opponent and the commentator.

Of course people rebel against this notion. "It cannot be!" - but they say it only because they have additional sources of information. E.g., they know Shipov is a strong Grandmaster, known for his perceptive annotations. They know Hydra is a multi-processor chess computer that calculates long variations. Try to ignore those extra sources and only look at the bare score and comments - and you will agree that among the three (white, black, commentator), White exhibited the highest level of chess understanding.

Computers have absolutely 0 positional "understanding". Computers are cold calculators (extremely good ones). Positional understanding, in a very general sense, refers to an ability to identify and classify positions by certain strengths and weaknesses that are inherent to them. Computers simply don't think this way... if they did, it would be because the programmer put in a very specific set of code (e.g., if this position occurs, regardless of your evaluation, your next moves must integrate this concept, etc.). Computers calculate... this appears to show superior positional understanding because superior tactics ALWAYS trump positional considerations. Tactics in chess constantly find exceptions to rules... when the tactics are proven "correct" through computer calculation, they become gospel. This doesn't relegate human positional generalities to the land of the irrelevant. For instance, you can't teach someone to play like a computer, but you can teach them positional concepts that will improve their play. In the end, the computer is showing the depths of our game, not disproving our level of understanding. The computer doesn't understand more, it just calculates further and more accurately. I believe there is a very large difference. To exhibit this, I would really prefer computers to play without opening books... superior human understanding would probably mean a great deal in the early stages and level the playing field a bit.

Why confuse the loaded word "understanding" with making good moves? Computers make good moves, often better moves than Grandmasters make and moves that GMs don't understand. The algorithms that computers use to reach these moves are known to the programmers, who may be quite weak as players. Saying a computer program understands chess is like saying your calculator understands calculus. Chess is a fairly complex, traditionally human game, but that doesn't mean it cannot be broken down into mathematics. Piece values, values for space, passed pawns, king safety, etc. are all part of this. You plug in all these values and let the wind-up toy run across the rug.

That computers play chess WELL leads people to talk about AI and understanding, which is a fallacy. Teaching a computer about space and pawns is more complicated than teaching it how the pieces move and what check is, but it is the same principle. Does a program that plays a perfect game of tic-tac-toe understand the game? Does a chess program that knows the rules but otherwise plays almost randomly understand? No less than a program that plays at a 2000 level or 2800 level. Adding knowledge is adding and calibrating numerical values. This is a programming science that as currently performed is complex and varied enough to approach art, which is true of all sophisticated programming projects. But we have a romantic notion about chess because of its history and status as a sport.
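
To make the "plug in all these values" point concrete, here is a minimal evaluation sketch in C. The piece values and the mobility weight are illustrative toy numbers, not any real engine's tuning; actual programs combine hundreds of such terms:

    #include <stdio.h>

    /* piece counts per side: pawns, knights, bishops, rooks, queens */
    static const int value[5] = { 100, 300, 300, 500, 900 }; /* centipawns */

    /* Positive scores favor White. Material plus a crude mobility term
       standing in for "space"; king safety, passed pawns, etc. would be
       further weighted terms of exactly the same kind. */
    int evaluate(const int white[5], const int black[5], int wmob, int bmob) {
        int score = 0;
        for (int i = 0; i < 5; i++)
            score += value[i] * (white[i] - black[i]); /* material  */
        score += 2 * (wmob - bmob);                    /* mobility  */
        return score;
    }

    int main(void) {
        int w[5] = { 8, 2, 2, 2, 1 };  /* full army              */
        int b[5] = { 8, 2, 1, 2, 1 };  /* Black is a bishop down */
        printf("eval: %+d centipawns\n", evaluate(w, b, 30, 25));
        return 0;
    }

Calibrating numbers like these is the "adding and calibrating numerical values" described above; nothing in the loop knows what a bishop is.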

No, Pascal, current computer play is not intelligent: each position is evaluated according to arbitrary, human-derived criteria. The values of the pieces to start with, then king safety, pawn chains, piece mobility, open files - all the empirical knowledge of the game of chess acquired by human intelligence over the centuries corresponds to lines of code that will lead to a positive or negative composite evaluation of the position.
It is important to understand that while these evaluations may be "objective" (in the sense that they correspond to relatively simple and reproducible positional patterns), they are neither absolute nor perfect.
And this has nothing to do with artificial intelligence; this is just brute force and empirical knowledge on a large scale. It is not rocket science and actually hardly interesting.
I would be interested to hear about a computer that is initially fed with just the rules of chess, and from there would build strategies. That would be intelligence indeed.

There have been numerous projects that have attempted to actually have computers "learn" chess from the ground-up, using various AI techniques such as reinforcement learning, neural networks, etc. None have worked very well, as far as I know.

Mig and John, spot on. Very well articulated if I may say so. That is why I'm constantly bored by this "Computer superiority" argument. They still do it according to a prescribed set of rules, following an 'if-then-else-while-for (etc.)' logic structure. Whether a chess program is implemented in full-custom logic (Deep Blue), a fully programmable processor (any PC program), programmable logic, or a mix of programmable logic and programmable processors (Hydra), that still holds. Yes, computing power has increased exponentially and gotten cheaper at the same time (it's called Moore's Law), which is the reason we have ever more sophisticated electronics in smaller packages, and it will soon lead to chess programs that are unplayable by humans. But that doesn't mean chess programs can understand anything or think.

d, Mig, and others,

You are arguing with strawmen. None of the "Computer superiority" advocates (at least not on this thread) claimed that computers can "think", that they are "intelligent", that they "understand".

Yet you continually precede your arguments with "but computers do not understand or think! they are not intelligent!" and such. Strawman slaying at its finest.

Of course it's easy for you to "prove" that Hydra doesn't think, doesn't understand, doesn't learn - because you know its inner workings, you know it's made of hardware and software, located in Dubai, playing via Internet, and so on. Hence, your argument is won before you even started, and you are in a very convenient position indeed. What I am asking, if you are up to the challenge, is to defend a much more difficult position.

Just look at the moves produced by the machine... I mean, the chessplaying entity. Ignore your own knowledge that Hydra is a machine, just look at the games. Plenty of material to go around, the Adams match, the Bilbao match.... just look at the games as your only evidence.

Now prove Hydra doesn't understand chess. Can you?

I guess the reason so many people want computers to "understand" chess, that is, to have an evaluation heuristic that is equal to or better than that of human GMs, is that such understanding, as opposed to pure calculation, would be a big step towards computers behaving intelligently in real-life situations. The branching factor in life (the number of possibilities at each point) is practically infinite, which makes calculation impossible. Even Go, which has many more possibilities on each move than chess but is of course just another game, has not been cracked by computers yet. Thus, coming up with a way to generate good evaluation functions would be a big advance in AI.
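
To put rough numbers on that branching-factor gap, here is a quick back-of-envelope in C. The factors of about 35 for chess and about 250 for Go are commonly cited averages, not exact figures:

    #include <stdio.h>
    #include <math.h>

    /* Game-tree size grows as (branching factor)^depth, which is why
       brute force works for chess but collapses for Go, let alone life. */
    int main(void) {
        double chess = 35.0, go = 250.0;  /* rough average branching */
        for (int depth = 2; depth <= 10; depth += 2)
            printf("depth %2d: chess ~%.1e nodes, go ~%.1e nodes\n",
                   depth, pow(chess, depth), pow(go, depth));
        return 0;
    }

At depth 10 that is roughly 3e15 nodes for chess against about 1e24 for Go, a gap no hardware schedule closes.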

And Alex, I believe I can distinguish any computer from a strong human player if I am allowed to give it a set of positions and see which moves it would choose.

*sigh* another strawman goes down in flames. I don't want computers to understand chess, whatever gave you that idea? I am just stating a fact (which no one disproved, and no one dared to even seriously try) that in its games, Hydra displayed either CHESS UNDERSTANDING or a PERFECT IMITATION OF CHESS UNDERSTANDING. And that just by looking at the games, you cannot tell whether it understands chess or is faking it.

By the way, you used the words "evaluation function" in your argument again. So you KNOW the inner workings of Hydra, you know it's just calculating, not thinking, and use this knowledge to prove it's just calculating, not thinking - just the kind of cyclic logic I kindly asked you not to use.

Stan, maybe you should approach the Hydra programmers and suggest your experiment. Let me know how it goes. Until it's actually performed, I will treat your belief with some measure of scepticism; being able to distinguish ANY computer from a strong player is an unproven theory.

Alex, think of the following scenario: You're sitting at a computer terminal, and the words come up on the console, "Hi, what's your name?". You type in "Alex". Up comes the text, "Hi Alex, nice to meet you". If you disregard the context (input/output could well be spoken text), this would be "thinking" by your definition. Yet it's something I could accomplish with, say, four lines of C code (in the textual case). To answer your premise, yes, a black box could be programmed to respond within certain boundary conditions with responses that may be indistinguishable from those of a human. There are many such experiments, some of which are simple rule-based structured logic approaches, others which are not, such as CNNs. Now, I never read your post at all when I replied to Mig and John, so don't get so hot under the collar about it! I was not answering you, but two other folks. My point was simply that the first approach does not qualify as AI. As soon as there is a situation which is not covered by the enumerated rules, the black box breaks down in its stated function, and has no possibility of recovering. In chess programs, they have gotten better and better at defining the rules and the algorithms to interpret them. In the realms of computer science, as John pointed out, this is a fairly trivial accomplishment, resulting from the Moore's Law growth of device integration on a single die.
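
For the curious, the canned greeting described above really is only a few lines of C; this is just one way it might look, not code from any real system:

    #include <stdio.h>

    /* No thinking involved: read a token, paste it into a canned reply. */
    int main(void) {
        char name[64];
        printf("Hi, what's your name?\n");
        if (scanf("%63s", name) == 1)
            printf("Hi %s, nice to meet you\n", name);
        return 0;
    }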

Let's nip this argument in the bud. Please do not make up definitions for me.

The text message example, no, it won't be thinking under my definition. It will either be thinking or a perfect imitation of thinking, in the context of composing a brief text message. And in the context of the limited experiment you suggested, it's impossible to tell whether the text message composer is thinking or faking it.

The rest of your post is just the same cyclic logic of computers-cannot-think-because-they-only-calculate. I know they cannot think, and only calculate INSIDE THE BOX, what else is new?

Just take a step OUTSIDE THE BOX and look at what has been achieved with the help of this ... well, I'm not allowed to say "calculation", since we don't really know what's in the box, do we? Take a look at the moves the box produces, that's all.

And the fact that you have to resort to Moore's law, integration on a single die, C-code, enumerated structures, and other technical terms in your arguments, just proves my point. And my point is:

YOU CAN ONLY PROVE HYDRA IS A NON-THINKING ENTITY IF YOU OPEN THE BOX AND LOOK INSIDE.

You change your point regularly, it's hard to keep up. My point, Alex, is that you don't understand what the word "understand" means, or do not wish to. It implies awareness or, at the very least, to comprehend the nature of something. Computers do not do this. They make good chess moves by following a sophisticated algorithm. Same conditions, same result, same moves, every time. This is programming, not understanding.

You wish to conflate good moves with understanding, or say that understanding is required to make good moves. Chess is far too simple to be used as a Turing Test; we have known this for decades. If your (new) point is that we can't distinguish most of Hydra's games from the games of a GM, the answer is yes, we know that. The same would be true of most computer games for the past 10 years. It would take a GM-level player considerable time and in the case of some games it would be totally impossible. This reflects how strong they are, not how much they understand.

Chess is limited. In any given position there are only a few reasonable moves. Looking at the moves is simply looking at the product of an equation. That a computer can solve 2+2=4 just like a human can solve 2+2=4 doesn't mean anything in particular. It's that chess moves represent more than math to humans that tempts people into talking about intelligence and understanding when these moves are produced by computers.

Well, then we agree. In the context of playing a chess game, a perfect imitation of thought was achieved. Yes/No?

At the risk of missing the point entirely, I think that a huge reason people want to say chess computers don't "think" is that they're clinging to the hope that a deep understanding of principle can still compete with brute calculation. People desperately want chess to remain an "art."

Sorry, folks. Chess is a discrete and solvable math problem. It's too complex for us, so we can find beauty in it, but in the long run, no amount of understanding, creativity, or intuition will hold out.

If you define "think" as something different than "solve equations quickly," then "thinking" is NOT the goal of a chess computer, and hasn't been for decades.

Alex, I don't need you to understand my point. However, you started replying to my first post as if I intended it for you. Actually I never even read anything you wrote. Instead I just replied to Mig and John, pointing out why I agreed with them. As for the point you make in CAPS, it may well be! Depends how well the rule set is enumerated. I never contested that, which you would have realised if you actually read my post.

For the others who might read your post and come upon the following line attributed to me: "The rest of your post is just the same cyclic logic of computers-cannot-think-because-they-only-calculate." Nonsense. What I actually say is: computer chess programs cannot think because they follow a set of rules that clearly enumerate how they should act in every situation. Thinking and learning would be appropriate words if they were presented simply with the rules for the game and could play it. If you define thinking as passing the litmus test of whether or not a box could produce a set of responses that would be indistinguishable from another box (possibly a carbon-based life form such as a human, possibly a Si-based box) for well-defined boundary conditions, then sure, Hydra, Fritz, my C program, game boxes, and a myriad other "boxes" could think.

Finally, why do you get so annoyed? What's with the tone, CAPS, etc.?

Only if your calculator has a perfect imitation of thought when it produces the sum of 2+2. With something as relatively trivial as chess, this is a worthless argument. (Relative to language, or politics, or poetry.) This is why the "how" is more interesting than the "what."

Ends are not means. Imitating thought is not the same as imitating the results of thought. Otherwise we could be having the same conversation about the automatic doors at the supermarket. "In the context of opening (and closing!) a door, a perfect imitation of thought was achieved."

Adaptive programs that learn and change their own code are closer to the mark. (Current chess programs do this with their opening books.) It's still an algorithm, but giving the computer the ability to adjust its own algorithm can produce surprising results.
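
The opening-book adaptation mentioned above can be as simple as re-weighting lines by results. A toy sketch, with made-up constants and fields (real engines keep far richer statistics per position):

    #include <stdio.h>

    typedef struct { const char *move; double weight; } BookEntry;

    /* After a game, scale the weight of every book move actually played:
       result = +1 win, 0 draw, -1 loss, from the engine's point of view. */
    void update_line(BookEntry *played[], int n, int result) {
        for (int i = 0; i < n; i++) {
            played[i]->weight *= 1.0 + 0.15 * result;
            if (played[i]->weight < 0.01)
                played[i]->weight = 0.01;  /* losing lines stay playable, rarely */
        }
    }

    int main(void) {
        BookEntry e4 = { "e4", 1.0 }, nf3 = { "Nf3", 1.0 };
        BookEntry *game[] = { &e4, &nf3 };
        update_line(game, 2, -1);          /* the engine lost this game */
        printf("%s: %.2f, %s: %.2f\n", e4.move, e4.weight, nf3.move, nf3.weight);
        return 0;
    }

It is still an algorithm adjusting numbers, but the numbers it adjusts now depend on its own history.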

PS: Why do you keep putting words in my mouth? I never claimed the computers "think" or "understand". What I said was, that to an unbiased outside observer, they often exhibit perfect imitation of understanding.

But you, of course, insist on opening the box and looking inside. "Sophisticated algorithm".... "programming". Thus, you cannot be called an outside observer. You use inside information about what's in the box.

Alex, two of your comments (I read your previous posts only now):
1. "The machine not only out-played Ponomariov, it also out-understood Shipov. Admit it."
2. "I am not talking about real "understanding". It's obvious that he computer doesn't "understand" chess, or anything else for that matter."
So which is it? real understanding, or unreal understanding?

As a final thought, I would like to leave you with two quotes, both made by people who understand quite a lot about chess; probably much better than me. It will show you how far we've come in 35 years.

"Chess is far too complex to solve... No computer will ever defeat a Grandmaster."
- GM Vladimir Liberzon, circa 1970

"Most of Hydra's games are indistinguishable from the games of a GM...Chess is far too simple to be used as a Turing test."
- Mig Greengard, 2005

In 35 years we went from "impossible to solve" to "too simple". Which only goes to show you that the real winners in this argument are the computer programmers. As for politics and poetry, Mig, I will be sure to keep your quote and remind you in 35 years :-)

Maybe in 2040 someone will post on a message board: "Poetry is too simple to be used as a Turing test".

d: to an unbiased outside observer who doesn't use knowledge of what's in the box, the machine created the impression it understands chess better than Shipov. Ok? Rigorous enough?

Alex, the sentence is rigorous enough, thanks. As for what the sentence says, sure, if the Turing Test is your gold standard, absolutely. But to others (like myself) understanding isn't achieved by passing such a simple Turing test; that's why a few people took exception to your use of the word "understand".

Btw Alex, sorry if I used "technical" terms earlier, which was unintentional. I use those words so often, I don't think of them as technical. I'll clarify what I meant (a worked growth example follows below):
Moore's Law: Gordon Moore, co-founder of Intel, predicted that the number of transistors that could be fitted on a single chip would grow exponentially with time. This being true is the foundation of the digital and IT revolution.
die: computer chip
C: a popular programming language.
Hope that helps. Best, d
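
As a worked example of that exponential growth, assuming a doubling every two years (a commonly quoted period; the exact figure has varied over the years):

    #include <stdio.h>
    #include <math.h>

    /* Compound Moore's-law growth of the transistor budget. */
    int main(void) {
        for (int years = 0; years <= 36; years += 12)
            printf("after %2d years: transistor budget x%.0f\n",
                   years, pow(2.0, years / 2.0));
        return 0;
    }

Thirty-six years of doubling every two years is a factor of about 260,000, which is much of the story of why 1970s pessimism about chess programs aged badly.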

d,

I am a computer engineer, and if it helps, I actually worked with Gordon Moore. No need to explain what Moore's law means. I deliberately wanted you and others to stop using technical terms so you can actually evaluate the output of the box - the chess. As long as you remember what's inside the box, it's just too easy for you to win the argument against me, since I handicap myself by not knowing the box's contents.

I am sorry if I came off as being too argumentative or angry. This was not my intention. What I wanted to really say was this:

In days of old, only several years ago, computers were still able to outplay humans, but the way they did it was completely different from the way Hydra did it to Adams, Ponomariov, and others.

In days of old, had you shown a comp-vs-human game to a top level GM, his commentary would go like this:
"A bad positional move by the computer"
"Good positional play by the human, he's slightly better"
"The center opens, complications begin"
"Oops, the human make a tactical error and loses"

In the case of Hydra, the comments were somewhat different.
"A bad positional move by the computer"
"Ugh, it's not easy to find a plan for the human"
"Actually, I think the move I previously called bad was actually quite good."
"Another bad positional move by the comp... no, wait, 2 moves later I see it's actually a good move again"
"Well, the human lost"

Notice the difference?

Alex, read my previous post.
"Alex, the sentence is rigorous enough, thanks. As for what the sentence says, sure, if the Turing Test is your gold standard, absolutely. But to others (like myself) understanding isnt achieved by passing such a simple Turing test, that's why a few people took exception to your use of the word "understand".

Btw, out of curiosity, and apologies for being off-track, may I ask in which organisation and in what capacity you worked with Gordon?

Yes, d, I read that post.

As I said, from impossible to too easy in 35 years. Good job, programmers and engineers.

Intel. Engineering. Too low of a rank to interact with Moore directly, of course, but I saw him addressing employees (in person and via web) quite a few times.

I agree that Hydra might have shown a perfect imitation of chess understanding, but keep in mind that it's possible to achieve that through superior tactics. The further ahead someone can look tactically, the more it resembles positional understanding. I'm sure you can find a pair of human players, A and B, where A beats B most of the time, because A has a knack for spotting tactics, even though he only has the most rudimentary grasp of strategy and positional ideas. While B never really understood that training tactics was necessary, nor played enough blitz to improve the skill much naturally, even if he is usually careful enough not to stumble into blunders within his "horizon". And now he's so old that his tactical skill has gone down even further. But if B loves chess and has read and learned much chess over the years, he'd probably be a lot more use to a school class trying to learn chess than player A would. Even if a game between them would indicate that player A was better.

Alex, the real winners are not the computer programmers; it's the chip makers. Give me a computer program that will defeat a GM on a computer made in 1970, then you can claim a victory for programmers.

Nowadays any high school student who knows a programming language and the very basics of AI (such as alpha-beta pruning), and is willing to spend the time, can write a program that will be at least competitive against human GMs.
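
As an illustration of just how textbook that pruning idea is, here is a bare-bones alpha-beta over a tiny hard-coded tree (the leaf values are arbitrary; a real program would generate and search actual chess positions):

    #include <stdio.h>

    /* Minimax with alpha-beta cutoffs on a uniform tree: 3 levels,
       3 children per node, 27 made-up leaf evaluations. */
    #define WIDTH 3
    #define DEPTH 3

    static const int leaves[27] = {  3, 12,  8,  2,  4,  6, 14,  5,  2,
                                     1,  7,  9, 11,  2,  3,  8,  6,  4,
                                     5, 13,  2,  7,  1,  9, 10,  4,  6 };
    static int visited = 0;

    int alphabeta(int node, int depth, int alpha, int beta, int maximizing) {
        if (depth == 0) { visited++; return leaves[node]; }
        for (int i = 0; i < WIDTH; i++) {
            int v = alphabeta(node * WIDTH + i, depth - 1, alpha, beta, !maximizing);
            if (maximizing) { if (v > alpha) alpha = v; }
            else            { if (v < beta)  beta  = v; }
            if (alpha >= beta) break;   /* prune: this line is already refuted */
        }
        return maximizing ? alpha : beta;
    }

    int main(void) {
        int best = alphabeta(0, DEPTH, -1000, 1000, 1);
        printf("best = %d, leaves visited = %d of 27\n", best, visited);
        return 0;
    }

The cutoff finds the same answer as full minimax while skipping a chunk of the leaves; scale the same few dozen lines up with a real move generator and evaluation, and you have the skeleton of every program discussed in this thread.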

As for distinguishing between a human and a computer program, just give it any endgame position with two choices:
1) Gain big material advantage, but the opponent can build an easy fortress;
2) Continue playing with a slight advantage.

Any strong GM will choose option #2, while any program that does not have this position in a tablebase will go down the first path.

But Stan, doesn't the fact that such intricacies as alpha-beta pruning are even available to high school students show you that the skill of programming has come a long way since the 1970s?

And by the way, chips are designed using programs... and other chips. It's a power spiral, where more powerful software begets more powerful hardware, and vice versa, and you cannot really disconnect the hardware and the software elements. Can we just agree that the HW/SW cooperation is the winner without trying to assign "blame" too specifically?

As for your second point, I grudgingly agree that it is still possible to construct positions in which a human will pick a clearly superior move to a comp's. Such positions, however, are quickly becoming extinct.

Very soon, it will be possible to distinguish between a computer and a top-level GM by only one criterion: "Let me see the game score ... white won - ok, white is the comp, black is the human."

Sharpen those pencils, folks, and start writing poetry, for soon it will be your only refuge to display your creativity. Chess, as Mig said, is far too easy, just like opening supermarket doors :-))

A current example of a fortress situation is San Luis, round 6, Adams-Morozevich, where some internet observers let themselves be confused by their engines.

"Such positions, however, are quickly becoming extinct."

Not really sure about that if the programmers continue with the current approach, i.e. (semi-) brute force calculation. We seem to be at a point where adding more knowledge just slows the program down and weakens it. Some progress may still be possible through tweaking the search algorithm and faster hardware. But this will not be enough to play endgames correctly (the tablebase approach being a dead end due to massive storage requirements).

"...white won - ok, white is the comp, black is the human."

Would that hold as well if you take away the opening book, or play chess960? Or correspondence chess?

"Chess, as Mig said, is far too easy..."

If chess is so easy, why don't you come up with a mathematical theory solving even the "simplest" rook endings, R vs. R+P?

Which brings me to my last point: do not confuse an engineering solution with mathematics. Current computer play is not the result of mathematical methods (afaik). There are no "chess equations". _That_ would be real progress.
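
To put the storage objection above in numbers: a crude upper bound on k-piece positions (ignoring legality, pawn-rank restrictions, and board symmetries, all of which shrink real tables but not the growth rate) multiplies the free squares and doubles for the side to move:

    #include <stdio.h>

    /* Crude upper bound on tablebase size: place k distinguishable
       pieces on distinct squares, times 2 for the side to move. */
    int main(void) {
        for (int pieces = 3; pieces <= 7; pieces++) {
            double positions = 2.0;        /* side to move        */
            for (int i = 0; i < pieces; i++)
                positions *= 64 - i;       /* squares left to use */
            printf("%d pieces: ~%.1e placements\n", pieces, positions);
        }
        return 0;
    }

Each extra piece multiplies the table by roughly 60, which is why every added man costs orders of magnitude in storage, whatever the encoding tricks.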

Of course there are exceptions, including blockades, which is why it is still an interesting problem. Chess has not been solved. Removing the opening book to a certain degree is another area of interest I have written about extensively. I was referring to easily distinguishing between human play and computer play, which can't be done.

Computers are still mediocre at chess for the most part. People get distracted by how easily they beat Grandmasters in tournament games, which probably needs to be set aside as a metric these days. Time and human blunders are the main factors there, not programming improvements or positional chess. I don't play against them, but I use them to analyze a great deal. I have also spent a lot of time working with world-class players and engines and their annotations. Computers still exhibit an evaluation level of around 2000 at best when there are no clear tactical considerations.

Interestingly, this is most clearly seen in engine matches. Roughly equal forces often block up the position and they are reduced to shuffling and silliness. This rarely happens against humans because humans can't keep all the tactical elements under control.

1. I dunno, just look at how Hydra demolished Adams in a couple of innocuous-looking endgames. In spite of my engineering education, I'm resisting the temptation to peek inside the box and only examine the output. And the output says "I have no problems in the endgame".

2. Ok, you found one position where a comp was dead wrong. But, since I never said there aren't any, it neither advances your argument nor refutes mine.

3. R vs. R+P was solved a long time ago. Yuri Averbach wrote about it in an old book I have. He had the honor of being the first to test this comp, and gave his seal of approval that it plays R vs R+P flawlessly. Exactly which methods were used to solve it is no concern of mine. As you said, it's an engineering solution, and it works - a box plays flawless R vs R+P sans human intervention.

4. I do not confuse anything with anything. There is a box. I'm not really keen on knowing what's inside the box. All I know is that humans made it, closed it, and left to its own devices, the box plays chess.

35 years ago, wise people said it is impossible for a box to play chess well. That a human has an inherent advantage. It's so impossible, in fact, that further research in teaching boxes to play chess is futile. The problem is too complex.
Today, equally wise people say that it's obvious that the box plays better chess than a human. That the box has an inherent advantage. That it's so obvious that the box plays better chess, that further contests with humans are futile (unless the box is handicapped in some way) and that the box's victory doesn't really teach us anything, because the human was doomed to begin with and the problem is too simple.

And that, my friends, is progress. Progress is not only about algorithms, motherboards, and processors. It is also about human perception. And maybe, just maybe, while we were teaching the boxes to play chess, they taught us something about ourselves. Or not.

Clarification: my post above is a response to pois, not Mig (whose post I didn't see as I was making mine).

"R vs. R+P was solved a long time ago."

They built databases of all possible positions and outcomes, but that doesn't mean it's solved (if you think otherwise, please cite your sources exactly). This uses up 60 MB for just this type of endgame. This approach doesn't scale. Feed a comp Korchnoj-Karpow (31) Baguio '78, let it calculate till the end of the universe, and the output reads "I don't know jack xxx about rook endings".

"...while we were teaching the boxes to play chess..."

All "we" did is implement some inefficient, non-scaling forms of knowledge representation. Certainly did "we" not teach a cumputer anything (i.e. show it how to draw it's own conclusions, acquire new knowledge).

With the current chess programming technique and silicon-based chips, I am afraid (not!) we will always be able to distinguish the positionally clumsy, overly tactics-oriented computer play from that of a supergrandmaster. My take is, for the time being, in chess960 or correspondence chess, even non-top-10 players will regularly beat the comps.

Dunno where you get that about Chess960. It removes the computer's opening book, but it also removes much of a GM's vast library of patterns. For a human it is therefore basically tactics right from the start and computers crush humans in tactics. Computers have been devastating at shuffle chess against GMs so far. I believe it was Jussupow who played the first high-profile examples and that was a while ago. It did not go well for him.

Certainly, "we" did not teach a computer anything. We just typed a bunch of things on keyboards and soldered a bunch of circuits. I was speaking metaphorically. Computers are, rigorously speaking, incapable of learning. As are some humans. Somewhat less rigorously speaking.

I guess we'll all meet in 35 years on a message board on www.StorySamurai.com, when the Pulitzer-winning computer-generated novel spawns a discussion along the lines of:
"But computers cannot really write stories! I will always be able to distinguish the cold unfeeling computer prose from human creation! And ... and ... even if they are capable of writing stories, they are still uncapable of doing ... ugh, something else!"

Sorry I missed this from a previous post. It merits its own response:

Instead of admitting the computer play to be blunder-free, pois calls it "overly tactically oriented". Isn't it like calling a tall guy "midgetically-challenged"?

Previously, I thought mankind's last line of defense to be "they play better but we don't care" (as already uttered on this thread), but the retreat went further, and humanity was routed all the way to the "they play better, we blunder, that makes us special" line. Oy vey.

PS: Pulitzer prize 2041: After the Silicon makes a clean sweep of the top 5 the previous year, this time around, only romance novels are admitted (it is well known that computers have difficulties writing about romance). Thus, the top 10 spots are occupied by humans, and only in 11th place languishes "Summer of Passion" by KRX-100. Thus, mankind's pride is restored.

Yeah, Alex has a point. Humans are equipped with a variety of ways for salvaging their ego when circumstances turn against them: sour grapes, moving the mileposts, selective thinking, etc.

What we've discovered in the last 30 years ("we" as in, the majority of people; von Neumann and Turing knew it long ago) is that there aren't many problems that can't be reduced to computation. And moreover, we have learned that we're much better computer engineers than expected, which makes it increasingly feasible to implement computational solutions to these problems.

I'll save us all the suspense and make a prediction here and now: there is virtually no cognitive task humans perform (including writing novels or poetry) that will be beyond computers one day.

Alex, I think you are being too optimistic about computers writing literature that humans would enjoy. Can you name any artistic field at all where a computer has created anything of interest as of yet? By an artistic field I mean one where the creator is not limited by any strict artificially imposed rules.

Mig, point taken.

Alex (16:48), are you referring to ... sleeping :-) (Spielberg's idea of AI)

Alex (17:12), blunder-free, as in the recent Ponomariov-Fritz, 39...Bc2? Not quite there yet...
But I give up, your case is hopeless :-)

I agree with macuga and Alex that computers have passed a significant milestone in chess play, and someday will be able to perform other cognitive tasks that are currently in the 'human-only' domain. (But this does not mean that computers can pass a Turing test or that they are intelligent.)

To me the main surprise is how long it took. GM Liberzon knew a lot about chess, but perhaps less about computers. I think it was David Levy who offered an open challenge around 1968 that no computer could beat him in a chess match within the next ten years. When we discussed this in the early seventies, most chess players thought he was going to regret it. It turned out that while the problem is 'easy' from today's viewpoint, it is less easy than many people once thought.

You guys are arguing the main question of AI. Is there anything that makes us special with respect to computers? Is there anything humans can do that computers will never be able to do? We don't know for certain, and we will probably never know.

One thing is likely: the way humans play and reason about chess positions is different from that used by computers. Many use this as their definition of "understanding". Alex argues that the method does not matter as long as the end result is the same.

Most AI researchers, though, are interested in the difference, and wonder whether computers will ever be able to "think" like humans (because they believe it is the only way computers will ever be able to achieve certain "human" traits, though only time will tell whether this is true or whether computation is the answer to everything).

Or maybe, at the end of the day, humans are just computers programmed by God or by some other superior entity. God laughs at us and thinks we are stupid. Our perception of ourselves might be no different from that which a computer has of itself, because we can't know what a computer "thinks" or "feels". But I digress...

Stan,

Being a hobby fiction writer myself (insert shameless self-promotion: check out my stories on chessbase.com), I took creative writing courses. And let me tell you, there are some strict algorithms for creating a short story. There is the "15 word outline", the "story spider", and the Location/Animal/Emotion technique. It won't be a great story by any stretch of the imagination, of course. But the fact is, a passable story can be made according to a set of rules.

While nobody has taken upon himself the (daunting!) task of representing those rules in a computer form, I have no doubt this can be done. And it will be done. Probably not within 35 years (that part was a joke anyway), but it will.

As for music, computer programs for composing music do exist already, although the quality of music they produce is quite horrible. About the same as the quality of chess computers produced in 1970 ...

PS: This will, undoubtedly, be posted on the www.StorySamurai.com message board circa 2041:

"Computers can write novels? Are you joking? Did you even take a look at "Summer of Passion"? The sister character is a mess, and there's a split infinitive on page 188! Pfey!"

> The actual chess Turing test would probably be
> something like "play internet chess with X number
> of opponents, with no chat allowed, and tell us
> afterwards which ones were human and which ones
> were computers."

Has such a test been conducted? The results might be interesting. It shouldn't be too hard to set up a double-blind protocol for this. The most difficult part would be recruiting enough strong players to play "behind the screen."

It's too easy to blather on about "human-like moves" in games that we know were played by computers. The real test is to carry out an experiment like this. I would guess that human play and computer play are a lot less distinguishable than we would like to believe.

IMO, the beauty of the Turing test is that it avoids a lot of the philosophical "what is intelligence?" arguments and focuses instead on results. This makes sense to me. After all, we measure the intelligence of people by what they can do and how they act, not by examining the internal structure of their brains. ("Wow, Mig's dopamine numbers are off the chart! He must be really smart!") It seems reasonable to hold computers to same standard.

John
