Mig Greengard's ChessNinja.com

Kasparov on Man and Machine


The New York Review of Books piece by Garry Kasparov is turning out to be quite popular. It was picked up by the Huffington Post, the Guardian, and now it's been cited in The Atlantic and the NY Times "Ideas" blog. I believe the NY Times Syndicate will also be offering it to subscribers abroad. At first it wasn't something we thought we had time for, but the more we talked about ideas for it, and how it tied in with some of the recent themes of Garry's business speeches, the more he wanted to do it. Now we're all glad it's out there. The main theme I'm referring to is that "commercialism and incrementalism are replacing innovation and risk," to simplify it far too much.

With the supremacy of the chess machines now apparent and the contest of "Man vs. Machine" a thing of the past, perhaps it is time to return to the goals that made computer chess so attractive to many of the finest minds of the twentieth century. Playing better chess was a problem they wanted to solve, yes, and it has been solved. But there were other goals as well: to develop a program that played chess by thinking like a human, perhaps even by learning the game as a human does. Surely this would be a far more fruitful avenue of investigation than creating, as we are doing, ever-faster algorithms to run on ever-faster hardware.

This is our last chess metaphor, then--a metaphor for how we have discarded innovation and creativity in exchange for a steady supply of marketable products. The dreams of creating an artificial intelligence that would engage in an ancient game symbolic of human thought have been abandoned. Instead, every year we have new chess programs, and new versions of old ones, that are all based on the same basic programming concepts for picking a move by searching through millions of possibilities that were developed in the 1960s and 1970s.

Like so much else in our technology-rich and innovation-poor modern world, chess computing has fallen prey to incrementalism and the demands of the market. Brute-force programs play the best chess, so why bother with anything else? Why waste time and money experimenting with new and innovative ideas when we already know what works? Such thinking should horrify anyone worthy of the name of scientist, but it seems, tragically, to be the norm. Our best minds have gone into financial engineering instead of real engineering, with catastrophic results for both sectors.

The ending paragraphs of the article aren't really an endorsement to drop chess for poker, of course. But the investigation into teaching computers to play better poker seems likely to be more beneficial on a scientific level, both for understanding human thought and for computer intelligence, than chess computing turned out to be in the end. I'm not overlooking the progress made in various areas, from parallel processing to the clever playing algorithms themselves, but even enthusiasts admit we didn't get what we came for, so to speak.

And yes, the programs have gotten smarter, not just faster, although it's the incredibly fast hardware available today that allows the "slow" programs to be effective. That is, it used to be that adding significant knowledge slowed the algorithm to a crawl, more than negating the beneficial playing strength effects of the knowledge. This trade-off is still very much in effect, but the faster the machines, the more knowledge you can work in and the better the program will be relative to the dumber programs, since the law of diminishing returns and the branching factor are so extreme. To again oversimplify, a smart program at 6 ply will get killed by a dumber one reaching 10, but a smart one at 14 will beat a dumber one at 18.
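To make that oversimplification a little more concrete, here is a back-of-the-envelope sketch (my own illustration, not from the article; every number in it is invented). The idea: with an effective branching factor b, searching d plies costs on the order of b**d nodes, so adding knowledge costs a roughly constant number of plies regardless of hardware, while each extra ply is worth less and less.

```python
import math

BRANCH = 8            # effective branching factor (illustrative)
KNOWLEDGE_COST = 100  # assume a knowledge-heavy program spends 100x longer per node

def depth(node_budget, cost_per_node=1):
    """Plies reachable within a node budget: log_b(budget / cost).
    Exponential search cost means depth grows only logarithmically."""
    return math.log(node_budget / cost_per_node, BRANCH)

for budget in (10**6, 10**15):  # slow hardware vs. fast hardware
    d_dumb = depth(budget)                  # fast, knowledge-light program
    d_smart = depth(budget, KNOWLEDGE_COST) # slower, knowledge-heavy program
    # The "knowledge tax" in plies is log_b(100) ~ 2.2, the SAME on both
    # machines; but at greater depths those 2.2 plies buy far less strength,
    # so the smart program's fixed knowledge bonus eventually dominates.
    print(round(d_dumb, 1), round(d_smart, 1))
```

This is only a sanity check of the paragraph's logic, not a model of any real engine: the ply deficit from added knowledge stays constant while diminishing returns erode its value.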

Those unfamiliar with the legendary NYRB might be surprised that the book in question, Chess Metaphors: Artificial Intelligence and the Human Mind by Diego Rasskin-Gutman, isn't mentioned much in the article. The Review doesn't really review books (four stars!) as much as it attempts to present good writing and interesting ideas on the topic in general. It was nice working on something other than politics and business!

The Review has asked Garry to do a podcast interview following up on the article; we'll see if he has time this week.

55 Comments

Sent the article to a friend of mine. His comments:
***
Fascinating - also with implications for translation software.
But who wrote it - it's too good to be a translation, and can Kasparov really write so well?
***
I hope you are blushing now, Mig.

“Brute-force programs play the best chess”

“But there were other goals as well: to develop a program that played chess by thinking like a human”

I was surprised by the above two statements. The first statement, and the article as a whole, takes a very simplistic view of chess programs, doing a huge disservice to them and to the creativity/innovation behind them. The quality of current chess programs has very little to do with the increase in computational power. Also, the best programs have a good positional understanding built into them, and that's one of the reasons they are the best.

The second statement betrays a clear lack of understanding of AI. AI does involve learning but is not about mimicking the human mind (neural networks do attempt that). The goal/end is to achieve human-like intelligence, but the means/path could be anything and need not mimic the human brain. In fact, the success of chess programs and the failure of neural networks to achieve much demonstrates that mimicking the brain is not the best approach.

Kapalik

"The second statement betrays a clear lack of understanding of AI."

In the big overall picture, this is how I feel about your post. It is impossible for any computer to play chess, because they cannot think, they are merely calculators. Disagree? Name all the novelties created by the calculators.

Hi Mig, congrats on a very well-written piece. Anyone who's ever heard Kasparov speak English (not bad, but, well, 'different') knows what a great job you did! :-)

The NYRB doesn't really do reviews? Hm, I guess I must've been reading the wrong magazine for quite some time then... Anyway, if one thing becomes clear to me from the article, it is that Kasparov is not at all interested in the philosophy and science of AI and computers, but instead only focuses on some practical applications and his own past experiences. That's all right, but why then present it as a review at all?

I guess I mostly find Kasparov's views disappointing from an intellectual point of view. The article contains no elaborate or new thoughts on AI or computer chess, and at best presents a nice overview of Kasparov's own experiences. And really, to suggest the reader 'skips ahead' instead of reading about the actual history of brain research... what kind of message does that send? I had expected more depth from the heavyweight intellectual Kasparov is supposed to be.

Sorry to sound so stung, but the article was a disappointment to me despite its excellent prose.

This post is especially for British fan and others on this board who are not computer scientists:

The term "AI" means something very specific to a computer scientist. For a computer scientist, AI does *not* mean creating a machine that "thinks like a human."

If you think - like Kasparov does - that one of AI's goals was to create a machine that "thinks like a human," then Kapalik is absolutely correct: the sentence betrays a lack of understanding of AI, the way computer scientists define the term.

On the other hand, if you take the layman's viewpoint, then AI is all about creating a machine that "thinks like a human." Kasparov is a layman when it comes to computers (or politics for that matter). Don't listen to him. It is OK for him to wish that people should try to come up with a computer that "thinks like a human," but to say that it was one of AI's main goals... come on.

I don't think Kasparov understands how computers work. People like Kasparov think they know everything about everything. Not so. It's like Jordan playing golf.

I am curious about the comment about novelties and chess programs. Are there any chess programs that create novelties?

I have a few ideas of how one can go about doing that, but there is a pretty good chance that those ideas have already been tried.

Actually, I am a computer scientist. I never mentioned AI. Computers are machines. They carry out executable commands. They are not capable of creativity, independent thought, or innovation any more than my electric toothbrush.

Except, of course, that in chess any new move that's never been played before is an 'innovation' by definition. And if this innovation happens to be a strong move, it automatically is a 'novelty' created by the machine. Which just goes to show that common terminology in AI does not always apply to chess.

If they are restricted to using their opening books, of course they won't come up with novelties, but as a novelty is only a move not played before, they soon play "new" moves. Additionally, for example, in the final game of Vlad's last computer match, the (White) Sicilian played by the "computer" had the feel of very powerful opening prep!

Come on guys. What you are defending are your own beloved specialist terminologies and paradigms on AI -- while Kasparov and Mig talk about the general AI problem (not just "what it means to a computer scientist"), and link it to another general socio-cultural problem (innovation vs incrementalism). And I think they are quite right in this linkage, as your reactions reveal quite well.

Just one example. Is it only me wondering why all computer chess programs work on the basis of search trees and evaluating the resulting position after ply n? And why not on the basis of *plans* in a given position, like humans tend to do? Like "where would be my pieces best placed? Well, wouldn't it be nice to put my queen on c7, knight on e5 and push the pawn to b4 for a dominating position, is there a way to achieve that?"

In general: how do I factor back from desired positions to the existing one, other than the tablebase method? Granted, it would require a totally different start, one that in the past might have been tried and failed to yield better results than the ply-hunting. It would require some top-end (possibly even brand new) math that's quite different from the (not too difficult) math of search-tree-trimming and evaluation-based-on-preprogrammed-human-acquired-chess-knowledge.

But I don't see why such an innovative research programme could not, in principle, bear results in the long term. This would illustrate well how chess programming could further progress AI -- AI in a more general sense than some computer professionals, limited by the current AI paradigm and the next incremental step, would like to define it.
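For readers who haven't seen the tree-search approach ra is contrasting with planning, here is a minimal alpha-beta sketch on an abstract toy tree (my own illustration; real engines add move ordering, transposition tables, quiescence search, and far more, but the skeleton is this):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax value of `node`, looking `depth` plies ahead and
    pruning branches that cannot change the result."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)           # "evaluate the position after ply n"
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # beta cutoff: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                   # alpha cutoff
        return value

# Toy game tree: internal nodes are tuples of subtrees, leaves are scores.
tree = (((3, 5), (6, 9)), ((1, 2), (0, -1)))
children = lambda n: n if isinstance(n, tuple) else ()
evaluate = lambda n: n  # a leaf "position" evaluates to itself

print(alphabeta(tree, 3, float("-inf"), float("inf"), True, children, evaluate))
```

Note there is no notion of a plan anywhere in this loop, which is exactly the contrast being drawn above.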

In chess, computers have routinely produced new (and great) moves that surprise everybody, even the best players. Hence, by definition: creative and innovative. Those moves surprise even their programmers, who cannot explain in detail how exactly they found them. Hence: independent thought. Note: I am not talking about explaining in general how an algorithm works, but explaining the details of how it finds THAT particular move. Even the programmers don't know that.

Computers don't do those things the way humans do them. That's right, if that's your point. But creativity, innovativeness, and independence do not imply doing them in a human-like way.

And yes, computers are machines. But so are humans.

Well, what is the "general AI program" ?
That depends on your personal interest. Hence, there is no one "correct" answer.

If you are interested in how to make computers not only perform like humans, but also do it the way humans would, then yeah computer chess hasn't reached that goal. It might be a very interesting research subject for those who define the problem that way.

But we could also define the goal purely in terms of performance in tasks that for humans would require high-level intelligence, regardless of how it is done by the machines. Creativity can be defined just in terms of a solution being new (and qualified as a good solution, of course), regardless of how it is found.

I don't see why one definition should be preferred above the other. Doing it the human way has the advantage of producing insights into how humans do it. But following a not-so-human, or even non-human, way also has the advantage of finding alternative ways to be smart and creative - perhaps even ways better than humans'. Why should we restrict ourselves to the human way?

I'm not saying research on planning in chess is useless. Do them if you want, and that's cool, and I might also enjoy the fruits. But don't insist that that is THE "general AI program".

For example, evolution has produced creative solutions, without "planning" them. So that's one example of being creative without planning. In fact, our very ability to plan is a result of evolution, which does not "plan".

You're right: thinking about computer chess leads to all sorts of interesting questions. The ones you're asking are good examples. Problem is, Kasparov doesn't ask these questions; he merely repeats the cliche that 'we have discarded innovation and creativity in exchange for a steady supply of marketable products' without elaborating or even giving evidence that this is indeed the case. (Would the Rybka programmers agree? Would someone like Dan Dennett agree?) This is why I think the article is disappointing for such a formidable thinker as Kasparov.

"such a formidable thinker as Kasparov"
no offence, but you need to broaden your reading considerably.

Dennett wouldn't agree, I think, with the AI naysayers, but maybe that's just semantics.

I expected the 'Turing test' to enter this thread:

The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.
{from wiki}

So from my point of view Fritz passed the Turing test. I couldn't judge the moves coming out of my PC as being made up in a human brain, or from silicon.

"o from my point of view fritz passed the turing test. I couldn't judge the moves coming out of my pc as being made up in a human brain, or from silicon."
But a strong GM would know most of the time. And even a weak player like me can tell sometimes too.
However, from my experiences with Fritz at 1 am after some beers, you might have a point regarding the conversations.

Of course, I have no clue why Rybka is so much better than Fritz & Co, but I'm pretty sure the improvement must be quite innovative. However, I still believe it's an incremental improvement, fitting into the old chess computing paradigm of optimising search trees and evaluation functions.

Thus, it's not innovation as such that should be set against incrementalism, but something like "paradigm-shifting" or "disruptive" innovation. And indeed, this is what seems to have become increasingly rare and difficult in current research environments, where scientists and engineers need to strive for peer approval (or short-term market success) to secure their livelihood; and where globalisation and increasing peer pressure help to cement existing paradigms and prevent fully independent thinking.

Someone cited this link in another thread:

http://scienceblogs.com/cortex/2010/01/chess_intuition.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+scienceblogs/wDAM+%28The+Frontal+Cortex%29

Both Gladwell's Outliers (which I've read) and Lehrer's How We Decide (which I haven't, yet) discuss the feebleness of human RAM. We demand a minimum of 4 GB for our computers, but we can't hold a 10-digit phone number in our brains.

So how in the world did humans compete with computers so long, and, before the Hydra/Rybka generation, so (relatively) successfully?

This piece is full of ignorance about computer chess.

"Create a computer that plays chess as well as possible" is a very clearly formulated problem. Call it an engineering problem. It has been solved to the extent that top-10 program can beat the World champ on a <1000$ laptop. Rybka played matches against GMs with pawn, pawn + move odds with very good results.
Making such a program is a remarkable engineering achievement. On equal hardware, these programs are 200-400 ELO stronger than their counterparts which drew against Kramnik(Fritz, Bahrain) and Kasparov(Junior, Fritz X3D). Go achieve that incrementally.
According to an engineer, that is it. Whether it is "like a human" or not is totally irrelevant. Making good moves is what it takes, it has always been the target. No sane person would ever consider a program that beats a WCh a failure because it is not able to reproduce the inaccuracies of the WCh it beats.

Personally I just want a program that can play "realistic" 1400-level chess--one that makes the same sorts of mistakes (and punishes the same sorts of mistakes) that 1400-level humans do. Is there such a beast? Everything I've ever tried seems to have basically two levels: ludicrously bad and totally crushing.

When we talk about how "humans play chess" and contrast it with "how computers play chess", we make similarly bad claims like the ones Kasparov did. I had mixed feelings about the article, enjoyed the flow and presentation, but disliked the assumption that we actually understand how human beings think well enough to say what a computer is or is not doing.

As Dennett once pointed out, do you strip a grandmaster of his study of opening theory and make him play chess?

If the answer is "no", then why do we ask that a computer play chess without an opening book when we use opening books all the time?

Part of the real problem here is that thought is *introspective*. When we see Deep Blue play a fantastic combination, it is not being creative, but when a human being does the same, he is a genius. Let's be careful when discussing the double standard. Human beings have all kinds of learning advantages that we tend to disregard when we talk about making a computer "play or think or create like a human". We build upon eons of civilization.

Moreover, it has been known for a long time that the neural complexity of the brain is amazing. Why should you be able to simulate that kind of information processing on the relatively archaic processors that we had in the past? In fact, any reasonable thinker will realize that developing great hardware is a necessary precondition to developing anything that even comes close to human intelligence. And for this reader, computers have come close enough. The real key is how to simulate learning, and when we do that, I am satisfied even if those holding out for some special solution are not.

There was a feature on NPR this morning ( http://www.npr.org/templates/story/story.php?storyId=122781981&ps=cprs ) based on this WSJ article: http://online.wsj.com/article/SB10001424052748703478704574612052322122442.html?mod=article-outset-box

For "emotional" (re chocolate cake), substitute "intuitive" (re chess intuition).

Meanwhile Kramnik and Karjakin totally outplay Carlsen and Nakamura, respectively. Both with black.

"As Dennett once pointed out, do you strip a grandmaster of his study of opening theory and make him play chess?

If the answer is "no", then why do we ask that a computer play chess without an opening book when we use opening books all the time?"

Humans do not "use opening books all the time." We recollect the opening books we once read. Computers are not required to recollect during a game (whatever that would mean). They are free to crack open a library for the first time during play. The proper analogy would be to allow a grandmaster to conceal a never-before-looked-upon library of openings on his person and consult it during play. That looks like cheating. That a grandmaster is not permitted to do this when playing a computer is an artifact of the architectural differences between humans and computing machines (what part of either's "body" constitutes a fair place to conceal one's fresh database), the way computer "identity" has developed during the last 50 years, and the fact that the rules of engagement in human v computer matches contain conceptual vestiges carried over from computer v computer and human v human rules of engagement. The clumsy "memory" metaphors used to describe computer storage and access give the impression that something like human/computer parity is happening.


Kramnik avenges C+K :-)
The man can play dynamic chess!!

As is frequently the case, Kasparov writes about something he knows very little about and reaches absurd conclusions like the nice-sounding, but completely false "Our best minds have gone into financial engineering instead of real engineering, with catastrophic results for both sectors." It's pure BS, devoid of a shred of evidence to support the theory.

Kaspy is a moron away from the chess board. He's lucky to have Mig to put a little writing flair into his utterly ignorant ramblings.

*******************
A short reply to those questioning computers' contributions to "chess innovation": computers have brought more ideas to chess than all GMs combined. The amount of corrections to books like Reuben Fine's Basic Chess Endings is staggering. The number of opening novelties developed and checked by the machines is pretty high.

The only "downside" to computer chess is that the game has been exposed as a simple one of calculation, and much of the "mystique" is gone. Computers have brought truth to the game, and that's something pepole like Kaspy can't tolerate.

ra said: "Is it only me wondering why all computer chess programs work on the basis of search trees and evaluating the resulting position after ply n? And why not on the basis of *plans* in a given position, like humans tend to do?"

No, I don't think it is only you who wonders about such issues. Kasparov seems to be wondering about these issues also.

But computer scientists who work in AI don't wonder about these issues any longer because they know the answer.

The answer is simple. We don't have a way to represent the "plan" based approach for playing chess in a way that can be used to program a computer. A lot of time and resources have been spent on this subject (i.e., making a computer come up with plans and then execute those plans) since the 1960s, but we haven't made much progress.

The so-called brute force techniques, on the other hand, have made remarkable progress because of the increase in compute power and because of the creation of highly sophisticated evaluation functions.

So, it is not that the "plan" based techniques have not been researched, it is simply that they have been beaten to the finish line by the "brute force" techniques.
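As a concrete illustration of the "highly sophisticated evaluation functions" half of that recipe, here is a toy sketch (entirely my own, with an ad-hoc position encoding; it is not any real engine's code). Real evaluation functions combine hundreds of weighted positional terms, but at their core sits something like a material count:

```python
# Conventional centipawn values; real engines tune these empirically
# and add king safety, mobility, pawn structure, and so on.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(position):
    """Material balance in centipawns, positive meaning White is ahead.
    `position` is a list of (color, piece) tuples, e.g. ("w", "Q")."""
    score = 0
    for color, piece in position:
        value = PIECE_VALUES.get(piece, 0)  # kings score 0: never traded
        score += value if color == "w" else -value
    return score

# White is a pawn up; White has a knight against Black's bishop:
pos = [("w", "R"), ("w", "N"), ("w", "P"), ("w", "P"),
       ("b", "R"), ("b", "B"), ("b", "P")]
print(evaluate(pos))  # (500+320+200) - (500+330+100) = 90
```

The "brute force" search then simply asks this function millions of times, once at each leaf of the tree.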

Computer Scientist wrote:

****************

But computer scientists who work in AI don't wonder about these issues any longer because they know the answer.

The answer is simple. We don't have a way to represent the "plan" based approach for playing chess in a way that can be used to program a computer. A lot of time and resources have been spent on this subject (i.e., making a computer come up with plans and then execute those plans) since the 1960s, but we haven't made much progress.

The so-called brute force techniques, on the other hand, have made remarkable progress because of the increase in compute power and because of the creation of highly sophisticated evaluation functions.

So, it is not that the "plan" based techniques have not been researched, it is simply that they have been beaten to the finish line by the "brute force" techniques.

**********

Or to put it more bluntly: because there is no evidence of a better method, including the one used by the human mind.

If the computer plays better chess, it's very difficult to justify not using whatever method led to its superiority.

As simple as that.

Some years back I thought up an idea for reinvigorating computer chess as an academic research field by initiating a new sort of competition: "Limited search computer chess" (LSCC). Computer participants in these tournaments would be restricted in the number of moves or nodes per second they could search: maybe 50/sec. to start, moving down to 5/sec. as the years go by and the participants improve. (Human players search about two nodes/second; current World Chess Champion Anand once claimed five.) Like the current computer soccer tournaments, all code would be published after each tourney so everyone can study their opponents and improve. This would aid academic researchers as well as allow a double-check on the moves/second restriction.
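One way the node restriction in an LSCC tournament might be enforced (a sketch of my own under stated assumptions, not part of Faxon's proposal): thread a counter through the engine's search and cut it off when the per-move allowance is spent.

```python
class NodeBudgetExceeded(Exception):
    """Raised when a search examines more nodes than its allowance."""
    pass

class NodeBudget:
    def __init__(self, max_nodes):
        self.max_nodes = max_nodes
        self.nodes = 0

    def visit(self):
        """Call once per node examined; raises once the budget is spent."""
        self.nodes += 1
        if self.nodes > self.max_nodes:
            raise NodeBudgetExceeded(f"budget of {self.max_nodes} nodes spent")

# Usage: any search loop calls budget.visit() per node it expands.
budget = NodeBudget(max_nodes=50)
try:
    for _ in range(1000):  # stand-in for expanding search nodes
        budget.visit()
except NodeBudgetExceeded:
    pass
print(budget.nodes)
```

Publishing the code afterward, as Faxon suggests, is what would let opponents audit that visit() really is called at every node rather than only at some.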

Calling today's programs "brute-force" is another example of ignorance. It's like calling a dynamic programming algorithm "brute force" and justifying it like this: "well, it needs to compute a lot of stuff before coming up with the solution".

Fair enough: "brute force" is relatively easy to program and guaranteed to bring cumulative results -- as opposed to the complex, math-intensive, not well-defined, speculative, failure-prone plan-based approach (the only nagging rub being that our mind is somehow capable of it). Easy choice, brute force wins. And, in the process we have also learned a lot about AI.

However, now we might be reaching a plateau with the brute force approach: 32 vs 36 ply does not make that much of a difference when reaching the realm of "human strategy", after having conquered "human tactics" (even if there is still lots of innovative potential in the eval function). Thus, totally different approaches might become interesting again, as only they might offer the chance of overcoming the plateau. And, in the process, we might also learn some more about AI. Well, only if we are still interested enough in the problem to be ready to invest enormous [intellectual] resources, with no guarantee of success, let alone market success. That is, if we, and our sponsors, still have the spirit of inquiry, curiosity and exploration.

In the last rounds of the Corus event, Kramnik and Shirov face a few of the same opponents. Today, Shirov was able to draw as Black vs. Ivanchuk, and Kramnik and Ivanchuk will face off tomorrow. Here's the lineup for these two with my result guesses:
Round 10 - Anand-Shirov(1/2); Kramnik-Ivanchuk(1/2)
Round 11 - Shirov-Kramnik(0-1) (Kramnik has, for the last several years, been a very difficult opponent for Shirov)
Round 12 - Karjakin-Shirov(1/2); Anand-Kramnik(1/2)
Round 13 - Shirov-Dominguez(1-0); Kramnik-Karjakin(1/2)
Though I'm hopeful that Shirov can pull it off by going through rounds 10 through 12 undefeated and win in the last round, Kramnik will take 1st place if he defeats Shirov with the black pieces.

I think Shirov is more likely to lose with black against Anand - or even Karjakin. Kramnik has 2 wins with black already, and given how rarely he won with it until recently, chances are he won't get the 3rd win against Shirov. Actually, I think Shirov has greater chances to win than to lose in their head to head game. I think Kramnik's best chance for a win is his remaining white games.

I disagree with those disagreeing with Garry; I think his points are very well taken. I "J'adoube" his comments slightly and say more about "brute force" here: http://www.thechessmind.net/blog/2010/1/22/garry-kasparov-reviews-chess-metaphors-artificial-intelligen.html#comments Otherwise, I basically agree with "ra" and "Computer Scientist" in this thread.

If I recall correctly, it was Bill Hartston who described to me in the early 1980s his efforts at a "Human Chess Knowledge" approach to computer chess---while acknowledging that optimized search might beat him out. The fact that sophisticated algorithmics and chess evaluation functions go into today's engines still leaves GK's basic point intact. The point by "ra" that a depth plateau might bring "HCK" back is interesting, but Vas Rajlich posted before Rybka 3 that extra ply were still bringing him almost-undiminished returns.

One problem with Walter Faxon's idea is, how do you verify nodes/sec. rates? Rybka famously uses a different system from Fritz---and is alleged to "obfuscate" both that and its search depth anyway. One can, however, already run one's own time-limited and depth-limited matches between engines in various chess GUIs.


In short, Irv: "Plenus stercoris es."

I feel strongly compelled to voice a strenuous objection to your characterization of Garry Kimovich away from the board. You are exposing your ignorance with all the subtlety of a drunken sports fan mooning an ESPN camera.

If you'd listened to Garry Kimovich's lectures or read his written material as I have since 1983, you'd have a different opinion, assuming you are a sentient being. I own and have read almost every single book or article written or "ghost-written" by him since that time, and while it's obvious that the writing "style" has changed with different assistance, the essence of his expressions, thoughts and ideas has only matured as he has matured. I feel more than just qualified to express this opinion as I've been a Managing Technical Editor/Writer for the past 15 years.

Go troll your drivel somewhere else!

Hmm, but it's not the style he's criticising, it's the content. One can have a beautiful prose style and also talk utter crap.

Ben,

Your response completely misses the point because you are thinking about this from an over-the-board perspective, while I'm looking at this from a quality-of-output perspective. Human beings generally make high-quality moves through their study of similar positions. Human beings don't make quality opening moves by some intuitive magic. You can't argue that a computer should be able to generate opening moves unless you have a good model for evaluating opening moves and the positions that result from them. Therefore, it is fair game to allow the computer to memorize a sequence of opening moves, which is what many human beings do. And computers now know how to make reasonable moves when they are not in their opening database, though these moves might be errors in the absolute sense.

The simplest way to show that human beings rely heavily on their pattern recognition to generate good opening moves is to see what happens when it is taken away from them (Fischer Random). There, chess computers like Rybka slaughter human beings for snacks. If the situation were as you described it, why aren't the computers worse off when they don't have to deal with opening theory?

A publicity-seeking former chess champion and a talented ghost writer, what a nice combination! Who is to blame them?! They have families to feed and the public appears to like their stories. "Panem et circenses" noyb!

noyb writes:

"I feel more than just qualified to express this opinion as I've been a Managing Technical Editor/Writer for the past 15 years."

I'm sure your "qualifications" are as legit as Kasparov's "theories" on history and subjects other than chess :-)

Calling today's programs "brute-force" is another example of ignorance. It's like calling a dynamic programming algorithm "brute force" and justifying it like this: "well, it needs to compute a lot of stuff before coming up with the solution".
---------------------------------------------

Exactly. Human beings just look at the board and magically come up with great moves, while computers use brute force. Such a compelling story if this dichotomy were true, but why does everyone dismiss the point that the brain is more complicated than just about any computer chip currently known?

Does Kasparov even know that Rybka has been surpassed by now?
The main reason the new breed beats Fritz is that they don't use alpha-beta in textbook style but prune everywhere and extend everywhere else. It would be almost right to call it Shannon Type B, though the term has no real content anymore.
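For what it's worth, the distinction being argued over here can be made concrete. Even the plain, textbook alpha-beta that these comments contrast with modern selective search is already far from exhaustive "brute force": it provably skips branches that cannot affect the result. Here is a minimal negamax alpha-beta sketch over a toy game tree (nested lists with integer leaves scored from the side to move's viewpoint); this is a generic illustration of the algorithm, not code from any actual engine:

```python
def alphabeta(node, alpha, beta):
    """Negamax alpha-beta over a game tree given as nested lists.
    Integer leaves are scores from the perspective of the side to move."""
    if isinstance(node, int):
        return node
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        if score >= beta:
            return beta          # cutoff: this branch is already refuted
        alpha = max(alpha, score)
    return alpha

# In this tree the 999 leaf is never even evaluated: the second branch
# is cut off as soon as the leaf 2 refutes it.
print(alphabeta([[3, 12], [2, 999]], float('-inf'), float('inf')))  # prints 3
```

Selective engines go further by pruning lines that are merely *unlikely* to matter and extending lines that look critical, which is why calling either approach "brute force" says very little.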

Larry Kaufman said he thought ply depths were closer to multiplicative than additive, so 32 ply versus 26 ply was about the same advantage as 16 ply versus 13 ply, at least to a first approximation. Is this called logarithmic?
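One way to read that claim: if only the *ratio* of depths matters, then strength is a function of log(depth), since log(a) - log(b) = log(a/b). A quick arithmetic check that Kaufman's two example pairs really do have the same ratio (pure arithmetic, no engine data involved):

```python
# If strength ~ log(depth), the edge of depth a over depth b depends
# only on a/b. Kaufman's pairs have identical ratios:
ratio_deep = 32 / 26     # ≈ 1.2308
ratio_shallow = 16 / 13  # ≈ 1.2308 (in fact exactly equal: 32*13 == 16*26)
print(ratio_deep == ratio_shallow)  # prints True
```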

The current debate is that methods pioneered by Rybka and now used elsewhere tend to avoid disastrous moves by looking more deeply into the primary variation, at the cost of sometimes missing small improving moves. The effect is that they rarely change their mind.

A very insightful and well-written piece!
I wish Garry had elaborated a bit more on the "intelligent use of resources" for lack of a better term. I'm referring to the chessbase freestyle tournament comments (e.g., "Weak human + machine + better process was superior to ...".)

I just want the computers to turn off the opening and endgame books and use their powers of calculation from move one to the end. Humans limit themselves by not looking at databases during the game, yet computers are allowed to do it. If this one principle were applied, the humans would still be winning. The rules of chess are bent to allow computers to access opening and ending databases; either allow humans the same or stop using them during the game. Thanks, Jimbo

@ Laj

Thank you for your clarification. It's still hard for me to tell if we're talking about the same thing, because the point *I* made was about the use of the memory metaphor when setting up Dennett-like rhetorical questions, and the point you made *used* the memory metaphor. If we unpack that awkward metaphor, we see that part of what constitutes this "memorizing" is the contingent ability of computers to hoard enormous libraries of virgin data in their physical structure - or even enclosed in a different physical structure, provided they're connected by a cable (or maybe wireless?). This may be irrelevant to your point. We may be talking past each other, but I am not talking past Dennett. Honestly, when I wrote my post I had Dennett (and his tradition of writing on this topic) in mind. I didn't even think about disagreeing with you in particular. Both your posts were smart and insightful.


I agree with GK that the most interesting goal in chess AI is
"to develop a program that play[s] chess by thinking like a human, perhaps even by learning the game as a human does. Surely this would be a far more fruitful avenue of investigation than creating, as we are doing, ever-faster algorithms to run on ever-faster hardware.".

But now of course, a human player using computer resources while playing must be simulated too. I wonder whether this is more or less complicated than trying to model the resourceless player.

HS

On the day of Apple's iTablet (-: ? :-)

Finally read the full article, very nice. Whether or not you agree with some of the points he makes, it's well expressed, and has some pertinent points in my view, especially with regard to risk and innovation and the lack thereof. It could have been even better, I think, with a more technical articulation of the sentiments contained in the paragraph quoted above by Mig, but excellent reading! Go Garry and Mig!

The problem is well defined: given a position, give me the best possible move. Engines solve this to a very good extent. Now let's whine that the solution does not bear close resemblance to human chess.

Engines are stronger, that's the reason.

Top Chess,

It's likely inverse logarithmic or exponential (or maybe factorial).

Ben,

Thanks for the kind words. In this context, I think the importance of memory is more that we have done our homework in advance, and we are carrying that homework into the game. Our limitations in acquiring and storing that information are, to me, just as relevant as saying that a car doesn't move heavy loads the same way I do. When many people discuss the inability of computers to generate opening moves without a book, or the fact that they do not generate the best moves without a book, they often downplay the trial and error, and the history of chess, that generated the best opening moves. Since a human being can acquire that knowledge for the most part by reading a book, and a computer can't, I think that part of the problem is often unfairly emphasized. This was part of what I remembered Dennett as stressing, though I do think Dennett often uses analogies and verbose prose in a way that sometimes obfuscates his points.

Forgive me for what I am about to write next if you are a real computer/cognitive scientist.

We can think of memory as a function that says: in this specific situation, which we recognize by comparison to internal states, take this specific action to bring about this specific goal state, which we also recognize by comparison. If we can't remember enough to harness enough states and the relations between them, that's a limitation. But the real issue is: how did we generate the information that we memorize? Did we generate it by some process that computers cannot mimic?
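That description of memory can be sketched as a lookup table with a fallback: if the current state matches a memorized one, play the stored action; otherwise generate one by calculation. Everything here (the book entries, the state labels, the fallback) is purely illustrative and not taken from any real engine or cognitive model:

```python
# A memorized "book" maps recognized states to actions; anything
# outside it falls back to on-the-spot calculation.
book = {"start": "e4", "after 1.e4 e5": "Nf3"}  # illustrative entries

def choose(state, search):
    """Return the memorized action if the state is recognized,
    otherwise compute a move with the supplied search function."""
    if state in book:      # recognized by comparison to stored states
        return book[state]
    return search(state)   # generate a move the hard way

print(choose("start", lambda s: "??"))            # prints e4 (memorized)
print(choose("novel position", lambda s: "Nc3"))  # prints Nc3 (computed)
```

The interesting question the comment raises is then about how the `book` dictionary got filled in the first place, not about the lookup itself.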

Feeble, Irv, feeble.

I am hoping that the humans will get one more chance against the computers. Not so long ago the computers were given all sorts of advantages because they weren’t very good. Now it’s time to return the favor to us humans. I’d like to see the top five humans each play two games, as part of a team, against the computer and may the best moves win. A guy can dream, can’t he?

FYI Andy:

Posted by Larry Kaufman (who worked on Rybka 3) on CCC forum.

QUOTE
"
I hosted something like eight or nine handicap matches between various Rybka versions (on a quad or octal) and Grandmasters at tournament or near-tournament time controls. We beat Ehlvest 5.5-2.5 giving each of the eight white pawns in turn, we beat him 4.5-1.5 giving him all White pieces + double time odds + 3 move limitation on Rybka book + no EGTB, we beat Benjamin 4.5-3.5 giving the eight pawns in turn with alternating colors, we beat Benjamin 6-2 giving draw and White odds in each game, we split 4-4 with Dzindzichashvili giving each of the eight pawns plus the White pieces, we beat Perelshteyn 1.5-0.5 giving 10 to 1 time ratio handicap plus White pieces. Against IM Eugene Meyer we won 3.5-0.5 giving pawn (f7) and two move handicap (!), against FM John Meyer we won 3-1 giving pawn (f7) and three moves handicap, but we lost to him 0-4 giving knight odds. Finally came the big match with (then) 2705 FIDE rated GM Vadim Milov. The terms were that in two games he got only the advantage of the White pieces, in two games he got the traditional pawn and move (f7), and in the other four he got the Exchange (remove a1 and b8). He lost 0.5-1.5 in the White games, he won by the same margin at pawn and move, and he scored one win and three draws at Exchange odds to win by the narrowest possible margin.
Conclusion: In a match today at a FIDE-ratable time control between a 2700 level GM (about the strongest we can expect to get to play such a match for a modest fee) and Rybka 3 or any comparable strength program, I would bet on the program at pawn handicap or even pawn and move except for the traditional f7 (or perhaps g7) handicap, which is more like 1.5 pawns and is too easily analyzed to a huge advantage. The most suitable handicap is the Exchange (a1 for b8). The Milov match showed though that it is tough for the computer because not only is it down the Exchange, but only Black can castle long, which is often advantageous with the b8 knight missing. So for a truly fair match, I would stipulate that Black cannot castle long (since White can't). Either this or the c7 pawn handicap (human getting White) are the most balanced and interesting handicaps for such a match in my opinion. There are other possibilities, such as knight odds vs. f7 odds for example, but the Exchange or the c7 handicap are the "cleanest".
"

What's coming next? GK and Mig on Sanskrit poetry?

Everything has pretty much been said already by playjunior and others.

It's one thing to set new goals which are quantifiable from a chess standpoint in terms of playing strength, but to complain about engines not being designed "correctly" (without any real grasp of whether the proposed design philosophy would actually produce better chess in the long run) seems misguided. This is about as disingenuous as saying engines don't play correctly, using human play as a frame of reference.

If Kasparov truly believes this, surely he has the means to hire a couple of competent engineers/programmers to attempt such a design with his help, put Rybka to shame, and turn a profit. I'd be very surprised if someone hadn't taken a serious crack at a neural-network-based approach already. It's simply not obvious that this is the superior approach just because the problem is complex.

Actually, if we consider that, for certain pattern recognition tasks (such as character recognition) in which NNs have already delivered good results, computer performance still lags human performance despite many years of effort, it may even be tempting to conclude that this isn't the way to go for chess (engines having already surpassed the top GMs).

As for the best and brightest choosing finance over engineering - citations needed.

15 years ago, if there was a tournament with the top 8 players in the world and the commentator was Yasser Seirawan, we listened carefully to his opinions and cared about them. Today many fans listen to their engines more than to top GMs. Back then, an engine had never beaten a reigning World Champion at classical time controls, so the top GMs had more perceived credibility. Those were the days.


    About this Entry

    This page contains a single entry by Mig published on January 25, 2010 10:46 PM.
