Mig Greengard's ChessNinja.com

Melody Amber 2006


The annual blindfold and rapid spectacular is underway in Monaco. The official site has live games and fine round reports by Dirk Jan ten Geuzendam, the moonlighting editor of New In Chess. The player section (under "info") also has some nice tidbits. Grischuk's apparently serious interest in poker, for one. He played in a big poker tournament in Monaco a week before Amber started.

In the second round Linares winner Aronian beat Topalov in the blindfold after starting with 1.f4 and getting a lousy position. The blindfold games tend to illustrate how harmonious normal GM chess is by usually looking very disjointed. Of course the games aren't much to look at later without adding "for a blindfold game," with the occasional notable exception. Kramnik's immortal blindfold game against Topalov in 2003 is the only non-blunder game that comes to mind. Speaking of Kramnik, he withdrew to continue recovering from his arthritis treatment, according to TWIC. He's replaced by Amber first-timer Grischuk.

64 Comments

How are the blindfold games played?

I looked at the pix on the official site and see players at tables with their name plaques, sitting at laptops. Does this have something to do with the blindfold games?

Yes, they look at an empty board, and click on the squares where the pieces are supposed to be to move.

Yes, John. The players see the opponent's most recent move (only) on their computer screen.

One of the advantages of using the computer system is that illegal moves cannot be played. This was always a problem in noncomputerized blindfold games, where the arbiter had to watch for and prevent illegal moves.

duif

This may represent something of a change in topic, but hopefully it will be of interest. There are some intriguing analogies between programming chess algorithms and problems in econometrics or operations research. This is written, by the way, from the standpoint of an operations researcher who used to play chess many years ago.

The conventional wisdom about chess-playing programs is that they rely on brute force computation, and are less able to strategize. This is often described as the “horizon problem”, i.e., the algorithm simply does not look far enough into the future. But this may actually be a failure of the algorithm to create a long-term strategy. It is comparable to the problem of identifying long-term signals in datasets that are embedded in short-term noise. The problem is usually called “over-fitting the noise”.

Analysis of many time series – weather, stock prices, server usage, etc. – indicates that they consist of a series of signals with different characteristic properties. Some are cycles at different frequencies (seven days, 12 months), while others are better described as nonlinear trends. Further, the short-term noise is not truly random. Rather, the embedding noise often exhibits complex patterns. Analogously, one can think of a chess game as consisting of an underlying strategy embedded in noise – moves that are not directly related to the strategy, or attempts by the opponent to disrupt the strategy by introducing complications.

A good predictive algorithm has to be able to distinguish between the noise and the signals, and to return to the path implied by the signals even after a period in which the data is subject to massive perturbations. Many chess playing programs do not seem able to do this. In several of the Kasparov versus Deep Blue games, the program either lost sight of its long-term strategy or failed to develop one, and simply made moves that created tactical threats.

Chess programs are not the only algorithms that are vulnerable to this type of problem. Many highly sophisticated forecasting algorithms – neural networks and state-space models – are extremely vulnerable to over-fitting noise. It is partly for this reason that a good approach to time series analysis is to estimate several equations simultaneously, and then combine them. This often involves a set of equations that define the underlying signals, and additional equations to capture the noise, which is assumed dissipative, so that the forecast eventually reverts to the signal.

Can this be done with chess programs? In other words, would it be possible to run two parallel algorithms, one concerned with long-term strategy, the other with short-term tactics, and then combine the output of the two, using a third algorithm that assigns relative weights to the long and short term? If this could be done successfully, the computer would be able to strategize more effectively.
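To make the proposal concrete, here is a minimal sketch in Python. Everything in it – the Position fields, the two evaluators, the phase-based weighting – is an invented illustration of the idea, not a description of any real engine:

    from dataclasses import dataclass

    @dataclass
    class Position:
        material_balance: float   # pawns up/down; + favors White
        king_safety: float        # static judgment; + favors White
        pawn_structure: float     # static judgment; + favors White
        pieces_remaining: int     # crude "game phase" signal (max 32)

    def tactical_eval(pos):
        # Short-term component. In a real engine this would be a deep
        # search; here it collapses to the material balance.
        return pos.material_balance

    def strategic_eval(pos):
        # Long-term component: slow-moving positional signals.
        return 0.6 * pos.king_safety + 0.4 * pos.pawn_structure

    def combined_eval(pos):
        # The "third algorithm": weight the two components by game phase.
        # With lots of material on, long-term strategy gets more weight;
        # as pieces come off, concrete calculation dominates.
        w = pos.pieces_remaining / 32.0
        return (1 - w) * tactical_eval(pos) + w * strategic_eval(pos)

    middlegame = Position(material_balance=-0.5, king_safety=1.2,
                          pawn_structure=0.8, pieces_remaining=28)
    print(round(combined_eval(middlegame), 2))   # 0.85: strategy outweighs the pawn

The weighting rule here is deliberately simple; the interesting research question is what the third algorithm should actually condition on.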

- Dave

How often does it happen that GMs attempt to make illegal moves in blindfold games?

Regards

Probably it is not often that a GM will make an illegal move in a blindfold game. Most GMs have studied so much chess they can tell you what color a certain square is without ever looking at the board; just name a square, e4 for example. I find it interesting that blindfold play was deemed a health hazard in the former Soviet Union in the 1930s and was forbidden. I would like to know the basis for this. Just as a highly rated player can play many games at once in a simul, the same is true of blindfold chess. I have read, and have also been told, that solving tactical problems without the benefit of the board improves visualization. I must agree with this because I have experienced it first hand.

@ Francisco:
I do not know how often this happens, but I remember Karpov once lost a blindfold game on time while trying to capture with a pawn on g4: he chose the wrong starting square (f3 or h3, I can't remember; the pawn was on the other one).

I wonder if the ability to play blindfold is indicative of one's chess-playing potential? I've seen Class A and Expert rated players play a whole game blindfolded; on the other hand, I know Expert rated players who cannot play (a whole game) blindfolded. Are there any master-strength chess players who cannot play (a whole game) blindfolded? (Would they admit it?)

I think I could play a whole game blindfolded but never actually have, mostly for never having the cause or inclination. I think the ability to play blindfold, like anything else, is an acquired skill, but once you reach master level your ability to visualize positions in advance (although still normally quite poor compared to higher beings) gives you the ability to play blindfold automatically. But I could be wrong.

It's not on topic, apologies, but the word is that the recent Chessbase freestyle tournament was won not by a GM-computer team, but by a computer without human help. The argument has been made that GMs with computers are far superior to either alone. Now that has been disproven. Could one say that, for all practical purposes, computers have now solved chess?

PD:

I don't think computers have "solved" chess, but it's clear that they are every bit as good as the top players, and world champions losing matches to computers is no longer considered "shocking news" but is kind of expected.

When you hear some players (our own Mig is one of them) claim that "computers don't understand the game" or that "computers can't think," they are just trying to deny the obvious: computers play every bit as well as any human.

How can computers play so strongly without the capability to "think" or without truly "understanding" the game? Simple: chess is a game of calculation, so computers will naturally excel at it. That also explains why a 2100-rated player can write 2800-rated chess software.

Anyone who says microwave ovens don't understand how to make enchiladas is just trying to deny that microwave ovens make enchiladas every bit as well as any human.

You are confusing understanding with playing and sporting results. Their results against humans are obvious and no one denies their practical strength. But for analysis and for teaching humans to play better they have a long, long way to go. As long as a computer prefers a hideously complicated tactical line that hangs on a thread because it's 0.12 better than a clear advantage with minimal risk, computers won't be good teachers. As long as they can't figure out blockades we can't speak of understanding. Human chess takes into account human limitations and common sense.

Anyone who claims that computers understand the game denies the nature of the game as it is played by humans. Everyone knows computers can beat humans because humans can't play an entire game without making a mistake. That the game can be reduced to calculation by computers doesn't mean humans can do the same. As played by computers, chess is simply a very different game.

A 1000-rated player can write 2500-rated software. It's an algorithm.

Regarding Chessbase's "Freestyle" tournament, note that last year's winner was a team of two unknowns from New Hampshire (rated 1300 and 1600, I think), using only off-the-shelf engines (Fritz, Shredder and two others) running simultaneously on 3 or 4 desktop PCs. They beat several teams that included not only GMs and IMs, but may also have had specially tweaked engines. They also beat Hydra.

The lesson I drew from this is that the human contribution to chess skill still matters (as I put it in a lecture I gave), "even at the seemingly super-human level reached by the most powerful computers, running the most advanced chess software."

That is, it appears that this dark-horse team won last year's event because they were better able to integrate the human strengths (the ability to "prune the tree" of variations, by means of intuition, judgment, guidance) with the computers' computational powers.

It's counter-intuitive that such human qualities could be demonstrated by a D player and a B player, but I'd guess this pair was at least somewhat underrated, as well as having worked diligently for months at figuring out how to smoothly coordinate their own efforts with those of their engines; the sort of preparation that the teams with titled players probably did not do.

I haven't looked at this year's Freestyle results yet.

An interesting sidelight: Last spring, shortly before the 2005 initial Freestyle tourney, I stumbled upon a chess message board with a thread devoted to the Dragon Sicilian. Those posting on it were mostly Dragon devotees, who as I'm sure you know tend to be quite passionate about their pet opening.

Someone posted a comment like, "Soltis Variation loses." (That's the Yugoslav Attack sub-variation where Black answers White's h2-h4 with ...h7-h5; pioneered in the late 1970s, it is still considered a main line today, perhaps even THE main line of the Yugoslav Attack.)

Others on the board started flaming this guy -- demanding that he post some concrete analysis to back up such a seemingly outrageous claim. He refused, saying he was keeping his analysis private until he could use it in practice -- but he kept insisting he had busted the ...h5 variation.

The flaming intensified when other posters asked this guy a few questions about himself and he replied that he was from New Hampshire ("what country is that in?", one of his critics posted) and he was a teenager with a 1600 rating.

He never did post any analysis. But, as it turned out, the joke was on his critics.

I remembered his handle from that thread (ZackS) and the New Hampshire connection. A month or so later, I fell out of my chair when I read the handle and identity of the dark horse winners of the First PAL Freestyle -- ZackS, from New Hampshire, comprising a B player and a D player, with no other human help. (There was speculation, in writing no less, that Kasparov was secretly on their team, but this was denied by Garry.)

In the post-tournament interview the ZackS players even mentioned being flamed by Dragon enthusiasts on the thread I had seen. They said no one played a Dragon against them in the Freestyle tournament, so their "bust" to the Soltis remains secret.

Anyway, the long and short of the above is that the hierarchy seems to be: 1) the top computers can clearly defeat the top humans, lately even at human-friendly, slow time controls; but 2) a smooth cooperation between man and machine -- what sci-fi writers used to call a CYBORG -- can beat either a top machine on its own, or a top human on his or her own.

This also explains why Kasparov was logically correct when he asserted (after losing his match with Deep Blue) that the IBM team would have obtained an unfair advantage if the team had ever substituted a move chosen by a human for the computer's first move choice.

I'm not saying Garry was right to make the accusation. It was both politically idiotic (it permanently cost the chess world, and Kasparov himself, any further financial support from IBM), and, as I understand it, proved factually wrong as well. (That is, after much prodding, the IBM team eventually did release Deep Blue's "logs" which Kasparov had demanded during the match, and those logs showed there had been no human intervention.)

I'm just saying it makes perfect sense, to someone with a modicum of understanding about chess and computers, to assert that human intervention, if it did occur, could have given Deep Blue an unfair advantage.

I make that point because many people at the time (mostly the lay public, i.e., non-chess people) loudly proclaimed Kasparov's accusation was loony, because, "He's the best player in the world, so how could any other human improve the computer's play?" Such people lacked understanding of chess, computers, or both.

It will be interesting to see how Kramnik fares in the next man vs. machine match, obviously if he's recovered. If he has overcome his illness he would be my first choice to bat for Homo sapiens sapiens.

Who's talking about computers being or not being "good teachers"?

As for analysis, computers are used by virtually every serious player, so their "analysis" is very good, indeed.

The fact is very clear: computers have a different "understanding" of the game. Their "understanding" is not similar to the average "human understanding", but it is certainly not inferior, as the results have shown.

Perhaps it's time we begin to accept the obvious (GM Soltis said it way before computers were even a factor): chess is 99.99% calculation.

BTW, Mig: your oven-enchilada "analogy" is full of holes, for the simple reason that ovens just do a simple thing, while computers calculate (the essence of chess) and judge what the best course of action is in an ever-changing situation (the position on the board).

I grant you this: computers don't look at the game the way we do. That's natural. But computers DO understand the game in their own way. After all, chess is 99.99% calculation. Not much else is needed. That's why those "brainless machines" are better than humans. They calculate better. If chess required more complex intelligence, they would not be so strong at it. Perhaps we are the ones approaching the problem the wrong way: we think that chess has elements that are not there, like creativity, artistry, inspiration, etc. Perhaps chess is just a complex and beautiful little game of calculation and not much intelligence is needed.

tgg,

Mig is right on this one. Inter alia, the nature of Mig's "enchilada" comment also indicates to me that he understands the game at a 2200+ level -- contrary to a series of emails that a Mig-basher sent me after I posted my own email address here last week.

Chess may indeed be all calculation, but computer results to date -- as strong as they are -- are not yet sufficient to prove that proposition in its strongest form.

To see how I can say this in the face of the Hydra-Adams match and the annual humans-vs-comps match-tournaments (I forget what that event is called, but 3 comps play a multiple-round robin against 3 or 4 FIDE ex-world Champs, and the comps always score 70-75%), go back and read my lengthy comment, above.

Thus far, based on last year's PAL Freestyle tournament (where Hydra didn't even make it to the Quarter-Finals!) and correspondence play (where comp-only entrants -- who are derisively called "postmen" by other correspondence players because all they do is post the moves their engines pick -- are routinely thrashed by comp-assisted humans), it appears that as far as competitive results are concerned, the strongest chess is produced by a human working in close cooperation with one or more engines, helping to productively direct the engine's awesome calculating power along the most productive branches of the analysis tree.

As another proof that even the best engines are far from perfect, I suggest you familiarize yourself with the work of Dennis Monokroussos (that would be a good idea in any event if you want to improve your chess; advice which holds true even if you are as high as 2400 strength).

In both his blog, and especially, his free online lectures every Monday night on playchess.com (Chessbase's server), Dennis constantly demonstrates the value of trying to improve on the engines' analytical output -- even, and especially, in LONG-WINDED, HIGHLY TACTICAL VARIATIONS.

A typical Monokroussos lecture consists of in-depth analysis of a single important game between world-class opponents, most often played before 1980. He will mention analysis from every important commentator who previously wrote about the particular game, and also will usually mention computer-recommended moves at various points in the game. But he also gives his own analysis (always taking care to identify which lines came from which previous commentator or engine, and which are his own work) ... and you always see at least one, most often more than one, position where Dennis was able to improve on the move or variation that Fritz or another engine recommended.

What was the nature of those emails, Jon Jacobs, if we can know?

The evidence from many fields (including but not limited to chess) supports the view that some combination of human judgment and computer-driven analysis is superior to either one individually. Also, while one can debate the relative shares of strategy and tactics in chess, the share of calculation is probably much less than 99.9 percent. The games analyzed in this website's newsletters exemplify this – there are some examples of Polgar using unusual maneuvers to exploit dark square weaknesses, and various types of thematic play in Maroczy-bind positions.

I had placed an earlier post on this website in the hope that it might reach some software writers. The basic idea is to write software that uses two (or more) algorithms in parallel. Approaches of this type are becoming increasingly popular in other fields. For instance, there is a long-standing debate in econometrics between advocates of theoretical models and advocates of atheoretical time series methods. Generally, approaches combining both methods will outperform either one alone.

To return to chess, one algorithm would be predominantly strategic, the other purely tactical – and most existing chess algorithms fall into the latter category. Programs that identify thematic strategies in particular positions could in principle be written, and used to select a line of play. Once the selection is made, a second algorithm selects the tactically best move. If a program of this type could be developed, it would probably be able to defeat existing programs.
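One way to read "select a line of play, then select the tactically best move" is as a two-stage filter. The sketch below is purely hypothetical; the plans, move tags, and scores are invented for illustration:

    def choose_plan(position_features):
        # Strategic algorithm: map positional themes to a plan.
        if position_features.get("opposite_castling") and position_features.get("closed_center"):
            return "pawn_storm"
        if position_features.get("minority_attack_structure"):
            return "minority_attack"
        return "improve_worst_piece"

    def choose_move(candidates, plan):
        # Tactical algorithm: among moves tagged as serving the chosen
        # plan, take the one with the best search score; fall back to
        # the overall best move if no candidate serves the plan.
        serving = [m for m in candidates if plan in m["plans"]]
        pool = serving or candidates
        return max(pool, key=lambda m: m["score"])["move"]

    candidates = [
        {"move": "g4",  "score": 0.35, "plans": ["pawn_storm"]},
        {"move": "Nd5", "score": 0.41, "plans": ["improve_worst_piece"]},
        {"move": "h4",  "score": 0.33, "plans": ["pawn_storm"]},
    ]
    plan = choose_plan({"opposite_castling": True, "closed_center": True})
    print(plan, choose_move(candidates, plan))   # pawn_storm g4

Note that the strategic pass here overrides a slightly higher raw score (Nd5) in favor of a move that serves the plan; whether that trade-off wins games is exactly the empirical question.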

It is definitely not that hard for any decent player to improve on computer-only analysis. Any player who doesn't agree is either a) pretty weak or b) someone who has simply not spent enough time working with computers. The more tactical the position, the less important human input is -- although positions that call for "real sacrifices" are still notable exceptions. Just because a computer may eventually beat the player in a one-on-one game does not mean that the computer saw every line and every idea better than the human. It simply means that the computer was better at what is invariably most important in chess: accurate calculation of variations.

tgg, I don't want to get into details of someone else's attacks on Mig made privately to me, but they concerned his public rating data you can look up on the USCF MSA. (My view is that among Mig's 4 tournaments reported there, 2 can be disregarded as obsolete, 1 of the remaining ones is consistent with how a 2200+ player would perform, and the other is much weaker. Only someone with a personal axe to grind -- which I believe the emailer had -- would attack a person's claim to be a master, with nothing more to go on than a single bad tournament.)

Dave50, your idea of using dual algorithms (one tactical, one strategic) sounds interesting. Recent programs are incorporating positional themes such as long-range attacks – the pawn storm against a castled king, for instance. But they don't seem to be consciously employing a dual-algorithm approach.

Despite engines' tactical sophistication, the thematic plan of advancing pawns against an enemy's castled position isn't the sort of plan they are able to generate spontaneously. The reason is that the kingside pawn storm often doesn't produce any identifiable tactical benefits (in the form of open files or diagonals for attack) until beyond the program's horizon ... while the potential pawn weaknesses in its own camp that arise from storming its pawns are visible to the program right away.

I've read that programmers are getting around this problem by flagging certain features of positions (castled on opposite wings plus a closed or stable center), and then awarding extra evaluation points for subsequent positions where one side's pawns are coming closer to the opposing king. With that tweak to its evaluation function, a program is more likely to start launching its K-side pawns up the side of the board where the enemy king lives.
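A toy version of that kind of tweak might look like the following; the bonus values and preconditions are guesses for the sake of illustration, not anyone's actual engine code:

    def storm_bonus(storming_pawn_ranks, opposite_castling, stable_center):
        # Award evaluation points (in centipawns) for pawns advancing
        # toward the enemy king, but only when the flagged preconditions
        # hold. storming_pawn_ranks: ranks (2-7) of the pawns on the
        # wing where the enemy king lives, from the stormer's side.
        if not (opposite_castling and stable_center):
            return 0
        # The further a storming pawn has advanced, the bigger the bonus.
        return sum((rank - 2) * 8 for rank in storming_pawn_ranks)

    # White castled queenside, Black kingside, center locked:
    print(storm_bonus([4, 5, 5], True, True))    # 64: incentive to keep pushing
    print(storm_bonus([4, 5, 5], False, True))   # 0: no storm credit otherwise

The point of the precondition flags is exactly what's described above: without them, the engine would see only the self-inflicted weaknesses and never start the storm.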

Another example: In my July 2005 Chess Life article on "The Trojan Horse Draw Offer," I gave a position from one of my games where, when I analyzed it with Fritz, the computer recommended the "minority attack" right away, and then proceeded to execute that long-range plan to near perfection. I'm convinced the only way the program could have done that is that the programmers must have built in a specific feature or subroutine that identifies the typical "minority attack" type of position (a-b-d pawns vs a-b-c-d with the enemy d-pawn sitting on its 4th rank), along with specific instructions about how to execute it.

So, my impression is that, on top of the long-standard incorporation of numerous narrowly-defined strategic features having thematic implications -- such as backward pawns, passed pawns, outposts, etc. -- into engines' evaluation functions, a small but growing number of more complex strategic themes are now getting built in as well. However, I assume this is being done on an ad-hoc basis, by adding these new strategic subroutines to the main engine code -- without the kind of conscious, dual-algorithm approach you refer to.

Since I have zero programming knowledge, I hesitate to say that the approach you recommend sounds promising; but that is my (admittedly uneducated) gut feeling.

Now I know what you mean, Jacobs. Yes, Mig's rating has been a topic of much debate in this and other blogs.

Pretty boring stuff...considering that, contrary to your claim, Mig has never claimed to be a master.

Yawn, Mig's USCF rating. Let's see, the first two events of my life when I was in college. Then two events after a six-year layoff. I played seriously from 1992-1997 and reached a 2300 rating in Argentina with over 100 classical and 300 rapid rated games. That this didn't happen in the US seems to really be a big deal to some people. People play chess in other countries too. Even Americans. Or it's just a number they can find online to bash me with. Note this never comes up in the context of the quality of my analysis, only generic bashing. I always suggest that if this is the logic, my Chilean rating should take precedence. Four rapid games in 1996, over 2400 rating! And now, back to the real world.

Chess is no more complicated for a computer than an enchilada is for a microwave. But it was intended to be an ad absurdum argument, sorry if the humor was too complex. Chess programming is still quite interesting at the top end, but a completely idiotic program, relatively speaking, can beat 99.99 percent of humans. CPUs are simply that fast now. Fritz programmer Frans Morsch wrote a good chess program using less memory than the data on a single piece of paper.

Don't confuse method with results. Too many people worship computers because they beat Grandmasters. They watch live GM games with Fritz and accuse GMs of blunders when the eval changes by 0.50. This is a fundamental disconnect between the game humans play and a fantasy of objective truth supposedly represented by the computer. Of course they come up with great moves sometimes, and are essential tools, but they also suggest entirely ridiculous things that no sane human would, or should, play.

The effect on fans is depressing. It has also become a refuge for people to drag down the titans of the game. Chess is fantastically difficult but with Fritz some 1400 can pretend he's a genius while watching, or commenting here. It makes me ill to see a wonderfully complex game and watch beginners say Black must play such and such or that X move was wrong. I'm sorry, but chess is an elitist game. I've spent hundreds of hours working with top players and their level is simply breathtaking. We should strive to understand the game and how to make better decisions. It's more instructive and also more entertaining.

Many of today's fans have never watched a live GM game without a computer running. I'm only 36, but I have very fond memories of watching Kasparov's matches with Short and Anand at the chess club in Buenos Aires as the moves came in. A room full of GMs and IMs and dozens of club members watched and discussed each move. Hugely entertaining and instructive. Know-nothings today would sacrifice that for the occasional tactical improvement and a false sense of omniscience.

This reminds me of watching the Topalov-Vallejo game from Linares last month at Playchess. On move 41 everyone was saying that 41..e3 was "just winning" and showing all these nice tactical lines. All I could think was how I would much rather find a nice safe advantage and make sure I didn't get mated! There were several dangerous ideas for White in that line, although the tactics themselves weren't the most difficult. I said I would just play 41..Kc7 to avoid tricks with d6 and get my king to safety. After 15 minutes of thought, Vallejo played ..Kc7 and everyone was up in arms about how he was going to blow the win, etc. (..e3 is -5.43 says Fritz, while ..Kc7 is -2.44) Even a few decent players were critical, as if a 2650 player spending 15 minutes had been oblivious! I don't doubt a Shirov or Kasparov would have played ..e3, mind you, or that it wasn't a better move on some level. But ..Kc7 just made more sense in many ways.

Even very strong young players today talk about how addictive it can be and how easy it is to be dependent on the computer. Kasparov himself has talked about this, about the importance of not letting the computer crush your spirit and analytical edge. For amateurs the computer puts GM chess somewhat closer in reach. This is a good thing to a point. But substituting calculative perfection for human understanding and explanation is a poor swap.

Don't get me wrong, comps are vital tools that have increased our understanding of the game in several ways. I certainly wouldn't send out a newsletter without checking everything with Fritz. But for comments to really be useful I make myself do all my initial comments without it. Occasionally I have to amend remarks thanks to a comp refutation or suggestion, and I often mention that too.

Know-nothings would also believe the greatest instructional books, those by Alekhine and Botvinnik, say, are inferior because they contain mistakes shown by computers. As if the logic and instruction they impart is in any way affected. Horrible.

Good points, Mig.

I agree with virtually everything you say, but it should be pointed out that the other extreme is equally absurd: pretending that computers "can't play" or that computers "don't understand the game". That's a common defense for those who don't really understand the game themselves.

The truth lies in between, I guess: computers are very good at chess and they are very strong "players", as strong as the strongest of humans. And yes, computers can play horrendous moves or embark on stupid plans, but isn't that true of humans, too? Didn't Karpov blunder material and position against Kasparov, in a move that most players over 2300 would have seen during a speed game (I don't remember the exact position, but it involved doubling the rooks on the e file or something like that)? Didn't Kasparov blunder material against a computer?

For every blunder computers play, you can point out very strong moves that computers routinely find that humans routinely miss! So, everything balances in the end. It could even be argued that computers, much like strong players, make very few serious blunders and can very effectively punish their opponents' inaccuracies.

The degree to which computers can play and understand the game can be summarized as follows: ONLY A VERY STRONG PLAYER WITH GREAT UNDERSTANDING OF THE GAME WILL BEAT KASPAROV IN A MATCH.

So far, only computers, Kramnik and Karpov have done it. That tells the story.


Interesting comments, Dave.

I'm not too familiar with modelling nonlinear processes, but one point in opposition to your reasoning is that linear system models (e.g. AR models for linear prediction) tend to perform poorly when the model order is chosen too high. More complex (more coefficients) is not always better since noise tends to be included rather than just the signal of interest. Prediction far into the future is very dicey, even when the process has "nice" correlation properties.
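For anyone who wants to see the model-order effect directly, here is a small numpy experiment; the series, the orders compared, and the noise level are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(400)
    series = np.sin(2 * np.pi * t / 25) + 0.5 * rng.standard_normal(400)
    train = series[:300]

    def fit_ar(x, p):
        # Least-squares fit of x[k] ~ a1*x[k-1] + ... + ap*x[k-p].
        rows = np.array([x[k - p:k][::-1] for k in range(p, len(x))])
        coef, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
        return coef

    def one_step_mse(coef, x, start):
        # One-step-ahead prediction error from index `start` onward.
        p = len(coef)
        preds = np.array([coef @ x[k - p:k][::-1] for k in range(start, len(x))])
        return np.mean((x[start:] - preds) ** 2)

    for order in (2, 100):
        coef = fit_ar(train, order)
        print(order,
              round(one_step_mse(coef, train, order), 3),   # in-sample error
              round(one_step_mse(coef, series, 300), 3))    # held-out error

The high-order model typically looks better in-sample and worse on the held-out data: it has fitted the noise, not just the signal.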

This might suggest the best plan is to limit the horizon in view of exploring all (or nearly all) of the realistic possibilities of move sequences. In other words, common sense says chewing up too much CPU time exploring a viable line far into the future will be wasteful unless there is a good chance this line will actually be played. Of course, as you say, forming a long term plan requires looking many moves into the future. I imagine most good chess programs adapt the horizon, and consequently the amount of time taken for each move, according to whether or not they can see something long term.

BTW, without trying to sound like too much of an idiot, I disagree with the idea that computers don't "form plans" like humans do. Does a plan not involve looking several, possibly many, moves into the future to see if an advantage, material or positional, can be obtained? And, IMHO, if a chess program suggests a ridiculous move then that's a bug in the program (human error), not a limitation of the way the computer "thinks".

Jeff

Please don't compare computers to Karpov and Kramnik! Relentless tactical perfection is simply a different game in a sporting contest. It requires little more than the rules and knowledge of relative material values. Of course they go beyond that, but most programmers agree their creations have at best an 1800 level understanding. By this I mean any knowledge that goes beyond material loss and gain. (Opening books and tablebases also discounted.) Pawn structures, thematic color play, exchange sacrifices, it's still mostly Greek to even the top machines. But you combine 2000-level position play with 3000-level tactical play against humans who get tired and make mistakes and it's barely interesting anymore. Chess is, as the old saying goes, 99% tactics. To even survive humans have to find positions where that last 1% is more important.

Comps rarely play horrible positional blunders anymore, although we can easily find or compose positions in which they are reduced to idiocy. (Game 3 of Kasparov - X3D Fritz, for example.) This isn't true of humans. This is why "understanding" is too heavily loaded a word. Nobody is saying they can't play chess, only that they play a very different game on many levels. If a comp can play like a 1600, or worse, in some positions, something is fundamentally wrong. GM blunders (Karpov lost in 12 moves once to a one-move fork) are human; we aren't perfect. Computer errors of the X3D Fritz game three type are systemic, consistent, and therefore interesting.

As a last resort, I'm refusing to upgrade my Palm Pilot because I can still easily beat Chess Tiger on it at blitz!


Mig,

what do you mean computers have "at best an 1800 level understanding"?

I refer to understanding as the side of chess that is not calculation. Evaluation of static factors such as space, structure, color weaknesses, king safety, etc. That is, a human can look at a position and evaluate all these things without knowing whose move it is. Computers have increasingly sophisticated evaluation functions, but you can turn them off (not the material value part of course) and they will still beat GMs at blitz!

We can't say it's forming a plan when they play well and a programmer bug when they don't. Watching a comp shuffle rooks back and forth in a closed position is illustrative, not a bug. A plan means goal-based strategy, and computers can't do this. They can't imagine a position in the future without seeing every single move that leads up to that position. Humans do this, or at least good ones do. "My position will be winning if I can get a knight to c5" is not the same as seeing every legal move all the way to putting a knight on c5 and seeing that it leads to a concrete win of material. This is most elegantly illustrated by the concept of "never". A human can easily see that a rook and pawn blockade can never be broken by a queen. A computer has to see all the way to a threefold repetition, which can be postponed well beyond its calculation horizon.

A weird example is game six of Kramnik-Fritz in Bahrain. Kramnik resigned in a clearly losing position. Fritz was going to queen by force and Kramnik understood that there was no way Fritz would go in for the continuation if it weren't winning. (The same "believe the computer" disease that helped fell Kasparov in the famous game two against Deep Blue in 1997, when he resigned in a drawn position.) Fritz had a massively positive eval, but after I spent a long time on it that night it looked more and more as if the reason Fritz loved the position was that it led to a blockaded position it could never win!

http://www.chessbase.com/newsdetail.asp?newsid=558

It's quite possible a GM would have won that position by trying more practical tricks. But Fritz runs headlong into the blockade (queen vs rook). Of course these examples are increasingly rare, alas. But solving the blockade issue won't be trivial.
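The "never" problem can be caricatured in a few lines of Python. In this toy model (invented rules throughout), the attacker shuffles among N squares against a blockade it can never break; the defender's shuttling is abstracted away, and revisiting a square counts as a repetition draw, a simplification of the threefold rule:

    N = 12                # distinct squares the attacker can shuffle among
    MATERIAL_EDGE = 4.0   # naive static eval: up a queen for a rook, say

    def search(square, visited, depth):
        # Depth-limited search from the attacker's point of view.
        if depth == 0:
            return MATERIAL_EDGE              # horizon: still looks "winning"
        scores = []
        for step in (1, 2):                   # two ways to keep shuffling
            nxt = (square + step) % N
            if nxt in visited:
                scores.append(0.0)            # repetition: draw
            else:
                scores.append(search(nxt, visited | {nxt}, depth - 1))
        return max(scores)

    print(search(0, {0}, 11))   # 4.0 -- the repetition is past the horizon
    print(search(0, {0}, 12))   # 0.0 -- only now is every postponement exhausted

    # The human shortcut needs no search at all:
    def blockade_rule(is_fortress):
        return 0.0 if is_fortress else None   # "never" as a static rule

Until the horizon covers every way of postponing the repetition, the search happily reports the material edge; the static fortress rule gets the right answer immediately.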

I couldn't help noticing the mention of Karpov losing a game to a simple fork in 12 moves. I saw this game a while ago and remembered it since the opponent was none other than Larry Christiansen. Good for a laugh.

http://www.chessgames.com/perl/chessgame?gid=1069116

PS - Sorry Karpov

Seems like Karpov got the last laugh anyway in that tournament. He went all the way and won it. The tournament seems to have been a knockout format that year.

So, Mig, once again, what's an "1800 level understanding"?

Your answer talks about the general principles of chess, but doesn't address the real question of what you meant by "1800 level understanding". I guess the expression was used as a generic way of putting computers down, without a single shred of evidence.

Or perhaps what you mean is that computers don't "think" like humans or don't "think" the way you would want them to.

Similar to sitting down to play an unknown guy who seems to make the dumbest moves, but somehow manages to beat you every game. How many losses will it take before you accept that PERHAPS the guy has some approach to the game that makes him a better player than you are?

You warned me against confusing "methods" with "results", but perhaps you should revise your own understanding and begin to accept that this apparently "stupid" player who beats everyone is not that "stupid" after all.

The other possibility is that computers have finally proven what many people have been saying for a long time: chess is a game of pure calculation, where "conventional" intelligence helps, but is not absolutely necessary.

A good experiment:

I go to the Marshall Chess Club in New York City in the middle of an all-master tournament and challenge everyone to play 5-minute blitz for money.

10 GMs accept the challenge without realizing that I'm cheating, because they don't know I have somehow made a device that allows me to make no moves of my own, but to play every move suggested by my Pocket Fritz.

They are amazed at how easily I beat everyone and begin analyzing the games with me. I use Fritz's lines, and whenever they suggest some variation, I grab Fritz's best line and "translate" Fritz's evaluation:

"If you had gone for that sacrifice on e6, I would have taken the knight, defend the attach by playing Rf5, etc. etc, and I think I am better ('+1.4')in that position, but there is play left... Now I can easily win ('+4.5') by taking your bishop. You can't take back because I would have a back-rank mate in 5 - here it is..."

I leave the club and nobody would doubt my "great understanding" of the game. Nobody would consider me an "1800 player" with "great calculating skill". They would all assume I'm just some unknown strong GM.

Think about it...

tgg,

1800 strength means that a computer knows and incorporates in its evaluation stuff like

- holes
- weak backranks (even if no immediate attack possible)
- passed pawns in the middle game / end game (with different values)
- backward pawns
- open files/diagonals
- basically all the stuff Jon Jacobs talked about above.

You can ask yourself what those evaluations of a computer really mean: if someone sacs an exchange and the computer still shows an =, and the computer doesn't show a forced way to win back the material, it must see other components of the position making up for the material. That's the 1800 part of "chess knowledge" that a computer has. And it comprises open files, backward pawns, etc. etc. etc.

But a computer doesn't know what the value of a knight compared to a bishop is in a closed position - simply because a computer can't _define_ what a closed position is.
So, assume a position where a strong player would happily sac a pawn in order to leave the opponent with both bishops against his knights in a closed position and feel very comfortable, but a computer can't do that because it doesn't know that it even _is_ in a closed position.
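For what it's worth, a crude stab at such a definition might look like this; the ram-counting rule, the coordinates, and the threshold are all invented for illustration:

    def count_pawn_rams(white_pawns, black_pawns):
        # A "ram" is a white pawn directly facing a black pawn that
        # blocks it. Squares given as (file, rank), White moving up,
        # with file 1 = a-file.
        return sum(1 for (f, r) in white_pawns if (f, r + 1) in black_pawns)

    def is_closed(white_pawns, black_pawns, ram_threshold=4):
        # Call a position closed if enough pawn pairs are locked.
        return count_pawn_rams(white_pawns, black_pawns) >= ram_threshold

    # A locked center and queenside:
    white = {(1, 4), (2, 4), (3, 4), (4, 5), (5, 4)}   # a4 b4 c4 d5 e4
    black = {(1, 5), (2, 5), (3, 5), (4, 6), (5, 5)}   # a5 b5 c5 d6 e5
    print(is_closed(white, black))   # True: knights may outweigh bishops here

Whether any such heuristic captures what a 2300+ player means by "closed" is, of course, exactly the problem.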

There's maybe a bit of "chess wisdom" that takes a 2300+ player to fully understand and handle well and which a computer simply isn't capable of applying (yet).

But anyway, somehow I have the feeling that you won't be convinced by that either...

tgg, Somehow you are managing to ignore the substance of Mig's explanations (and mine as well) even while acknowledging them. His description/definition of what is meant by "1800-level positional understanding" is perfectly clear and coherent, there was nothing remotely evasive about it.

You're also wrong about how the GMs at the Marshall would perceive the hypothetical situation you laid out in your last comment. Most likely their comments after you left would indeed be something along the lines of "1800 player with great calculating skill." (More precisely, they probably would SAY "2300 player" to avoid sounding TOO dismissive of the drubbing they had just taken, while they would be thinking, "1800 player".)

The giveaway in your comment, which relates very directly to some of the points Mig made above, is that you draw no distinction between Fritz's LINES, and Fritz's EVALUATIONS.

All comp evaluations are notoriously inaccurate. This is the root of Mig's complaint above about 1400 players kibitzing GM games under the illusion their computer is improving on the GM's play because it gave his move "+1.46" and considered a different alternative as "+2.83". I have noticed this myself -- most recently, while watching what the kibitzers were saying in the chat window while I did live audio for the final round of the US Championship.

My very rough impression is, the comp advice being quoted by the 1400-level kibitzer is right somewhere around half the time, and is wrong somewhere around half the time.

How can that be, if comps are such awesome calculators? While some of Fritz's LINES do have great value, the truly valuable ones often won't turn out to be the ones the program initially favors. So the program's initial EVALUATION of a position may rest on a less-than-optimal move choice, and/or on a mistaken assessment of the "final" position in its main line, because the analysis ended prematurely due to the horizon effect.

(Modern programs no longer suffer from the old, crude version of the horizon problem where a forced mate or immediate material loss might get overlooked because it was X+1 plies away and the software always stopped analyzing after X plies. However, the horizon problem still exists in more subtle forms -- witness Ponomariov - Deep Junior from the man vs machine tournament last year.)

Haven't you ever had this experience: You watch Fritz analyze a position out to 20 plies or so, and when it starts the eval might be "+0.58", and after it's finished with the 20-ply main line it says "+0.89". But then you append that whole line and fast-forward to its final position -- and when you turn Fritz loose on THAT position, lo and behold, the evaluation morphs into "-0.35", or perhaps "0.00" (i.e., there turns out to be a forced drawing line) ?

I've had this experience countless times, when using Fritz and its kin to help me annotate games. That should tell you something.

tgg:
You are making some very interesting points. But I need to side with Mig on this. I believe that, among other elements, creativity, artistry and inspiration do exist in chess. You just have to look at the games played by the great masters. You need all the qualities of a man to play chess. On the other hand, these computers play chess on pure calculation. For computers can only calculate. For this reason, computers excel in tactical positions. I saw a game played by my friend (an expert) against Shredder. It was a closed positional game. The whole position was blockaded and the computer made moves even a beginner does not make. There are two different things: calculation and understanding. Humans have understanding and computers have calculation.

"Of course the games aren't much to look at later without adding "for a blindfold game," with the occasional notable exception. Kramnik's immortal blindfold game against Topalov in 2003 is the only non-blunder game that comes to mind."

This draw between Kramnik and Ivanchuk in the 1996 Monaco blindfold is awesome. As Matthew Sadler wrote, "if only I could play this well *without* a blindfold!"

http://www.chessgames.com/perl/chessgame?gid=1060678

To Ryan, Albretch:

we will have to agree to disagree on this very interesting topic. To me (and to many others, btw) computers do understand the game and do analyze correctly. That's the reason they have such great results.

Perhaps they don't conform to your idea of what a "thinking player" should be, but their approach to "chess thinking" can't be considered inferior to any human's. If that were the case, humans would destroy them using those "concepts" computers "can't understand". It used to be like that. That's no longer the case, and it is bound to get worse for humans, I'm afraid.

Even if we assume that your theories are true and that computers combine 1600-understanding with 3000-tactics, all we can conclude, then, is that tactics are the most important part of the game. Further reason to think that the "computer approach" is a legitimate and smart one, and that what we consider "human understanding" is just a crutch to help us overcome our tactical deficiencies.

That said, I appreciate you contributing your ideas. It has been a very interesting thread.

I think that what you are underestimating is that computers, on average, play chess better than any human. By playing better on average I mean comparing who decides on a better move at every move, but in any given game there are many points at which the human will play much better (I think this is basically indisputable). The other point is that although computers play competitive chess better than humans, that does not mean that they analyze better than humans. Humans make more severe mistakes, and if they are allowed to take back blunders or understand what is going wrong in a line and fix it, they are still better than computers. In other words, I believe that if you sit a GM down at the board and let him try to understand the position, and do the same for a computer, the GM's analysis of the position will be closer to the truth. I also believe that a good GM would have good chances to beat any computer in correspondence play. Why not? In some sense, it is just continuity. In blitz, there is no chance, in classical some chance, and in correspondence??

Daniel Pomerleano
http://www.olympicchesscamp.com

Good points, Daniel.

I think your view and mine are not too far apart:

1. On average, computers play stronger than humans. I agree.
2. There are areas where the computer excels (tactics) and there are areas where humans excel (long-range strategy). I agree.

Where it gets murky is when we begin to talk about vague subjects like "analyzing". Sure, a GM will "explain" and "understand" (read: "evaluate") the position in a manner that is easily understandable by humans, putting special emphasis on concepts like weak squares, long-term attacking prospects, etc. A computer will use its own brand of analysis, where preference will be given to tactical elements. As we can tell from the results, the two approaches yield roughly similar results in a regular game under classical time controls between a top human player and a top computer program.

I continue to disagree with the view expressed by some that the computer "understands" the game at the 1800-level. That seems like an argument specifically geared toward discrediting the computer's work. Similar to the usual weak player's argument: "Yes, he beats me 30 games in a row at 5 minutes to 30 seconds every weekend. But blitz is not real chess, and if we ever play a slow game, I'll be the winner".

In short, I DO NOT think computers understand the game better than humans or that humans understand the game better than computers. Each "type" of player brings strengths and weaknesses to the table. Each type approaches chess in the way that best uses those strengths.

Results are the only way to begin to determine how efficient each approach is. At the beginning there was no contest - humans were clearly better (25 years ago, for example); today, computers have a small but clear advantage. The future? Seems to be on the computers' side.

That said (and this might come as a surprise to some), I don't even enjoy playing computers; I greatly prefer to sit down to play a regular human being, and yes, I agree with Mig that computers have been misused by weak players.

tgg,

according to your example, you would duly translate your miniFritz's evaluation into everyday talk and make a living fool out of yourself.
"Play Nxf6 and I answer with ...Qxa2 and I am much better" (because your miniFritz shows you +1.9, which usually is a winning advantage). And your fellow GM would look at you like you're a complete idiot and say "But this is completely drawn!!?!"

If you fancy this, fine.

oh, and another thing:

Imagine we could simply negate the tactical power of a computer, or, in other words, enhance the tactical power of the human. This can be done by giving humans access to Fritz5 in a match against a computer (say, Fritz9, which is supposed to have far better chess knowledge than its predecessors, or Hydra, which simply outcalculates Fritz5 to the extreme).
Then what would be left over is the chess understanding of the "human kind", right?

And we could ask then: How strong must a player's _chess understanding_ be to beat the best computers consistently (since they are equally strong on the tactical side by the means described above)?

The answer was given last year in the playchess advanced chess tournament, which is about to be held again: two players, rated 1600 and 1800, beat Hydra (admittedly by use of Shredder etc., not Fritz5).
Btw: Correspondence GM Nickel also beat Hydra: 2-0. Without help of computers.
Whenever you negate the numbercrunching part of the computer story, they are still getting beaten by humans just as consistently as they beat us OTB.

Excellent point, DP.

Your reference to correspondence play reminds me: I read somewhere, just a few years ago I think it was, that someone had offered a money-bet that no comp could beat him in correspondence play.

The whole proposition was meant to parallel David Levy's famous bet from the 1970s (giving programmers 5 years to come up with a program good enough to beat him).

I don't recall the logistics of the correspondence bet proposal (would the "human" be allowed to use comps himself? if not, how could he be policed?), or who it was that offered it. Presumably, it was some top correspondence chess player.

But the interesting thing was the proponent's explanation of why he offered the bet -- which, to my knowledge, no one ever took him up on. (That in itself tells us something about the man-vs-machine debate, if in fact there was a substantial sum to be earned by beating him.)

He explained that the amount of time that comps require for calculation grows exponentially with the number of plies. So if it needed 0.0001 second to look at and evaluate all legal half-moves from a given position (i.e., analyze one ply), then:
going 2 plies (one full move) might then require 0.0006 second,
calculating everything out to 4 plies might take 0.02 second,
going 6 plies out would consume about 3/4 of a second (0.72),
8 plies would set you back 27 seconds,
10 plies, about 20 minutes,
12 plies, 12 hours (720 minutes),
etc.

So what happens when you expand the time available for analysis, from an average of 3 minutes per move (typical slow time control for top-level over-the-board chess), to 3 days or longer?

The computer's analysis tree grows by slightly more than 4 plies, or 2 full moves, based on the oversimplified assumptions given above.
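In code, the same back-of-the-envelope arithmetic; an effective branching factor of about 6 is assumed here because it roughly reproduces the numbers in the list above, not because it is a measured engine figure:

    import math

    BRANCHING = 6        # assumed effective branching factor per ply
    BASE = 0.0001        # seconds for the 1-ply sweep in the example

    for plies in (2, 4, 6, 8, 10, 12):
        secs = BASE * BRANCHING ** (plies - 1)
        print(plies, "plies:", round(secs, 4), "seconds")

    # Going from 3 minutes to 3 days per move multiplies thinking time
    # by 1440, which buys only about log_6(1440) extra plies:
    print(math.log(1440, 6))   # ~4.06 -- "slightly more than 4 plies"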

The human, meanwhile, derives FAR more benefit.

Not only would this let the human perform enough manual blunder-checking to all but eliminate any possible short-term tactical oversights up to, say, 6 plies out from each position actually reached in the game. In addition, the human could pursue certain selected lines far beyond the normal ply-length horizons that limit over-the-board calculations for humans and even (for now at least) computers.

To identify the most promising lines to examine in much greater depth, the human would be relying on intuition, judgment and knowledge: qualities that the software lacks.

This was the explanation given... and apparently believed by those in the know -- since no one came forward with a computer to take that side of the bet.

This also explains what I've called the "Cyborg" (Centaur) factor.

If comps are perfect calculating machines and chess is all calculation -- in essence, that is tgg's whole argument -- then how is it that even a top-level engine's play improves when a human collaborates with it? This is a well established fact (see my discussion of last year's PAL Freestyle tourney earlier in this thread), and it's hard to explain if one insists that humans' supposed areas of superiority are mere illusions (again, as tgg seems to argue).

Recall my earlier remark that back when Kasparov lost to Deep Blue and accused the IBM team of cheating by substituting a human-chosen move for the one chosen by Deep Blue, the people who couldn't understand why that (if it occurred - which apparently it didn't) would give the computer any kind of edge, were either non-chess people, or weak players who didn't know much about chess or computers.

Well, Albretch,

If you could play like Kasparov you would be very strong indeed.

Unfortunately, we must deal with facts, not IFs.

tgg,

if that is the stance you take, if that is all you can counter with, then the discussion was over before it even started.
Thanks for clarifying this and making every future response to you utterly obsolete.

Albretch,

sometimes people will not agree on a certain topic. There's no reason to make it personal or be upset about it. As I stated before, this is one of those topics where the best thing is to agree to disagree.

Good luck to you and thanks for your contribution!

tgg,

it doesn't make sense to ask for verifiable, scientific proof (read: facts) when it is clear that neither you nor I can produce it. Neither of us is a super GM; neither of us has Hydra in the next room.

So, what's left is using our imagination and trying to construct arguments based on premises we both agree on.
That's the _only_ way to get anywhere in a situation like this, and it's completely counterproductive to come up with "I want facts, facts, facts" statements whenever you run short of arguments, as neither you nor I will be able to provide facts of the kind you'd like to see in any acceptable way (neither quality nor quantity).

Besides your "gimme facts!", you have yet to answer any of the above arguments given by Jon Jacobs, me, or Mig regarding the reason for the results in
- GM+machine vs. machine
- "Patzer"(=ZackS as two teens rated 1600/1800) + machine vs. machine
- correspondence-player vs. machine

All three scenarios do nothing but counterbalance the numbercrunching effect of chess engines.

- GM+machine vs. machine: is the weakest argument as even you would agree that humans have "some" understanding of chess and that adding this understanding to the computers calculating power is bound to be stronger than the computer alone.

- ZackS vs. Hydra: That's a different story. Off-the-shelf computers beat a 16-processor (right? Or was it 32?) monster whose sole purpose was playing chess. If calculation power is calculation power, and since Hydra knows about tree pruning just as well as Fritz does, I guess it's not too much of an "advantage" to the ZackS teens if they have Fritz, Shredder and Junior, each crunching at 0.2-0.4 million nodes/sec, against Hydra at 50 million nodes/sec. Why did they beat it? Probably not because Fritz searches deeper. Rather, because of that 1600-1800 Elo chess knowledge they had and Hydra did not, and about which you choose to maintain that you don't know what we're talking about.

- GM Nickel vs. Hydra: Again, match score 2-0 for Nickel. How in the world did he do that, if not through chess knowledge that prevailed in a situation where sheer force didn't matter anymore? Time is not a big factor in CC (see Jacobs's little calculation above).
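To put some numbers on the "Fritz doesn't search deeper" point: with tree pruning, the nodes needed grow roughly exponentially with depth, so even a huge speed advantage buys surprisingly few extra plies. Here is a back-of-envelope Python sketch; the node rates are the ones quoted above, while the branching factor and thinking time are my own assumed round numbers, not measurements of any real engine:

```python
import math

# Rough model: reaching depth d costs about BRANCHING**d nodes,
# so depth ~= log(nodes) / log(BRANCHING). All values illustrative.
BRANCHING = 3.0      # assumed effective branching factor after pruning
THINK_TIME = 180.0   # assumed seconds of thinking per move

for name, nps in [("Fritz-class engine", 0.3e6), ("Hydra", 50e6)]:
    nodes = nps * THINK_TIME
    depth = math.log(nodes) / math.log(BRANCHING)
    print(f"{name}: ~{nodes:.1e} nodes/move -> roughly {depth:.0f} plies")
```

On those (admittedly crude) assumptions, a speed advantage of more than 100x buys Hydra only four or five extra plies -- and at correspondence time controls, where both sides search very deep anyway, the relative gap matters even less. Which is exactly where knowledge takes over.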

Albretch

you can get any results you want as long as you select only those scenarios that fit your premise. Let me show you:

#3. GM Nickel vs. Hydra - Why have you chosen this and not GM Adams (a much stronger GM than Nickel) vs. Hydra?

Perhaps being very strong is a disadvantage against computers? (It's supposed to be a joke, Albretch.)

#2. ZackS vs Hydra: doesn't count. We are talking about humans and machines and how each approaches the game, analyzes, etc. Machine vs. Machine adds nothing to this conversation.

#1. GM + machine vs. machine. Well, I would assume that a grandmaster playing with the help of a computer will be at least equal to the computer alone. However, we are comparing humans with machines, not human/machine combinations. In fact, this type of argument would not hold water anyway, because there was a report of some weak player + machine beating teams of GM + machine.

Go figure...


I will now respectfully ask you to understand I will not answer any more of your posts, because I don't see any further gains. So, feel free to post, but don't expect an answer from me. Once again, thanks for your input.

tgg,

Your latest comment is basically a back-door admission that your initial claim (that computers' superiority over humans in one-on-one matches proves computers "understand" the game better than humans) has been refuted by the various posts here.

All you did in your points 3, 2 and 1 above is reiterate that comps beat humans in one-on-one matches.

There is no attempt to come to terms with the logical challenge to your argument presented by the undisputed fact that human + comp combinations beat pure comps. (The Nickel vs. Hydra correspondence chess match must be viewed in this light, rather than as a pure man-vs-machine match a la Adams-Hydra, since upon checking the original sources I see that Nickel was allowed to use a comp in his match.)

If comps are perfect calculating machines, and uniquely human qualities are mere illusions that have nothing to do with understanding events on the board at a level deeper than calculation, then Centaurs (man + machine combinations) should not consistently beat pure comps. Rather than being able to contribute positively to a top comp's handling of a position, a human helper should be expected to only muck things up -- if your logic were correct.


Hi guys,

I have to agree for the most part with tgg. Claims to the effect that computers don't understand the subtleties of chess -- knights versus bishops in closed positions, say, or blockades -- are not all that convincing if you admit that computers can (and very likely will) be trained to understand these things.

How was it that humans came to understand the finer points of chess? I would say through learning from experience -- their own mistakes as well as others' -- coupled with the ability to remember the lesson. So if current chess machines don't "learn", e.g. they don't tune their own evaluation functions based on their own results and the huge databases of others', instead relying on the programmer to tweak them, then that's likely a significant disadvantage -- but one that can be corrected.
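To make the "tune their own evaluation functions" idea concrete, here is a toy Python sketch of fitting evaluation weights to game results. Everything in it is invented for illustration -- the features, the data, and the logistic scale are mine, not any real engine's code:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for a big database: each position is a small
# feature vector (imagine material balance, mobility, king safety)
# plus the eventual game result for White (1, 0.5 or 0).
TRUE_WEIGHTS = [1.0, 0.4, 0.6]   # the values a good tuner should recover

def expected_score(eval_pawns):
    # Logistic mapping from an evaluation (in pawns) to an expected score.
    return 1.0 / (1.0 + 10.0 ** (-eval_pawns / 4.0))

positions = []
for _ in range(2000):
    feats = [random.uniform(-3, 3) for _ in TRUE_WEIGHTS]
    ev = sum(w * f for w, f in zip(TRUE_WEIGHTS, feats))
    positions.append((feats, expected_score(ev)))

# Tuning loop: nudge each weight to shrink the squared error between
# the predicted and the actual game results.
weights = [0.5, 0.5, 0.5]        # deliberately wrong starting guesses
LR = 0.05
for _ in range(100):
    for feats, result in positions:
        pred = expected_score(sum(w * f for w, f in zip(weights, feats)))
        grad = (pred - result) * pred * (1 - pred) * math.log(10) / 4.0
        for i, f in enumerate(feats):
            weights[i] -= LR * grad * f

print("recovered weights:", [round(w, 2) for w in weights])
```

Given enough labeled positions, the loop converges toward the "true" weights -- the program has learned its own evaluation terms from results instead of waiting for a programmer to tweak them.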

Learning machines such as neural networks have been widely applied to tasks which were once thought better left to humans, e.g. facial, speech, and character recognition, analysis and diagnosis of medical data, optimization problems involving a large number of different variables, etc. It would be difficult to argue that the concepts developed there couldn't also be applied to chess.

So, if chess programmers incorporated some learning principles into the existing "brute force" computational approach, there could well be a significant improvement in the computer's apparent "understanding" of chess. Even then, I have no doubt people would find some reason or other to gripe about the way computers play.

This is an amazing debate. What is there to discuss?

You all know that computers beat humans today, but you also all know that humans are better at describing a position in words and evaluating it without much calculation.

Is there any point to define if computers have chess understanding or not?

It's the same sort of discussion that IM Watson and IM Aagaard had a while ago. They argued for six months about whether chess is a game with tactical and positional rules (example: a rook is good on an open file) or not.

There is of course no answer.

Gentlemen, please stay on topic and take all your unrelated discussion to an appropriate thread. I want to read about Monaco, not your unimportant nonsense.

Cynical Gripe,

I agree with everything you said, except for your first sentence; the points you make are entirely apart from tgg's points that so many of us here took issue with.

Your point -- that computers, likelier than not, are capable of attaining the holy grail called "artificial intelligence", and that once chess programmers succeed in applying genuine AI to chess, there will no longer be any legitimate reason for anyone to question how well computers "understand" the game -- is an interesting, quite logical, but (so far as I can see) NEW element in this whole discussion.

The whole earlier debate revolves around the way chess software works TODAY -- not what it might advance to tomorrow. And knowledgeable people are unanimous that today's chess programs do not employ anything remotely definable as artificial intelligence.

While you're probably right that ultimately that will be done, I think you underestimate the degree of difficulty in getting from here to there (though I'm admittedly not a tech person, so my opinion may not be worth much on that particular point).

If and when it is done, then computers will understand chess better than humans. That's not the case today.

I'm going to ask my microwave to make me an enchilada. I think it will have trouble wrapping it, though. Do I think computers can solve chess? Nope: 1) there are way too many possible positions; 2) computers can't get into the heads of humans, and psychology is a big part of playing chess; 3) computers can't think.

Well,
let's look at it the other way around.
How many chess positions can a human brain visualize per second? 2, 3, 20?
What would happen if you played against a computer limited to analyzing that many positions per second? Probably even I would beat it.

So the gap there would be what I would call "chess understanding".

On the other hand, it looks like a pretty interesting (and difficult) problem to build a strong program under such limits on position evaluation.
Now, that computer would have "chess understanding".
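For what it's worth, this thought experiment is easy to simulate. Here is a toy Python sketch of a search that is allowed only a fixed number of position evaluations per move -- a made-up miniature, not a real engine:

```python
import random

random.seed(1)

class OutOfNodes(Exception):
    pass

def evaluate(position):
    # Stand-in for static evaluation; a real engine would score
    # material, mobility, king safety and so on.
    return random.uniform(-1.0, 1.0)

def negamax(position, depth, budget):
    # budget is a one-element list used as a mutable node counter.
    if budget[0] <= 0:
        raise OutOfNodes
    budget[0] -= 1
    if depth == 0:
        return evaluate(position)
    best = float("-inf")
    for move in range(3):   # pretend every position has 3 legal moves
        best = max(best, -negamax((position, move), depth - 1, budget))
    return best

# A human-like budget of ~20 positions can't even cover the ~40 nodes
# of a depth-3 tree, while a machine-like budget finishes depth 6 easily.
for nodes in (20, 100_000):
    budget = [nodes]
    try:
        score = negamax("start", 6, budget)
        print(f"budget {nodes:>6}: finished depth 6, score {score:+.2f}")
    except OutOfNodes:
        print(f"budget {nodes:>6}: ran out of nodes before finishing depth 6")
```

With the human-sized budget the program can't complete even a shallow search, so any strength it showed would have to come from its evaluation -- i.e., from something like "chess understanding".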

Regards,
Francisco

Why is this tournament officially only "Amber" and no longer "Melody Amber"?

The event was named for the daughter of the sponsor. He's still the sponsor, but she's gone from being a baby to being about 15. It's purely a guess, but I would guess that the daughter expressed an opinion one way or the other: either she prefers the name Amber now, or she prefers that the event NOT use her name, Melody.

But again, just a guess.

duif

By the way, the daughter quite often participates in the ceremonies at the event, so that's not the issue.

http://tinyurl.com/nyqvj

Of course, Duif's personal preference is not to have separate tournaments named after girls. But that's just her.

LOL!

Very cute. But, um, no. No problem with tournaments named after girls. And (after being convinced by Jen Shahade), no problem with private tournaments for girls, by the way. (Amber is a privately organized event.) :)

still me,
duif

Man, I just played over the blindfold game between Moro and Aronian, and it was just awesome. How these guys can sort out these kinds of tactics without seeing the board... Here's the difference between super-GMs and the rest. Moro seems to be in fine form; good news for most chess fans.

Indeed, a computer as a referee would be the best idea. No one would cheat this way!
