Book Improvement
Maybe this is a stupid question, but is there some great impediment to a computing project that tries to develop the opening book? The fact that the problem is so embarrassingly parallelizable makes me think that it would be ideal for distributed computing. I'm sure that someone else must have thought of this idea. Is this being done currently, or is anyone trying to?
Re: Book Improvement
chrislipa wrote:Maybe this is a stupid question, but is there some great impediment to a computing project that tries to develop the opening book? The fact that the problem is so embarrassingly parallelizable makes me think that it would be ideal for distributed computing. I'm sure that someone else must have thought of this idea. Is this being done currently, or is anyone trying to?

I've been working on this for a couple of years, but the same question comes back every time: how should one do it? (Without it taking a couple of decades.)

Analyzing 1M positions (not very much) for 10 seconds each (not much either) takes over 100 CPU-days.
Tony
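Tony's figure checks out as simple arithmetic; a quick sketch using only the numbers from his post:

```python
# Back-of-the-envelope check of the estimate above: one million positions
# at 10 seconds of engine time each, expressed in CPU-days.
positions = 1_000_000
seconds_per_position = 10

total_seconds = positions * seconds_per_position
cpu_days = total_seconds / (60 * 60 * 24)  # 86,400 seconds per day

print(f"{cpu_days:.1f} CPU-days")
```

So "over 100 CPU-days" is, if anything, slightly conservative: it comes to about 116.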
- Dylan Sharp
Re: Book Improvement
Tony wrote:Analyzing 1M positions (not very much) for 10 seconds (not much either) takes over 100 CPU-days.

Huh, what if you put 100 CPUs on the job and do it in one day? If a very popular website solicited the help of its users for the task, they would pitch in and it would be done faster.

The CAP project tried to give reliable computer evaluations to the most important openings, and it was successful to some extent. The problem was that once a new, stronger program came out, the old evaluations became obsolete. For instance, Rybka 3 discovered that a position whose CAP evaluation favored Black in reality had Black getting mated, but the engine CAP used couldn't see it.
Re: Book Improvement
Dylan Sharp wrote:Huh, you put 100 CPUs to do the job and do it in one day? If a very popular website solicited the help of their users, they'd help and the task would be done faster.

I don't think so. The whole point is that during analysis you have access to the positions you have already done; otherwise your 10-second searches will never be better than independent 10-second searches of individual positions.

Done this way, you might need 10 times as many positions and 100 times the analysis time, and IMO it would still be worthless.

The analysis has to be saved as a tree, so that if a program discovers a checkmate at some point, the result is minimaxed back and other scores are adjusted as well.
Cheers,
Tony
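Tony's tree idea can be sketched in a few lines. This is a minimal illustration, not any real book format; the names `BookNode` and `minimax_back` are made up for the example, and scores follow the negamax convention (each node's score is from the side to move's point of view):

```python
class BookNode:
    """One position in the analysis tree (illustrative only)."""
    def __init__(self, score=0.0):
        self.score = score      # engine score, from the side to move's view
        self.children = []      # successor positions after each book move

def minimax_back(node):
    """Propagate leaf evaluations back toward the root (negamax style)."""
    if not node.children:
        return node.score       # leaf: keep its stored engine evaluation
    # An interior node's score is the best child score, negated because
    # the side to move alternates between parent and child.
    node.score = max(-minimax_back(child) for child in node.children)
    return node.score
```

If a deep leaf is later found to be a forced mate, updating that leaf's score and re-running `minimax_back` from the root adjusts every line above it, which is exactly why saving the analysis as a tree beats re-searching isolated positions.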
distributed computing and chess
Dylan Sharp wrote:Huh, you put 100 CPUs to do the job and do it in one day?

100 CPUs working together do not give 100 times the output of one CPU; it is less than that, because the more CPUs you have running, the more coordination overhead there is to manage. It's been a long time since I looked at the issue, but I'm pretty sure there comes a point where adding more CPUs does not actually improve output.
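The diminishing-returns point above can be made concrete with Amdahl's law; the 5% serial (coordination) fraction below is purely an assumed figure for illustration:

```python
def speedup(n_cpus, serial_fraction=0.05):
    """Amdahl's law: overall speedup when serial_fraction of the
    work cannot be parallelized (e.g. coordination overhead)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

for n in (1, 10, 100, 1000):
    print(f"{n:>5} CPUs -> {speedup(n):.1f}x")
```

With that assumed 5% serial share, 100 CPUs yield only about a 17x speedup, and no number of CPUs ever gets past 20x, which matches the intuition that adding machines eventually stops helping.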
- turbojuice1122