
BGonline.org Forums
Improved Cube Handling in Races
Posted By: Bob Koca
Date: Monday, 16 June 2014, at 4:07 a.m.
In Response To: Improved Cube Handling in Races (Axel Reichert)
Nice use of the optimization software. The new EPC estimate technique looks useful. I disagree with several of the modeling choices, though. I think you should have stressed more that the database you tested on hardly has any races longer than 70 pips. Is it easy to test other suggestions on that database?
Here are some comments and questions to specific passages:
Page 4: “in fact in this example Red should double (even after White rolls a double) and White has an optional pass.”
Against a single checker on the six point it is an optional pass, but your example has it on the five point.
Page 7: “Adjusting the pip count further for checkers not yet in the home board thus seems an option. In figure 3, both White’s and Red’s pip counts are 70, but White’s winning chances are about 66 %, according to GNU Backgammon. Red needs to move his last checker into his home board before he can start his bearoff, while White starts immediately. Thus a straight pip count needs to be adjusted.”
But the following all contribute to White's advantage: most importantly, White is on turn. White's distribution is also better, with no checkers on the ace or two points. More subtly, Red's position has lower variance than a low-wastage race, and that hurts the chances of the underdog. You thus have not demonstrated the need for an adjustment due to crossovers.
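The variance point can be illustrated with a minimal sketch. This is my own illustration, not anything from the paper: it assumes a crude normal model of each side's remaining race, with made-up deficit and spread values, just to show that shrinking the variance shrinks the trailer's winning chances.

```python
# Illustration (assumed normal model, made-up numbers): in a race where the
# trailer is behind by a fixed deficit, lower outcome variance hurts the
# trailer, since an upset needs a large swing.
from math import erf, sqrt

def underdog_win_prob(deficit, sigma):
    """P(trailer wins) if each side's effective race total is normal
    with per-side standard deviation sigma (difference has sd sigma*sqrt(2))."""
    z = -deficit / (sigma * sqrt(2))
    return 0.5 * (1.0 + erf(z / sqrt(2)))  # standard normal CDF at z

high = underdog_win_prob(deficit=8, sigma=10)  # noisier race
low = underdog_win_prob(deficit=8, sigma=5)    # smoother race
print(f"high-variance: {high:.3f}, low-variance: {low:.3f}")
```

With the same 8-pip deficit, the trailer does noticeably better in the noisier race, which is the effect claimed above.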
Page 14:
“Quantifying the effort is more complicated: It is not immediately clear whether an adjusted pip count using a stack penalty of 2 and a stack penalty offset of 2 needs less effort than an adjusted pip count using a stack penalty of 1 and a stack penalty offset of 1. What is clear, though, is that a gap penalty of 0 instead of 1 makes a method simpler, because there are fewer arithmetic operations. Likewise, lower checker penalties and lower stack penalties result in less effort. On the other hand, higher stack penalty offsets are easier (because the penalties themselves have to be applied less often). Of course, using “true gaps” means less effort, as does the use of all the “relative” flags. So for a given method/set of parameters I ignored the signs of all the penalties/bonuses applied by the adjusted pip count and summed up these absolute values for all positions in the endgame database. This number of total adjustments (measured in pips) was used to quantify the effort required by a particular method.”
I am not sure I understand. Are you counting the stack penalty of 2 with an offset of 2 as 4 adjustments even though it is two calculations?
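For concreteness, here is how I read the quoted effort metric, as a sketch: sum the absolute values of every pip adjustment applied over the database. The positions and penalty values below are made up for illustration; only the summing rule comes from the quoted passage.

```python
# Sketch of the effort metric as described in the quoted passage:
# ignore the signs of all penalties/bonuses and sum the absolute
# values over all positions. The data below is invented.
def total_adjustments(positions_adjustments):
    """positions_adjustments: one list of applied pip adjustments per position."""
    return sum(abs(a) for adjs in positions_adjustments for a in adjs)

# Under this reading, a single stacked point hit by a stack penalty of 2
# contributes 2 pips of "effort" even though it is one calculation,
# which is what the question above is probing.
effort = total_adjustments([[2, -1], [0], [2, 2]])
print(effort)  # 7
```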
Page 16 “The Isight method has the smallest total error (1064), and, consequently, the smallest average error per decision (0.00688 equity).”
There are 280721 positions in the database (or possibly 50000 if the smaller database is used). Either way, how does the 1064 total error calculate to the 0.00688 average error?
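The arithmetic is easy to check. Neither database size quoted above reproduces the stated average:

```python
# Quick check of the quoted figures: a total error of 1064 divided by
# either database size does not give 0.00688.
total_error = 1064
for n_positions in (280721, 50000):
    print(n_positions, total_error / n_positions)
# 1064 / 280721 is about 0.00379 and 1064 / 50000 is 0.02128.
# An average of 0.00688 would instead imply roughly this many decisions:
print(round(total_error / 0.00688))  # 154651
```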
Page 17 “Gap penalties applied only if the other player has a checker there and higher stack penalty offsets save a lot of arithmetic operations.”
But are there instead logical operations that need to be checked?
Page 19 “But perhaps could such an algorithm be found by assessing positional features and adjusting for them (as explained in section 2 on page 4), so that we do not end up with some arbitrary adjusted pip count, but rather with something approximating the EPC as closely as possible? This approach seems to be quite promising and was Joachim Matussek’s idea, see his article mentioned in section 1 on page 3.”
Jean Luc Seret's paper was four years earlier and did essentially the same thing: http://www.bkgm.com/articles/GOL/Dec00/pipples.htm. It requires significantly more calculation, but it is more accurate and applicable to a wider class of positions. 85% of the time the EPC estimate from this method has an error of less than 0.82.
The method that Isight came up with looks like a good compromise between effort and accuracy. I think you overestimate, though, the difficulty of estimating EPC by other methods. The 7n+1 rule, 7 pips of wastage for low-wastage long races, 10 pips of wastage for a 12-checker closeout, 9.5 pips of wastage for a 12-checker closeout plus spares on the 4, 5, and 6 points, and 13.7 pips of wastage for a closeout plus spares on the 1, 2, and 3 points go a long way. Also, it is easy to get intuition on other adjustments with some practice, since GNU and XG give EPCs.
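The rules of thumb listed above can be written out directly. The wastage constants are exactly the ones quoted; the function names and table structure are my own:

```python
# EPC rules of thumb from the post. EPC = pip count + estimated wastage,
# with the wastage constants quoted above; the 7n+1 rule covers pure
# n-roll positions. Names and structure are mine, not a standard API.
WASTAGE = {
    "low_wastage_long_race": 7.0,
    "closeout_12_checkers": 10.0,
    "closeout_12_plus_spares_456": 9.5,
    "closeout_plus_spares_123": 13.7,
}

def epc_from_rolls(n_rolls):
    """7n + 1 rule: EPC of a pure n-roll position."""
    return 7 * n_rolls + 1

def epc_from_pips(pip_count, position_type):
    """EPC = pip count plus the rule-of-thumb wastage for the position type."""
    return pip_count + WASTAGE[position_type]

print(epc_from_rolls(3))                           # 22
print(epc_from_pips(60, "low_wastage_long_race"))  # 67.0
```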
“Indeed there is one, somewhat hidden in Walter Trice’s book, with a denominator of 49/6 and a shift of −3. Combining this criterion with the EPC approximation from above, we get a total error of 1909 (now a total cube error in terms of equity, not an EPC error in terms of pips), much worse than the Keith method (1262) or the Isight method (1064) from section 5 on page 15. The question why this result is so bad needs some investigation.”
I think the result is so bad because you did not test it correctly. The 49/6 rule with a shift of −3 was meant to be used only for pips-vs-rolls or rolls-vs-pips positions.
Page 22
“Since I did not want to memorize and adjust values from a lookup table, I thought about a table published by Tom Keith showing the CPW directly as a function of race length and lead, http://www.bkgm.com/rgb/rgb.cgi?view+1203. My idea was to approximate it using a linear regression … for which the base probability b, the CPW denominator dC, and the (now constant) value of a pip v needed to be determined. Initial tests of fitting Tom Keith’s table with this approach were promising. Using optimization software, this was again an easy task. With the CPW error (absolute difference between the correct CPW as rolled out by GNU Backgammon and its approximation) as an objective to minimize, Isight found the following method…”
The value of a pip depends on the length of the race, becoming much greater for shorter races. It seems you are forcing too simple an equation by using a constant pip value.
Page 22 “This CPW approximation has an average CPW error of 4.13 percentage points.”
That doesn’t sound very good. Did you also try Kleinman’s technique incorporating EPC?
Page 23 “Isight found that with a doubling point of 68 %, a redoubling point of 70 %, and a take point of 76% this CPW approximation gives a total cube error of 1264. This is roughly on par with the Keith method”
This is a place where the length of the race is crucial. As the race gets shorter, the take point decreases (from 78.5% for a very long race to 75% for a last-roll situation), and the initial doubling and redoubling points also decrease (due to the opponent’s lower take point and the greater volatility). One rule for all situations has strong inherent limitations.
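The 75% last-roll endpoint of that range follows from basic dead-cube equity arithmetic, sketched below. The function and its parameter names are mine; the 78.5% long-race figure quoted above is higher because it folds in recube potential, which this dead-cube sketch deliberately ignores.

```python
# Dead-cube arithmetic behind the 75% last-roll take point: the taker
# needs win probability p with p*2 + (1-p)*(-2) >= -1, i.e. p >= 0.25,
# so the doubler's take point is 1 - 0.25 = 75% cubeless winning chances.
def taker_break_even(win_value=2.0, lose_value=-2.0, pass_value=-1.0):
    """Win probability at which taking equals passing (no recube value).

    Solves p * win_value + (1 - p) * lose_value = pass_value for p.
    """
    return (pass_value - lose_value) / (win_value - lose_value)

p = taker_break_even()
print(p, 1 - p)  # 0.25 0.75
```

In long races the recube vig raises the doubler's take point toward the 78.5% quoted above, which is exactly why a single threshold across all race lengths is limited.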
