BGonline.org Forums
Training a Neural Net (like GNUBG or XG)
Posted By: Ian Shaw In Response To: Training a Neural Net (like GNUBG or XG) (Jake Jacobs)
Date: Tuesday, 27 April 2010, at 6:28 a.m.
All neural nets can overtrain, not just backgammon ones. In essence, this means that they become very accurate at evaluating the positions they train on, but poor at very similar positions that they have not seen. That is, they lose their ability to "generalize" from specific positions to other positions.
Think of a curve on a graph where the y-axis is the equity of each position and the x-axis represents the position itself. Training a neural net aims to draw a smooth line between all the points marking positions on which the net has been trained. This line will then generalize to all the similar positions it encounters. If a net is overtrained, the line will zig-zag to hit every training point exactly, but it may be all over the place in between points. Gnubg is a graph with about 250 x-axes (the network inputs) and 5 y-axes (the probabilities of winning, and of winning and losing a gammon or backgammon).
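The graph analogy can be made concrete with ordinary curve fitting. The sketch below (an illustration only, nothing to do with GNUBG's actual inputs or training code) fits a smooth low-degree polynomial and an "overtrained" high-degree polynomial to the same noisy training points; the high-degree fit hits every training point almost exactly, which is precisely the zig-zag behaviour described above.

```python
# Illustration of overtraining via polynomial fitting (not GNUBG code).
import numpy as np

rng = np.random.default_rng(0)

# "Training positions": noisy samples of a smooth underlying equity curve.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, x_train.size)

# Unseen "similar positions" lying between the training points.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

smooth_train, smooth_test = fit_and_score(3)  # smooth line between points
zigzag_train, zigzag_test = fit_and_score(9)  # degree 9, 10 points: interpolates

# The "overtrained" fit is near-perfect on the training positions themselves,
# even though it is fitting the noise rather than the underlying curve.
print(smooth_train, smooth_test)
print(zigzag_train, zigzag_test)
```

With 10 points and a degree-9 polynomial the fit passes through every training point, so its training error is essentially zero; the smooth cubic cannot do that, but it is tracking the underlying curve rather than the noise.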
To avoid this, we have a benchmark database of about 100,000 contact positions that is not used for training, but only for testing a trained or partially-trained net to see if it is on the right track. Some of you helped roll this out a few years ago.
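The role of such a held-out benchmark set can be sketched in the same curve-fitting terms (again a toy illustration, not GNUBG's actual benchmark procedure): positions reserved for benchmarking are never trained on, and the model whose error is lowest on them is the one that generalizes best.

```python
# Sketch of model selection with a held-out benchmark set (toy example).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Reserve every fifth position as the benchmark set; never train on these.
is_bench = np.arange(x.size) % 5 == 0
x_train, y_train = x[~is_bench], y[~is_bench]
x_bench, y_bench = x[is_bench], y[is_bench]

# Increasing polynomial degree stands in for increasing training/capacity.
best_degree, best_err = None, np.inf
for degree in range(1, 13):
    coeffs = np.polyfit(x_train, y_train, degree)
    bench_mse = np.mean((np.polyval(coeffs, x_bench) - y_bench) ** 2)
    if bench_mse < best_err:
        best_degree, best_err = degree, bench_mse

print(best_degree, best_err)
```

The benchmark error, unlike the training error, stops improving (and eventually worsens) once the model starts memorizing noise, which is exactly what makes a held-out set useful for telling whether training is still on the right track.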
BGonline.org Forums is maintained by Stick with WebBBS 5.12.