
BGonline.org Forums

XG training and NN question

Posted By: Tom Keith
Date: Sunday, 1 August 2010, at 1:48 p.m.

In Response To: XG training and NN question (Timothy Chow)

One way to think of a neural net is as a sort of lossy compression of the equities of every position. But the capacity of a neural network covers only a tiny fraction of all possible backgammon positions, so you need to concentrate the net on positions that have a reasonable chance of appearing in a game.
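To make the "lossy compression" idea concrete, here is a minimal sketch of a net that maps an encoded position to a single equity number. The encoding size, layer width, and activations are illustrative assumptions (loosely TD-Gammon-style), not XG's actual architecture:

```python
# Sketch (hypothetical sizes): a neural net as lossy compression of the
# position -> equity map.  The entire "table" is just the weights below.
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 196   # assumed raw board encoding, TD-Gammon-style
N_HIDDEN = 80    # one hidden layer, typical of early backgammon nets

W1 = rng.standard_normal((N_HIDDEN, N_INPUTS)) * 0.1
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal(N_HIDDEN) * 0.1
b2 = 0.0

def equity(position_vector):
    """Map an encoded position to one equity estimate in roughly (-1, 1)."""
    h = np.tanh(W1 @ position_vector + b1)
    return float(np.tanh(W2 @ h + b2))

n_params = W1.size + b1.size + W2.size + 1
# A net this size has ~16,000 parameters, which must summarize on the
# order of 10^19 legal positions -- it can only be accurate near the
# positions it was actually trained on.
```

Since the weights are the whole representation, any class of positions that is absent from training simply has no "room" reserved for it; the net interpolates from whatever it did see.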

The question is how you define "a reasonable chance of appearing in a game." An easy way is to have a bot play against itself many times and see which positions come up. That produces very few "deep backgames" of the sort you are thinking of.
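The frequency effect can be shown with a toy model (hypothetical, not backgammon): states of a biased random walk stand in for game positions, and repeated self-play visits common states constantly while rare "deep" states almost never appear:

```python
# Toy illustration: state 1 plays the role of an ordinary position,
# state 15 the role of a deep backgame.  The walk drifts back toward 0,
# so self-play data is overwhelmingly concentrated in shallow states.
import random
from collections import Counter

random.seed(42)

def play_one_game(steps=60):
    """A stand-in 'game': a walk on 0, 1, 2, ... that drifts back to 0."""
    pos, visited = 0, []
    for _ in range(steps):
        pos = max(pos + (1 if random.random() < 0.30 else -1), 0)
        visited.append(pos)
    return visited

counts = Counter()
for _ in range(5000):
    counts.update(play_one_game())

# Shallow states dominate the training data by orders of magnitude;
# deep states barely register, so a net trained on this data never
# gets enough examples to learn them.
shallow, deep = counts[1], counts[15]
```

After 5,000 self-play "games," the shallow state is visited tens of thousands of times while the deep state shows up a handful of times at most, which is exactly the imbalance the post describes.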

You could modify the bot to encourage it to play more backgames, thereby forcing more deep backgames into the bot-vs-bot contests. But this introduces many, many more positions (including all the follow-up positions that can result). Depending on how aggressively you force the bot off its normal track, you could end up with millions or billions of times as many positions for the neural net to learn. Even then, there would be still deeper backgames that the bot does not understand.
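Continuing the toy random-walk analogy (hypothetical, not XG's training procedure), forcing the "bot" off its normal track can be modeled as reducing the drift back toward shallow states. The number of distinct states needing training examples grows sharply, yet the deepest states still go unseen:

```python
# Sketch: comparing how many distinct states self-play reaches under a
# normal policy vs. one forced toward deep states.  Each distinct state
# (and its follow-ups) is another position the net must learn.
import random

random.seed(7)

def states_seen(p_up, steps=60):
    """One 'game' as a reflected walk; return the set of states visited."""
    pos, seen = 0, set()
    for _ in range(steps):
        pos = max(pos + (1 if random.random() < p_up else -1), 0)
        seen.add(pos)
    return seen

def distinct_states(p_up, games=2000):
    seen = set()
    for _ in range(games):
        seen |= states_seen(p_up)
    return len(seen)

normal = distinct_states(0.30)   # normal play stays shallow
forced = distinct_states(0.48)   # policy forced off its usual track
# 'forced' reaches many more distinct states than 'normal', and every
# extra state dilutes the training signal -- while states deeper still
# remain unvisited.
```

The comparison mirrors the trade-off in the post: forcing exploration multiplies the positions to learn faster than it closes the gap on the deepest ones.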

So the fundamental problem is that there are just too many positions for an NN to learn.


BGonline.org Forums is maintained by Stick with WebBBS 5.12.