BGonline.org Forums
XG Update (Crossposted from BG News FB group)
Posted By: Frank Berger In Response To: XG Update (Crossposted from BG News FB group) (Timothy Chow)
Date: Friday, 14 July 2023, at 11:57 a.m.
Thanks for the links. I haven't read the second one, but I followed a discussion about it a few weeks ago. I assume it is an adversarial approach like the ones you have seen for images, and I assume it would not take much work on DeepMind's side to fix (but they probably don't care at all).
The first one is quite interesting, because I wasn't aware of the ELF project. The most interesting info was on p. 9: "System can be trained with 2000 GPUs in 2 weeks (20 block version)". That is one reason why such models are only trained by big companies. I remember chess being called trivial by some, "because it took only hours to train a superior AI"; on a single fast PC it would have taken over a year (if I didn't miscalculate).
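To put that figure in perspective, here is the back-of-the-envelope scaling I mean. The 2000-GPU / 2-week numbers are from the slides; the single-GPU comparison assumes perfect parallelism, which is optimistic:

```python
# Back-of-the-envelope: total compute of the ELF run, rescaled to one GPU.
# Assumes perfect parallelism; a real single-GPU run would be even slower.
gpus = 2000
weeks = 2
gpu_weeks = gpus * weeks           # 4000 GPU-weeks of self-play + training
years_on_one_gpu = gpu_weeks / 52  # roughly 77 years on a single GPU
print(f"{gpu_weeks} GPU-weeks is about {years_on_one_gpu:.0f} years on one GPU")
```

The same scaling applies to the chess runs: a training that "took only hours" on thousands of accelerators amounts to years of compute on one fast PC.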
The most interesting stuff, however, was on pages 28/29: the claim that they still haven't solved the "ladder" issue. A ladder is a motif in Go where a long sequence of mechanical moves is played out; it is trivial for humans but difficult for bots, because the result only shows many moves later. This is very similar to deep backgames and containment positions in backgammon, where the payoff likewise only appears many moves down the line. It supports my belief that deep learning / AlphaZero won't solve these open issues just by applying the technique, and won't give an improvement similar to the one in chess or Go.
And I don't believe that we need a vastly different architecture; it's more a case of "there's a lot of tweaking that needs to be done to achieve good performance". Too bad that we don't know much about how Xavier trained his AI; that would be very interesting (at least for me :)). If I finally find the time (this autumn/winter is my goal) I'll tackle backgames and containment positions, and I don't see any reason why this should be unachievable with current approaches. At least for training via self-play it is not out of reach; a minimal sketch of what I mean follows below.
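Just to make the self-play remark concrete, here is a minimal TD(0)-style training loop in the spirit of TD-Gammon. Everything in it is a hypothetical stand-in: BackgameEnv is a toy race game rather than real backgammon, and the linear value function is a placeholder for a proper network trained on backgame/containment rollouts:

```python
import random

FEATURES = 8  # size of the (hypothetical) position encoding

class BackgameEnv:
    """Toy stand-in: a simple race to zero pips, not real backgammon."""
    def reset(self):
        self.pips = random.randint(10, 30)
        return self.features()

    def features(self):
        # One meaningful feature (normalized pip count), rest padding.
        return [self.pips / 30.0] + [0.0] * (FEATURES - 1)

    def step(self):
        self.pips -= random.randint(1, 6)  # one "roll"
        done = self.pips <= 0
        reward = 1.0 if done else 0.0      # 1 = the race is finished
        return self.features(), reward, done

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(episodes=1000, alpha=0.1):
    w = [0.0] * FEATURES
    env = BackgameEnv()
    for _ in range(episodes):
        x = env.reset()
        done = False
        while not done:
            x_next, reward, done = env.step()
            # TD(0): move the current estimate toward reward + next estimate.
            target = reward if done else predict(w, x_next)
            err = target - predict(w, x)
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            x = x_next
    return w

print(train())
```

A real attempt would swap in actual backgame starting positions and a neural network, but the outer loop (play a position out via self-play, update the value estimate toward the observed continuation) stays the same.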
BGonline.org Forums is maintained by Stick with WebBBS 5.12.