
Boost Networks


A Little History

A little while back, the antifish project made a splash by apparently outperforming the equivalent Leela nets. The idea was to play the antifish net (initially just a T30 network) against a dumbed-down SF10 and then train on those games at a very low learning rate; the hypothesis was that this would let antifish outperform SF10. Amazingly, it seemed to work.

I had a somewhat different hypothesis, however: that training on supervised-learning data from alpha-beta engines at a very low learning rate could sharpen a network. So I tried my hand at it, using CCRL data with a batch size of 1024 and a learning rate of 0.00001. The sweet spot was 1000 steps.
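To make the recipe concrete, here's a minimal PyTorch sketch of that kind of low-learning-rate boost. This is not the lczero training pipeline: TinyPolicyValueNet, boost, and the random tensors standing in for encoded CCRL positions, engine best moves, and game results are all illustrative stand-ins. Only the hyperparameters (batch size 1024, lr 0.00001, about 1000 steps) come from the text above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class TinyPolicyValueNet(nn.Module):
    """Toy stand-in for a Leela-style policy/value network."""

    def __init__(self, planes=112, moves=1858):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(planes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.policy_head = nn.Linear(64 * 8 * 8, moves)
        self.value_head = nn.Linear(64 * 8 * 8, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), torch.tanh(self.value_head(h))


def boost(net, loader, steps=1000, lr=1e-5):
    """Short, very-low-learning-rate supervised pass: the 'boost'."""
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    policy_loss = nn.CrossEntropyLoss()
    value_loss = nn.MSELoss()

    def forever(dl):  # cycle the loader until the step budget is spent
        while True:
            yield from dl

    for _, (planes, best_move, result) in zip(range(steps), forever(loader)):
        opt.zero_grad()
        p, v = net(planes)
        loss = policy_loss(p, best_move) + value_loss(v.squeeze(-1), result)
        loss.backward()
        opt.step()


if __name__ == "__main__":
    # Random tensors standing in for encoded CCRL positions,
    # engine best moves, and game results.
    n = 2048
    data = TensorDataset(
        torch.randn(n, 112, 8, 8),        # input planes
        torch.randint(0, 1858, (n,)),     # best-move index (policy target)
        torch.rand(n) * 2 - 1,            # game result in [-1, 1] (value target)
    )
    net = TinyPolicyValueNet()
    boost(net, DataLoader(data, batch_size=1024, shuffle=True), steps=1000)
```

The point of the recipe, as I read it, is simply that the learning rate is tiny and the run is short, so the net gets nudged toward the alpha-beta engine's move choices without washing out what it already knows.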

I tried it on 32930 and my own net, Maddex; it consistently bumped performance up by about 30 Elo.

The Nets

Performance

Very preliminary, at 2+2 on a GTX 1060; a 7000-step boost performs about the same both head-to-head and against SF10 at 1+1:

```
   # PLAYER              :  RATING  ERROR  POINTS  PLAYED   (%)  CFS(%)    W    D    L  D(%)
   1 32930-boost-1000    :       0     30    33.5      60  55.8      77   12   43    5  71.7
   2 32930               :     -25     49    14.0      30  46.7      78    3   22    5  73.3
   3 11258               :     -62     50    12.5      30  41.7     ---    2   21    7  70.0

White advantage = 78.53 +/- 24.80
Draw rate (equal opponents) = 82.20 % +/- 7.64
```