DevPage: Moritz Willig

CrazyAra

CrazyAra is an open-source, neural-network-based engine for the chess variant Crazyhouse [1]. The engine was written by Johannes Czech, Alena Beyer, and me during a semester project at the Technische Universität Darmstadt, as part of the course "Deep Learning: Architectures & Methods" in the summer semester 2018. At the time of writing this article, the engine holds an Elo rating of 2594 on lichess.org [2].
The engine is inspired by the Alpha-(Go)-Zero papers by Silver, Hubert, Schrittwieser et al. [3].
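At the core of the AlphaZero approach is a Monte Carlo tree search guided by a neural network's policy priors and value estimates, with moves selected by the PUCT rule. The following is a minimal, illustrative sketch of that selection step only; the data structure, the function names, and the exploration constant are my own assumptions and not CrazyAra's actual search code.

import math

def select_child(children, c_puct=2.5):
    """Pick the child maximizing Q + U (PUCT).

    children: list of dicts with prior probability P, visit count N,
    and accumulated value W. All names here are illustrative.
    """
    total_visits = sum(ch["N"] for ch in children)

    def puct(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0          # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])  # exploration bonus
        return q + u

    return max(children, key=puct)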

The neural network was trained in a supervised fashion using the lichess.org data set [4]. From the games played between January 2016 and June 2018, we selected all matches in which both players had an Elo rating of 2000 or above, resulting in 569,537 games to train on. Further details about the training can be found in the project wiki [5].
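To give an idea of what this selection step looks like, here is a small sketch that filters a lichess Crazyhouse PGN dump down to games where both players are rated 2000 or above. It uses the python-chess library; the file names are hypothetical, and this is not the project's actual preprocessing pipeline (see the wiki [5] for that).

import chess.pgn

MIN_ELO = 2000

def filter_games(pgn_path, out_path):
    """Copy only games where both players have an Elo >= MIN_ELO."""
    kept = 0
    with open(pgn_path) as pgn, open(out_path, "w") as out:
        while True:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break  # end of the PGN file
            try:
                white = int(game.headers.get("WhiteElo", 0))
                black = int(game.headers.get("BlackElo", 0))
            except ValueError:
                continue  # skip games with missing or "?" ratings
            if white >= MIN_ELO and black >= MIN_ELO:
                out.write(str(game) + "\n\n")
                kept += 1
    return kept

if __name__ == "__main__":
    print(filter_games("lichess_db_crazyhouse_rated.pgn", "filtered_2000plus.pgn"))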
Additionally, the model architecture from the original paper was adapted to improve overall performance and training convergence. We combined multiple well-known techniques to fit the requirements of predicting Crazyhouse chess moves. The exact architecture of our 'RISE' network (ResNeXt, Inception, Squeeze-and-Excitation) is described in the wiki [6].
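As an illustration of one of the named building blocks, here is a generic, textbook-style Squeeze-and-Excitation block written in PyTorch. It is a sketch rather than the RISE network's exact code; the framework choice, the reduction ratio of 16, and the example tensor shape are assumptions.

import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel reweighting: global pool ("squeeze"), then a small MLP ("excitation")."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze spatial dims to 1x1
        self.fc = nn.Sequential(                       # per-channel gates in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                    # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)                # per-channel scaling factors
        return x * w                                   # reweight the feature maps

# Example: rescale the channels of an 8x8 board-plane feature map.
features = torch.randn(8, 256, 8, 8)
print(SqueezeExcitation(256)(features).shape)          # torch.Size([8, 256, 8, 8])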

Additional Links:

[1] Repository: https://github.com/QueensGambit/CrazyAra/
[2] CrazyAra on lichess: https://lichess.org/@/CrazyAra

References

[3] David Silver, Thomas Hubert, Julian Schrittwieser et al.: Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm (BibTeX below)
[4] https://database.lichess.org/
[5] https://github.com/QueensGambit/CrazyAra/wiki/Supervised-training
[6] https://github.com/QueensGambit/CrazyAra/wiki/Model-architecture

BibTeX

@article{DBLP:journals/corr/abs-1712-01815,
  author    = {David Silver and
               Thomas Hubert and
               Julian Schrittwieser and
               Ioannis Antonoglou and
               Matthew Lai and
               Arthur Guez and
               Marc Lanctot and
               Laurent Sifre and
               Dharshan Kumaran and
               Thore Graepel and
               Timothy P. Lillicrap and
               Karen Simonyan and
               Demis Hassabis},
  title     = {Mastering Chess and Shogi by Self-Play with a General Reinforcement
               Learning Algorithm},
  journal   = {CoRR},
  volume    = {abs/1712.01815},
  year      = {2017},
  url       = {http://arxiv.org/abs/1712.01815},
  archivePrefix = {arXiv},
  eprint    = {1712.01815},
  timestamp = {Mon, 13 Aug 2018 16:46:01 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1712-01815},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}