This is a test repository to learn about Rust, Neural Networks and Reinforcement Learning. The Neural Network implementation has some basic optimizations and the forward pass supports parallelization. However, it comes with some design flaws and is significantly limited by not supporting GPUs or any kind of autodiff.
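As an illustration of the kind of parallelization mentioned above, the forward pass of a dense layer can be run over the samples of a batch in parallel. The sketch below is not the repository's actual code; it assumes the rayon crate and uses made-up names like `dense_forward`:

```rust
// Illustrative sketch only, not the repository's implementation.
// Assumes rayon as a dependency; each sample of the batch is processed in parallel.
use rayon::prelude::*;

/// Computes y = W * x + b for every sample x in the batch.
fn dense_forward(
    weights: &[Vec<f32>], // [out_dim][in_dim]
    bias: &[f32],         // [out_dim]
    batch: &[Vec<f32>],   // [batch_size][in_dim]
) -> Vec<Vec<f32>> {
    batch
        .par_iter() // parallel iteration over the samples
        .map(|x| {
            weights
                .iter()
                .zip(bias.iter())
                .map(|(row, b)| {
                    row.iter().zip(x.iter()).map(|(w, xi)| w * xi).sum::<f32>() + b
                })
                .collect()
        })
        .collect()
}
```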
- The following layers have been implemented: Dense, Dropout, Flatten, Reshape
- Convolution_Layer (weight updates work, but stacking multiple convolution layers is not supported yet)
- The following activation functions have been implemented: Softmax, Sigmoid, ReLu, LeakyReLu
- The following loss functions have been implemented: MSE, RMSE, binary_crossentropy, categorical_crossentropy
- The following optimizers have been implemented: SGD (default), Momentum, AdaGrad, RMSProp, Adam (two of the update rules are sketched after this list)
- Networks work for 1d, 2d, or 3d input. The exact input shape has to be given for the first layer; the following layers adjust their shapes accordingly.
- Available Datasets: Mnist(-Fashion), Cifar10, Cifar100
- Available Agents: Random, Q-Learning, DQN, Double-DQN
- Available Environments: TicTacToe, Fortress (https://www.c64-wiki.com/wiki/Fortress_(SSI))
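To make the optimizer list above more concrete, here is a minimal sketch of two of the standard update rules (Momentum and Adam) applied element-wise to a parameter slice. It illustrates the textbook formulas, not the repository's API; all names and signatures are made up for this example:

```rust
// Textbook update rules for illustration, not the repository's optimizer API.

/// SGD with momentum: v <- beta * v - lr * grad;  w <- w + v
fn momentum_step(w: &mut [f32], v: &mut [f32], grad: &[f32], lr: f32, beta: f32) {
    for i in 0..w.len() {
        v[i] = beta * v[i] - lr * grad[i];
        w[i] += v[i];
    }
}

/// Adam: bias-corrected first (m) and second (v) moment estimates of the gradient.
fn adam_step(
    w: &mut [f32], m: &mut [f32], v: &mut [f32], grad: &[f32],
    lr: f32, beta1: f32, beta2: f32, eps: f32, t: i32, // t = 1-based step count
) {
    for i in 0..w.len() {
        m[i] = beta1 * m[i] + (1.0 - beta1) * grad[i];
        v[i] = beta2 * v[i] + (1.0 - beta2) * grad[i] * grad[i];
        let m_hat = m[i] / (1.0 - beta1.powi(t));
        let v_hat = v[i] / (1.0 - beta2.powi(t));
        w[i] -= lr * m_hat / (v_hat.sqrt() + eps);
    }
}
```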
- MNIST (achieving ~98% accuracy) and CIFAR10 (achieving ~49% accuracy)
- TicTacToe (training results in an optimal agent) and Fortress (at least DDQN performs significantly better than a randomly moving bot)
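The core of the tabular Q-Learning agent listed above is the one-step update rule. The sketch below shows the standard formula with a plain `Vec<Vec<f32>>` Q-table, which is an assumption for this example rather than the repository's data structure:

```rust
// Standard one-step Q-learning update for illustration, not the repository's agent code.
// q_table[state][action] holds the current action-value estimates.

/// Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
fn q_learning_update(
    q_table: &mut [Vec<f32>],
    state: usize,
    action: usize,
    reward: f32,
    next_state: usize,
    done: bool,  // next_state is terminal, so no bootstrapping
    alpha: f32,  // learning rate
    gamma: f32,  // discount factor
) {
    let max_next = if done {
        0.0
    } else {
        q_table[next_state]
            .iter()
            .cloned()
            .fold(f32::NEG_INFINITY, f32::max)
    };
    let td_target = reward + gamma * max_next;
    q_table[state][action] += alpha * (td_target - q_table[state][action]);
}
```

DQN and Double-DQN replace the table with a network that approximates Q(s, ·), but the target term has the same shape.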
- Add backpropagation of the error to conv_layers (an illustrative sketch of the missing input gradient is given below)
- Improve design
- Add GPU support for matrix-matrix multiplications
- Add some autodiff support.
At least the last two TODOs will probably stay for some time.
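For the first TODO, the piece that is still missing is the gradient with respect to the layer input (the kernel gradient is what already drives the weight updates). A minimal single-channel sketch for a "valid" cross-correlation is shown below; the `Vec<Vec<f32>>` matrices and the function name are assumptions for this illustration, not the repository's types:

```rust
// Single-channel sketch of the conv-layer gradients, for illustration only.
// Assumed forward pass ("valid" cross-correlation):
//   y[i][j] = sum_{a,b} x[i + a][j + b] * k[a][b]
// Backpropagation then needs:
//   dL/dk[a][b]      = sum_{i,j} dL/dy[i][j] * x[i + a][j + b]  (kernel gradient, for weight updates)
//   dL/dx[i+a][j+b] += dL/dy[i][j] * k[a][b]                    (input gradient, needed to stack layers)
fn conv2d_backward(
    x: &[Vec<f32>],   // input,             shape [h][w]
    k: &[Vec<f32>],   // kernel,            shape [kh][kw]
    d_y: &[Vec<f32>], // upstream gradient, shape [h - kh + 1][w - kw + 1]
) -> (Vec<Vec<f32>>, Vec<Vec<f32>>) {
    let (h, w) = (x.len(), x[0].len());
    let (kh, kw) = (k.len(), k[0].len());
    let mut d_x = vec![vec![0.0_f32; w]; h];
    let mut d_k = vec![vec![0.0_f32; kw]; kh];
    for i in 0..d_y.len() {
        for j in 0..d_y[0].len() {
            for a in 0..kh {
                for b in 0..kw {
                    d_k[a][b] += d_y[i][j] * x[i + a][j + b];
                    d_x[i + a][j + b] += d_y[i][j] * k[a][b];
                }
            }
        }
    }
    (d_x, d_k)
}
```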