
Changing the learning rate #6

Open
ooocct opened this issue Jun 27, 2021 · 1 comment

Comments

ooocct commented Jun 27, 2021

Hi, Denis.
Thanks for your work on this library.
I am trying to use it in my research.
However, it seems that the learning rates of the GRU and LSTM units are fixed after initialization.
I would like to change them after every epoch.
Could you please give me some advice?
Regards,
Gin

@steckdenis
Owner

Hello,

The LSTM and GRU units are built from a set of operations, with all learnable weights held in Dense layers (so an LSTM or GRU unit contains a few Dense layers). In Dense, the learning rate is indeed constant after construction, but you can implement a simple learning rate schedule by modifying _learning_rate in Dense::update.
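
A rough sketch of what that modification could look like, assuming an exponential decay applied on every call to update() (the method signature, decay constant, and surrounding body are illustrative, not the actual nnetcpp code):

```cpp
// Sketch only: inside nnetcpp's Dense::update, after the existing code
// that applies the gradient, shrink the stored step size on every call.
// _learning_rate is the member mentioned above; the decay constant is a
// hypothetical value you would tune for your training run.
void Dense::update()
{
    // ... existing gradient-application code using _learning_rate ...

    // Hypothetical exponential decay: roughly 0.05% shrink per update.
    _learning_rate *= 0.9995f;
}
```

If you want a per-epoch schedule instead of a per-step one, you could also expose a small setter on Dense and call it from your training loop once per epoch.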

There is unfortunately no way to change the learning rate from outside nnetcpp, as this is not a feature I envisioned for this (quite minimal) neural network library. By the way, if you need speed, you can have a look at the PyTorch C++ API, which lets you build neural networks, run forward passes, and use autograd directly from C++. PyTorch implements neural networks on top of optimized matrix multiplication routines, while nnetcpp (in its Dense layer) uses a less efficient matrix-vector product.
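
If you go that route, here is a minimal self-contained libtorch sketch that changes the learning rate after every epoch through the optimizer's parameter groups (the network sizes, random data, and decay factor are placeholders):

```cpp
#include <torch/torch.h>

int main()
{
    // Toy network; the layer sizes are placeholders.
    torch::nn::Sequential model(
        torch::nn::Linear(8, 16),
        torch::nn::Tanh(),
        torch::nn::Linear(16, 1));

    torch::optim::SGD optimizer(model->parameters(),
                                torch::optim::SGDOptions(0.1));

    // Random data standing in for a real dataset.
    torch::Tensor x = torch::randn({32, 8});
    torch::Tensor y = torch::randn({32, 1});

    for (int epoch = 0; epoch < 10; ++epoch) {
        optimizer.zero_grad();
        torch::Tensor loss = torch::mse_loss(model->forward(x), y);
        loss.backward();
        optimizer.step();

        // Per-epoch schedule: halve the learning rate of every
        // parameter group after each epoch (factor is a placeholder).
        for (auto& group : optimizer.param_groups()) {
            auto& opts = static_cast<torch::optim::SGDOptions&>(group.options());
            opts.lr(opts.lr() * 0.5);
        }
    }
}
```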
