Hi, Denis.
Thanks for your work on this library.
I am trying to use it in my research.
However, it seems the learning rates of the GRU and LSTM units are fixed after initialization.
I would like to change them after every epoch.
Could you please give me some advice?
Regards,
Gin
The LSTM and GRU units are built from a set of operations, with all learnable weights living in Dense layers (so there are several Dense layers inside each LSTM or GRU unit). In Dense, the learning rate is indeed fixed after construction, but you can implement a simple learning rate schedule by modifying _learning_rate in Dense::update.
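For instance, here is a minimal sketch of such a schedule. Only _learning_rate and Dense::update come from the comment above; the _steps counter and the updates_per_epoch value are illustrative additions you would have to make yourself (e.g. by declaring `unsigned int _steps = 0;` next to `_learning_rate` in dense.h):

```cpp
// Hedged sketch of a patch to nnetcpp's dense.cpp: decay the learning
// rate on an epoch boundary before performing the usual weight update.
void Dense::update()
{
    // Assumption: you know how many update() calls make up one epoch
    // on your data; 1000 is a placeholder.
    const unsigned int updates_per_epoch = 1000;

    if (++_steps % updates_per_epoch == 0) {
        _learning_rate *= 0.5f;    // e.g. halve the learning rate each epoch
    }

    // ... the original weight-update code of Dense::update continues here ...
}
```

Because each Dense instance keeps its own counter, every Dense layer inside an LSTM or GRU unit follows the same schedule, as long as update() is called once per layer per training step.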
There is unfortunately no way to change the learning rate from outside of nnetcpp, as this is not a feature I envisioned for this (quite minimal) neural network library. By the way, if you need speed, you can have a look at the PyTorch C++ API, which allows you to build neural networks, run forward passes, and backpropagate through them directly from C++. PyTorch implements neural networks on top of optimized matrix multiplication routines, while nnetcpp (in its Dense layer) uses a less efficient matrix-vector product.
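With the PyTorch C++ API (LibTorch), changing the learning rate per epoch is straightforward. A hedged sketch (the model, sizes, and decay factor below are placeholders):

```cpp
#include <cmath>
#include <torch/torch.h>

int main() {
    // Placeholder model: a single LSTM layer (input size 10, hidden size 20).
    torch::nn::LSTM lstm(torch::nn::LSTMOptions(10, 20));
    torch::optim::SGD optimizer(lstm->parameters(), /*lr=*/0.1);

    for (int epoch = 0; epoch < 10; ++epoch) {
        // ... run the training steps for this epoch ...

        // Halve the learning rate after every epoch.
        double new_lr = 0.1 * std::pow(0.5, epoch + 1);
        for (auto& group : optimizer.param_groups()) {
            static_cast<torch::optim::SGDOptions&>(group.options()).lr(new_lr);
        }
    }
}
```

The same loop works for other optimizers by casting group.options() to the matching options type (e.g. torch::optim::AdamOptions).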