learning functions more like STDP #25
Comments
@floybix Could this be related to pre-synaptic inhibition? Reading Spratling and co.'s work on "Pre-integration lateral inhibition enhances unsupervised learning", for example. Or Fergal's pre-pooler feedback twist?
@rcrowder Fascinating reading, thanks (will take me a while to absorb it). The author claims it is just as biologically plausible as the usual post-integration lateral inhibition. But I would like to know what neuroscience experts think of it (and now, a decade after publication). Surely there is evidence on such a fundamental mechanism... Anyway, even if it is not biologically accurate, it may turn out to be computationally useful. I can't see how yet. Not sure what you mean by "Fergal's pre-pooler feedback twist". Is that like a reverse somersault twist from pike position? :) If you mean "prediction-assisted CLA", i.e. biasing column activation towards those with predicted cells, that does not seem to help with the problem I described (sequence learning in a temporal-pooling layer).
Hi Guys! Felix, I'm curious, what is the state of the art with the "sequence …" problem?
With kind regards, David Ray, Cortical.io http://cortical.io/
@cogmission to be honest I don't know. Numenta people are doing various things probably including this, but as far as I know they don't have a working solution yet. |
We know that synaptic connections are reinforced if the source cell fires just before the target (LTP) and punished if the reverse occurs, i.e. the target fires before the source (LTD).
Currently only the LTP part is implemented, by selecting, as sources for a target cell, cells that were active on the previous time step.
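As a rough illustration only (this is not the project's actual code; the function name, the `(source, target) -> permanence` dictionary, and the increment value are all hypothetical), the current LTP-only rule amounts to something like:

```python
# Hypothetical sketch of the LTP-only rule: each cell active on the current
# step reinforces synapses from cells that were active on the previous step.

def learn_ltp(synapses, active_prev, active_now, inc=0.05):
    """synapses: dict mapping (source, target) -> permanence in [0.0, 1.0]."""
    for target in active_now:
        for source in active_prev:
            key = (source, target)
            # source fired just before target, so reinforce source -> target
            synapses[key] = min(1.0, synapses.get(key, 0.0) + inc)
    return synapses
```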
I am not sure how sequence learning should happen in a pooling layer. Because cells can remain active for many time steps during pooling, the current implementation does not work: it ends up with all active pooling cells reinforcing connections to each other, even though the connections are not predictive in any useful sense.
I propose, as an experiment, changing this to exclude any source cells that were also active on the same time step as the target (i.e. the current time step when learning). That would allow sequence transitions to be learned in a pooling layer only when the prior state's cells turn off.
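A minimal sketch of that change, under the same hypothetical representation as above (again, not the project's actual API): the candidate sources are the cells active on the previous step minus those still active on the current step, so long-lived pooling cells never reinforce connections to one another.

```python
# Hypothetical sketch of the proposed rule: exclude cells that are still
# active on the current step, so co-active pooling cells do not end up
# reinforcing connections to each other.

def learn_ltp_excluding_coactive(synapses, active_prev, active_now, inc=0.05):
    candidates = set(active_prev) - set(active_now)  # the key change
    for target in active_now:
        for source in candidates:
            key = (source, target)
            synapses[key] = min(1.0, synapses.get(key, 0.0) + inc)
    return synapses
```

With this rule a transition is only learned when the prior state's cells actually turn off, which is the behaviour described above.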
One could imagine looking at the following time step for an LTD implementation, but that would involve deferring learning, which is doable but hopefully not necessary.
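If the LTD side were attempted, one way (purely a sketch, with the same hypothetical names as the snippets above) would be to apply learning for step t-1 at step t, punishing synapses whose target fired before the source:

```python
# Hypothetical sketch of deferred LTD: at step t we punish synapses where the
# target was active at t-1 and the source only became active at t.

def learn_ltd_deferred(synapses, active_prev, active_now, dec=0.02):
    for source in active_now:
        for target in active_prev:
            key = (source, target)
            if key in synapses:
                # target fired before source, so weaken source -> target
                synapses[key] = max(0.0, synapses[key] - dec)
    return synapses
```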