
learning functions more like STDP #25

Open
floybix opened this issue Aug 21, 2015 · 5 comments

@floybix
Member

floybix commented Aug 21, 2015

We know that synaptic connections are reinforced if the source cell fires just before the target (LTP) and punished if the reverse occurs: the target fires before the source (LTD).

Currently only the LTP part is implemented: sources that were active on the previous time step are selected as learning connections to a target cell.

I am not sure how sequence learning should happen in a pooling layer. Because cells can remain active for many time steps during pooling, the current implementation does not work: it ends up with all active pooling cells reinforcing connections to each other, even though the connections are not predictive in any useful sense.

I propose, as an experiment, changing this to exclude any source cells that were also active on the same time step as the target (i.e. the current time step when learning). That would allow sequence transitions to be learned in a pooling layer only when the prior state's cells turn off.

One could imagine looking at the following time step for an LTD implementation, but that would involve deferring learning, which is doable but hopefully not necessary.
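
To make the proposal concrete, here is a minimal sketch in Python of the candidate selection it describes, assuming the active cells at t-1 and t are available as sets of cell ids; the names are hypothetical, not Comportex's actual API:

```python
# Minimal sketch of the proposed LTP candidate selection (hypothetical names).
def ltp_learning_sources(active_prev, active_now):
    """Cells eligible to be reinforced as sources onto a currently active target.

    Excluding cells that are still active on the current step means a
    connection is only grown across a real state transition, i.e. when the
    prior state's cells have turned off.
    """
    return active_prev - active_now


# During pooling, cells 1 and 2 stay on across both steps, so only cell 3
# (which turned off) is treated as a predictive source.
print(ltp_learning_sources({1, 2, 3}, {1, 2, 4}))  # -> {3}
```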

@rcrowder
Member

@floybix Could this be related to pre-synaptic inhibition? Reading Spratling and co.'s work on "Pre-integration lateral inhibition enhances unsupervised learning", for example. Or Fergal's pre-pooler feedback twist?

http://www.inf.kcl.ac.uk/staff/mike/publications.html

@floybix
Member Author

floybix commented Aug 22, 2015

@rcrowder Fascinating reading, thanks (will take me a while to absorb it). The author claims it is just as biologically plausible as the usual post-integration lateral inhibition. But I would like to know what neuroscience experts think of it (and now, a decade after publication). Surely there is evidence on such a fundamental mechanism... Anyway, even if it is not biologically accurate, it may turn out to be computationally useful. I can't see how yet.

Not sure what you mean by "Fergal's pre-pooler feedback twist". Is that like a reverse somersault twist from pike position? :) If you mean "prediction-assisted CLA", i.e. biasing column activation towards those with predicted cells, that does not seem to help with the problem I described (sequence learning in a temporal-pooling layer).
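
For reference, the "biasing column activation towards those with predicted cells" idea could be sketched roughly as below (Python, with hypothetical names and boost factor, not any project's actual API): columns containing predicted cells get their feed-forward overlap boosted before the usual top-k inhibition.

```python
import heapq

# Rough sketch of prediction-assisted column selection: boost the overlap of
# columns that contain predicted cells, then take the top n_active columns.
def select_active_columns(overlaps, predicted_columns, n_active, boost=1.5):
    """overlaps: dict of column id -> feed-forward overlap score."""
    biased = {col: score * boost if col in predicted_columns else score
              for col, score in overlaps.items()}
    return set(heapq.nlargest(n_active, biased, key=biased.get))
```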

@cogmission
Member

Hi Guys!

Felix, I'm curious, what is the state of the art with the "sequence learning in a temporal-pooling layer" you are currently wrestling with?

Cheers,
David



@floybix
Member Author

floybix commented Aug 22, 2015

@cogmission to be honest I don't know. Numenta people are doing various things probably including this, but as far as I know they don't have a working solution yet.

@floybix
Member Author

floybix commented Oct 2, 2015

If we take STDP seriously then the potentiation and depression effects should be symmetric:
[figure: STDP potentiation/depression curve]
(While we are not dealing with individual spikes, HTM cell activation presumably represents some aggregated function of spikes.)

A problem with the current LTP-only learning approach is that cells can learn/grow connections to uninformative signals: if a source cell is constantly on, it will be learned. This may be part of why it is so hard to tune the influence of different senses: every bit/cell that is on is treated equally.

But really I am not sure. Maybe we should learn connections to constant signals anyway, just in case the whole regime/context changes later.
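
For concreteness, a symmetric, discrete-time analogue of that STDP window might look something like the sketch below (Python, with hypothetical names and increments; the depression half uses the following step's activity, so as noted earlier it would mean deferring learning by one step). Note that a constantly-on source falls into the potentiation branch, which is exactly the concern above.

```python
# Sketch of a symmetric STDP-like update for the permanences on one segment
# of a target cell that was active at time t. Names and increments are
# hypothetical; the LTD branch needs the t+1 activity, hence deferred learning.
def stdp_like_update(permanences, active_prev, active_next,
                     inc=0.05, dec=0.05, p_min=0.0, p_max=1.0):
    """permanences: dict of source cell id -> permanence value."""
    updated = {}
    for src, perm in permanences.items():
        if src in active_prev:       # source fired before target: potentiate (LTP)
            perm = min(p_max, perm + inc)
        elif src in active_next:     # target fired before source: depress (LTD)
            perm = max(p_min, perm - dec)
        updated[src] = perm
    return updated
```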
