The clever idea with MacroCol SP inhibition is to use the (n+1)-th dimension to force X cells to cover (subsets of) the same portion of the input space / potential pool.
That's half of it. The other half is to use global inhibition over the (n+1)-th dimension, and to treat everything in the first n dimensions as totally separate inhibition areas. The clever trick is to treat every pair of cells that could possibly interact as either:
- very close to each other: use global inhibition within a macro-column, or
- very far away from each other: too far away to interact at all.
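The macro-column idea above can be sketched in a few lines of plain NumPy. This is an illustrative toy, not the htm.core API; the function name, the layout (consecutive entries form one macro-column), and the top-k rule are all assumptions made for the sketch:

```python
import numpy as np

def macro_column_inhibition(overlaps, macro_col_size, k):
    """Winner-take-all inhibition run independently inside each macro-column.

    `overlaps` is a 1-D array of column overlap scores, laid out so that each
    consecutive `macro_col_size` entries form one macro-column (the extra
    "(n+1)-th" dimension). Cells in the same macro-column compete globally;
    cells in different macro-columns never interact.
    Returns a boolean array marking the top-k columns per macro-column.
    """
    active = np.zeros(overlaps.size, dtype=bool)
    for start in range(0, overlaps.size, macro_col_size):
        block = overlaps[start:start + macro_col_size]
        winners = np.argsort(block)[-k:]   # top-k inside this macro-column only
        active[start + winners] = True
    return active

# 8 columns grouped into 2 macro-columns of 4; pick 1 winner per macro-column
overlaps = np.array([3, 9, 1, 4,   7, 2, 8, 5])
print(macro_column_inhibition(overlaps, macro_col_size=4, k=1))
# winners: column 1 (overlap 9) and column 6 (overlap 8)
```

Note that column 6 (overlap 8) wins its macro-column even though column 1 (overlap 9) would beat it under global inhibition: the two never compete.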
2. Separate inhibition & topology, so the rest of the SP can stay the same.
2.1 Use an inhibition mask (an nD matrix with 1s for the cells that are used) to allow custom topologies.
Example:
111
101
111
This "ring" mask creates a topology where only the nearest neighbouring columns are included in the neighbourhood and inhibition.
An arbitrary mask can model, for example, chess moves: you can build a topology of knight's moves, https://en.wikipedia.org/wiki/Knight_(chess).
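A minimal sketch of the mask idea, in plain NumPy (hypothetical function name, not the htm.core API): the mask is centred on a column, and the 1-entries select which grid positions belong to that column's neighbourhood.

```python
import numpy as np

def neighbors_from_mask(grid_shape, col, mask):
    """Return the neighbour coordinates of `col` selected by an inhibition mask.

    `mask` is a small nD 0/1 matrix centred on the column; 1 marks positions
    that belong to the neighbourhood used for inhibition. A "ring" mask
    (1s around a 0 centre) gives nearest-neighbour topology; any other mask,
    e.g. knight's moves, gives a custom topology with the same code.
    """
    centre = np.array(mask.shape) // 2
    neigh = []
    for offset in np.argwhere(mask == 1):
        pos = np.array(col) + offset - centre
        if np.all(pos >= 0) and np.all(pos < np.array(grid_shape)):
            neigh.append(tuple(int(x) for x in pos))
    return neigh

# "ring" mask: the 8 nearest neighbours, excluding the column itself
ring = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])
print(neighbors_from_mask((5, 5), (0, 0), ring))  # only 3 survive at a corner
```

A knight's-move topology is just a different 5x5 mask with 1s at the eight L-shaped offsets; nothing else in the code changes.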
3. Replace the existing local and global inhibition completely with TopologicalInhibition.
First create deterministic checks for the SP's output (Deterministic Algorithms, #194) using local and global inhibition, e.g. on the MNIST and Hotgym examples.
Local inhibition equals topological inhibition with a macro-column size of 1 (i.e., 1 cell/mini-column per macro-column) and numMacroCols == numColumns.
Global inhibition equals topological inhibition where there is just one (huge) macro-column containing all numColumns (mini-)columns.
These two extremes show that a single algorithm covers both local and global inhibition, and topological inhibition allows any value in between (hence the great results on MNIST).
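The global extreme is easy to check in code. A minimal self-contained sketch (plain NumPy, hypothetical names, not the htm.core API): with a single macro-column spanning all columns, per-macro-column winner-take-all reduces exactly to plain global top-k inhibition.

```python
import numpy as np

def topo_inhibition(overlaps, macro_col_size, k):
    """Top-k winner-take-all applied independently per macro-column."""
    active = np.zeros(overlaps.size, dtype=bool)
    for start in range(0, overlaps.size, macro_col_size):
        block = overlaps[start:start + macro_col_size]
        k_eff = min(k, block.size)
        active[start + np.argsort(block)[-k_eff:]] = True
    return active

overlaps = np.array([3, 9, 1, 4, 7, 2, 8, 5])

# One huge macro-column spanning all columns == plain global inhibition
global_topo = topo_inhibition(overlaps, macro_col_size=overlaps.size, k=2)
global_plain = np.zeros(overlaps.size, dtype=bool)
global_plain[np.argsort(overlaps)[-2:]] = True
print(np.array_equal(global_topo, global_plain))  # the two outputs agree
```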
4. Add proximal (local) and distal horizontal (within-SP) connections to the SP.
This is part of #156 and has the potential to add two big features:
A) remove the current inhibition algorithm,
B) improve recall of partial memories (associative memory, self-organizing maps, ...).

A) Replace inhibition with proximal connections:
when firing (active), a cell depolarizes (inhibits) the cells it reaches through its proximal connections. After every cell has done this (which can be a fast matrix operation), each cell holds its summed inhibition; applying a threshold then determines which cells are inhibited.
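The "fast matrix operation" above amounts to a single matrix-vector product. A minimal sketch with a made-up 4-cell weight matrix (the weights and threshold are arbitrary toy values, not anything from the codebase):

```python
import numpy as np

# Hypothetical proximal-inhibition weights: W[i, j] > 0 means an active
# cell j depolarizes (inhibits) cell i through a proximal connection.
W = np.array([[0. , 0.6, 0.7, 0. ],
              [0.6, 0. , 0. , 0.2],
              [0.7, 0. , 0. , 0.5],
              [0. , 0.2, 0.5, 0. ]])

active = np.array([1, 0, 1, 0])  # currently firing cells

# One matrix product gives every cell its summed inhibition from all
# firing cells; a threshold then decides who is knocked out.
inhibition = W @ active
inhibited = inhibition > 0.5
print(inhibition)  # [0.7 0.6 0.7 0.5]
print(inhibited)   # [ True  True  True False]
```

The whole inhibition step is one matmul and one comparison, which is why it can replace the explicit neighbourhood scan of the current inhibition algorithm.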
B) Auto-associative memory (new feature):
implemented by distal (longer-range) connections. It works as follows:
- train the SP on a set of patterns (say, MNIST digits);
- after training, hide half of a pattern (only the other half is presented);
- the SP then outputs an SDR at only ~50% of the usual SDR size;
- downstream algorithms usually still work, thanks to the redundancy of SDRs (distributed representation);
- an SOM-like SP with distal learning will recall (most of) the original pattern.
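The recall step above can be sketched with a Hebbian toy model (this is a classic auto-associative scheme on binary patterns, not the proposed SP implementation; pattern sizes and bit positions are arbitrary):

```python
import numpy as np

n = 16
# Two sparse binary patterns standing in for learned SDRs
p1 = np.zeros(n); p1[[0, 3, 7, 12]] = 1
p2 = np.zeros(n); p2[[1, 5, 9, 14]] = 1

# Hebbian "distal" weights: bits that were co-active reinforce each other
W = np.zeros((n, n))
for p in (p1, p2):
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

# Present only half of p1 (bits 0 and 3); bits 7 and 12 are hidden
cue = np.zeros(n); cue[[0, 3]] = 1

# Each bit sums the distal support it receives from the active cue bits;
# the best-supported bits are recalled
support = W @ cue
recalled = support >= support.max()
print(np.flatnonzero(recalled))  # bits 7 and 12: the hidden half of p1
```

The hidden half wins because both of its bits were co-active with both cue bits during training, while bits of the other pattern receive no support at all.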
Goal
Improve the SP's speed and SDR quality by honoring the topology of the input space in the column space (SP).
Relevant issues:
MNIST example #242 (comment)