optimise for memory for very large all by all NBLAST #40

Open
jefferis opened this issue Jun 18, 2020 · 0 comments

  • Use a pattern of small (e.g. 100 x 100) blocks that might take tens of seconds to a few minutes to compute
  • this should work better than doing a whole row or column, which might contain 20-50k neurons.
  • need to implement an x by y NBLAST function instead of an all by all NBLAST for each block (would the current NBLAST function be OK?)
  • inputs could be a neuronlistfh, read in by each process. I suspect that read time will be trivial compared with search time so long as blocks take tens of seconds to compute. This should work well for memory.
  • ideally we would parallelise across those blocks, with progress reporting
  • if computing mean scores, we might want to do forward and reverse scores at the same time, since they use the same sets of neurons
  • we might wish to fill a sparse matrix with the results, keeping only scores above a threshold
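The blocked strategy above can be sketched roughly as follows. This is a Python illustration of the idea, not the R implementation: `nblast_block` is a hypothetical stand-in for a real x-by-y NBLAST (random scores stand in for real ones; a real worker would read its neurons from a `neuronlistfh` and call NBLAST), mean scores combine forward and reverse while both neuron sets are in memory, and only the upper triangle of blocks is computed because the mean matrix is symmetric. Results above a threshold go into a sparse (dict-based here) store.

```python
import numpy as np

# Hypothetical stand-in for an x-by-y NBLAST that scores one block of
# query neurons against one block of target neurons. A real worker would
# load the neurons (e.g. from a filehash-backed neuronlistfh) and run the
# pairwise NBLAST; here we just return deterministic random scores.
def nblast_block(query_ids, target_ids):
    rng = np.random.default_rng((query_ids[0] + 1) * 100003 + target_ids[0])
    return rng.random((len(query_ids), len(target_ids)))

def mean_score_block(query_ids, target_ids):
    # Forward and reverse scores use the same two sets of neurons, so
    # compute both while they are in memory and average them.
    fwd = nblast_block(query_ids, target_ids)
    rev = nblast_block(target_ids, query_ids)
    return (fwd + rev.T) / 2.0

def blocked_all_by_all(ids, block=100, threshold=0.0):
    """All-by-all mean scores, one small block at a time, stored sparsely."""
    n = len(ids)
    scores = {}  # (row, col) -> mean score above threshold
    for i0 in range(0, n, block):
        for j0 in range(i0, n, block):  # upper triangle only: mean is symmetric
            s = mean_score_block(ids[i0:i0 + block], ids[j0:j0 + block])
            for r, c in zip(*np.nonzero(s > threshold)):
                scores[(i0 + r, j0 + c)] = s[r, c]
                scores[(j0 + c, i0 + r)] = s[r, c]  # mirror lower triangle
    return scores
```

Each (i0, j0) block is independent of the others, so the inner loop body is exactly the unit one would farm out to parallel workers with per-block progress (e.g. `parallel`/`future` in R), with each worker opening its own `neuronlistfh` and reading only the two blocks of neurons it needs.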