From ce6e30315e36e6529e0f65598eb9b606aa427d36 Mon Sep 17 00:00:00 2001 From: pramodk Date: Fri, 14 Aug 2020 02:09:03 +0200 Subject: [PATCH 01/10] Update README with latest workflow --- README.md | 245 ++++++++++++++++++++++++++++++------------------------ 1 file changed, 135 insertions(+), 110 deletions(-) diff --git a/README.md b/README.md index 341cfa603..60e9a3705 100644 --- a/README.md +++ b/README.md @@ -3,188 +3,213 @@ # CoreNEURON > Optimised simulator engine for [NEURON](https://www.neuron.yale.edu/neuron/) -CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with minimal memory footprint and optimal performance. +CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with small memory footprint and optimal performance. -If you are a new user and would like to use CoreNEURON, [this tutorial](https://github.com/nrnhines/ringtest) will be a good starting point to understand complete workflow of using CoreNEURON with NEURON. +## Features / Compatibility +CoreNEURON is designed as library within NEURON simulator and can transparently handle all spiking network simulations including gap junction coupling with the **fixed time step method**. In order to run NEURON model with CoreNEURON, -## Features - -CoreNEURON can transparently handle all spiking network simulations including gap junction coupling with the fixed time step method. The model descriptions written in NMODL need to be thread safe to exploit vector units of modern CPUs and GPUs. The NEURON models must use Random123 random number generator. - +* MOD files should be THREADSAFE +* MOD files must use Random123 random number generator (instead of MCellRan4) +* POINTER variables in MOD files need special handling. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example of MOD file. We will add documentation about this in near future. ## Dependencies -* [CMake 3.0.12+](https://cmake.org) -* [MOD2C](http://github.com/BlueBrain/mod2c) +* [CMake 3.7+](https://cmake.org) * [MPI 2.0+](http://mpich.org) [Optional] -* [PGI OpenACC Compiler >=18.0](https://www.pgroup.com/resources/accel.htm) [Optional, for GPU systems] -* [CUDA Toolkit >=6.0](https://developer.nvidia.com/cuda-toolkit-60) [Optional, for GPU systems] +* [PGI OpenACC Compiler >=19.0](https://www.pgroup.com/resources/accel.htm) [Optional, for GPU support] +* [CUDA Toolkit >=6.0](https://developer.nvidia.com/cuda-toolkit-60) [Optional, for GPU support] ## Installation -This project uses git submodules which must be cloned along with the repository itself: +CoreNEURON is now integrated into latest development version of NEURON simulator. If you are a NEURON user, preferred way to install CoreNEURON is to enable extra build options duriing NEURON installation as follows: -``` -git clone --recursive https://github.com/BlueBrain/CoreNeuron.git +1. Clone latest version of NEUERON: -``` + ``` + git clone https://github.com/neuronsimulator/nrn + cd nrn + ``` -Set the appropriate MPI wrappers for the C and C++ compilers, e.g.: +2. Create a build directory: -```bash -export CC=mpicc -export CXX=mpicxx -``` + ``` + mkdir build + cd build + ``` -And build using: +3. 
Load modules : Currently CoreNEURON version rely on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compiler to get better performance. This constraint will be removed in near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. Load necessary modules on your system, e.g. -```bash -cmake .. -make -j -``` + ``` + module load intel intel-mpi python cmake + ``` +If you are building on Cray system with GNU toolchain, set following environmental variable: -If you don't have MPI, you can disable MPI dependency using the CMake option `-DCORENRN_ENABLE_MPI=OFF`: + ```bash + export CRAYPE_LINK_TYPE=dynamic + ``` -```bash -export CC=gcc -export CXX=g++ -cmake .. -DCORENRN_ENABLE_MPI=OFF -make -j -``` +3. Run CMake with the appropriate [options](https://github.com/neuronsimulator/nrn#build-using-cmake) and additionally enable CoreNEURON with `-DNRN_ENABLE_CORENEURON=ON` option: -And you can run inbuilt tests using: + ```bash + cmake .. \ + -DNRN_ENABLE_CORENEURON=ON \ + -DNRN_ENABLE_INTERVIEWS=OFF \ + -DNRN_ENABLE_RX3D=OFF \ + -DCMAKE_INSTALL_PREFIX=$HOME/install + ``` +If you would like to enable GPU support with OpenACC, make sure to use `-DCORENRN_ENABLE_GPU=ON` option and use PGI compiler with CUDA. +> NOTE : if you see error and re-run CMake command then make sure to remove temorary CMake cache files by deleting build directory (to avoid cached build results). -``` -make test -``` +4. Build and Install : once configure step is done, you can build and install project as: -### About MOD files + ```bash + make -j + make install + ``` -With the latest master branch, the workflow of building CoreNEURON is same as that of NEURON, especially considering the use of **nrnivmodl**. We provide **nrnivmodl-core** for CoreNEURON and you can build **special-core** as: +## Building Model + +Once NEURON is installed with CoreNEURON support, setup `PATH` and `PYTHONPATH ` variables as: -```bash -/install-path/bin/nrnivmodl-core mod-dir +``` +export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH +export PATH=$HOME/install/bin:$PATH ``` +Like typical NEURON workflow, we have to use `nrnivmodl` to translate MOD files. In addition, we have to use `nrnvmodl-core` to translate mod files for CoreNEURON: -## Building with GPU support +``` +nrnivmodl-core mod_directory +nrnivmodl mod_directory +``` -CoreNEURON has support for GPUs using the OpenACC programming model when enabled with `-DCORENRN_ENABLE_GPU=ON`. Below are the steps to compile with PGI compiler: +If you see any compilation error then one of the mod file is incompatible with CoreNEURON. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example and we can help to fix it. -```bash -module purge -module purge all -module load pgi/18.4 cuda/9.0.176 cmake intel-mpi # change pgi, cuda and mpi modules -export CC=mpicc -export CXX=mpicxx +**NOTE** : If you are building with GPU support, then `nrnivmodl` needs additional flags: -cmake .. -DCORENRN_ENABLE_GPU=ON ``` - -Note that the CUDA Toolkit version should be compatible with PGI compiler installed on your system. Otherwise you have to add extra C/C++ flags. 
For example, if we are using CUDA Toolkit 9.0 installation but PGI default target is CUDA 8.0 then we have to add : - -```bash --DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" +nrnivmodl -incflags "-acc" -loadflags "-acc -rdynamic -lrt -Wl,--whole-archive -Lx86_64/ -lcorenrnmech -L$HOME/install/lib -lcoreneuron -lcudacoreneuron -Wl,--no-whole-archive $CUDA_HOME/lib64/libcudart_static.a" . ``` +We are working on fixes to avoid this additional step. -> If there are large functions / procedures in MOD file that are not inlined by compiler, you need to pass additional c/c++ compiler flags: `-Minline=size:1000,levels:100,totalsize:40000,maxsize:4000` +## Running Simulations -You have to run GPU executable with the `--gpu` or `-gpu`. Make sure to enable cell re-ordering mechanism to improve GPU performance using `--cell_permute` option (permutation types : 2 or 1): +With CoreNEURON, existing NEURON models can be run with minimal to no changes. If you have existing NEURON model, we typically need to make following changes: -```bash -mpirun -n 1 ./bin/nrniv-core --mpi --gpu --tstop 100 --datpath ../tests/integration/ring --cell-permute 2 -``` +* Enable cache effficiency : `h.cvode.cache_efficient(1)` +* Enable CoreNEURON : -Note that if your model is using Random123 random number generator, you can't use same executable for CPU and GPU runs. We suggest to build separate executable for CPU and GPU simulations. This will be fixed in future releases. + ``` + from neuron import coreneuron + coreneuron.enable = True + ``` +* Use `psolve` to run simulation after initialization : + ``` + h.stdinit() + pc.psolve(h.tstop) + ``` -## Building on Cray System +Here is simple example of model that run NEURON first followed by CoreNEURON and compares results between NEURON and CoreNEURON: -On a Cray system the user has to provide the path to the MPI library as follows: -```bash -export CC=`which cc` -export CXX=`which CC` -cmake -DMPI_C_INCLUDE_PATH=$MPICH_DIR/include -DMPI_C_LIBRARIES=$MPICH_DIR/lib -make -j -``` +```python +import sys +from neuron import h, gui -## Optimization Flags +# setup model +h('''create soma''') +h.soma.L=5.6419 +h.soma.diam=5.6419 +h.soma.insert("hh") +h.soma.nseg = 3 +ic = h.IClamp(h.soma(.25)) +ic.delay = .1 +ic.dur = 0.1 +ic.amp = 0.3 -* One can specify C/C++ optimization flags specific to the compiler and architecture with `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. For example: +ic2 = h.IClamp(h.soma(.75)) +ic2.delay = 5.5 +ic2.dur = 1 +ic2.amp = 0.3 -```bash -cmake .. -DCMAKE_CXX_FLAGS="-O3 -g" \ - -DCMAKE_C_FLAGS="-O3 -g" \ - -DCMAKE_BUILD_TYPE=CUSTOM -``` +h.tstop = 10 -* By default OpenMP threading is enabled. You can disable it with `-DCORENRN_ENABLE_OPENMP=OFF` -* By default CoreNEURON uses the SoA (Structure of Array) memory layout for all data structures. You can switch to AoS using `-DCORENRN_ENABLE_SOA=OFF`. +# make sure to enable cache efficiency +h.cvode.cache_efficient(1) +pc = h.ParallelContext() +pc.set_gid2node(pc.id()+1, pc.id()) +myobj = h.NetCon(h.soma(0.5)._ref_v, None, sec=h.soma) +pc.cell(pc.id()+1, myobj) -## RUNNING SIMULATION: +# First run NEURON and record spikes +nrn_spike_t = h.Vector() +nrn_spike_gids = h.Vector() +pc.spike_record(-1, nrn_spike_t, nrn_spike_gids) +h.run() -Note that the CoreNEURON simulator depends on NEURON to build the network model: see [NEURON](https://www.neuron.yale.edu/neuron/) documentation for more information. 
Once you build the model using NEURON, you can launch CoreNEURON on the same or different machine by: +# copy vector as numpy array +nrn_spike_t = nrn_spike_t.to_python() +nrn_spike_gids = nrn_spike_gids.to_python() -```bash -export OMP_NUM_THREADS=2 #set appropriate value -mpiexec -np 2 build/bin/nrniv-core --tstop 10 --datpath /path/to/model/built/by/neuron --mpi -``` +# now run CoreNEURON +from neuron import coreneuron +coreneuron.enable = True +coreneuron.verbose = 0 +h.stdinit() +corenrn_all_spike_t = h.Vector() +corenrn_all_spike_gids = h.Vector() +pc.spike_record(-1, corenrn_all_spike_t, corenrn_all_spike_gids ) +pc.psolve(h.tstop) -[This tutorial](https://github.com/nrnhines/ringtest) provide more information for parallel runs and performance comparison. +# copy vector as numpy array +corenrn_all_spike_t = corenrn_all_spike_t.to_python() +corenrn_all_spike_gids = corenrn_all_spike_gids.to_python() -### Command Line Interface +# check spikes match between NEURON and CoreNEURON +assert(nrn_spike_t == corenrn_all_spike_t) +assert(nrn_spike_gids == corenrn_all_spike_gids) -:warning: :warning: :warning: **In a recent update the command line interface was updated, so please update your scripts accordingly!** +h.quit() +``` -Some details on the new interface: +and run this model as: -The new command line interface is based on CLI11. You can find more details by running `coreneuron_exec --help`. +``` +python test.py +``` -Multiple characters options with single dash (`-gpu`, `-mpi`, `-dt`) are **not** supported anymore. All those options now require a double dash (`--gpu`, `--mpi`, `--dt`), but single characters options still support a single dash (e.g. `-g`). +You can find [HOC example](https://github.com/neuronsimulator/nrn/blob/master/test/coreneuron/test_direct.hoc) here. -The format of the configuration options file has changed, regenerate them if there is any problem. +## Additional Notes -## Results +#### Results -Currently CoreNEURON only outputs spike data as `out.dat` file. +Currently CoreNEURON transfers spikes, voltages and all state variables to NEURON. These variables can be recorded using regular NEURON API (e.g. vector record). -## Running tests +#### Optimization Flags -Once you compile CoreNEURON, unit tests and a ring test will be compiled if Boost is available. You can run tests using +One can specify C/C++ optimization flags specific to the compiler and architecture with `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. For example: ```bash -make test +cmake .. -DCMAKE_CXX_FLAGS="-O3 -g" \ + -DCMAKE_C_FLAGS="-O3 -g" \ + -DCMAKE_BUILD_TYPE=CUSTOM ``` -If you have different mpi launcher, you can specify it during cmake configuration as: +By default OpenMP threading is enabled. You can disable it with `-DCORENRN_ENABLE_OPENMP=OFF` -```bash -cmake .. -DTEST_MPI_EXEC_BIN="mpirun" \ - -DTEST_EXEC_PREFIX="mpirun;-n;2" \ - -DTEST_EXEC_PREFIX="mpirun;-n;2" \ - -DAUTO_TEST_WITH_SLURM=OFF \ - -DAUTO_TEST_WITH_MPIEXEC=OFF \ -``` -You can disable tests using with options: - -``` -cmake .. -CORENRN_ENABLE_UNIT_TESTS=OFF -``` ## License * See LICENSE.txt * See [NEURON](https://www.neuron.yale.edu/neuron/) -* [NMC portal](https://bbp.epfl.ch/nmc-portal/copyright) provides more license information -about ME-type models in testsuite ## Contributors See [contributors](https://github.com/BlueBrain/CoreNeuron/graphs/contributors). - ## Funding CoreNEURON is developed in a joint collaboration between the Blue Brain Project and Yale University. 
This work has been funded by the EPFL Blue Brain Project (funded by the Swiss ETH board), NIH grant number R01NS11613 (Yale University), the European Union Seventh Framework Program (FP7/20072013) under grant agreement n◦ 604102 (HBP) and the Eu- ropean Union’s Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement n◦ 720270 (Human Brain Project SGA1) and Grant Agreement n◦ 785907 (Human Brain Project SGA2). From b72fa7b365c6ed53fa1bfe350b500e71acfa760e Mon Sep 17 00:00:00 2001 From: pramodk Date: Sun, 16 Aug 2020 09:50:50 +0200 Subject: [PATCH 02/10] Cleanup and integration with existing README --- README.md | 151 +++++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 128 insertions(+), 23 deletions(-) diff --git a/README.md b/README.md index 60e9a3705..d40150ea7 100644 --- a/README.md +++ b/README.md @@ -22,9 +22,9 @@ CoreNEURON is designed as library within NEURON simulator and can transparently ## Installation -CoreNEURON is now integrated into latest development version of NEURON simulator. If you are a NEURON user, preferred way to install CoreNEURON is to enable extra build options duriing NEURON installation as follows: +CoreNEURON is now integrated into development version of NEURON simulator. If you are a NEURON user, preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows: -1. Clone latest version of NEUERON: +1. Clone latest version of NEURON: ``` git clone https://github.com/neuronsimulator/nrn @@ -59,7 +59,7 @@ If you are building on Cray system with GNU toolchain, set following environment -DCMAKE_INSTALL_PREFIX=$HOME/install ``` If you would like to enable GPU support with OpenACC, make sure to use `-DCORENRN_ENABLE_GPU=ON` option and use PGI compiler with CUDA. -> NOTE : if you see error and re-run CMake command then make sure to remove temorary CMake cache files by deleting build directory (to avoid cached build results). +> NOTE : if you see error and re-run CMake command then make sure to remove temporary CMake cache files by deleting `CMakeCache.txt`. 4. Build and Install : once configure step is done, you can build and install project as: @@ -77,42 +77,40 @@ export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH export PATH=$HOME/install/bin:$PATH ``` -Like typical NEURON workflow, we have to use `nrnivmodl` to translate MOD files. In addition, we have to use `nrnvmodl-core` to translate mod files for CoreNEURON: +Like typical NEURON workflow, you can use `nrnivmodl` to translate MOD files: ``` -nrnivmodl-core mod_directory nrnivmodl mod_directory ``` -If you see any compilation error then one of the mod file is incompatible with CoreNEURON. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example and we can help to fix it. - - -**NOTE** : If you are building with GPU support, then `nrnivmodl` needs additional flags: +In order to enable CoreNEURON support, we have to use `-coreneuron` flag: ``` -nrnivmodl -incflags "-acc" -loadflags "-acc -rdynamic -lrt -Wl,--whole-archive -Lx86_64/ -lcorenrnmech -L$HOME/install/lib -lcoreneuron -lcudacoreneuron -Wl,--no-whole-archive $CUDA_HOME/lib64/libcudart_static.a" . +nrnivmodl -coreneuron mod_directory ``` -We are working on fixes to avoid this additional step. + +If you see any compilation error then one of the mod file might be incompatible with CoreNEURON. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example and we can help to fix it. 
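For illustration, a minimal end-to-end sketch of this step is shown below. The `mod` directory name and the `init.py` entry script are hypothetical placeholders, and the name of the output directory (`x86_64`) depends on your platform:

```bash
# Translate the MOD files once for both NEURON and CoreNEURON
# (assumes the MOD files live in a hypothetical ./mod directory)
nrnivmodl -coreneuron mod

# Run the model either through the generated binary or through Python;
# NEURON loads the compiled mechanisms from the architecture-specific
# subdirectory (e.g. x86_64/) created in the current working directory.
./x86_64/special -python init.py
# or simply: python init.py
```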
+ ## Running Simulations -With CoreNEURON, existing NEURON models can be run with minimal to no changes. If you have existing NEURON model, we typically need to make following changes: +With CoreNEURON, existing NEURON models can be run with minimal. If you have existing NEURON model, we typically need to make following changes: -* Enable cache effficiency : `h.cvode.cache_efficient(1)` -* Enable CoreNEURON : +1. Enable cache effficiency : `h.cvode.cache_efficient(1)` +2. Enable CoreNEURON : ``` from neuron import coreneuron coreneuron.enable = True ``` -* Use `psolve` to run simulation after initialization : +3. Use `psolve` to run simulation after initialization : ``` h.stdinit() pc.psolve(h.tstop) ``` -Here is simple example of model that run NEURON first followed by CoreNEURON and compares results between NEURON and CoreNEURON: +Here is a simple example of model that run with NEURON first followed by CoreNEURON and compares results between NEURON and CoreNEURON execution: ```python @@ -176,7 +174,7 @@ assert(nrn_spike_gids == corenrn_all_spike_gids) h.quit() ``` -and run this model as: +We can run this model as: ``` python test.py @@ -184,15 +182,15 @@ python test.py You can find [HOC example](https://github.com/neuronsimulator/nrn/blob/master/test/coreneuron/test_direct.hoc) here. -## Additional Notes +## FAQs -#### Results +#### What results are returned by CoreNEURON? -Currently CoreNEURON transfers spikes, voltages and all state variables to NEURON. These variables can be recorded using regular NEURON API (e.g. vector record). +At the end of simulation, CoreNEURON can transfers spikes, voltages, state variables and NetCon weights to NEURON. These variables can be recorded using regular NEURON API (e.g. [Vector.record](https://www.neuron.yale.edu/neuron/static/py_doc/programming/math/vector.html#Vector.record) or [spike_record](https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/network/parcon.html#ParallelContext.spike_record)). -#### Optimization Flags +#### How can I poass additional flags to build? -One can specify C/C++ optimization flags specific to the compiler and architecture with `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. For example: +One can specify C/C++ optimization flags specific to the compiler with `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. For example: ```bash cmake .. -DCMAKE_CXX_FLAGS="-O3 -g" \ @@ -200,8 +198,115 @@ cmake .. -DCMAKE_CXX_FLAGS="-O3 -g" \ -DCMAKE_BUILD_TYPE=CUSTOM ``` -By default OpenMP threading is enabled. You can disable it with `-DCORENRN_ENABLE_OPENMP=OFF` +By default, OpenMP threading is enabled. You can disable it with `-DCORENRN_ENABLE_OPENMP=OFF` + +#### GPU enabled build is failing with inlining related errors, what to do? + +If there are large functions / procedures in MOD file that are not inlined by compiler, you may need to pass additional C++ flags to PGI compiler. You can try: + +``` +cmake .. -DCMAKE_CXX_FLAGS="-O2 -Minline=size:1000,levels:100,totalsize:40000,maxsize:4000" \ + -DCORENRN_ENABLE_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/install +``` + + +## Developer Build + +##### Building standalone CoreNEURON + +If you want to build standalone CoreNEURON version, first download repository as: + +``` +git clone https://github.com/BlueBrain/CoreNeuron.git + +``` + +Once appropriate modules for compiler, MPI, CMake are loaded, you can build CoreNEURON with: + +```bash +mkdir CoreNeuron/build && cd CoreNeuron/build +cmake .. 
-DCMAKE_INSTALL_PREFIX=$HOME/install +make -j && make install +``` + +If you don't have MPI, you can disable MPI dependency using the CMake option `-DCORENRN_ENABLE_MPI=OFF`. Once build is successful, you can run tests using: + +``` +make test +``` + +##### Compiling MOD files + +In order to compiler mod files, one can use **nrnivmodl-core** as: + +```bash +/install-path/bin/nrnivmodl-core mod-dir +``` + +This will create `special-core` executable under `` directory. + +##### Building with GPU support + +CoreNEURON has support for GPUs using the OpenACC programming model when enabled with `-DCORENRN_ENABLE_GPU=ON`. Below are the steps to compile with PGI compiler: + +```bash +module purge all +module load pgi/19.4 cuda/10 cmake intel-mpi # change pgi, cuda and mpi modules +cmake .. -DCORENRN_ENABLE_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/install +make -j && make install +``` + +Note that the CUDA Toolkit version should be compatible with PGI compiler installed on your system. Otherwise, you have to add extra C/C++ flags. For example, if we are using CUDA Toolkit 9.0 installation but PGI default target is CUDA 8.0 then we have to add : + +```bash +-DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" +``` + +You have to run GPU executable with the `--gpu` flag. Make sure to enable cell re-ordering mechanism to improve GPU performance using `--cell_permute` option (permutation types : 2 or 1): + +```bash +mpirun -n 1 ./bin/nrniv-core --mpi --gpu --tstop 100 --datpath ../tests/integration/ring --cell-permute 2 +``` + +> Note: that if your model is using Random123 random number generator, you can't use same executable for CPU and GPU runs. We suggest to build separate executable for CPU and GPU simulations. This will be fixed in future releases. + + +##### Running tests with SLURM + +If you have different mpi launcher, you can specify it during cmake configuration as: + +```bash +cmake .. -DTEST_MPI_EXEC_BIN="mpirun" \ + -DTEST_EXEC_PREFIX="mpirun;-n;2" \ + -DTEST_EXEC_PREFIX="mpirun;-n;2" \ + -DAUTO_TEST_WITH_SLURM=OFF \ + -DAUTO_TEST_WITH_MPIEXEC=OFF \ +``` +You can disable tests using with options: + +``` +cmake .. -CORENRN_ENABLE_UNIT_TESTS=OFF +``` + +##### CLI Options + +To see all CLI options for CoreNEURON, see `./bin/nrniv-core -h`. + +##### Formatting CMake and C++ Code + +In order to format code with `cmake-format` and `clang-format` tools, before creating PR, enable below CMake options: + +``` +cmake .. -DCORENRN_CLANG_FORMAT=ON -DCORENRN_CMAKE_FORMAT=ON +make -j +``` + +and now you can use `cmake-format` or `clang-format` targets: +``` +make cmake-format +make clang-format +``` ## License * See LICENSE.txt From 0b600b470382e62cc2dcff003527f9e3b68664d8 Mon Sep 17 00:00:00 2001 From: pramodk Date: Sun, 16 Aug 2020 10:17:02 +0200 Subject: [PATCH 03/10] small change in the description --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index d40150ea7..07f7a4eba 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # CoreNEURON > Optimised simulator engine for [NEURON](https://www.neuron.yale.edu/neuron/) -CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with small memory footprint and optimal performance. 
+CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with smaller memory footprint and optimal performance. ## Features / Compatibility From c9587fb435ae79e8276cd8ba2b7fea803870edc2 Mon Sep 17 00:00:00 2001 From: pramodk Date: Sun, 16 Aug 2020 20:01:08 +0200 Subject: [PATCH 04/10] Add citation and contribuition information fixes #222 --- README.md | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 07f7a4eba..211062f44 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # CoreNEURON > Optimised simulator engine for [NEURON](https://www.neuron.yale.edu/neuron/) -CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with smaller memory footprint and optimal performance. +CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neuron/) simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with small memory footprint and optimal performance. ## Features / Compatibility @@ -308,12 +308,24 @@ make cmake-format make clang-format ``` +### Citation + +If you would like to know more about the the CoreNEURON or would like to cite it then use following paper: + +* Pramod Kumbhar, Michael Hines, Jeremy Fouriaux, Aleksandr Ovcharenko, James King, Fabien Delalondre and Felix Schürmann. CoreNEURON : An Optimized Compute Engine for the NEURON Simulator ([doi.org/10.3389/fninf.2019.00063](https://doi.org/10.3389/fninf.2019.00063)) + + +### Support / Contribuition + +If you see any issue, feel free to [raise a ticket](https://github.com/BlueBrain/CoreNeuron/issues/new). If you would like to improve this library, see [open issues](https://github.com/BlueBrain/CoreNeuron/issues). + +You can see current [contributors here](https://github.com/BlueBrain/CoreNeuron/graphs/contributors). + + ## License * See LICENSE.txt * See [NEURON](https://www.neuron.yale.edu/neuron/) -## Contributors -See [contributors](https://github.com/BlueBrain/CoreNeuron/graphs/contributors). ## Funding From 89b9f93557181e3e532cf59dbd09bfb50680ba63 Mon Sep 17 00:00:00 2001 From: pramodk Date: Sun, 16 Aug 2020 20:08:05 +0200 Subject: [PATCH 05/10] Mention flex, bison depndencies from NEURON. Fixes #190 --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 211062f44..923a38f14 100644 --- a/README.md +++ b/README.md @@ -19,6 +19,7 @@ CoreNEURON is designed as library within NEURON simulator and can transparently * [PGI OpenACC Compiler >=19.0](https://www.pgroup.com/resources/accel.htm) [Optional, for GPU support] * [CUDA Toolkit >=6.0](https://developer.nvidia.com/cuda-toolkit-60) [Optional, for GPU support] +In addition to this, you will need other [NEURON dependencies](https://github.com/neuronsimulator/nrn) like Python, Flex, Bison etc. 
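As a quick sanity check before configuring the build, you can verify that the main build-time tools are available on your `PATH` (a sketch only; exact version strings and package names vary from system to system):

```bash
# Each command prints a version string; a missing dependency fails immediately
cmake --version | head -n 1
python3 --version
flex --version
bison --version
mpicc --version | head -n 1   # only relevant when building with MPI support
```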
## Installation From 0fd891cd263db63959f7bfcfefd19cd21f53c9fe Mon Sep 17 00:00:00 2001 From: pramodk Date: Mon, 17 Aug 2020 18:25:16 +0200 Subject: [PATCH 06/10] Provide link to documentaiton for BBPCOREPOINTER --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 923a38f14..8ceeabb0e 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ CoreNEURON is designed as library within NEURON simulator and can transparently * MOD files should be THREADSAFE * MOD files must use Random123 random number generator (instead of MCellRan4) -* POINTER variables in MOD files need special handling. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example of MOD file. We will add documentation about this in near future. +* POINTER variable needs to be converted to BBCOREPOINTER ([details here](http://bluebrain.github.io/CoreNeuron/index.html)) ## Dependencies * [CMake 3.7+](https://cmake.org) From f67204aaa8460d33d8ee83cef40d9bf89c10176a Mon Sep 17 00:00:00 2001 From: pramodk Date: Mon, 17 Aug 2020 18:31:50 +0200 Subject: [PATCH 07/10] fix typos! thanks to alex\! --- README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 8ceeabb0e..681ed4909 100644 --- a/README.md +++ b/README.md @@ -90,12 +90,12 @@ In order to enable CoreNEURON support, we have to use `-coreneuron` flag: nrnivmodl -coreneuron mod_directory ``` -If you see any compilation error then one of the mod file might be incompatible with CoreNEURON. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example and we can help to fix it. +If you see any compilation error then one of the mod files might be incompatible with CoreNEURON. Please [open an issue](https://github.com/BlueBrain/CoreNeuron/issues) with an example and we can help to fix it. ## Running Simulations -With CoreNEURON, existing NEURON models can be run with minimal. If you have existing NEURON model, we typically need to make following changes: +With CoreNEURON, existing NEURON models can be run with minimal changes. If you have existing NEURON model, we typically need to make following changes: 1. Enable cache effficiency : `h.cvode.cache_efficient(1)` 2. Enable CoreNEURON : @@ -111,7 +111,7 @@ With CoreNEURON, existing NEURON models can be run with minimal. If you have exi pc.psolve(h.tstop) ``` -Here is a simple example of model that run with NEURON first followed by CoreNEURON and compares results between NEURON and CoreNEURON execution: +Here is a simple example model that runs with NEURON first, followed by CoreNEURON and compares results between NEURON and CoreNEURON execution: ```python @@ -311,7 +311,7 @@ make clang-format ### Citation -If you would like to know more about the the CoreNEURON or would like to cite it then use following paper: +If you would like to know more about the CoreNEURON or would like to cite it then use following paper: * Pramod Kumbhar, Michael Hines, Jeremy Fouriaux, Aleksandr Ovcharenko, James King, Fabien Delalondre and Felix Schürmann. 
CoreNEURON : An Optimized Compute Engine for the NEURON Simulator ([doi.org/10.3389/fninf.2019.00063](https://doi.org/10.3389/fninf.2019.00063)) From c0845f5a0d6279af908d1ebcf9f749e678892447 Mon Sep 17 00:00:00 2001 From: pramodk Date: Tue, 18 Aug 2020 00:50:42 +0200 Subject: [PATCH 08/10] Address review comments --- README.md | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/README.md b/README.md index 681ed4909..a11c46003 100644 --- a/README.md +++ b/README.md @@ -39,12 +39,17 @@ CoreNEURON is now integrated into development version of NEURON simulator. If yo cd build ``` -3. Load modules : Currently CoreNEURON version rely on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compiler to get better performance. This constraint will be removed in near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. Load necessary modules on your system, e.g. +3. Load software dependencies + + Currently CoreNEURON rely on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compiler to generate vectorised code. This constraint will be removed in near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. + + HPC systems often use a module system to select software. For example, you can load compiler, cmake, python dependencies using module as: + ``` module load intel intel-mpi python cmake ``` -If you are building on Cray system with GNU toolchain, set following environmental variable: +Note that if you are building on Cray system with GNU toolchain, you have to following environmental variable: ```bash export CRAYPE_LINK_TYPE=dynamic @@ -65,8 +70,7 @@ If you would like to enable GPU support with OpenACC, make sure to use `-DCORENR 4. Build and Install : once configure step is done, you can build and install project as: ```bash - make -j - make install + make -j install ``` ## Building Model @@ -187,9 +191,9 @@ You can find [HOC example](https://github.com/neuronsimulator/nrn/blob/master/te #### What results are returned by CoreNEURON? -At the end of simulation, CoreNEURON can transfers spikes, voltages, state variables and NetCon weights to NEURON. These variables can be recorded using regular NEURON API (e.g. [Vector.record](https://www.neuron.yale.edu/neuron/static/py_doc/programming/math/vector.html#Vector.record) or [spike_record](https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/network/parcon.html#ParallelContext.spike_record)). +At the end of simulation CoreNEURON transfers by default : spikes, voltages, state variables, NetCon weights, all Vector.record, and most GUI trajectories to NEURON. These variables can be recorded using regular NEURON API (e.g. [Vector.record](https://www.neuron.yale.edu/neuron/static/py_doc/programming/math/vector.html#Vector.record) or [spike_record](https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/network/parcon.html#ParallelContext.spike_record)). -#### How can I poass additional flags to build? +#### How can I pass additional flags to build? One can specify C/C++ optimization flags specific to the compiler with `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. 
For example: @@ -238,7 +242,7 @@ make test ##### Compiling MOD files -In order to compiler mod files, one can use **nrnivmodl-core** as: +In order to compile mod files, one can use **nrnivmodl-core** as: ```bash /install-path/bin/nrnivmodl-core mod-dir @@ -299,7 +303,7 @@ In order to format code with `cmake-format` and `clang-format` tools, before cre ``` cmake .. -DCORENRN_CLANG_FORMAT=ON -DCORENRN_CMAKE_FORMAT=ON -make -j +make -j install ``` and now you can use `cmake-format` or `clang-format` targets: From 42f2e703fbe0af0d4c90219b312eb07070a7c4cf Mon Sep 17 00:00:00 2001 From: Alexandru Savulescu <46521150+alexsavulescu@users.noreply.github.com> Date: Tue, 18 Aug 2020 11:13:58 +0200 Subject: [PATCH 09/10] Few cosmetic touches --- README.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index a11c46003..201e83874 100644 --- a/README.md +++ b/README.md @@ -41,7 +41,7 @@ CoreNEURON is now integrated into development version of NEURON simulator. If yo 3. Load software dependencies - Currently CoreNEURON rely on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compiler to generate vectorised code. This constraint will be removed in near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. + Currently CoreNEURON relies on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compilers to generate vectorised code. This constraint will be removed in the near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. HPC systems often use a module system to select software. For example, you can load compiler, cmake, python dependencies using module as: @@ -49,7 +49,7 @@ CoreNEURON is now integrated into development version of NEURON simulator. If yo ``` module load intel intel-mpi python cmake ``` -Note that if you are building on Cray system with GNU toolchain, you have to following environmental variable: +Note that if you are building on Cray system with GNU toolchain, you have to set the following environment variable: ```bash export CRAYPE_LINK_TYPE=dynamic @@ -99,7 +99,7 @@ If you see any compilation error then one of the mod files might be incompatible ## Running Simulations -With CoreNEURON, existing NEURON models can be run with minimal changes. If you have existing NEURON model, we typically need to make following changes: +With CoreNEURON, existing NEURON models can be run with minimal changes. For a given NEURON model, we typically need to adjust as follows: 1. Enable cache effficiency : `h.cvode.cache_efficient(1)` 2. Enable CoreNEURON : @@ -207,7 +207,7 @@ By default, OpenMP threading is enabled. You can disable it with `-DCORENRN_ENAB #### GPU enabled build is failing with inlining related errors, what to do? -If there are large functions / procedures in MOD file that are not inlined by compiler, you may need to pass additional C++ flags to PGI compiler. You can try: +If there are large functions / procedures in the MOD file that are not inlined by the compiler, you may need to pass additional C++ flags to PGI compiler. You can try: ``` cmake .. 
-DCMAKE_CXX_FLAGS="-O2 -Minline=size:1000,levels:100,totalsize:40000,maxsize:4000" \ @@ -315,7 +315,7 @@ make clang-format ### Citation -If you would like to know more about the CoreNEURON or would like to cite it then use following paper: +If you would like to know more about CoreNEURON or would like to cite it, then use the following paper: * Pramod Kumbhar, Michael Hines, Jeremy Fouriaux, Aleksandr Ovcharenko, James King, Fabien Delalondre and Felix Schürmann. CoreNEURON : An Optimized Compute Engine for the NEURON Simulator ([doi.org/10.3389/fninf.2019.00063](https://doi.org/10.3389/fninf.2019.00063)) From 2c198af106647a9a4035a34e491019a1d5d40f06 Mon Sep 17 00:00:00 2001 From: Omar Awile Date: Tue, 18 Aug 2020 17:23:38 +0200 Subject: [PATCH 10/10] Added a number of minor language fixes. --- README.md | 48 ++++++++++++++++++++++++------------------------ 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/README.md b/README.md index 201e83874..598ca7bac 100644 --- a/README.md +++ b/README.md @@ -7,11 +7,11 @@ CoreNEURON is a compute engine for the [NEURON](https://www.neuron.yale.edu/neur ## Features / Compatibility -CoreNEURON is designed as library within NEURON simulator and can transparently handle all spiking network simulations including gap junction coupling with the **fixed time step method**. In order to run NEURON model with CoreNEURON, +CoreNEURON is designed as a library within the NEURON simulator and can transparently handle all spiking network simulations including gap junction coupling with the **fixed time step method**. In order to run a NEURON model with CoreNEURON: * MOD files should be THREADSAFE -* MOD files must use Random123 random number generator (instead of MCellRan4) -* POINTER variable needs to be converted to BBCOREPOINTER ([details here](http://bluebrain.github.io/CoreNeuron/index.html)) +* MOD files must use the Random123 random number generator (instead of MCellRan4) +* POINTER variables need to be converted to BBCOREPOINTER ([details here](http://bluebrain.github.io/CoreNeuron/index.html)) ## Dependencies * [CMake 3.7+](https://cmake.org) @@ -19,13 +19,13 @@ CoreNEURON is designed as library within NEURON simulator and can transparently * [PGI OpenACC Compiler >=19.0](https://www.pgroup.com/resources/accel.htm) [Optional, for GPU support] * [CUDA Toolkit >=6.0](https://developer.nvidia.com/cuda-toolkit-60) [Optional, for GPU support] -In addition to this, you will need other [NEURON dependencies](https://github.com/neuronsimulator/nrn) like Python, Flex, Bison etc. +In addition to this, you will need other [NEURON dependencies](https://github.com/neuronsimulator/nrn) such as Python, Flex, Bison etc. ## Installation -CoreNEURON is now integrated into development version of NEURON simulator. If you are a NEURON user, preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows: +CoreNEURON is now integrated into the development version of the NEURON simulator. If you are a NEURON user, the preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows: -1. Clone latest version of NEURON: +1. Clone the latest version of NEURON: ``` git clone https://github.com/neuronsimulator/nrn @@ -41,15 +41,15 @@ CoreNEURON is now integrated into development version of NEURON simulator. If yo 3. Load software dependencies - Currently CoreNEURON relies on compiler auto-vectorisation and hence we advise to use Intel/Cray/PGI compilers to generate vectorised code. 
This constraint will be removed in the near future with the integration of [NMODL](https://github.com/BlueBrain/nmodl) project. + Currently CoreNEURON relies on compiler auto-vectorisation and hence we advise to use one of Intel, Cray, or PGI compilers to ensure vectorized code is generated. This constraint will be removed in the near future with the integration of the [NMODL](https://github.com/BlueBrain/nmodl) project. - HPC systems often use a module system to select software. For example, you can load compiler, cmake, python dependencies using module as: + HPC systems often use a module system to select software. For example, you can load the compiler, cmake, and python dependencies using module as follows: ``` module load intel intel-mpi python cmake ``` -Note that if you are building on Cray system with GNU toolchain, you have to set the following environment variable: +Note that if you are building on Cray system with the GNU toolchain, you have to set following environment variable: ```bash export CRAYPE_LINK_TYPE=dynamic @@ -64,10 +64,10 @@ Note that if you are building on Cray system with GNU toolchain, you have to set -DNRN_ENABLE_RX3D=OFF \ -DCMAKE_INSTALL_PREFIX=$HOME/install ``` -If you would like to enable GPU support with OpenACC, make sure to use `-DCORENRN_ENABLE_GPU=ON` option and use PGI compiler with CUDA. -> NOTE : if you see error and re-run CMake command then make sure to remove temporary CMake cache files by deleting `CMakeCache.txt`. +If you would like to enable GPU support with OpenACC, make sure to use `-DCORENRN_ENABLE_GPU=ON` option and use the PGI compiler with CUDA. +> NOTE : if the CMake command files, please make sure to delete temporary CMake cache files (`CMakeCache.txt`) before rerunning CMake. -4. Build and Install : once configure step is done, you can build and install project as: +4. Build and Install : once the configure step is done, you can build and install the project as: ```bash make -j install @@ -75,20 +75,20 @@ If you would like to enable GPU support with OpenACC, make sure to use `-DCORENR ## Building Model -Once NEURON is installed with CoreNEURON support, setup `PATH` and `PYTHONPATH ` variables as: +Once NEURON is installed with CoreNEURON support, you need setup setup the `PATH` and `PYTHONPATH ` environment variables as: ``` export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH export PATH=$HOME/install/bin:$PATH ``` -Like typical NEURON workflow, you can use `nrnivmodl` to translate MOD files: +As in a typical NEURON workflow, you can use `nrnivmodl` to translate MOD files: ``` nrnivmodl mod_directory ``` -In order to enable CoreNEURON support, we have to use `-coreneuron` flag: +In order to enable CoreNEURON support, you must set the `-coreneuron` flag: ``` nrnivmodl -coreneuron mod_directory @@ -191,7 +191,7 @@ You can find [HOC example](https://github.com/neuronsimulator/nrn/blob/master/te #### What results are returned by CoreNEURON? -At the end of simulation CoreNEURON transfers by default : spikes, voltages, state variables, NetCon weights, all Vector.record, and most GUI trajectories to NEURON. These variables can be recorded using regular NEURON API (e.g. [Vector.record](https://www.neuron.yale.edu/neuron/static/py_doc/programming/math/vector.html#Vector.record) or [spike_record](https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/network/parcon.html#ParallelContext.spike_record)). 
+At the end of the simulation CoreNEURON transfers by default : spikes, voltages, state variables, NetCon weights, all Vector.record, and most GUI trajectories to NEURON. These variables can be recorded using regular NEURON API (e.g. [Vector.record](https://www.neuron.yale.edu/neuron/static/py_doc/programming/math/vector.html#Vector.record) or [spike_record](https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/network/parcon.html#ParallelContext.spike_record)). #### How can I pass additional flags to build? @@ -219,14 +219,14 @@ cmake .. -DCMAKE_CXX_FLAGS="-O2 -Minline=size:1000,levels:100,totalsize:40000,ma ##### Building standalone CoreNEURON -If you want to build standalone CoreNEURON version, first download repository as: +If you want to build the standalone CoreNEURON version, first download the repository as: ``` git clone https://github.com/BlueBrain/CoreNeuron.git ``` -Once appropriate modules for compiler, MPI, CMake are loaded, you can build CoreNEURON with: +Once the appropriate modules for compiler, MPI, CMake are loaded, you can build CoreNEURON with: ```bash mkdir CoreNeuron/build && cd CoreNeuron/build @@ -234,7 +234,7 @@ cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/install make -j && make install ``` -If you don't have MPI, you can disable MPI dependency using the CMake option `-DCORENRN_ENABLE_MPI=OFF`. Once build is successful, you can run tests using: +If you don't have MPI, you can disable the MPI dependency using the CMake option `-DCORENRN_ENABLE_MPI=OFF`. Once build is successful, you can run tests using: ``` make test @@ -248,7 +248,7 @@ In order to compile mod files, one can use **nrnivmodl-core** as: /install-path/bin/nrnivmodl-core mod-dir ``` -This will create `special-core` executable under `` directory. +This will create a `special-core` executable under `` directory. ##### Building with GPU support @@ -261,7 +261,7 @@ cmake .. -DCORENRN_ENABLE_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/install make -j && make install ``` -Note that the CUDA Toolkit version should be compatible with PGI compiler installed on your system. Otherwise, you have to add extra C/C++ flags. For example, if we are using CUDA Toolkit 9.0 installation but PGI default target is CUDA 8.0 then we have to add : +Note that the CUDA Toolkit version should be compatible with the PGI compiler installed on your system. Otherwise, you have to add extra C/C++ flags. For example, if we are using CUDA Toolkit 9.0 installation but PGI default target is CUDA 8.0 then we have to add : ```bash -DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" @@ -273,12 +273,12 @@ You have to run GPU executable with the `--gpu` flag. Make sure to enable cell r mpirun -n 1 ./bin/nrniv-core --mpi --gpu --tstop 100 --datpath ../tests/integration/ring --cell-permute 2 ``` -> Note: that if your model is using Random123 random number generator, you can't use same executable for CPU and GPU runs. We suggest to build separate executable for CPU and GPU simulations. This will be fixed in future releases. +> Note: If your model is using Random123 random number generator, you cannot use the same executable for CPU and GPU runs. We suggest to build separate executables for CPU and GPU simulations. This will be fixed in future releases. ##### Running tests with SLURM -If you have different mpi launcher, you can specify it during cmake configuration as: +If you have a different mpi launcher (than `mpirun`), you can specify it during cmake configuration as: ```bash cmake .. 
-DTEST_MPI_EXEC_BIN="mpirun" \
         -DTEST_EXEC_PREFIX="mpirun;-n;2" \
         -DAUTO_TEST_WITH_SLURM=OFF \
         -DAUTO_TEST_WITH_MPIEXEC=OFF
```

You can disable the unit tests with the following option:

```
cmake .. -DCORENRN_ENABLE_UNIT_TESTS=OFF
```

##### CLI Options

To see all CLI options for CoreNEURON, see `./bin/nrniv-core -h`.

##### Formatting CMake and C++ Code

In order to format code with `cmake-format` and `clang-format` tools, before creating a PR, enable the CMake options below:

```
cmake .. -DCORENRN_CLANG_FORMAT=ON -DCORENRN_CMAKE_FORMAT=ON