Haibao Wang, Jun Kai Ho, Fan L. Cheng, Shuntaro C. Aoki, Yusuke Muraki, Misato Tanaka, Yukiyasu Kamitani
To begin, clone the repository to your local machine using `git clone` with the URL of this project:
git clone https://github.com/KamitaniLab/InterSiteNeuralCodeConversion.git
Step 1: Navigate to the base directory and create the Conda environment:
conda env create -f env.yaml
Step 2: Activate the environment:
conda activate NCC
To use this project, you'll need to download and organize the required data. The following commands download specific datasets; each is automatically extracted and organized into the designated directory:
# In "data" directory:
# To download the training fMRI data:
python download.py fmri_training
# Or to download the test fMRI data:
python download.py fmri_test
# download the DNN features of training images:
python download.py stimulus_feature
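If you want to sanity-check the downloads, a quick script like the following can verify that the expected subdirectories exist (a minimal sketch; the subdirectory names are assumptions inferred from the download commands above, not taken from the repository):

```python
from pathlib import Path

# Hypothetical layout check: these names mirror the download commands
# above and may not match the repository's actual directory layout.
expected = ["fmri_training", "fmri_test", "stimulus_feature"]

data_dir = Path("data")
for name in expected:
    status = "ok" if (data_dir / name).exists() else "MISSING"
    print(f"{name}: {status}")
```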
You will also need the pre-trained decoders, which can be downloaded from Figshare with the following command:
python download.py pre-trained-decoders
If you prefer to train the decoders yourself (approximately 2 days per subject), detailed instructions and scripts are available in the `feature-decoding` directory.
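The decoders map fMRI activity patterns to DNN features of the presented images. As a rough, self-contained illustration of the idea (not the repository's implementation; see the `feature-decoding` directory for the real pipeline), a ridge-regression feature decoder looks like this:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins for real data: X is fMRI activity (samples x voxels),
# Y is DNN features of the presented images (samples x units).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1200, 500))
Y_train = rng.standard_normal((1200, 100))
X_test = rng.standard_normal((50, 500))

decoder = Ridge(alpha=100.0)       # alpha is an arbitrary choice here
decoder.fit(X_train, Y_train)      # one linear map from voxels to units
Y_pred = decoder.predict(X_test)   # decoded DNN features, shape (50, 100)
print(Y_pred.shape)
```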
To train the neural code converters using content loss for subject pairs, navigate to the `NCC_content_loss` directory and run:
python NCC_train.py --cuda
- Note: Use the `--cuda` flag when running on a GPU server; omit `--cuda` if training on a CPU server.
Training one subject pair usually takes about 15 hours due to the large computational requirements. You can also download the pre-trained converters from Figshare with the following command:
python download.py pre-trained-converters
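Conceptually, content-loss training optimizes the converter so that the target subject's frozen decoders, applied to the converted source activity, reproduce the DNN features of the stimuli shown to the source subject; no stimuli shared between subjects are required. A heavily simplified PyTorch sketch of that objective (the linear converter, the single frozen linear decoder, and all shapes are assumptions for illustration; NCC_train.py is the actual implementation):

```python
import torch

# Toy dimensions: source voxels, target voxels, DNN feature units.
n_src, n_trg, n_units = 400, 500, 100
g = torch.Generator().manual_seed(0)

x_src = torch.randn(200, n_src, generator=g)        # source fMRI patterns
feat_true = torch.randn(200, n_units, generator=g)  # stimulus DNN features

# Frozen (pre-trained) decoder in the target space; random here.
W_dec = torch.randn(n_trg, n_units, generator=g)

# Trainable linear converter: source voxels -> target voxels.
converter = torch.nn.Linear(n_src, n_trg)
opt = torch.optim.Adam(converter.parameters(), lr=1e-3)

for step in range(100):
    x_conv = converter(x_src)            # converted activity
    feat_dec = x_conv @ W_dec            # decode with the frozen decoder
    loss = torch.nn.functional.mse_loss(feat_dec, feat_true)  # content loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```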
To train the neural code converters using brain loss for subject pairs, navigate to the `NCC_brain_loss` directory and run:
python ncc_train.py
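In contrast to content loss, brain-loss training uses paired responses to shared stimuli and minimizes the discrepancy between the converted source activity and the target subject's measured activity directly. A minimal sketch of that objective as a ridge regression (toy data and shapes; ncc_train.py is the actual implementation):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Toy paired data: two subjects' responses to the same training stimuli.
X_src = rng.standard_normal((1200, 400))  # source subject (samples x voxels)
X_trg = rng.standard_normal((1200, 500))  # target subject (samples x voxels)

# Linear converter fit by minimizing ||f(X_src) - X_trg||^2 (the brain loss).
converter = Ridge(alpha=100.0)
converter.fit(X_src, X_trg)
X_conv = converter.predict(X_src)         # converted activity, shape (1200, 500)
print(X_conv.shape)
```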
To decode DNN features from converted brain activities (approximately 80 minutes per subject pair), use the following commands in the corresponding directory (a conceptual sketch of this step follows the list):
- For content loss-based converters:
python NCC_test.py --cuda
- For brain loss-based converters:
python ncc_test.py
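At test time, the converter and the target subject's decoders are simply chained: source test activity is converted into the target space and then decoded into DNN features. A minimal sketch with toy linear maps (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W_conv = rng.standard_normal((400, 500))     # trained converter (toy)
W_dec = rng.standard_normal((500, 100))      # target subject's decoder (toy)
x_src_test = rng.standard_normal((50, 400))  # source-subject test fMRI

x_conv = x_src_test @ W_conv  # convert into the target space
feat = x_conv @ W_dec         # decode DNN features from converted activity
print(feat.shape)             # (50, 100)
```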
To reconstruct images from the decoded features:
- Navigate to the `reconstruction` directory.
- Follow the provided README and reconstruction demo for detailed instructions on setting up the environment and usage.
- Modify the directory of the decoded features in the script as needed to reconstruct images.
The quantitative evaluations are presented in terms of conversion accuracy, decoding accuracy, and identification accuracy.
To calculate raw correlations for conversion accuracy, navigate to the `conversion_accuracy` directory and run the following (the two correlation metrics are sketched after the list):
- For content loss-based converters:
# pattern correlation
python fmri_pattern_corr_content_loss.py
# profile correlation
python fmri_profile_corr_content_loss.py
- For brain loss-based converters:
# pattern correlation
python fmri_pattern_corr_brain_loss.py
# profile correlation
python fmri_profile_corr_brain_loss.py
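Pattern correlation here is the Pearson correlation between a converted and a measured activity pattern computed across voxels (one value per test sample), while profile correlation is computed across samples (one value per voxel). A minimal NumPy sketch of the two metrics (variable names and shapes are illustrative):

```python
import numpy as np

def corr_rows(a, b):
    """Row-wise Pearson correlation between two equally shaped 2-D arrays."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

rng = np.random.default_rng(0)
converted = rng.standard_normal((50, 500))  # converted patterns (samples x voxels)
measured = rng.standard_normal((50, 500))   # measured target patterns

pattern_corr = corr_rows(converted, measured)      # one value per sample
profile_corr = corr_rows(converted.T, measured.T)  # one value per voxel
print(pattern_corr.shape, profile_corr.shape)      # (50,) (500,)
```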
To obtain the normalized correlations and plot Figures 2E and 2F from the provided results, use the following command:
python plot_figure.py
To calculate decoding accuracy for the decoded features, first download the ground-truth features of the stimulus images in the `data` directory using:
python download.py test_image-true_features
Then, navigate to the `decoding_accuracy` directory and run:
python featdec_eval.py
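Decoding accuracy is commonly summarized as the correlation between decoded and ground-truth features, for example per feature unit across test samples. A minimal sketch of such a metric (illustrative only; featdec_eval.py computes the repository's actual metrics):

```python
import numpy as np

rng = np.random.default_rng(0)
decoded = rng.standard_normal((50, 100))  # decoded features (samples x units)
true = rng.standard_normal((50, 100))     # ground-truth features

# Per-unit correlation across test samples (one value per feature unit).
unit_corr = np.array([np.corrcoef(decoded[:, i], true[:, i])[0, 1]
                      for i in range(decoded.shape[1])])
print(unit_corr.mean())
```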
To plot Figures 3B and 3C from the provided results, use the following command:
python plot_figure.py
To quantitatively evaluate the reconstructed images, please request and download the ground-truth stimulus images via the provided link (this is required due to licensing restrictions). Organize the downloaded images in the following directory structure: data/test_image/source.
Then, navigate to the `identification_accuracy` directory and run:
python recon_image_eval.py
python recon_image_eval_dnn.py
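Identification accuracy follows the usual pairwise scheme: a reconstruction counts as correct when it is more similar to its own stimulus than to a distractor, averaged over all distractors. A minimal sketch using correlation as the similarity measure (illustrative; judging from the script names, recon_image_eval_dnn.py likely evaluates in DNN feature space, but that is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
recon = rng.standard_normal((50, 1000))  # reconstructed images, flattened
truth = rng.standard_normal((50, 1000))  # ground-truth stimuli, flattened

# Similarity of every reconstruction to every candidate stimulus.
sim = np.corrcoef(recon, truth)[:50, 50:]

# A comparison is correct when the own stimulus beats the distractor.
wins = sim.diagonal()[:, None] > sim
acc = wins.sum(axis=1) / (sim.shape[1] - 1)  # exclude the self-comparison
print(acc.mean())
```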
To plot Figure 3F from the provided results, use the following command:
python plot_figure.py
Wang, H., Ho, J. K., Cheng, F. L., Aoki, S. C., Muraki, Y., Tanaka, M., & Kamitani, Y. (2024). Inter-individual and inter-site neural code conversion without shared stimuli. arXiv preprint arXiv:2403.11517.