The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions

This repository is the Python version of the following repository: KamitaniLab/EmotionVideoNeuralRepresentation

This repository contains the data and code for reproducing the results in our paper: Horikawa, Cowen, Keltner, and Kamitani (2020) The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions. iScience. We investigated the neural representation of visually evoked emotions using fMRI responses to 2185 emotionally evocative short videos collected by Alan S. Cowen and Dacher Keltner (PNAS, 2017).

Dataset

  • The preprocessed fMRI data for five subjects and ratings/features (category, dimension, visual object, and semantic) are available at figshare.
  • The raw fMRI data (BIDS format) are available at OpenNeuro.
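
KamitaniLab's preprocessed fMRI releases are typically distributed in their BData (HDF5) format, which can be read with the bdpy package. The snippet below is a minimal loading sketch under that assumption; the file name and ROI selection key are placeholders, so check the metadata of the downloaded files for the actual keys.

```python
# Minimal loading sketch, assuming the preprocessed files follow
# KamitaniLab's BData (HDF5) format readable with bdpy (pip install bdpy).
# The file name and ROI key below are placeholders, not the actual ones.
from bdpy import BData

# Load one subject's preprocessed fMRI data downloaded from figshare.
bdata = BData('Subject1.h5')  # placeholder file name

# Select voxel responses from a region of interest; the actual selector
# depends on how ROIs are labeled in the released files.
fmri = bdata.select('ROI_VC = 1')  # shape: (n_samples, n_voxels)
print(fmri.shape)
```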

Video stimuli

  • We used 2185 emotionally evocative short videos collected by Cowen and Keltner (2017).
  • You can request the videos, along with their emotion ratings, at the following URL (https://goo.gl/forms/XErJw9sBeyuOyp5Q2).

Setup and Usage

The following tutorial explains how to set up the environment and run the analyses. Please download the notebook below or open it in Google Colab.

Tutorials.ipynb (Open in Colab)

* The calculations performed in the Tutorial are greatly simplified.

Please note that:

  • Running the tutorial with its default settings does not reproduce the complete analysis in our paper; nested cross-validation for hyperparameter tuning is currently omitted from the reproduction code (see the sketch after this list).
  • Even with hyperparameter tuning omitted, following the original analysis settings may require large-scale computing resources.
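
For reference, here is a hedged sketch of what nested cross-validation for hyperparameter tuning looks like, using scikit-learn ridge regression on synthetic stand-in data. It illustrates the omitted step only; it is not the repository's exact pipeline, and the decoder, grid, and fold counts are assumptions.

```python
# Sketch of nested cross-validation: an inner loop tunes the regularization
# strength, an outer loop estimates generalization of the tuned decoder.
# Ridge regression and synthetic data are stand-ins, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))  # stand-in fMRI features (samples x voxels)
y = rng.standard_normal(200)         # stand-in rating for one emotion dimension

# Inner loop: grid search over the ridge penalty alpha.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(Ridge(), {'alpha': [0.1, 1.0, 10.0, 100.0]}, cv=inner_cv)

# Outer loop: evaluate the tuned estimator on held-out folds.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(scores.mean())
```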