This is my attempt at creating a machine learning algorithm to map brainwaves to emotional response using Spotify's emotion metadata, the PyTorch machine learning library, BrainFlow's ONNX integration, and VRChat OSC.
Thanks to Nanao Ei for applying their machine learning experience and creating a better model!
- Use Spotify to find songs with various emotion metadata
- Listen to track sections and record your eeg data through Brainflow
- Train a machine learning algo using Pytorch
- Export algo as an ONNX model
- Import ONNX model using Brainflow ONNX integration
- Use Brainflow and the ONNX model to extract emotions from yourself
- Send your live emotion data to your avatar via VRChat OSC
Emotion can be described as a 2D space, where the x axis is the positivity (valence) of the emotion and the y axis is its energy (arousal). For example, relaxed is a positive, low-energy emotion, while excited is a positive, high-energy emotion. Describing emotion this way results in two floats that can be sent to your VRChat avatar to drive various emotional animations.
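As a rough illustration (the function and example values below are my own, not taken from the project code), each emotion reading reduces to a (valence, energy) pair of floats clamped to [-1, 1]:

```python
def encode_emotion(valence, energy):
    """Clamp a (valence, energy) emotion reading to the [-1, 1] range
    used by the avatar parameters."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(valence), clamp(energy)

# Example points in the 2D emotion space (values are illustrative):
relaxed = encode_emotion(0.7, -0.6)   # positive, low energy
excited = encode_emotion(0.8, 0.9)    # positive, high energy
```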
- Install Python
- Install Pip
- Install required libraries with this command:
pip install -r requirements.txt
- Log into Spotify Developer Dashboard
- Create an app, recording its client id and secret
- Set the app's redirect uri to
https://open.spotify.com
- Follow the Authorization Code Flow steps to setup authentication
- Open your spotify client and keep it open
- Run
python get_device_ids.py
and follow the prompts to get authenticated
- The script will return device information. Record the device ID of your chosen device
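For reference, here is a sketch of what a device lookup like this might do, assuming Spotipy is used for authentication. The helper only pulls an ID out of the JSON that Spotify's "Get Available Devices" endpoint returns; the device name in the example is a placeholder, and none of these names come from the actual script:

```python
def device_id_by_name(payload, name):
    """Return the id of the device whose name matches, or None.
    `payload` has the shape Spotify returns: {"devices": [{"id": ..., "name": ...}, ...]}"""
    for device in payload.get("devices", []):
        if device.get("name") == name:
            return device.get("id")
    return None

if __name__ == "__main__":
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    # Reading playback devices requires the user-read-playback-state scope.
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-read-playback-state"))
    print(device_id_by_name(sp.devices(), "My Desktop"))  # "My Desktop" is a placeholder
```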
- Get the metadata for the Spotify EEG Playlist by running
python get_spotify_metadata.py
(This will take a while, only run this once!)
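The metadata of interest here is Spotify's per-track audio features, which include valence and energy on a 0..1 scale. A sketch of turning those into [-1, 1] targets (the helper name and the rescaling choice are my own, not necessarily what the script does):

```python
def features_to_labels(features):
    """Rescale Spotify's 0..1 valence/energy audio features to [-1, 1] pairs."""
    return [(2.0 * f["valence"] - 1.0, 2.0 * f["energy"] - 1.0) for f in features]

if __name__ == "__main__":
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    sp = spotipy.Spotify(auth_manager=SpotifyOAuth())
    track_ids = ["4uLU6hMCjMI75M1A2tKUQC"]          # placeholder track id
    print(features_to_labels(sp.audio_features(track_ids)))
```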
- Have 2 hours to spare
- Have your Spotify Device ID handy
- Get your EEG headband's board ID: Supported Boards
- Turn on and wear your EEG headband
- Run this command
python record_eeg.py --board-id BOARD_ID --spotify-device-id DEVICE_ID
, replacing
BOARD_ID
with your board ID and
DEVICE_ID
with your Spotify Device ID
- Lie back and listen to the music. The script will automatically play sections of music at random, pausing for 5 seconds in between, while recording your brainwaves.
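A minimal sketch of the recording side, assuming BrainFlow's standard session API. The windowing helper and the 5-second recording duration are illustrative; the actual script's logic may differ:

```python
def split_windows(samples, size):
    """Chop one channel's samples into fixed-size windows, dropping any remainder."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, size)]

if __name__ == "__main__":
    import time
    from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

    params = BrainFlowInputParams()  # serial port / MAC address depend on your board
    # SYNTHETIC_BOARD generates fake data, useful for testing without hardware.
    board = BoardShim(BoardIds.SYNTHETIC_BOARD.value, params)
    board.prepare_session()
    board.start_stream()
    time.sleep(5)                    # record while a track section plays
    data = board.get_board_data()    # channels x samples array
    board.stop_stream()
    board.release_session()
```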
- Make sure to have completed steps for
Recording EEG Data while listening to Spotify
- Run this command:
python train.py
- Wait for it to finish
- Once finished, a graph will pop up showing the error rate. It should descend over time.
- Close the graph. An onnx model should now be saved in the project folder
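For context, the training and export step might look roughly like this, assuming a small PyTorch MLP regressing EEG features to (valence, energy). The layer sizes, feature count, and stand-in data are invented for illustration; only the ONNX export call reflects what the step actually produces:

```python
def mse(pred, target):
    """Mean squared error over paired floats (handy for sanity-checking the loop)."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

if __name__ == "__main__":
    import torch
    import torch.nn as nn

    # Hypothetical shapes: 64 EEG features in, (valence, energy) out, bounded by tanh.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2), nn.Tanh())
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(16, 64)        # stand-in batch; real data comes from the recordings
    y = torch.rand(16, 2) * 2 - 1
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    torch.onnx.export(model, torch.randn(1, 64), "model.onnx")
```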
- Turn on and wear your EEG headband
- Run the script main.py with your device id:
python main.py --board-id BOARD_ID
- This will now start sending emotion data to VRChat via OSC, replacing the usual OSC avatar parameters. They will still have a range of [-1, 1]
- Emotion Energy =>
/avatar/parameters/osc_relax_avg
- Emotion Positivity =>
/avatar/parameters/osc_focus_avg
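A sketch of the sending side using the python-osc package, assuming the parameter addresses listed above; the helper name is mine, and the values passed in the example are placeholders:

```python
def emotion_messages(valence, energy):
    """Pair each emotion float (clamped to [-1, 1]) with its avatar parameter address."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return [
        ("/avatar/parameters/osc_focus_avg", clamp(valence)),
        ("/avatar/parameters/osc_relax_avg", clamp(energy)),
    ]

if __name__ == "__main__":
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # VRChat's default OSC input port
    for address, value in emotion_messages(0.4, -0.2):
        client.send_message(address, value)
```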
- Complete the steps above for
Training the model
- Run this command:
jupyter notebook
- Open http://localhost:8888/notebooks/viz-and-modeling.ipynb in a web browser
- Run the cells in the notebook to visualize a UMAP embedding of the EEG signals in
dataset.pkl
. The notebook also contains code for training a random forest regression model, performing cross-validation, and computing training error.
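A rough sketch of what the notebook does, assuming dataset.pkl holds EEG windows plus their emotion labels; the "windows"/"labels" keys and the flattening helper are guesses at the structure, not the notebook's actual code:

```python
def flatten_windows(windows):
    """Flatten each channels x samples window into one feature row
    suitable for UMAP or a random forest."""
    return [[v for channel in w for v in channel] for w in windows]

if __name__ == "__main__":
    import pickle
    import umap                                # from the umap-learn package
    from sklearn.ensemble import RandomForestRegressor

    with open("dataset.pkl", "rb") as f:
        data = pickle.load(f)                  # exact structure depends on record_eeg.py

    X = flatten_windows(data["windows"])
    embedding = umap.UMAP(n_components=2).fit_transform(X)   # 2D visualization
    forest = RandomForestRegressor().fit(X, data["labels"])  # baseline regressor
```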