NeuroSync Local API

29/03/2025 Update to model.pth and model.py

  • Increased accuracy: timing is tighter, and the face moves more naturally overall (brows, squint, cheeks, and mouth shapes)
  • Smoother playback (flappy mouth is gone in most cases, even when speaking quickly)
  • Works better across more voices and styles of speaking
  • This preview of the new model is a modest increase in capability; both model.pth and model.py must be replaced with the new versions

Download the model from Hugging Face

These quality gains come from better training data and from removing the "global" (absolute) positional encoding from the model, keeping only RoPE (rotary positional encoding) inside the MHA blocks.
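
For context on that change: an absolute ("global") positional encoding is added to the input embeddings once, before the transformer layers, whereas RoPE rotates the query and key vectors inside each attention block so that relative position enters the attention scores directly. The sketch below is a generic, textbook RoPE implementation for illustration only; it is not the code from model.py.

```python
# Illustrative RoPE (rotary positional encoding) sketch -- a generic
# textbook implementation, NOT the actual code from model.py.
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x (shape: [seq_len, dim]) by a
    position-dependent angle. Applied to queries and keys inside an
    MHA block, this makes attention scores depend on relative position."""
    seq_len, dim = x.shape
    half = dim // 2
    # One rotation frequency per channel pair, as in the RoPE paper.
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```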

Overview

The NeuroSync Local API allows you to host the audio-to-face blendshape transformer model locally. This API processes audio data and outputs facial blendshape coefficients, which can be streamed directly to Unreal Engine using the NeuroSync Player and LiveLink.

Features:

  • Host the model locally for full control
  • Process audio files and generate facial blendshapes (see the client sketch below)
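
As a rough sketch of what calling the hosted API looks like, the snippet below POSTs an audio file and reads back per-frame blendshape coefficients. The host, port, route, and response key used here are assumptions for illustration; check this repo's server script for the actual values.

```python
# Minimal client sketch for the locally hosted API. The endpoint URL and
# the "blendshapes" response key are assumptions -- verify them against
# the server code in this repo.
import requests

API_URL = "http://127.0.0.1:5000/audio_to_blendshapes"  # hypothetical endpoint

def audio_to_blendshapes(wav_path: str) -> dict:
    """POST raw audio bytes to the local API and return the parsed JSON,
    expected to contain per-frame facial blendshape coefficients."""
    with open(wav_path, "rb") as f:
        audio_bytes = f.read()
    response = requests.post(API_URL, data=audio_bytes)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = audio_to_blendshapes("speech.wav")
    print(f"Received {len(result['blendshapes'])} frames")  # key name is an assumption
```

The returned coefficients are what the NeuroSync Player streams into Unreal Engine over LiveLink.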

NeuroSync Model

To generate the blendshapes, you will need the NeuroSync model, available from Hugging Face (linked above) and hosted locally with this API.

Player Requirement

To stream the generated blendshapes into Unreal Engine, you will need the NeuroSync Player. The Player allows for real-time integration with Unreal Engine via LiveLink.

You can find the NeuroSync Player and instructions on setting it up here:

Visit neurosync.info

Talk to a NeuroSync prototype live on Twitch: Visit Mai
