An implementation of an all-new foundation model architecture that trains on byte sequences from multiple modalities to handle omni-modal generation of text, video, images and more.
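The description's core idea is that every modality is ultimately a byte stream, so one shared 256-symbol vocabulary can cover text, images, and video alike. A minimal sketch of that idea (illustrative only, not code from the project):

```python
# Byte-level tokenization sketch: any modality is just a byte sequence,
# so a single vocabulary of 256 byte values covers all of them.
# This is an assumption for illustration, not the project's actual code.

def bytes_to_tokens(data: bytes) -> list[int]:
    """Map raw bytes to token IDs 0..255 using one shared vocabulary."""
    return list(data)

# Text and image data pass through the same tokenizer unchanged.
text_tokens = bytes_to_tokens("hello".encode("utf-8"))
png_tokens = bytes_to_tokens(b"\x89PNG\r\n")
```

Because the vocabulary is fixed at 256 entries, no modality-specific tokenizer or codebook is needed; the trade-off is much longer sequences than subword tokenization would produce.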
Updated Feb 10, 2025 - Python