# Generative Pre-trained Transformer

This repository implements the decoder part of the *Attention Is All You Need* architecture in pure PyTorch. Tokenization is handled by OpenAI's tiktoken library.
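
As a minimal sketch of what the tokenization step looks like, tiktoken maps raw text to integer token IDs and back (the `"gpt2"` encoding here is an assumption; the repository may use a different one):

```python
import tiktoken

# Load a built-in BPE encoding. Whether this repo uses "gpt2"
# or another encoding is an assumption for illustration.
enc = tiktoken.get_encoding("gpt2")

token_ids = enc.encode("To be, or not to be")
print(token_ids)              # list of integer token IDs
print(enc.decode(token_ids))  # round-trips back to the original text
```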

## Installation

You'll need pixi; alternatively, if you already have all the dependencies installed, you can just use pip.

```sh
pixi install              # super easy
pip install . --no-deps   # or, if you already have the dependencies
```

## Usage

```sh
gpt train --iterations 5000 --text data/tiny-shakespeare.txt
```

This will create a `model.pt` and a `config.txt` file.

You can then use those to generate text:

```sh
gpt prompt --model model.pt --config config.txt
```

This generates a fixed number of tokens for the given prompt.