![](https://user-images.githubusercontent.com/786476/241106339-2d967cb0-2a18-429b-8303-1257afe15ffc.png)
A finetuner for LLMs on Intel XPU devices, with which you can finetune the openLLaMA-3b model to sound like your favorite book.
```shell
conda env create -f env.yml
conda activate pyt_llm_xpu
```
Warning: Once PyTorch and Intel Extension for PyTorch are set up, install peft without its dependencies, as peft requires PyTorch 2.0 (not yet supported on Intel XPU devices).
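Installing peft without pulling in its dependencies can be done with pip's `--no-deps` flag (a sketch; no specific peft version pin is given in this repo):

```shell
# Install peft alone, skipping its dependency resolution so the
# existing Intel-XPU PyTorch install is not replaced by PyTorch 2.0
pip install --no-deps peft
```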
Fetch a book from Project Gutenberg (default: Pride and Prejudice) and generate the dataset.
```shell
python fetch_data.py
```
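The actual `fetch_data.py` is not shown here; a minimal sketch of the dataset-generation step, assuming a hypothetical `build_dataset` helper and chunk size, might look like:

```python
import json

def build_dataset(text: str, chunk_size: int = 512) -> list:
    """Split raw book text into fixed-size chunks, one JSON record each."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [{"text": chunk} for chunk in chunks]

if __name__ == "__main__":
    # In the real script the text would come from a Gutenberg download.
    sample = "It is a truth universally acknowledged. " * 100
    records = build_dataset(sample)
    print(len(records), "records")
```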
```shell
python finetune.py --input_data ./book_data.json --batch_size=64 --micro_batch_size=16 --num_steps=300
```
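The `batch_size`/`micro_batch_size` split suggests gradient accumulation, a common pattern in finetuning scripts of this kind (the actual loop in `finetune.py` is not shown here): forward/backward passes run on `micro_batch_size` examples at a time, and the optimizer steps once per `batch_size` examples.

```python
def accumulation_steps(batch_size: int, micro_batch_size: int) -> int:
    """Number of micro-batches accumulated before each optimizer step
    (hypothetical helper illustrating the flag relationship)."""
    assert batch_size % micro_batch_size == 0, "batch_size must divide evenly"
    return batch_size // micro_batch_size

# With the flags above: 64 // 16 = 4 backward passes per optimizer step
print(accumulation_steps(64, 16))
```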
For inference, you can either provide an input prompt, or the model will use a default prompt.
```shell
# Run with the default prompt
python inference.py --infer

# Run with your own prompt
python inference.py --infer --prompt "my prompt"

# Benchmark inference
python inference.py --bench
```
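The commands above imply a flag layout like the following (a hypothetical sketch of the argument parsing, not the actual `inference.py`; the default prompt text is invented):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Flags implied by the inference commands above (hypothetical)."""
    parser = argparse.ArgumentParser(description="Run or benchmark the finetuned model")
    parser.add_argument("--infer", action="store_true", help="generate text from a prompt")
    parser.add_argument("--bench", action="store_true", help="measure generation speed")
    parser.add_argument("--prompt", type=str, default="Once upon a time",
                        help="input prompt; a default is used when omitted")
    return parser

args = build_parser().parse_args(["--infer", "--prompt", "my prompt"])
print(args.infer, args.prompt)
```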