Welcome to the "heron" repository. Heron is a library that seamlessly integrates multiple Vision and Language models, as well as Video and Language models. One of its standout features is its support for Japanese V&L models. Additionally, we provide pretrained weights trained on various datasets.
A demo is available here: [Demo]
Heron allows you to configure your own V&L models by combining various modules. The Vision Encoder, Adapter, and LLM can be specified in the configuration file. The distributed training method and the datasets used for training can also be easily configured.
git clone https://github.com/turingmotors/heron.git
cd heron
We recommend using a virtual environment to install the required packages. If you want to install the packages globally, use `pip install -r requirements.txt` instead.
Using pyenv and Poetry, you can install the required packages as follows:
# install pyenv environment
pyenv install 3.10
pyenv local 3.10
# install packages from pyproject.toml
poetry install
# install local package
pip install --upgrade pip # enable PEP 660 support
pip install -e .
# for development, install pre-commit
pre-commit install
Using Anaconda, you can install the required packages as follows:
conda create -n heron python=3.10 -y
conda activate heron
pip install --upgrade pip # enable PEP 660 support
pip install -r requirements.txt
pip install -e .
# for development, install pre-commit
pre-commit install
To use Llama-2 models, you need to register for access. First, request access to the Llama-2 models on the Hugging Face page and the Meta website.
Then sign in to your Hugging Face account:
huggingface-cli login
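If you prefer to authenticate from Python rather than the CLI, the huggingface_hub package provides an equivalent login helper; a minimal sketch (the token string below is a placeholder, not a real token):

# Programmatic alternative to `huggingface-cli login` (optional)
from huggingface_hub import login

# Pass a token explicitly, or call login() with no arguments to be prompted interactively.
login(token="hf_...")  # placeholder token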
Make sure that your environment can use the CUDA toolkit. See also installation-and-features in flash-attention.
To use flash-attention, you need to install the following packages:
pip install packaging wheel
pip uninstall -y ninja && pip install ninja --no-cache-dir
pip install flash-attn --no-build-isolation
If flash-attn doesn't work, please install it from source. (Related issue)
cd /path/to/download
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
python setup.py install
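After installing flash-attention (either way), it is worth confirming that the package and CUDA are visible from Python. A minimal sanity-check sketch:

# Sanity check: CUDA must be visible to PyTorch and flash-attn must import cleanly.
import torch
import flash_attn

print("CUDA available:", torch.cuda.is_available())
print("flash-attn version:", flash_attn.__version__)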
For training, use the YAML configuration files under the projects directory.
For example, [projects/opt/exp001.yml](./projects/opt/exp001.yml) has the following contents:
training_config:
  per_device_train_batch_size: 2
  gradient_accumulation_steps: 4
  num_train_epochs: 1
  dataloader_num_workers: 16
  fp16: true
  optim: "adamw_torch"
  learning_rate: 5.0e-5
  logging_steps: 100
  evaluation_strategy: "steps"
  save_strategy: "steps"
  eval_steps: 4000
  save_steps: 4000
  save_total_limit: 1
  deepspeed: ./configs/deepspeed/ds_config_zero1.json
  output_dir: ./output/
  report_to: "wandb"

model_config:
  fp16: true
  pretrained_path: # None or path to model weight
  model_type: git_llm
  language_model_name: facebook/opt-350m
  vision_model_name: openai/clip-vit-base-patch16
  num_image_with_embedding: 1 # if 1, no img_temporal_embedding
  max_length: 512
  keys_to_finetune:
    - visual_projection
    - num_image_with_embedding
  keys_to_freeze: []

  use_lora: true
  lora:
    r: 8
    lora_alpha: 32
    target_modules:
      - q_proj
      - k_proj
      - v_proj
    lora_dropout: 0.01
    bias: none
    task_type: CAUSAL_LM

dataset_config_path:
  - ./configs/datasets/m3it.yaml
`training_config` sets the training configuration, `model_config` sets the model configuration, and `dataset_config_path` sets the dataset configuration.
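As an illustration of how these sections map onto familiar objects, the `training_config` keys mirror Hugging Face TrainingArguments and the `lora` block mirrors peft.LoraConfig. The sketch below is illustrative only; heron's own entrypoint (`./scripts/run.sh`) handles this for you, and the exact set of accepted TrainingArguments keys depends on your transformers version:

# Illustrative sketch: load the YAML and map its sections onto HF / PEFT objects.
import yaml
from transformers import TrainingArguments
from peft import LoraConfig

with open("projects/opt/exp001.yml") as f:
    cfg = yaml.safe_load(f)

training_args = TrainingArguments(**cfg["training_config"])  # training_config section
lora_config = LoraConfig(**cfg["model_config"]["lora"])      # lora block of model_config

print(training_args.output_dir, lora_config.target_modules)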
The following LLM modules are currently supported for `model_type`. We plan to add more supported modules in the future.
To start training, execute the following command:
./scripts/run.sh
A GPU is required for training; we have tested on Ubuntu 20.04 with CUDA 11.7.
You can get the pretrained weights from Hugging Face Hub: turing-motors/heron-chat-git-ja-stablelm-base-7b-v1
See also notebooks.
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlamaTokenizer
from heron.models.git_llm.git_japanese_stablelm_alpha import GitJapaneseStableLMAlphaForCausalLM
device_id = 0
# prepare a pretrained model
model = GitJapaneseStableLMAlphaForCausalLM.from_pretrained(
    'turing-motors/heron-chat-git-ja-stablelm-base-7b-v1', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")
# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v1')
tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1",
    padding_side="right",
    additional_special_tokens=["▁▁"],
)
processor.tokenizer = tokenizer
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "##human: What is this picture?\n##gpt: "
# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}
# set eos token
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]
# do inference
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)
# print result
print(processor.tokenizer.batch_decode(out)[0])
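The decoded string still contains the prompt and any special tokens. If you only want the model's reply, a small post-processing sketch can help (splitting on the "##gpt:" marker is an assumption based on the prompt format above, not part of the library):

# Optional post-processing (sketch): strip special tokens and keep the text after "##gpt:".
decoded = processor.tokenizer.batch_decode(out, skip_special_tokens=True)[0]
answer = decoded.split("##gpt:")[-1].strip()
print(answer)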
| model | LLM module | adapter | size |
|---|---|---|---|
| heron-chat-git-ja-stablelm-base-7b-v1 | Japanese StableLM Base Alpha | GIT | 7B |
| heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k | Japanese StableLM Base Alpha | BLIP | 7B |
| heron-chat-blip-ja-stablelm-base-7b-v1 | Japanese StableLM Base Alpha | BLIP | 7B |
| heron-chat-blip-ja-stablelm-base-7b-v0 | Japanese StableLM Base Alpha | BLIP | 7B |
| heron-chat-git-ja-stablelm-base-7b-v0 | Japanese StableLM Base Alpha | GIT | 7B |
| heron-chat-git-ELYZA-fast-7b-v0 | ELYZA | GIT | 7B |
| heron-chat-git-Llama-2-7b-v0 | Llama-2 | GIT | 7B |
| heron-preliminary-git-Llama-2-70b-v0 *1 | Llama-2 | GIT | 70B |

*1 This model only applies to pre-training of adapters.
LLaVA datasets translated into Japanese.
Evaluation dataset for Heron-Bench.
If you find Heron useful for your research and applications, please cite using this BibTeX:
@misc{inoue2024heronbench,
    title={Heron-Bench: A Benchmark for Evaluating Vision Language Models in Japanese},
    author={Yuichi Inoue and Kento Sasaki and Yuma Ochi and Kazuki Fujii and Kotaro Tanahashi and Yu Yamaguchi},
    year={2024},
    eprint={2404.07824},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Released under the Apache License 2.0.
- GenerativeImage2Text: The main idea of the model is based on the original GIT.
- Llava: This project learned a lot from the great LLaVA project.
- GIT-LLM