Enable Multi Layer Perceptron (MLP) selection for projector #25
First of all, thank you for creating such an amazing project!
This repository has been very useful to me.
Changes
I have modified the code so that the projector can be a Multi Layer Perceptron (MLP) when `model_type: git_llm` is selected.

Previously, when using `model_type: git_llm`, a single Linear layer was applied as the projector connecting the Vision model and the LLM. Inspired by LLaVA v1.5 [Liu+'23, Improved Baselines with Visual Instruction Tuning], I have added code that makes it possible to vary the number of these Linear layers simply by adding an option (`mlp_adapter`) under `model_config` in `projects/OOO/OO.yml`.

The main details of the code that changes the projector to an MLP can be seen in `heron/models/mlp_adapter.py`. This code references the GitHub implementation of LLaVA v1.5 ( https://github.com/haotian-liu/LLaVA/blob/785f766fcddc86ffeaa62cd51cf7834a11c04e6d/llava/model/multimodal_projector/builder.py#L33 ).
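To make the idea concrete, here is a minimal sketch of such a builder, modeled on the LLaVA v1.5 `builder.py` linked above. The function name, signature, and hidden sizes are illustrative assumptions, not necessarily what `heron/models/mlp_adapter.py` actually exposes:

```python
import re
from typing import Optional

import torch.nn as nn


def build_mlp_adapter(spec: Optional[str], vision_hidden_size: int, llm_hidden_size: int) -> nn.Module:
    """Build the projector from an `mlp_adapter` spec such as 'mlp2x_gelu'.

    'mlpNx_gelu' means N Linear layers with GELU activations in between,
    mapping vision_hidden_size -> llm_hidden_size.
    """
    if spec is None:
        # No mlp_adapter given: keep the previous behaviour, a single Linear layer.
        return nn.Linear(vision_hidden_size, llm_hidden_size)

    match = re.match(r"^mlp(\d+)x_gelu$", spec)
    if match is None:
        raise ValueError(f"Unsupported mlp_adapter spec: {spec}")

    depth = int(match.group(1))
    layers = [nn.Linear(vision_hidden_size, llm_hidden_size)]
    for _ in range(1, depth):
        layers.append(nn.GELU())
        layers.append(nn.Linear(llm_hidden_size, llm_hidden_size))
    return nn.Sequential(*layers)


# 'mlp2x_gelu' -> Linear -> GELU -> Linear (a 2-layer MLP projector).
projector = build_mlp_adapter("mlp2x_gelu", vision_hidden_size=1024, llm_hidden_size=4096)
print(projector)
```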
Also, to maintain compatibility, existing `projects/OOO/OO.yml` files behave the same way as before. For example, if you use `projects/llama/exp001.yml` as it is, a single Linear layer is applied as the projector, just as before (see the sketch below).
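Using the same illustrative sketch as above, the backward-compatible path (no `mlp_adapter` set) would simply return a single Linear layer:

```python
# No mlp_adapter in model_config: fall back to a single Linear projector, as before.
projector = build_mlp_adapter(None, vision_hidden_size=1024, llm_hidden_size=4096)
print(projector)  # Linear(in_features=1024, out_features=4096, bias=True)
```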
If you want to change the projector to an MLP, add an `mlp_adapter` entry to `model_config` in `projects/llama/exp001.yml` and set it to `mlp2x_gelu`. With `mlp_adapter: mlp2x_gelu` under `model_config`, the projector becomes a 2-layer MLP; if you want 3 layers, simply change it to `mlp_adapter: mlp3x_gelu` and you get a 3-layer MLP. A config sketch follows.
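For reference, a minimal sketch of the config change (the other `model_config` keys in `projects/llama/exp001.yml` are omitted here and stay unchanged):

```yaml
model_config:
  # ... keep the existing model_config entries as they are ...
  mlp_adapter: mlp2x_gelu  # 2-layer MLP projector; change to mlp3x_gelu for a 3-layer MLP
```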