Readme update #843

Merged: 1 commit, May 23, 2024
README.md: 18 changes (12 additions, 6 deletions)
@@ -68,9 +68,10 @@ An extensible, convenient, and efficient toolbox for finetuning large machine learning models
 - [Quick Start](#quick-start)
   - [Setup](#setup)
   - [Prepare Dataset](#prepare-dataset)
-  - [Finetuning (Full)](#finetuning-full)
-  - [Finetuning (LISA)](#finetuning-lisa)
-  - [Finetuning (LoRA)](#finetuning-lora)
+  - [Finetuning](#finetuning)
+    - [Full Finetuning](#full-finetuning)
+    - [LISA](#lisa)
+    - [LoRA](#lora)
   - [Inference](#inference)
   - [Deployment](#deployment)
   - [Evaluation](#evaluation)
@@ -116,7 +117,10 @@ bash install.sh

Please refer to our [doc](https://optimalscale.github.io/LMFlow/examples/DATASETS.html).

-### Finetuning (Full)
+### Finetuning
+
+#### Full Finetuning

Full training updates all of a model's parameters during finetuning.
Here is an example of finetuning a GPT-2 base model.

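As a rough illustration of what updating all the parameters involves, here is a minimal sketch in plain PyTorch and Hugging Face Transformers (not LMFlow's own scripts; the model, prompt, and learning rate are placeholder choices):

```python
# Minimal full-finetuning sketch: every parameter of the model receives
# gradients and is updated. Plain Transformers, not LMFlow's run scripts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # all parameters train

batch = tokenizer("Example instruction-following text.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
loss.backward()        # gradients reach every weight in the network
optimizer.step()
optimizer.zero_grad()
```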
@@ -145,7 +149,8 @@ cd data && ./download.sh alpaca && cd -
>```
> </details>

-### Finetuning (LISA)
+#### LISA

[LISA](https://arxiv.org/abs/2403.17919) is a memory-efficient finetuning algorithm that trades off memory against the number of randomly unfrozen layers. This script has so far only been tested on single GPUs. Please stay tuned for our latest updates :smile:
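Conceptually, LISA freezes the whole network and periodically re-samples which few layers are trainable. A minimal PyTorch sketch of that sampling step (a hypothetical illustration, not LMFlow's actual implementation; `model.transformer.h` assumes a GPT-2-style module layout):

```python
# Hypothetical sketch of LISA-style layer sampling, not LMFlow's implementation.
# All blocks are frozen; every `interval_steps` optimizer steps a fresh random
# subset of `n_active` blocks is unfrozen and trained.
import random

def resample_active_layers(layers, n_active):
    for layer in layers:                            # freeze everything first
        for p in layer.parameters():
            p.requires_grad = False
    for layer in random.sample(layers, n_active):   # unfreeze a random subset
        for p in layer.parameters():
            p.requires_grad = True

# In the training loop, e.g. with layers = list(model.transformer.h):
#   if step % interval_steps == 0:
#       resample_active_layers(layers, n_active=2)
```

LMFlow's own shell example follows.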
```sh
cd data && ./download.sh alpaca && cd -
```
@@ -174,7 +179,8 @@ cd data && ./download.sh alpaca && cd -
>```
> </details>

-### Finetuning (LoRA)
+#### LoRA

LoRA is a parameter-efficient finetuning algorithm that freezes the pretrained weights and trains only small low-rank adapter matrices, making it much cheaper than full finetuning.
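The mechanism behind that efficiency, as a toy PyTorch module (an illustrative sketch, not LMFlow's or the PEFT library's implementation): the pretrained weight stays frozen and only a scaled low-rank update is trained.

```python
# Toy LoRA layer (illustrative sketch, not LMFlow's implementation).
# The pretrained weight is frozen; only low-rank factors A and B train,
# shrinking trainable parameters from in*out to rank*(in + out).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():             # freeze pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # frozen pretrained path + scaled low-rank trainable path
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288 trainable values versus 589824 in the full weight
```

LMFlow's own shell example follows.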
```sh
cd data && ./download.sh alpaca && cd -
```