fix(deps): update dependency peft to ^0.14.0 #124

Open

wants to merge 1 commit into master from renovate/peft-0.x

Conversation

renovate[bot] (Contributor) commented on Jul 24, 2024

This PR contains the following updates:

| Package | Change |
| --- | --- |
| peft | `^0.11.1` -> `^0.14.0` |

Release Notes

huggingface/peft (peft)

v0.14.0: EVA, Context-aware Prompt Tuning, Bone, and more

Compare Source

Highlights


New Methods

Context-aware Prompt Tuning

@tsachiblau added a new soft prompt method called Context-aware Prompt Tuning (CPT), a combination of In-Context Learning and Prompt Tuning in the sense that, for each training sample, it builds a learnable context from other training examples in addition to the single training sample. This allows for sample- and parameter-efficient few-shot classification and addresses recency bias.

Explained Variance Adaptation

@​sirluk contributed a new LoRA initialization method called Explained Variance Adaptation (EVA). Instead of randomly initializing LoRA weights, this method uses SVD on minibatches of finetuning data to initialize the LoRA weights and is also able to re-allocate the ranks of the adapter based on the explained variance ratio (derived from SVD). Thus, this initialization method can yield better initial values and better rank distribution.
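A minimal sketch of opting into EVA initialization; the EvaConfig helper, initialize_lora_eva_weights, and the exact arguments are assumptions, not confirmed by this changelog:

```python
# Sketch only: EvaConfig / initialize_lora_eva_weights and their arguments are assumptions.
from transformers import AutoModelForCausalLM
from peft import EvaConfig, LoraConfig, get_peft_model, initialize_lora_eva_weights

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    init_lora_weights="eva",        # request the SVD-based EVA initialization
    eva_config=EvaConfig(rho=2.0),  # rho controls headroom for rank redistribution (assumed field)
)
model = get_peft_model(base, config)

# A dataloader yielding minibatches of the finetuning data drives the SVD pass, e.g.:
# initialize_lora_eva_weights(model, dataloader)
```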

Bone

@​JL-er added an implementation for Block Affine (Bone) Adaptation which utilizes presumed sparsity in the base layer weights to divide them into multiple sub-spaces that share a single low-rank matrix for updates. Compared to LoRA, Bone has the potential to significantly reduce memory usage and achieve faster computation.
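A hypothetical usage sketch, assuming a BoneConfig class with an r (block size) parameter:

```python
# Sketch only: the BoneConfig class name and its fields are assumptions.
from transformers import AutoModelForCausalLM
from peft import BoneConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = BoneConfig(r=64, target_modules=["q_proj", "v_proj"])  # block size and targets assumed
model = get_peft_model(base, config)
model.print_trainable_parameters()
```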

Enhancements

PEFT now supports LoRAs for int8 torchao quantized models (check this and this notebook). In addition, VeRA can now be used with 4 and 8 bit bitsandbytes quantization thanks to @ZiadHelal.

Hot-swapping of LoRA adapters is now possible using the hotswap_adapter function. Now you are able to load one LoRA and replace its weights in-place with the LoRA weights of another adapter which, in general, should be faster than deleting one adapter and loading the other adapter in its place. The feature is built so that no re-compilation of the model is necessary if torch.compile was called on the model (right now, this requires ranks and alphas to be the same for the adapters).
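A minimal sketch; the import path and signature of hotswap_adapter shown here are assumptions, and the adapter paths are placeholders:

```python
# Sketch: import path and exact signature of hotswap_adapter are assumptions.
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter-1")  # hypothetical adapter path

# Later, replace the weights in place with a second adapter
# (ranks and alphas must match if you want to keep a torch.compile'd model un-recompiled).
hotswap_adapter(model, "path/to/lora-adapter-2", adapter_name="default")
```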

LoRA and IA³ now support Conv3d layers thanks to @​jsilter, and @​JINO-ROHIT added a notebook showcasing PEFT model evaluation using lm-eval-harness toolkit.

With the target_modules argument, you can specify which layers to target with the adapter (e.g. LoRA). Now you can also specify which modules not to target by using the exclude_modules parameter (thanks @​JINO-ROHIT).
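For example (module names are placeholders for whatever your architecture uses):

```python
# Sketch: target the attention projections while skipping a specific module.
from peft import LoraConfig

config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # layers to adapt
    exclude_modules=["lm_head"],                              # new in 0.14.0: layers to leave untouched
)
```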

Changes

  • Several fixes have been made to the OFT implementation, among other things to fix merging, which makes adapter weights trained with PEFT versions prior to this release incompatible (see #1996 for details).
  • Adapter configs are now forward-compatible by accepting unknown keys.
  • Prefix tuning was fitted to the DynamicCache caching infrastructure of transformers (see #​2096). If you are using this PEFT version and a recent version of transformers with an old prefix tuning checkpoint, you should double check that it still works correctly and retrain it if it doesn't.
  • Added the lora_bias parameter to LoRA layers to enable a bias on the LoRA B matrix (see the sketch after this list). This is useful when extracting LoRA weights from fully fine-tuned parameters with bias vectors so that these can be taken into account.
  • #​2180 provided a couple of bug fixes to LoKr (thanks @​yaswanth19). If you're using LoKr, your old checkpoints should still work but it's recommended to retrain your adapter.
  • from_pretrained now warns the user if PEFT keys are missing.
  • Attribute access to modules in modules_to_save is now properly and transparently handled.
  • PEFT supports the changes to bitsandbytes 8bit quantization from the recent v0.45.0 release. To benefit from these improvements, we therefore recommend upgrading bitsandbytes if you're using QLoRA. Expect slight numerical differences in model outputs if you're using QLoRA with 8bit bitsandbytes quantization.
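As referenced for lora_bias above, a minimal sketch, assuming the parameter lives on LoraConfig:

```python
# Sketch: lora_bias enables a trainable bias on the LoRA B matrix (new in 0.14.0).
from peft import LoraConfig

config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],  # placeholder module names
    lora_bias=True,  # keep a bias term so extracted deltas that include biases stay representable
)
```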

What's Changed

New Contributors

Full Changelog: huggingface/peft@v0.13.2...v0.14.0

v0.13.2: Small patch release

Compare Source

This patch release contains a small bug fix for an issue that prevented some LoRA checkpoints from being loaded correctly (mostly affecting Stable Diffusion checkpoints not trained with PEFT when loaded in diffusers, #2144).

Full Changelog: huggingface/peft@v0.13.1...v0.13.2

v0.13.1: Small patch release

Compare Source

This patch release contains a small bug fix for the low_cpu_mem_usage=True option (#​2113).

Full Changelog: huggingface/peft@v0.13.0...v0.13.1

v0.13.0: LoRA+, VB-LoRA, and more

Compare Source


Highlights

New methods

LoRA+

@kallewoof added LoRA+ to PEFT (#1915). This is a function that initializes an optimizer with settings that are better suited for training a LoRA adapter.
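A hedged sketch, assuming a create_loraplus_optimizer helper under peft.optimizers with the arguments shown:

```python
# Sketch only: helper name, import path, and argument names are assumptions.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_loraplus_optimizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(base, LoraConfig(r=8, target_modules=["q_proj", "v_proj"]))

optimizer = create_loraplus_optimizer(
    model=model,
    optimizer_cls=torch.optim.AdamW,
    lr=5e-5,
    loraplus_lr_ratio=16,  # give the LoRA B matrices a higher learning rate than the A matrices
)
```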

VB-LoRA

@​leo-yangli added a new method to PEFT called VB-LoRA (#​2039). The idea is to have LoRA layers be composed from a single vector bank (hence "VB") that is shared among all layers. This makes VB-LoRA extremely parameter efficient and the checkpoints especially small (comparable to the VeRA method), while still promising good fine-tuning performance. Check the VB-LoRA docs and example.
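A hypothetical config sketch, assuming a VBLoRAConfig class with the field names shown:

```python
# Sketch only: VBLoRAConfig and its parameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import VBLoRAConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VBLoRAConfig(
    r=4,
    num_vectors=256,    # size of the shared vector bank (assumed field name)
    vector_length=256,  # length of each bank vector (assumed field name)
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # expect a very small number of trainable parameters
```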

Enhancements

New Hugging Face team member @​ariG23498 added the helper function rescale_adapter_scale to PEFT (#​1951). Use this context manager to temporarily increase or decrease the scaling of the LoRA adapter of a model. It also works for PEFT adapters loaded directly into a transformers or diffusers model.
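A minimal sketch; the import path and the multiplier keyword are assumptions:

```python
# Sketch: import path and the `multiplier` keyword of rescale_adapter_scale are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
from peft.helpers import rescale_adapter_scale

tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    LoraConfig(r=8, target_modules=["q_proj", "v_proj"]),
)
inputs = tok("Hello", return_tensors="pt")

with rescale_adapter_scale(model, multiplier=0.5):  # temporarily halve the LoRA contribution
    out = model.generate(**inputs, max_new_tokens=5)
# the original scaling is restored once the block exits
```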

@​ariG23498 also added DoRA support for embedding layers (#​2006). So if you're using the use_dora=True option in the LoraConfig, you can now also target embedding layers.
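For example (the embedding module name is a placeholder that depends on the architecture):

```python
# Sketch: "embed_tokens" is a placeholder embedding module name.
from peft import LoraConfig

config = LoraConfig(
    r=8,
    use_dora=True,
    target_modules=["embed_tokens", "q_proj", "v_proj"],  # embedding layers can now use DoRA too
)
```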

For some time now, we have supported inference with batches that use different adapters for different samples, e.g. samples 1-5 use "adapter1" and samples 6-10 use "adapter2". However, so far this only worked for LoRA layers. @saeid93 extended this to also work with layers targeted by modules_to_save (#1990).

When loading a PEFT adapter, you now have the option to pass low_cpu_mem_usage=True (#1961). This initializes the adapter with empty weights (on the "meta" device) before loading the weights, instead of initializing on CPU or GPU, which can speed up loading PEFT adapters. Use this option especially if you have many adapters to load at the same time or if these adapters are very big. Please let us know if you encounter issues with this option, as we may make it the default in the future.
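For example (the adapter path is a placeholder):

```python
# Sketch: load a PEFT adapter with meta-device initialization to speed up loading.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(
    base,
    "path/to/adapter",       # hypothetical adapter checkpoint
    low_cpu_mem_usage=True,  # create adapter weights on the "meta" device before loading
)
```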

Changes

Safe loading of PyTorch weights

Unless indicated otherwise, PEFT adapters are saved and loaded using the secure safetensors format. However, we also support the PyTorch format for checkpoints, which relies on the inherently insecure pickle protocol from Python. In the future, PyTorch will be more strict when loading these files to improve security by making the option weights_only=True the default. This is generally recommended and should not cause any trouble with PEFT checkpoints, which is why with this release, PEFT will enable this by default. Please open an issue if this causes trouble.

What's Changed

New Contributors

Full Changelog: huggingface/peft@v0.12.0...v0.13.0

v0.12.0: New methods OLoRA, X-LoRA, FourierFT, HRA, and much more

Compare Source

Highlights


New methods

OLoRA

@​tokenizer-decode added support for a new LoRA initialization strategy called OLoRA (#​1828). With this initialization option, the LoRA weights are initialized to be orthonormal, which promises to improve training convergence. Similar to PiSSA, this can also be applied to models quantized with bitsandbytes. Check out the accompanying OLoRA examples.
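A minimal sketch; the exact "olora" string value for init_lora_weights is an assumption:

```python
# Sketch: select the OLoRA (orthonormal) initialization instead of the default.
from peft import LoraConfig

config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],  # placeholder module names
    init_lora_weights="olora",            # value assumed from the OLoRA description
)
```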

X-LoRA

@EricLBuehler added the X-LoRA method to PEFT (#1491). This is a mixture-of-experts approach that combines the strengths of multiple pre-trained LoRA adapters. Documentation has yet to be added, but check out the X-LoRA tests for how to use it.

FourierFT

@​Phoveran, @​zqgao22, @​Chaos96, and @​DSAILatHKUST added discrete Fourier transform fine-tuning to PEFT (#​1838). This method promises to match LoRA in terms of performance while reducing the number of parameters even further. Check out the included FourierFT notebook.

HRA

@​DaShenZi721 added support for Householder Reflection Adaptation (#​1864). This method bridges the gap between low rank adapters like LoRA on the one hand and orthogonal fine-tuning techniques such as OFT and BOFT on the other. As such, it is interesting for both LLMs and image generation models. Check out the HRA example on how to perform DreamBooth fine-tuning.

Enhancements

  • IA³ now supports merging of multiple adapters via the add_weighted_adapter method thanks to @​alexrs (#​1701).
  • Call peft_model.get_layer_status() and peft_model.get_model_status() to get an overview of the layer/model status of the PEFT model (see the sketch after this list). This can be especially helpful when dealing with multiple adapters or for debugging purposes. More information can be found in the docs (#1743).
  • DoRA now supports FSDP training, including with bitsandbytes quantization, aka QDoRA (#1806).
  • VeRA has been extended by @​dkopi to support targeting layers with different weight shapes (#​1817).
  • @​kallewoof added the possibility for ephemeral GPU offloading. For now, this is only implemented for loading DoRA models, which can be sped up considerably for big models at the cost of a bit of extra VRAM (#​1857).
  • Experimental: It is now possible to tell PEFT to use your custom LoRA layers through dynamic dispatching. Use this, for instance, to add LoRA layers for thus far unsupported layer types without the need to first create a PR on PEFT (but contributions are still welcome!) (#​1875).
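As referenced in the list above, a minimal sketch of the status helpers (the method names come from the release notes; the structure of the returned objects is not shown here):

```python
# Sketch: inspect adapter state for debugging purposes.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    LoraConfig(r=8, target_modules=["q_proj", "v_proj"]),
)
print(model.get_model_status())  # overall summary of the PEFT model
print(model.get_layer_status())  # per-layer breakdown of adapter status
```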

Examples

Changes

Casting of the adapter dtype

Important: If the base model is loaded in float16 (fp16) or bfloat16 (bf16), PEFT now autocasts adapter weights to float32 (fp32) instead of using the dtype of the base model (#​1706). This requires more memory than previously but stabilizes training, so it's the more sensible default. To prevent this, pass autocast_adapter_dtype=False when calling get_peft_model, PeftModel.from_pretrained, or PeftModel.load_adapter.
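To keep the previous behavior, a minimal sketch:

```python
# Sketch: opt out of the new fp32 autocasting of adapter weights.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=torch.bfloat16)
model = get_peft_model(
    base,
    LoraConfig(r=8, target_modules=["q_proj", "v_proj"]),
    autocast_adapter_dtype=False,  # keep adapter weights in bf16 rather than casting to fp32
)
```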

Adapter device placement

The logic of device placement when loading multiple adapters on the same model has been changed (#​1742). Previously, PEFT would move all adapters to the device of the base model. Now, only the newly loaded/created adapter is moved to the base model's device. This allows users to have more fine-grained control over the adapter devices, e.g. allowing them to offload unused adapters to CPU more easily.

PiSSA

  • Calling save_pretrained with the convert_pissa_to_lora argument is deprecated; the argument has been renamed to path_initial_model_for_weight_conversion (#1828), as shown in the sketch after this list. Also, calling this no longer deletes the original adapter (#1933).
  • Using weight conversion (path_initial_model_for_weight_conversion) together with use_rslora=True and rank_pattern or alpha_pattern now raises an error (#1930). Previously, this did not raise, but inference returned incorrect outputs. We also warn about this setting during initialization.
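As noted in the list above, a sketch of the renamed conversion argument; the paths and the "pissa" value for init_lora_weights are placeholders/assumptions:

```python
# Sketch: convert a trained PiSSA adapter to a plain LoRA adapter at save time.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(
    base,
    LoraConfig(init_lora_weights="pissa", target_modules=["q_proj", "v_proj"]),  # "pissa" value assumed
)
model.save_pretrained("pissa-init")  # keep a copy of the initial adapter for later conversion

# ... train the adapter ...

model.save_pretrained(
    "pissa-converted-to-lora",                               # output directory (placeholder)
    path_initial_model_for_weight_conversion="pissa-init",   # renamed argument from #1828
)
```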

Call for contributions

We are now making sure to tag appropriate issues with the contributions welcome label. If you are looking for a way to contribute to PEFT, check out these issues.

What's Changed


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot force-pushed the renovate/peft-0.x branch 2 times, most recently from 2486bc4 to bf48b38, on July 28, 2024 16:14
renovate bot force-pushed the renovate/peft-0.x branch 3 times, most recently from a597d69 to 4324bc1, on August 9, 2024 01:59
renovate bot force-pushed the renovate/peft-0.x branch 2 times, most recently from edcf061 to fa802bb, on September 4, 2024 11:42
renovate bot force-pushed the renovate/peft-0.x branch 2 times, most recently from 8b78c18 to 3c6046c, on September 16, 2024 15:58
renovate bot force-pushed the renovate/peft-0.x branch 2 times, most recently from 706a768 to 49ba9fa, on September 25, 2024 13:33
renovate bot changed the title from "fix(deps): update dependency peft to ^0.12.0" to "fix(deps): update dependency peft to ^0.13.0" on September 25, 2024
renovate bot force-pushed the renovate/peft-0.x branch 2 times, most recently from db33d96 to f181ffb, on November 6, 2024 09:18
renovate bot force-pushed the renovate/peft-0.x branch from f181ffb to 7ccff93 on November 19, 2024 16:06
renovate bot force-pushed the renovate/peft-0.x branch from 7ccff93 to e58db50 on December 6, 2024 14:07
renovate bot changed the title from "fix(deps): update dependency peft to ^0.13.0" to "fix(deps): update dependency peft to ^0.14.0" on December 6, 2024
renovate bot force-pushed the renovate/peft-0.x branch from e58db50 to ce3dcc9 on December 17, 2024 09:13
renovate bot force-pushed the renovate/peft-0.x branch from ce3dcc9 to 4dbd4a0 on December 28, 2024 13:56
renovate bot force-pushed the renovate/peft-0.x branch 7 times, most recently from f7604f9 to 96526b9, on January 28, 2025 09:49
renovate bot force-pushed the renovate/peft-0.x branch from 96526b9 to 3630340 on January 30, 2025 09:57