# Learning how to use, fine-tune, and build with LLMs
- [Finetuning open-source LLMs](https://www.youtube.com/watch?v=gs-IDg-FoIQ)
- [How to fine-tune Mistral 7b on your data](https://www.youtube.com/watch?v=kmkcNVvEz-k)
- [Fine-Tuning your own Llama2](https://www.youtube.com/watch?v=Pb_RGAl75VE&t=14s)
- Webinar: How to Fine-Tune LLMs with QLoRA
- Fine-tuning Large Language Models (LLMs) | w/ Example Code
- [What is LoRA? Low-Rank Adaptation for finetuning LLMs EXPLAINED](https://www.youtube.com/watch?v=KEv-F5UkhxU)
- [Low-rank Adaptation: LoRA Fine-tuning & QLoRA Explained In-Depth](https://www.youtube.com/watch?v=t1caDsMzWBk&pp=ygUObG9yYSBhbmQgcWxvcmE%3D)
- [LoRA explained (and a bit about precision and quantization)](https://www.youtube.com/watch?v=t509sv5MT0w&t=18s&pp=ygUObG9yYSBhbmQgcWxvcmE%3D)
- [Fine-Tune Large LLMs with QLoRA (Free Colab Tutorial)](https://www.youtube.com/watch?v=NRVaRXDoI3g)
- [Understanding 4bit Quantization: QLoRA explained (w/ Colab)](https://www.youtube.com/watch?v=TPcXVJ1VSRI)
- Understanding: AI Model Quantization, GGML vs GPTQ!
- AWQ for LLM Quantization
- How to Quantize an LLM with GGUF or AWQ
- New Tutorial on LLM Quantization w/ QLoRA, GPTQ and Llamacpp, LLama 2
- [PEFT LoRA Explained in Detail - Fine-Tune your LLM on your local GPU](https://www.youtube.com/watch?v=YVU5wAA6Txo)
- [Fine-tuning LLMs with PEFT and LoRA](https://www.youtube.com/watch?v=Us5ZFp16PaU)
- [Efficient Large Language Model training with LoRA and Hugging Face PEFT](https://www.youtube.com/watch?v=YKCtbIJC3kQ)
- [EMNLP 2022 Tutorial - "Modular and Parameter-Efficient Fine-Tuning for NLP Models"](https://www.youtube.com/watch?v=KoOlcX3XLd4)
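The LoRA videos above all build on one idea: freeze the full weight matrix `W` and learn a low-rank update `B @ A` scaled by `alpha / r`. A minimal pure-Python sketch of that arithmetic (all names and the toy matrices here are illustrative, not from any particular video):

```python
# LoRA idea: instead of updating a full weight matrix W (d_out x d_in),
# train two small factors B (d_out x r) and A (r x d_in), r << d_out, d_in,
# and use the effective weight W_eff = W + (alpha / r) * (B @ A).

def matmul(X, Y):
    """Plain-Python matrix multiply for small examples."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """Merge a LoRA adapter into a frozen weight matrix."""
    r = len(A)                 # rank = number of rows of A
    delta = matmul(B, A)       # low-rank update, d_out x d_in
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 4x4 frozen weight with a rank-1 adapter: trainable parameters drop
# from 16 (full W) to 8 (B and A together).
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
B = [[1.0], [0.0], [0.0], [0.0]]   # d_out x r
A = [[0.0, 2.0, 0.0, 0.0]]         # r x d_in
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
```

Libraries such as Hugging Face PEFT do exactly this merge (plus dropout and per-module targeting) behind `get_peft_model`; the point of the sketch is only the shape of the update.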
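The quantization videos (GGML/GGUF, GPTQ, AWQ, QLoRA's 4-bit) differ in details, but share a core step: round floats to a small integer grid with a per-block scale. A hedged sketch of symmetric absmax round-to-nearest int4 quantization; real formats add blocking, zero-points, and activation-aware scaling, so this is only the core arithmetic:

```python
# Symmetric (absmax) 4-bit quantization: map floats to signed integers in
# [-7, 7] with one scale per block, then dequantize with that scale.

def quantize_absmax_int4(values):
    """Return (int4 codes, scale) for one block of floats."""
    absmax = max(abs(v) for v in values) or 1.0
    scale = absmax / 7.0               # 7 = largest magnitude in signed int4
    codes = [max(-7, min(7, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int4(codes, scale):
    """Recover approximate floats from codes and the block scale."""
    return [c * scale for c in codes]

block = [0.1, -0.5, 0.7, 0.0]
codes, scale = quantize_absmax_int4(block)
restored = dequantize_int4(codes, scale)
# Reconstruction error per element is bounded by half a step (scale / 2).
```

Storing 4-bit codes plus one scale per block is what shrinks a model roughly 4x versus fp16; QLoRA then trains LoRA adapters in higher precision on top of such a frozen quantized base.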