From 1b71878f7c91375a4d9fb0e43dbc45b01be9e8b7 Mon Sep 17 00:00:00 2001
From: Casper
Date: Tue, 11 Jun 2024 09:56:25 +0200
Subject: [PATCH] Update README.md (#497)

---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index e869c5f2..13dca0b7 100644
--- a/README.md
+++ b/README.md
@@ -16,11 +16,11 @@

-

-
-    Sponsored by RunPod
-
-

+
+

Supported by

+
+    RunPod Logo
+
AutoAWQ is an easy-to-use package for 4-bit quantized models. It speeds up models by 3x and reduces memory requirements by 3x compared to FP16. AutoAWQ implements the Activation-aware Weight Quantization (AWQ) algorithm for quantizing LLMs, building on and extending the [original work](https://github.com/mit-han-lab/llm-awq) from MIT.
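As a quick illustration of that workflow, here is a minimal quantization sketch in the style of AutoAWQ's documented API; the model path, output directory, and quantization settings below are placeholder assumptions, not values taken from this patch:

```python
# Minimal AWQ quantization sketch. Assumes the `autoawq` and `transformers`
# packages are installed; model path and config values are example choices.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder FP16 model
quant_path = "mistral-7b-instruct-awq"             # placeholder output directory

# 4-bit weights with group size 128 -- commonly used AWQ settings
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run activation-aware quantization, then save the 4-bit model and tokenizer
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The saved directory can then be loaded for inference with `AutoAWQForCausalLM.from_quantized(quant_path)`.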