This repository was archived by the owner on Oct 11, 2024. It is now read-only.

Commit 2752570

comaniac authored and pcmoritz committed

[MISC] Remove FP8 warning (vllm-project#5472)

Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>

1 parent 90c237d, commit 2752570

File tree

1 file changed: +1 −1 lines changed

vllm/config.py (+1 −1)

@@ -244,7 +244,7 @@ def _verify_quantization(self) -> None:
                     f"{self.quantization} quantization is currently not "
                     f"supported in ROCm.")
         if (self.quantization
-                not in ["marlin", "gptq_marlin_24", "gptq_marlin"]):
+                not in ("fp8", "marlin", "gptq_marlin_24", "gptq_marlin")):
             logger.warning(
                 "%s quantization is not fully "
                 "optimized yet. The speed can be slower than "
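The effect of the one-line change is to add "fp8" to the set of quantization methods that are exempt from the "not fully optimized" warning. A minimal standalone sketch of that logic follows; `warn_if_unoptimized` and `OPTIMIZED_METHODS` are illustrative names for this example, not part of the vLLM API:

```python
import logging

logger = logging.getLogger(__name__)

# Quantization methods exempt from the slowness warning.
# The commit adds "fp8" to this collection (and switches it
# from a list to a tuple); membership testing works the same.
OPTIMIZED_METHODS = ("fp8", "marlin", "gptq_marlin_24", "gptq_marlin")

def warn_if_unoptimized(quantization: str) -> bool:
    """Log a warning for unoptimized methods; return True if warned."""
    if quantization not in OPTIMIZED_METHODS:
        logger.warning(
            "%s quantization is not fully optimized yet. "
            "The speed can be slower than non-quantized models.",
            quantization)
        return True
    return False
```

With this change, `warn_if_unoptimized("fp8")` no longer emits the warning, while methods outside the tuple (for example "awq") still do.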

0 commit comments