Commit c74907f

comaniac authored and MengqingCao committed
[MISC] Keep chunked prefill enabled by default with long context when prefix caching is enabled (vllm-project#8342)
1 parent 8d461d8 commit c74907f

File tree

1 file changed: +0 -1 lines changed

vllm/engine/arg_utils.py

@@ -878,7 +878,6 @@ def create_engine_config(self) -> EngineConfig:
         if (is_gpu and not use_sliding_window and not use_spec_decode
                 and not self.enable_lora
                 and not self.enable_prompt_adapter
-                and not self.enable_prefix_caching
                 and not has_seqlen_agnostic_layers):
             self.enable_chunked_prefill = True
             logger.warning(
