
Commit 4379381

schoennenbeck authored and NickLucche committed

[Core] Make raw_request optional in ServingCompletion (vllm-project#12503)

Signed-off-by: Sebastian Schönnenbeck <sebastian.schoennenbeck@comma-soft.com>

1 parent f90d8e8 commit 4379381

File tree

1 file changed: +2 −2 lines


vllm/entrypoints/openai/serving_completion.py

```diff
@@ -58,7 +58,7 @@ def __init__(
     async def create_completion(
         self,
         request: CompletionRequest,
-        raw_request: Request,
+        raw_request: Optional[Request] = None,
     ) -> Union[AsyncGenerator[str, None], CompletionResponse, ErrorResponse]:
         """Completion API similar to OpenAI's API.
@@ -137,7 +137,7 @@ async def create_completion(
            lora_request=lora_request,
            prompt_adapter_request=prompt_adapter_request)

-        trace_headers = (await
+        trace_headers = (None if raw_request is None else await
                         self._get_trace_headers(raw_request.headers))

        if isinstance(sampling_params, BeamSearchParams):
```
