Update requirements-hpu.txt for open telemetry tracing support #857
base: habana_main
Conversation
44a0d2e to df648ad
@@ -9,3 +9,5 @@ tabulate
 setuptools>=61
 setuptools-scm>=8
 vllm-hpu-extension @ git+https://github.com/HabanaAI/vllm-hpu-extension.git@8087a98
+opentelemetry-api
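For context, the only package this diff shows being added is opentelemetry-api; the otel.md referenced later in the thread lists further OpenTelemetry requirements (SDK and OTLP exporter). A minimal sketch, assuming standard distribution names that are not part of this PR, for checking which of them are present in an environment:

```python
# Hypothetical sanity check, not part of this PR: report which OpenTelemetry
# distributions are installed. Only opentelemetry-api is added by the diff
# above; the other names are assumptions based on the otel.md discussed below.
from importlib.metadata import PackageNotFoundError, version

for dist in ("opentelemetry-api", "opentelemetry-sdk", "opentelemetry-exporter-otlp"):
    try:
        print(f"{dist}=={version(dist)}")
    except PackageNotFoundError:
        print(f"{dist}: not installed")
```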
I am not sure we want to add it here; those packages are not needed for other workloads. Also, looking at the vllm project repo, they don't add it to the requirements file either and leave it to the user. Please also see:
@kzawora-intel can you comment?
@michalkuligowski
Both TGI Gaudi and TEI Gaudi have those packages in their requirements.
Here are the related lines for installing those packages in TGI:
https://github.com/huggingface/tgi-gaudi/blob/habana-main/server/requirements.txt#L40
We also saw tracing enabled in OPEA once we set the related OTLP endpoint URL correctly.
opea-project/GenAIExamples#1316
However, vLLM doesn't have those packages, so this is a gap for vLLM Gaudi compared to TGI Gaudi.
It would be good to have this enabled, like TGI Gaudi.
thanks
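For illustration, enabling OTLP export in an OpenTelemetry-instrumented service comes down to pointing the exporter at a collector endpoint. The sketch below uses the standard opentelemetry-sdk API with a placeholder endpoint and service name; it is not OPEA-, TGI-, or vLLM-specific code.

```python
# Minimal OpenTelemetry setup sketch (standard opentelemetry-sdk API, not
# vLLM-specific code). The endpoint below is a placeholder; by default the
# OTLP exporter also honours OTEL_EXPORTER_OTLP_ENDPOINT /
# OTEL_EXPORTER_OTLP_TRACES_ENDPOINT from the environment.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "vllm-gaudi"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example-request"):
    pass  # spans are exported to the configured OTLP endpoint
```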
I understand it's in TGI, but it seems that vLLM decided to do it this way. Also, did you check the otel.md I linked in my previous comment? It shows the requirements for opentelemetry, so I think that should suffice.
@michalkuligowski
All the other instructions in the otel.md can be covered by launch scripts such as a docker compose yaml file, but the package installation has to be handled inside the Dockerfile, so the otel.md won't help when users deploy vllm directly into their cluster. Installing those packages won't impact performance, and OpenTelemetry won't be enabled without setting the related arguments for the vllm server.
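A minimal sketch of the lazy-enable behaviour described above, assuming a hypothetical maybe_init_tracer helper rather than vLLM's actual code: the opentelemetry import only happens when a tracing endpoint is configured, so merely installing the package leaves untraced workloads untouched.

```python
# Illustration of the lazy/optional pattern described above (an assumption,
# not vLLM's actual implementation): the opentelemetry packages sit unused
# unless the server is started with a tracing endpoint, so installing them
# should not affect workloads that never enable tracing.
from typing import Optional


def maybe_init_tracer(otlp_traces_endpoint: Optional[str]):
    """Return a tracer only when an OTLP endpoint was explicitly configured."""
    if not otlp_traces_endpoint:
        return None  # tracing disabled: opentelemetry is never imported
    try:
        from opentelemetry import trace  # needs opentelemetry-api at runtime
    except ImportError as err:
        raise RuntimeError(
            "Tracing was requested but opentelemetry-api is not installed"
        ) from err
    return trace.get_tracer("vllm.hypothetical")


tracer = maybe_init_tracer(None)  # no endpoint -> no tracing, no import
print("tracing enabled:", tracer is not None)
```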
The OPEA project started adding an OpenTelemetry tracing feature in the PR below:
opea-project/GenAIExamples#1488
We have TGI/TEI enabled for OpenTelemetry tracing and would like to enable it for vllm as well.
The current runtime issue is the missing opentelemetry-api package, so this PR fixes that.