
Update requirements-hpu.txt for open telemetry tracing support #857

Open
louie-tsai wants to merge 4 commits into habana_main from open_telemetry_support

Conversation

louie-tsai commented on Feb 21, 2025

The OPEA project is adding an OpenTelemetry tracing feature in the PR below:
opea-project/GenAIExamples#1488
We have TGI/TEI enabled for OpenTelemetry tracing and would like to enable it for vLLM as well.
The current runtime issue is a missing opentelemetry-api package, so this PR adds it.
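
For context, here is a minimal sketch (not part of this PR) of the kind of opentelemetry-api usage that fails at import time when the package is missing; the instrumentation name and attribute below are illustrative:

```python
# Requires the opentelemetry-api package this PR adds; this import is what
# breaks at runtime when the package is missing.
from opentelemetry import trace

# Illustrative instrumentation name, not vLLM's actual module name.
tracer = trace.get_tracer("vllm.hpu.example")

with tracer.start_as_current_span("generate") as span:
    # Attributes are only recorded when an SDK/exporter is configured.
    span.set_attribute("example.request_id", "demo-0")
```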

louie-tsai force-pushed the open_telemetry_support branch from 44a0d2e to df648ad on February 22, 2025 07:36
@@ -9,3 +9,5 @@ tabulate
setuptools>=61
setuptools-scm>=8
vllm-hpu-extension @ git+https://github.com/HabanaAI/vllm-hpu-extension.git@8087a98
opentelemetry-api
michalkuligowski commented on Feb 27, 2025


I am not sure we want to add it here; those packages are not needed for other workloads. Also, looking at the vLLM project repo, they don't add it to the requirements file either and leave it to the user. Please also see:

@kzawora-intel can you comment?

louie-tsai (Author)

@michalkuligowski
Both TGI Gaudi and TEI Gaudi have those packages in their requirements.
Here are the related lines for installing those packages in TGI:
https://github.com/huggingface/tgi-gaudi/blob/habana-main/server/requirements.txt#L40

We also saw that tracing was enabled in OPEA once we set the related OTLP endpoint URL correctly.
opea-project/GenAIExamples#1316

However, vLLM doesn't have those packages, so this is a gap for vLLM Gaudi compared to TGI Gaudi.
It would be good to have this enabled, as in TGI Gaudi.

Thanks.

michalkuligowski

I understand it's in TGI, but it seems that vLLM decided to do it this way. Also, did you check what I linked in my previous comment? It shows the requirements for OpenTelemetry, so I think that should suffice.

louie-tsai (Author)

@michalkuligowski
All the other instructions in otel.md can be covered by launch scripts such as the docker compose YAML file, but the package installation has to be handled inside the Dockerfile, so otel.md won't help when users deploy vLLM directly into their cluster. Installing those packages won't impact performance; OpenTelemetry won't be enabled without setting the related arguments for the vLLM server.
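
To illustrate that last point, a hedged sketch: with only opentelemetry-api installed and no tracer provider or exporter configured, spans are non-recording, so shipping the package should be effectively inert until a deployment passes the tracing arguments to the vLLM server.

```python
# Sketch of the default behaviour when only opentelemetry-api is installed
# and no OpenTelemetry SDK/exporter has been configured.
from opentelemetry import trace

tracer = trace.get_tracer("noop-check")
with tracer.start_as_current_span("noop") as span:
    # With no SDK configured this prints False: the span is non-recording,
    # so attributes and events are dropped and the overhead is negligible.
    print("recording:", span.is_recording())
```

The SDK, exporter, and OTLP endpoint argument only come into play when a deployment actually wants tracing, which is what the otel.md instructions referenced above cover.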
