
enable cpu limit for "robusta-forwarder" service as current config cause cpu hogging #1601

Open
Rajpratik71 opened this issue Oct 22, 2024 · 3 comments · May be fixed by #1602

Comments

@Rajpratik71
Contributor

Describe the bug

The current Helm config sets no CPU limit for robusta-forwarder, which caused the pod to consume all available CPU on the node.

To Reproduce

NA

Expected behavior

The robusta-forwarder pod should run within a specified CPU limit.
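For illustration, a minimal sketch of the kind of values override that would enforce such a limit, assuming the forwarder's container resources are exposed under a kubewatch.resources key in the chart's values.yaml (the key name and the numbers are assumptions for illustration, not taken from the chart):

# Hypothetical Helm values override; key names and numbers are illustrative only.
kubewatch:
  resources:
    requests:
      cpu: 100m        # baseline share reserved for the forwarder
      memory: 256Mi
    limits:
      cpu: 500m        # hard cap so the forwarder cannot hog the node
      memory: 512Mi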

Screenshots

pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top nodes
NAME                             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
worker7.copper.cp.xxxxx.xxxx.com   42648m       98%    34248Mi         14%       
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top po -n robusta
NAME                                 CPU(cores)   MEMORY(bytes)   
robusta-forwarder-6ddb7758f7-xm42p   53400m       502Mi           
robusta-runner-6cb648c696-44sqp      9m           838Mi           
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc delete -n robusta po robusta-forwarder-6ddb7758f7-xm42p 
pod "robusta-forwarder-6ddb7758f7-xm42p" deleted
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc get po -n robusta    
NAME                                 READY   STATUS    RESTARTS      AGE
robusta-forwarder-6ddb7758f7-b2l69   1/1     Running   0             36s
robusta-runner-6cb648c696-44sqp      2/2     Running   2 (31h ago)   3d22h
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc version
Client Version: 4.15.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.14.36
Kubernetes Version: v1.27.16+03a907c
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc get po -n robusta
NAME                                 READY   STATUS    RESTARTS       AGE
robusta-forwarder-6ddb7758f7-b2l69   1/1     Running   0              17h
robusta-runner-6cb648c696-44sqp      2/2     Running   2 (2d1h ago)   4d16h
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top po -n robusta
NAME                                 CPU(cores)   MEMORY(bytes)   
robusta-forwarder-6ddb7758f7-b2l69   29m          286Mi           
robusta-runner-6cb648c696-44sqp      880m         991Mi           
pratikraj@Pratiks-MacBook-Pro ~ % 
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top nodes                                                     
NAME                             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
worker7.copper.cp.xxxxx.xxxx.com   1284m        2%     34086Mi         14%       
pratikraj@Pratiks-MacBook-Pro ~ % 

Environment Info (please complete the following information):

Client Version: 4.15.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.14.36
Kubernetes Version: v1.27.16+03a907c


Rajpratik71 added a commit to Rajpratik71/robusta that referenced this issue Oct 22, 2024
…ce as current config cause cpu hogging

Signed-off-by: Pratik Raj <rajpratik71@gmail.com>
@aantn
Collaborator

aantn commented Oct 22, 2024

Hi @Rajpratik71, there is nothing wrong with it consuming all the excess CPU on the node! We deliberately don't set limits, in line with the approach here - https://home.robusta.dev/blog/stop-using-cpu-limits

That said, if another pod is running on the same node and getting CPU throttled, you can fix it by increasing the CPU request for that other pod.
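A minimal sketch of that suggestion, assuming a hypothetical workload named my-critical-app (a placeholder, not something from this thread); raising its CPU request reserves that share for it, so a burstable neighbor like robusta-forwarder can only use what is left over:

# Hypothetical Deployment; name and image are placeholders, the resources block is the point.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-critical-app                  # placeholder name, not from this thread
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-critical-app
  template:
    metadata:
      labels:
        app: my-critical-app
    spec:
      containers:
        - name: app
          image: example/app:1.0         # placeholder image
          resources:
            requests:
              cpu: "1"                   # reserve a full core for this pod
              memory: 512Mi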

@Rajpratik71
Contributor Author

Hi @Rajpratik71, there is nothing wrong with it consuming all the excess CPU on the node! We deliberately don't set limits, in line with the approach here - https://home.robusta.dev/blog/stop-using-cpu-limits

That said, if another pod is running on the same node and getting CPU throttled, you can fix it by increasing the CPU request for that other pod.

@aantn Only in some scenarios is it good to remove limits to avoid throttling in applications.

Otherwise, applications without CPU limits will lead to a lot of issues, such as:

  • Resource starvation for critical applications and critical non-K8s system processes
  • Inefficient resource allocation
  • Slow response / loading in other applications if one application monopolizes CPU resources
  • Noisy neighbors
  • CPU hogging
  • Resource waste due to a misbehaving application
  • Increased cost due to a misbehaving application

The same happened here: robusta-forwarder was consuming 50 CPUs, i.e. all the CPU available on the node, which robusta-forwarder neither expects nor requires.

@aantn
Collaborator

aantn commented Oct 29, 2024

Hi @Rajpratik71, that's weird, it shouldn't consume 50 CPUs! That is a lot. Are you open to jumping on a call with our team to take a look?

Regarding the limits, it's probably easier to discuss on a call too if you want to go more in depth.
