CPU Usage Spikes #20
Hm, interesting. Which features of the exporter are you using? The exporter has three collectors. Unfortunately, the README isn't quite up-to-date and this repo probably deserves a proper makeover... but time is my enemy :) I refactored another exporter of mine quite recently, so chances are not too bad that I can get to this one as well.
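One quick way to see which collectors are actually in play is to scrape the exporter's endpoint directly and list the metric families it exposes. This is only a rough sketch: the namespace, pod name, and the assumption that the exporter listens on port 9243 inside the pod all need to be adapted to the actual deployment.

# In one terminal: forward the exporter port from one arm-exporter pod
kubectl -n monitoring port-forward pod/<arm-exporter-pod> 9243:9243

# In another terminal: list the metric families the exporter currently exposes
curl -s http://127.0.0.1:9243/metrics | grep -v '^#' | cut -d'{' -f1 | awk '{print $1}' | sort -u

# Timing a scrape by hand also gives a rough feel for how expensive one collection is
time curl -s http://127.0.0.1:9243/metrics > /dev/null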
I dug up the manifest for the arm-exporter daemonset. It looks like it's just running with default flags:

containers:
  - command:
      - /bin/rpi_exporter
      - '--web.listen-address=127.0.0.1:9243'
    image: 'carlosedp/arm_exporter:latest'
    name: arm-exporter
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 50m
        memory: 50Mi
    securityContext:
      privileged: true
  - args:
      - '--secure-listen-address=$(IP):9243'
      - '--upstream=http://127.0.0.1:9243/'
      - >-
        --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    env:
      - name: IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP

I'm not super familiar with Kubernetes manifests, but how does the container access vcgencmd? On the node itself:

HypriotOS/armv7: node@node0 in ~
$ which vcgencmd
/usr/bin/vcgencmd
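One way to check this from the Kubernetes side is to try running vcgencmd inside the arm-exporter container itself; if the binary (or the VideoCore device it needs) isn't available in the container, the exec will simply fail, which would answer the question. The namespace and pod name below are placeholders.

# Does the arm-exporter container itself have a working vcgencmd?
kubectl -n monitoring exec <arm-exporter-pod> -c arm-exporter -- vcgencmd measure_temp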
I think I have an idea now. The pod CPU usage metrics use the
I think it's still pretty large for an exporter service. Not sure if you have any benchmarks available to profile the container runtime.
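If the panel is built on the usual cAdvisor counters (an assumption; the exact query depends on the dashboard), per-pod CPU is typically a rate() over the cumulative container_cpu_usage_seconds_total counter, so even a short burst gets smeared across the whole rate window and can show up as a sustained spike. The raw numbers can be pulled straight from the Prometheus HTTP API; the Prometheus address and the pod label regex below are assumptions to adapt.

# Per-pod CPU usage for the arm-exporter pods, averaged over a 2m window
curl -sG 'http://prometheus-k8s.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=sum by (pod) (rate(container_cpu_usage_seconds_total{pod=~"arm-exporter.*"}[2m]))'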
Unfortunately, no. I think you should play around with turning the different collectors on and off and setting the correct path for vcgencmd.
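I'm not sure of the exact flag names the current binary accepts, so the safest route is to dump its help output first and then add the relevant collector/path flags to the DaemonSet args. A sketch, assuming the binary supports --help and with the namespace and pod name as placeholders:

# List the flags the running binary actually supports
kubectl -n monitoring exec <arm-exporter-pod> -c arm-exporter -- /bin/rpi_exporter --help

# Then add the chosen flags under the arm-exporter container's command in the DaemonSet
kubectl -n monitoring edit daemonset arm-exporter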
Hi, I wanted to see if there was any insight into why the arm-exporter service is causing periodic spikes in CPU usage. Below is a screenshot from my Grafana instance of my k3s deployment, where I am filtering the pods by those running arm-exporter:

My Prometheus scrape interval is set to 30s, and I can see some spikes registering peak values for 2 data points in a row, which means these usage spikes can be lasting for over 30s each:
Pod Details:
System Details:
RPi CM3B+ Compute Modules
32-bit Hypriot OS Version 1.12.3 (Docker 19.03.12, kernel 4.19.97)
Any insight would be appreciated.
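For reference, one way to see how long each scrape of the arm-exporter targets actually takes, relative to the 30s interval above, is to query Prometheus' own scrape_duration_seconds series; the Prometheus address and the job label below are assumptions that depend on how the targets are named.

# Scrape duration per arm-exporter target
curl -sG 'http://prometheus-k8s.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=scrape_duration_seconds{job="arm-exporter"}'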