Fluentd logs are full of backslashes and Kibana doesn't show k8s pod logs #2545
This log is a single line, right? If so, it seems several logs are merged into one.
No, the log is full of backslashes. There are single lines of actual log output and then pages of backslashes, but I didn't want to copy all the meaningless backslashes, and when I searched for "error" there wasn't any.
Any progress on this issue? I seem to have just hit exactly the same problem. I use a slightly different setup, but it is otherwise substantially the same. Looking at the logs, fluentd appears to be repeatedly reprocessing the same information, objecting to the format, which generates a new, longer log entry that is then reprocessed... and around we go.
I have the same problem after following this tutorial, but using k3s as my Kubernetes deployment. If I strip the backslashes I can see something like:
But otherwise it's not even possible to see what is going on:
My fluentd.yaml is as follows:
Same issue. Does anyone have a solution for this?
Same issue \\\\
If your fluentd logs are growing in backslashes, then your fluentd container is parsing its own logs and recursively generating new logs. Consider creating a fluentd-config.yaml file that is set up to ignore fluentd's own container logs. Here is mine:

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: kube-logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  containers.input.conf: |-
    <source>
      @type tail
      @id in_tail_container_logs
      path /var/log/containers/*.log
      exclude_path ["/var/log/containers/fluentd*"]
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      format /^.* (?<source>(stderr|stdout))\ F\ (?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
      time_format %d/%b/%Y:%H:%M:%S %z
    </source>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      type kubernetes_metadata
    </filter>
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      host elasticsearch.kube-logging.svc.cluster.local
      port 9200
      logstash_format true
      # Set the chunk limits.
      buffer_chunk_limit 2M
      buffer_queue_limit 8
      flush_interval 5s
      # Never wait longer than 30 seconds between retries.
      max_retry_wait 30
      # Disable the limit on the number of retries (retry forever).
      disable_retry_limit
      # Use multiple threads for processing.
      num_threads 2
    </match>

Then you will want to update your fluentd DaemonSet. I have had success with the setup below. Here's what that looks like:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_SYSTEMD_CONF
          value: "disable"
        - name: FLUENTD_ARGS
          value: "--no-supervisor -q"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlogcontainers
          mountPath: /var/log/containers
          readOnly: true
        - name: config
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers/
      - name: config
        configMap:
          name: fluentd-config

Best of luck!
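(Two details above do the heavy lifting: exclude_path ["/var/log/containers/fluentd*"] keeps fluentd from tailing its own container logs, and the -q in FLUENTD_ARGS lowers fluentd's own log verbosity, leaving less material to feed any remaining loop.)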
Just added a new env var to fluentd-kubernetes-daemonset for this case:
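(The variable itself is not named above. Judging by the daemonset's README, a plausible candidate is FLUENT_CONTAINER_TAIL_EXCLUDE_PATH, which feeds the tail source's exclude_path; treat this snippet as an assumed illustration, not the original comment:)

env:
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
  value: '["/var/log/containers/fluentd*"]'  # assumed value: skip fluentd's own logs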
I see two possible concurrent causes:
The "pattern not match" warnings explain why Kibana doesn't see any error message: those records are never sent to your Elasticsearch service. Having a proper filter/parser would help with this.
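For example, a filter along these lines (a minimal sketch using fluentd's bundled parser filter; the choice of key_name log and a JSON payload are assumptions about the setup, not from the original comment):

<filter kubernetes.**>
  @type parser
  key_name log        # re-parse the application output carried in the "log" field
  reserve_data true   # keep the original record fields alongside the parsed ones
  <parse>
    @type json        # assumes the application emits JSON log lines
  </parse>
</filter>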
Is there a good way for fluentd's own logs to be shipped as well?
I got this issue as well, because I was using containerd instead of Docker. I solved it by putting in the following configuration:
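(The configuration block itself was lost in extraction. For fluentd-kubernetes-daemonset, the commonly cited containerd fix is to override the tail parser so it reads the CRI log format — "<time> <stream> <logtag> <log>" — instead of Docker's JSON; the exact value below is an assumption:)

env:
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE  # regexp parser for CRI-format log lines
  value: '/^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/'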
@micktg
For the latest images, using the cri parser is better than a regexp: https://github.com/fluent/fluentd-kubernetes-daemonset#use-cri-parser-for-containerdcri-o-logs
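Per that README, on images recent enough to bundle the CRI parser plugin this reduces to a sketch like:

env:
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
  value: cri  # use the bundled CRI parser instead of a hand-written regexp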
I followed a DigitalOcean tutorial, https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes, to set up my EFK stack on Kubernetes and faced the same issue. The above answer by @micktg resolved it. I added the below to the environment variables in my fluentd YAML file, so now my environment variables look like this:
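(The list itself was not preserved above. A plausible reconstruction, combining the DigitalOcean tutorial's Elasticsearch variables with the parser override from @micktg's comment; every value here is an assumption rather than a quote:)

env:
- name: FLUENT_ELASTICSEARCH_HOST
  value: elasticsearch.kube-logging.svc.cluster.local  # assumed from the tutorial
- name: FLUENT_ELASTICSEARCH_PORT
  value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
  value: http
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE              # the containerd/CRI parser fix
  value: '/^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/'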
I found that @micktg's and @varungupta19's answers solve the problem.
Thanks, @micktg and @varungupta19. Problem solved. |
Referenced by openstack-helm-infra commit 01e66933b3c2b93c6677c04a00361ceeb70a9634 ([fluentd] Adjust configuration for v1.15):
+ prevent Fluentd from parsing its own logs and fix an issue with endless backslashes (fluent/fluentd#2545)
+ increase chunk limit size
+ add storage for systemd plugin configuration
+ add pos_file parameter for the tail sources
Change-Id: I7d6e54d2324e437c92e5e8197636bd6c54419167
Describe the bug
I set up an EFK stack for gathering my different k8s pods' logs based on this tutorial: https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/ on a MicroK8s single-node cluster. Everything is up and working, and I can connect Kibana to Elasticsearch and see the indexes, but in the Discover section of Kibana there are no logs related to my pods, only kubelet logs.
When I checked fluentd's logs I saw that they are full of backslashes:
There are many more backslashes, but I just copied this amount to show the log.
Your Environment
fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch (also tried v1.3, but the results were the same)

Your Configuration
Based on the tutorial that I mentioned earlier, I am using two config files for setting up fluentd: