bug: Excessive task-run containers (9000+ exited) accumulating overnight in Trigger.dev self-hosted setup #1567
Comments
```
docker ps -a --filter "status=exited" --filter "name=^task-run" -q | xargs -r docker rm
```

Same for me. Any updates?

I wrote the command above; I just use crontab, in which I created a task that runs it every 15 minutes.

Use crontab: https://www.geeksforgeeks.org/crontab-in-linux-with-examples/
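The crontab approach described in that comment could look like the entry below. This is a sketch, not taken from the issue: it assumes the cleanup one-liner quoted earlier and that the `docker` binary is on the crontab user's PATH.

```
# Every 15 minutes, remove exited containers whose names start with "task-run".
# "xargs -r" skips "docker rm" entirely when nothing matched.
*/15 * * * * docker ps -a --filter "status=exited" --filter "name=^task-run" -q | xargs -r docker rm
```

Install it with `crontab -e` under a user that can talk to the Docker daemon (typically root, or a member of the `docker` group).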
Why is this closed? Your crontab hack is not a solution! The problem here is that the logs show it missing from the actual args.
Provide environment information

```
System:
  OS: Linux 6.5 Ubuntu 22.04.4 LTS (Jammy Jellyfish)
  CPU: (4) x64 unknown
  Memory: 14.89 GB / 19.34 GB
  Container: Yes
  Shell: 5.1.16 - /bin/bash
Binaries:
  Node: 22.6.0 - ~/.nvm/versions/node/v22.6.0/bin/node
  npm: 10.8.2 - ~/.nvm/versions/node/v22.6.0/bin/npm
  bun: 1.1.22 - ~/.bun/bin/bun
```
Describe the bug
I'm using a self-hosted Trigger.dev stack deployed via Docker following the instructions [here](https://trigger.dev/docs/open-source-self-hosting) and [triggerdotdev/docker](https://github.com/triggerdotdev/docker).
The issue I noticed is that over 9,000 containers with names starting with `task-run` accumulate overnight, all with the `exited` status. I assume that Trigger.dev runs tasks in separate containers, but these containers are not being cleaned up automatically.

Steps to Reproduce:
Run the self-hosted stack overnight, then observe containers named `task-run*` with an `exited` status.

Expected Behavior:
Exited `task-run` containers are cleaned up automatically.

Observed Behavior:
Thousands of containers named `task-run` appear in the Docker environment overnight. They remain in the `exited` state, consuming resources and requiring manual cleanup.

Environment Details:
```
Client: Docker Engine - Community
 Version:    27.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.15.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.28.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 9410
  Running: 12
  Paused: 0
  Stopped: 9398
 Images: 16
 Server Version: 27.0.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.11-4-pve
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 19.34GiB
 Name: bb
 ID: 9b2f9f0c-244f-457d-9df9-cf75116be946
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: *
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Additional Information:
Is there a way to have these task containers removed automatically (e.g., something like Docker's `--rm` flag)?

Temporary Workaround:
Manually running:

This removes all exited containers, but it's not a long-term solution.
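The exact command run as a workaround isn't shown above, but a cleanup matching that description, restricted to exited `task-run` containers as in the one-liner quoted in the comments, could be sketched as follows. This assumes the standard Docker CLI; the name-matching rule itself is demonstrated with plain `grep` so the script can run on a machine without Docker.

```shell
#!/bin/sh
# Hedged sketch of the manual cleanup. The Docker invocations below mirror
# the one-liner quoted in the comments and are left commented out.

# Dry run: list the exited task-run containers that would be removed.
#   docker ps -a --filter "status=exited" --filter "name=^task-run" --format '{{.Names}}'

# Actual cleanup ("xargs -r" skips "docker rm" when nothing matched):
#   docker ps -a --filter "status=exited" --filter "name=^task-run" -q | xargs -r docker rm

# The name rule the filter encodes (names starting with "task-run"),
# shown without Docker:
select_task_runs() {
  grep '^task-run'
}

# Demo on sample container names: only the two task-run entries pass.
printf 'task-run-abc\nwebapp\ntask-run-def\n' | select_task_runs
```

Doing a `--format '{{.Names}}'` dry run before the `-q | xargs -r docker rm` pass is a cheap safety check that the `name=^task-run` filter is not catching unrelated containers.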
Thank you for your support! Any guidance on resolving this container accumulation issue would be greatly appreciated.
Reproduction repo
https://github.com/triggerdotdev/docker
To reproduce

Run the self-hosted stack overnight, then observe containers named `task-run*` with an `exited` status.

Additional information