
bug: Excessive task-run containers (9000+ exited) accumulating overnight in Trigger.dev self-hosted setup #1567

Open
lpkobamn opened this issue Dec 16, 2024 · 6 comments · May be fixed by #1702

Comments

@lpkobamn

lpkobamn commented Dec 16, 2024

Provide environment information

System:
OS: Linux 6.5 Ubuntu 22.04.4 LTS (Jammy Jellyfish)
CPU: (4) x64 unknown
Memory: 14.89 GB / 19.34 GB
Container: Yes
Shell: 5.1.16 - /bin/bash
Binaries:
Node: 22.6.0 - ~/.nvm/versions/node/v22.6.0/bin/node
npm: 10.8.2 - ~/.nvm/versions/node/v22.6.0/bin/npm
bun: 1.1.22 - ~/.bun/bin/bun

Describe the bug

I'm using a self-hosted Trigger.dev stack deployed via Docker following the instructions [here](https://trigger.dev/docs/open-source-self-hosting) and [triggerdotdev/docker](https://github.com/triggerdotdev/docker).

The issue I noticed is that more than 9,000 containers with names starting with task-run accumulate overnight, all in the exited status. I assume that Trigger.dev runs tasks in separate containers, but these containers are not being cleaned up automatically.


Steps to Reproduce:

  1. Deploy Trigger.dev self-hosted using the official Docker setup.
  2. Run tasks continuously for an extended period (e.g., overnight).
  3. Observe the accumulation of containers named task-run* with an exited status.

Expected Behavior:

  • Containers used for tasks should be cleaned up automatically after execution.
  • Exited containers should not accumulate indefinitely.

Observed Behavior:

  • More than 9,000 containers with the prefix task-run appear in the Docker environment overnight.
  • These containers are all in the exited state, consuming resources and requiring manual cleanup.

Environment Details:

Client: Docker Engine - Community
Version: 27.0.3
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.15.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.28.1
Path: /usr/libexec/docker/cli-plugins/docker-compose

Server:
Containers: 9410
Running: 12
Paused: 0
Stopped: 9398
Images: 16
Server Version: 27.0.3
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: true
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.5.11-4-pve
Operating System: Ubuntu 22.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 19.34GiB
Name: bb
ID: 9b2f9f0c-244f-457d-9df9-cf75116be946
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: *
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false


Additional Information:

  1. Is there an existing configuration to auto-remove containers after task execution (e.g., --rm flag)?
  2. Could this be related to how Trigger.dev handles task container lifecycles?
  3. Are there any recommendations or scripts for automatically cleaning up exited containers?
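On the third point, a name-and-status filter can be sketched without touching unrelated containers. The pipeline below runs on a captured sample of `docker ps` output (the container names are hypothetical) so the filtering logic is visible on its own; against a live daemon, the same awk filter would consume `docker ps -a --format '{{.Names}}\t{{.Status}}'` and feed `xargs -r docker rm`.

```shell
# Sample lines in the shape of: docker ps -a --format '{{.Names}}\t{{.Status}}'
# (hypothetical names); keep only exited containers whose name starts with task-run.
printf 'task-run-abc123\tExited (0) 2 hours ago\ntrigger-webapp\tUp 3 hours\ntask-run-def456\tUp 10 seconds\n' \
  | awk -F'\t' '$1 ~ /^task-run/ && $2 ~ /^Exited/ { print $1 }'
# → task-run-abc123
```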

Temporary Workaround:
Manually running:

docker container prune

This removes all exited containers, but it's not a long-term solution.


Thank you for your support! Any guidance on resolving this container accumulation issue would be greatly appreciated.

Reproduction repo

https://github.com/triggerdotdev/docker


Additional information

[Screenshot: 2024-12-16_12-09-28]

@lpkobamn
Author

docker ps -a --filter "status=exited" --filter "name=^task-run" -q | xargs -r docker rm

@try-to-fly

Same for me. Any updates?

@lpkobamn
Author

> Same for me. Any updates?

I wrote the command above; I just run it from a crontab job every 15 minutes.
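The crontab approach can be sketched as a single entry (a minimal example; the 15-minute schedule mirrors this comment, and the command is the one posted earlier in the thread):

```
# Remove exited task-run containers every 15 minutes (add via `crontab -e`)
*/15 * * * * docker ps -a --filter "status=exited" --filter "name=^task-run" -q | xargs -r docker rm
```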

@rharkor

rharkor commented Feb 8, 2025

Having the same issue here, I need to clear them each day


@lpkobamn
Author

> Having the same issue here, I need to clear them each day

Use crontab:

https://www.geeksforgeeks.org/crontab-in-linux-with-examples/

@smashah

smashah commented Feb 12, 2025

Why is this closed? Your crontab hack is not a solution!

The problem here is that the --rm flag is correctly mentioned in the log output, but it is not actually present in the command execution args!

The logs show --rm in the printed command, but it is missing from the actual args here:

https://github.com/triggerdotdev/trigger.dev/blob/a38f713e32731411fb0f8cbcd0bfc5cc2b996bb3/apps/docker-provider/src/index.ts#L115C1-L126C7
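If the provider logs one command string but spawns the process with a separately built argument list, the two can drift apart. A quick check of whether --rm is actually in an argv (the argument list below is hypothetical, standing in for whatever the provider passes to `docker run`) looks like:

```shell
# Hypothetical argv for `docker run`, mirroring the reported bug: --rm appears
# in the logged string but was never added to the real argument list.
args="run --name task-run-abc123 --network host trigger-task-image:tag"
case " $args " in
  *" --rm "*) echo "auto-remove enabled" ;;
  *)          echo "auto-remove MISSING" ;;
esac
# → auto-remove MISSING
```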
