[v0.10] Backport of [SURE-9061] Jobs are not cleaned up from local cluster #2931
Comments
Additional QA

Problem

Fleet is not deleting the `Job` objects it creates for `GitRepo` resources in the local cluster.

Solution

Testing

Test a few scenarios to cover all the possible cases. In any test, the job should only stay if it is not successful; otherwise it should be deleted.

I initially checked it in Screencast.from.08-10-24.10.33.24.webm. Not sure what may happen here.

Tested in Rancher. Tested the scenarios described above and all were OK, namely:

Added some other scenarios, all OK as well:

14 scenarios were OK. A minor thing on scenario 15 is not an issue, just something to note. Congratulations @0xavi0 for the fixes here; it seems to work quite well in all scenarios tested. Aside from these manual checks, we will add several of the test cases above to our UI automation in rancher/fleet-e2e#213
Backport of #2870
Is there an existing issue for this?
Current Behavior
In the Rancher local cluster, for each commit/change in each `GitRepo`, there is a `Job` started by Fleet. There is nothing to clean up these Jobs, so you will quickly end up with hundreds of lingering `Job` objects and their completed Pods.

I didn't notice this behavior in Fleet 0.9.x, so I assume something in 0.10.x introduced these Jobs. I assumed this was related to the automatic chart dependency update, but setting `disableDependencyUpdate` to `true` doesn't seem to have any effect.

Expected Behavior
Unnecessary `Job` objects are cleaned up, e.g. by setting some sane default for `.spec.ttlSecondsAfterFinished`: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/

Steps To Reproduce

Push commits to a repository referenced by a `GitRepo` in the local cluster and observe the lingering `Job` objects.

Environment
Logs
No response
Anything else?
No response
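The cleanup suggested under Expected Behavior could look like the following minimal Job manifest using `.spec.ttlSecondsAfterFinished`; the name, image, and TTL value here are placeholders for illustration, not Fleet's actual job spec:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-gitjob        # placeholder name
spec:
  # Delete the Job (and its Pods) one hour after it finishes.
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox      # placeholder image
          command: ["true"]
```

Note that the Kubernetes TTL-after-finished controller deletes finished Jobs regardless of whether they succeeded or failed, so keeping failed jobs around for inspection (as the QA notes above require) needs additional logic beyond a plain TTL.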