Need some indication of taskruns never leaving pending state before timeouts #779

Open
gabemontero opened this issue Jul 19, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@gabemontero
Contributor

gabemontero commented Jul 19, 2024

Expected Behavior

A TaskRun that never leaves the Pending state, with its underlying pod never having started, should have this fact made clear in log storage

Actual Behavior

No such information is stored

Steps to Reproduce the Problem

  1. Define a ResourceQuota such that Pods cannot be started in a namespace (see the manifest sketch after this list)
  2. Start a TaskRun with a timeout
  3. Analyze Results after that TaskRun times out
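
For concreteness, a minimal reproduction sketch along these lines (the namespace and resource names are illustrative, not taken from the attached run): a ResourceQuota that forbids any Pods in the namespace, plus a TaskRun with a short timeout.

```yaml
# Illustrative only: the quota blocks Pod creation, so the TaskRun below
# sits in Pending until its timeout fires.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: no-pods
  namespace: quota-test
spec:
  hard:
    pods: "0"
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: stuck-pending
  namespace: quota-test
spec:
  timeout: 1m
  taskSpec:
    steps:
      - name: hello
        image: busybox
        script: echo hello
```

After the timeout, the TaskRun should go straight from Pending to a terminal state without a Pod ever being associated, which is the situation described below.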

Additional Info

With #699 we fixed the general case so that, if a timeout/cancel occurred, we would still go on to fetch/store the underlying pod logs.

However, in systems with quotas or severe node pressure at the k8s level, TaskRuns can stay stuck in Pending and their Pods may never be created or started.

If you look at the comments at

// KLUGE: tkn reader.Read() will raise an error if a step in the TaskRun failed and there is no

you'll see the prior observation that tkn makes distinguishing between kinds of errors difficult, and as a result errors from tkn while getting logs are simply ignored.

That is proving unusable for users who may not have access to view events, pods, or etcd entities in general before the attempt to store logs occurs and the PipelineRun/TaskRun are potentially pruned from etcd.

Before exiting, the streamLogs code needs to confirm whether any underlying Pods for the TaskRuns exist, and if not, store helpful debug info in what is sent to the gRPC UpdateLog call and/or direct S3 storage.
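
A rough sketch of the kind of check meant here, assuming plain client-go and the tekton.dev/taskRun label that Tekton Pipelines puts on TaskRun Pods; the helper name and message wording are hypothetical, not existing code:

```go
package logsdebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// pendingDebugMessage is a hypothetical helper: before streamLogs gives up,
// list the Pods labeled for the TaskRun. If none exist, return a
// human-readable marker that could be sent through the gRPC UpdateLog call
// (or written directly to S3) instead of silently storing nothing.
func pendingDebugMessage(ctx context.Context, cs kubernetes.Interface, ns, taskRun string) (string, bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		// Assumption: Tekton Pipelines labels TaskRun Pods with tekton.dev/taskRun.
		LabelSelector: fmt.Sprintf("tekton.dev/taskRun=%s", taskRun),
	})
	if err != nil {
		return "", false, err
	}
	if len(pods.Items) > 0 {
		// A Pod exists, so normal log streaming can proceed.
		return "", false, nil
	}
	msg := fmt.Sprintf("no Pod was ever created for TaskRun %s/%s before it reached a terminal state; "+
		"it likely never left Pending (e.g. ResourceQuota exhaustion or node pressure)", ns, taskRun)
	return msg, true, nil
}
```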

I'll also attach a PipelineRun/TaskRun which was timed out/cancelled where the TaskRun never left the Pending state.

You'll see from the annotations that they go from pending straight to a terminal state, meaning a pod never got associated.

pr-tr.zip

@khrm @sayan-biswas @avinal @enarha FYI / PTAL / WDYT

@gabemontero
Contributor Author

Revitalizing #715 would bypass the tkn client issues noted at

// KLUGE: tkn reader.Read() will raise an error if a step in the TaskRun failed and there is no

and allow us to indicate in the stored logs that there were no pods to dump.
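
For illustration, a minimal sketch, assuming plain client-go (the function name is made up), of what reading step logs directly instead of through the tkn reader could look like; errors such as the Pod not existing then become distinguishable rather than being swallowed:

```go
package logsdebug

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// fetchStepLogs reads each container's logs straight from the Kubernetes API.
// Unlike the tkn reader path, any error here (NotFound, Forbidden, etc.) is
// returned to the caller, so the watcher can decide what to store for the run.
func fetchStepLogs(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod, out io.Writer) error {
	for _, c := range pod.Spec.Containers {
		req := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{Container: c.Name})
		stream, err := req.Stream(ctx)
		if err != nil {
			return err // surfaced to the caller, not ignored
		}
		_, copyErr := io.Copy(out, stream)
		stream.Close()
		if copyErr != nil {
			return copyErr
		}
	}
	return nil
}
```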
