Design decision - correct approach for costly pod starts #11719
-
Hi @RonaldGalea, thanks for raising these points; a very interesting question indeed. Did you find any answers or workarounds that helped you? I have a similar issue that sounds a lot like your points. We are running Argo Workflows in our org and are generally happy with it. However, we have a bunch of use cases where we need to keep the overall runtime of a workflow as low as possible, while the startup time of the containers for tasks participating in that workflow is substantial. If there were a way to ensure that a set of container instances for that type of task is always "hot", it would cut our overall runtime substantially, but this is nothing that Argo Workflows supports in any way. So yeah, in a perfect world, we could just ensure that we have a hot pool of pods for complex execution steps and be happy forever. I'd therefore also like to understand what the design decisions were and how others have tackled this issue.
-
There have been multiple discussions regarding pod reuse, e.g. #7144 and #12255.
-
Hello,
I wish to better understand the design decisions behind Argo. Specifically, the trade-offs between creating a completely separate pod to run each task, as opposed to keeping a warm (but still dynamically scalable) pool of worker pods ready to accept requests (in some form).
Here are the points I can see for each approach.
Pod per task
Pros:
Cons:
Pod worker pool
To keep things simple, let's assume single-threaded workers, so task concurrency per worker is just 1.
Pros:
Cons:
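To make the worker-pool model concrete, here is a minimal sketch (not anything Argo provides; the "pods" are simulated by threads purely for illustration) of warm workers with concurrency 1 each, pulling tasks from a shared queue so that no per-task startup cost is paid:

```python
import queue
import threading

# Hypothetical illustration: each "worker" stands in for a warm pod that
# processes one task at a time (concurrency 1), pulling from a shared queue.
def worker_loop(tasks: "queue.Queue", results: list) -> None:
    while True:
        task = tasks.get()
        if task is None:          # sentinel: shut this worker down
            tasks.task_done()
            return
        results.append(task * 2)  # placeholder for the real task logic
        tasks.task_done()

tasks: "queue.Queue" = queue.Queue()
results: list = []
pool = [threading.Thread(target=worker_loop, args=(tasks, results))
        for _ in range(3)]
for w in pool:
    w.start()                     # workers are "warm" before any task arrives

for t in range(5):
    tasks.put(t)                  # dispatching a task pays no startup cost
for _ in pool:
    tasks.put(None)               # one sentinel per worker
tasks.join()                      # wait until every queued item is processed
```

The key property being argued for is in the dispatch loop: submitting a task is just a queue write, rather than a pod creation plus image pull plus container start.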
My particular use case
I have a number of services that I need to chain together in some logical way which fits a DAG, so a Workflow Management tool like Argo seems to fit my use case well. However, some of these services have a relatively high start-up time (large container images, preparing machine learning models, etc.), and for these it would be really painful to tear down the warm environment and recreate it constantly.
Argo, as well as the other workflow management systems I've seen, only supports the pod-per-task approach, but surely I'm not the only one whose use case doesn't fit it well due to costly start-ups. So I have the following questions:
Edit: The idea of a "warm pool of workers" is very general and well-known, so I find it counter-intuitive that it simply isn't available for Kubernetes pods. For instance, there could be a lightweight client that listens for requests or messages and calls a user-defined callback. Is there some fundamental reason or limitation why this is not done?
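The "lightweight client" idea above could be sketched as follows. This is purely hypothetical (nothing here exists in Argo); the message source is a plain iterable standing in for a queue subscription or an HTTP endpoint inside a warm pod:

```python
from typing import Any, Callable, Iterable

# Hypothetical sketch: a thin loop running inside an already-warm pod that
# waits for task messages and hands each one to a user-defined callback.
def serve(messages: Iterable[Any], callback: Callable[[Any], Any]) -> list:
    results = []
    for msg in messages:
        # The user's task logic runs in-place, with no per-task
        # container startup.
        results.append(callback(msg))
    return results

# Usage: expensive setup (e.g. loading an ML model) happens once at pod
# start and is reused across tasks. The "model" here is just a number.
model_weights = 10
outputs = serve([1, 2, 3], lambda x: x * model_weights)
```

The point of the pattern is that the callback closes over state prepared once at startup, which is exactly what pod-per-task execution cannot amortize.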
I would be very thankful for insights regarding the above highlighted considerations.