This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Why does kube-aws suck so much? #157

Closed
gamykla opened this issue Dec 13, 2016 · 4 comments

Comments

@gamykla

gamykla commented Dec 13, 2016

There are always problems with getting the cluster working properly. Can't connect to nodes, services fail to start. This project is a flaky mess.

You should provide a stable cluster.yaml to build from that is going to work: minimum instance size requirements, a hyperkube version that's going to work, etc.

@pieterlange
Contributor

Hi @gamykla, thanks for your feedback.

Can you elaborate on which kube-aws version you're experiencing problems with, and which specific problems need fixing? Keep in mind this is a community project, and it's only going to succeed with quality feedback from its users.

With regards to your points:

  • minimum instance size
    Sorry, there is no instance size smaller than nano available at the moment. Kubernetes is under active development and its resource requirements may fluctuate between releases; in the end, sizing is up to the operator. For now I'd recommend m3.medium at minimum for the controller if you want to prevent flaky bootstraps (due to timeouts and the like); see the cluster.yaml sketch after this list.

  • hyperkube
    Hyperkube images are provided by the friendly folks at CoreOS and by Kubernetes upstream. Keep in mind that you still need to configure them correctly; specific command parameters may need to be set, unset, or changed between versions. Again, this is something for the cluster operator and/or the kube-aws maintainers. The hyperkube images referred to by kube-aws should work, though.
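
For reference, here is a minimal sketch of the relevant cluster.yaml settings. The key names and values are illustrative only and may differ between kube-aws releases, so treat the file that kube-aws generates for you as the source of truth:

```yaml
# Illustrative snippet only; key names and defaults can change between
# kube-aws releases. Start from the generated cluster.yaml and adjust it.
clusterName: my-cluster                # example value
externalDNSName: kube.example.com      # example value
keyName: my-ec2-keypair                # example value: an existing EC2 key pair
region: us-west-2                      # example value
availabilityZone: us-west-2a           # example value
controllerInstanceType: m3.medium      # minimum recommended above to avoid flaky bootstraps
workerInstanceType: m3.medium          # example value; size workers for your workloads
workerCount: 2                         # example value
```

As far as I know, the hyperkube image is pinned per kube-aws release, so as long as you stay on the Kubernetes versions a given release was tested with, you shouldn't need to override it.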

Thanks for playing. Please keep the inflammatory titles to yourself for your next contribution, and I hope to see you back here 👍

@mumoshu
Contributor

mumoshu commented Dec 14, 2016

Hi @gamykla, thanks for your feedback and investment in kube-aws.

Adding to what @pieterlange kindly described:

Personally, I haven't seen controller node failures like the ones you've mentioned since 0.9.1.
Would you mind trying a newer version, or even the latest version, of kube-aws?
Let me also clarify that, AFAIK, no one has tested running Kubernetes 1.5.0 with kube-aws 0.8.3 as you've tried.
kube-aws 0.8.3 is only meant to be used with Kubernetes up to 1.4.3, according to its release notes.

Regarding step 4 of your instructions: choosing appropriate values for the configuration keys in cluster.yaml is important, and that is basically the user's responsibility.
If you've chosen, for example, an instance type too small for upstream Kubernetes to run on, it won't work. That has nothing to do with kube-aws.

If you need upstream Kubernetes to reduce its memory usage, I'd suggest raising an issue in the upstream Kubernetes repository rather than in kube-aws. FYI, though, I'm not sure it is really pragmatic or realistic to run a serious system like Kubernetes on top of, say, a t2.nano. Anyway, what could be done on the kube-aws side is emitting a validation error or warning when it is instructed to do something that is expected to fail.

If you need more validation to strictly forbid users from doing problematic things like that, please don't hesitate to raise another issue for each exact, actionable problem. However, please be aware that there seems to be some tension between users who want more freedom and users who want more safeguards, so I'd rather suggest adding plenty of warnings that give the user notice or information. Here again, your clear feedback is important, as I'm not really sure exactly what users want warnings for 😭

Lastly, this is a community project with only one maintainer available, so, fundamentally, how users act can directly affect kube-aws's quality. Thankfully, kube-aws is becoming more useful than ever, mainly because of the huge contributions from its users, so kube-aws really doesn't seem to suck. I'd suggest saying "suck" to me rather than to kube-aws 😉

@spacepluk
Contributor

This is very harsh and doesn't reflect reality, in my opinion. I've been using kube-aws for a while, and while I've had some issues, it's been a great experience so far. If, instead of trashing other people's work, you tell us about the specific problems you're having, then maybe we can try to help you.

@gamykla
Author

gamykla commented Dec 14, 2016

Sorry about how I opened the issue! I was frustrated. I've also had good experiences, @spacepluk! (I'm still committed to kube-aws, btw!) I'll record specifics.

gamykla closed this as completed on Dec 14, 2016
4 participants