partial deployment of large linode clusters #36
Lowering the multiprocessing.dummy.Pool size from 50 to 10 seemed to work. I'm going to wait a while to see whether it works consistently, but it was failing consistently with 80-node Linode deployments. It doesn't seem to take much longer to get them running this way.
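A minimal sketch of the pool-size change described above. `create_linode` is a hypothetical stand-in for the per-node creation call in linode-launch.py, not the script's actual function name:

```python
from multiprocessing.dummy import Pool

# Hypothetical stand-in for the per-node API call in linode-launch.py.
def create_linode(node_index):
    return node_index  # the real code would call the Linode API here

NODE_COUNT = 80
POOL_SIZE = 10  # reduced from 50: fewer concurrent creation requests

pool = Pool(POOL_SIZE)
results = pool.map(create_linode, range(NODE_COUNT))
pool.close()
pool.join()
```

The same 80 creations still happen; they are just issued at most 10 at a time, which appears to avoid tripping whatever limit was causing the failures.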
I still sometimes get "No open slots for this plan!", but at least there were only 2 linodes left unassociated with the group, so cleanup was easier. Arghh. I wish there were a way to ask for a set of linodes and either get all of them or none of them. The only way I can see to do this is to do all the creates, and if they all succeed, do everything else; otherwise delete everything you created right away so you don't get billed for it. Am I missing something?
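The all-or-nothing approach described above can be sketched as a small wrapper. `create` and `destroy` here are hypothetical callables standing in for the scripts' actual Linode API calls:

```python
def provision_all_or_nothing(count, create, destroy):
    """Create `count` nodes; if any creation fails, destroy the ones
    that already succeeded so nothing is left running (and billed)."""
    created = []
    try:
        for i in range(count):
            created.append(create(i))
    except Exception:
        # Roll back: delete every node created so far, then re-raise
        # so the caller knows the batch failed.
        for node in created:
            destroy(node)
        raise
    return created
```

This doesn't make the batch atomic on Linode's side, but it bounds the damage: a partial failure leaves zero orphaned nodes instead of an arbitrary subset.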
I think it should be possible to catch the exception and still apply the label so cleanup is easier. I wonder if retrying linode-launch.py after a minute or so may allow more capacity to become available. OTOH, this seems like a new problem with Linode and worth opening a ticket about. They may be doing some kind of new throttling that is unintentionally messing this up.
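The retry-after-a-minute idea could look something like this sketch, with `create` again a hypothetical stand-in for the creation call:

```python
import time

def create_with_retry(create, node_index, attempts=3, delay=60):
    """Retry a failed creation after a pause, on the theory that
    capacity at the data center may free up. The 60-second default
    matches the 'after a minute or so' suggestion."""
    for attempt in range(attempts):
        try:
            return create(node_index)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(delay)
```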
I've been experimenting with some larger Linode clusters, and what happens is that sometimes the Linode API rejects a node creation with an error like the one shown at the bottom. I think it means there is no room at the inn: Linode just doesn't have the resources at that geographic site to create that many VMs.
My complaint is that this results in a set of linodes that are created but aren't in the display group, so linode-destroy.py won't clean them up and I have to do it by hand. Sometimes this set can be pretty large. If one linode creation fails, the other threads in the pool are aborted before they can add their new linodes to the display group (the string in LINODE_GROUP), since that is a separate call to the Linode API.
Is there any way to change linode-launch.py so that a linode isn't created unless it is also added to the group? Cleanup would be simple then: just run linode-destroy.py.
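One way to get close to that, sketched below: have each worker label its own node immediately after creating it, and destroy the node on the spot if labeling fails. The names `create`, `set_group`, and `destroy` are hypothetical stand-ins for the scripts' actual API calls, not real linode-launch.py functions:

```python
def create_and_label(create, set_group, destroy, node_index, group):
    """Create one node and put it into `group` in the same worker,
    so a failure in another thread can't leave this node unlabeled.
    If labeling itself fails, destroy the node immediately rather
    than leave it invisible to linode-destroy.py."""
    node = create(node_index)
    try:
        set_group(node, group)
    except Exception:
        destroy(node)
        raise
    return node
```

There is still a tiny window between the create and the group call, but any node that survives the function is guaranteed to carry the LINODE_GROUP label, so linode-destroy.py can find it.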