Cluster duplicate handler dying from RuntimeError due to dict change while iterating it #160
This also happens during normal operation and is due to the changed implementation of
In our case the cluster duplicate handler thread seems to die mid-operation due to the runtime exception, but the backtrace is only logged at shutdown. (This may also be an effect of systemd caching/delaying stderr, but it was quite a long time in the one case I observed. When run interactively from the command line, the backtrace appears immediately.)
With Python 3 the cluster duplicate handler would die from RuntimeErrors because the items() accessor of the duplicate backlog dict returns a view/iterator that does not respond well to the dict changing while being iterated. Prevent the RuntimeError by iterating over the items of a copy of the dict while changing the original, similar to what we're already doing in the cuckoo job tracker for almost the same reason. Fixes scVENUS#160.
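The failure mode and the fix described above can be sketched in a few lines. This is a minimal illustration with a hypothetical `backlog` dict standing in for the duplicate backlog, not the project's actual code: in Python 3, `dict.items()` returns a live view, so deleting entries mid-iteration raises `RuntimeError`, while iterating over a `list` snapshot of the items is safe.

```python
import time

# Hypothetical stand-in for the duplicate backlog: sample id -> submit time.
backlog = {"sample%d" % i: time.time() for i in range(5)}

# Failing pattern: items() is a live view in Python 3, so mutating the
# dict during iteration raises "RuntimeError: dictionary changed size
# during iteration".
try:
    for identifier, submitted in backlog.items():
        del backlog[identifier]
except RuntimeError as error:
    print("died with:", error)

# Fixed pattern: take a snapshot of the items first, then mutate the
# original dict freely while iterating the copy.
backlog = {"sample%d" % i: time.time() for i in range(5)}
for identifier, submitted in list(backlog.items()):
    del backlog[identifier]
print("backlog entries left:", len(backlog))
```

The snapshot costs one shallow copy of the items per cleanup run, which is negligible for a backlog of this size and removes the crash entirely.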
The following traceback has been seen during shutdown at least once with v2.0:
There seems to be some kind of race in `Queue.shut_down()` between the cluster duplicate handler and queue shutdown, which is odd because duplicate handler shutdown is the very first thing triggered, so it should not do another cleanup run while the queue is shutting down workers.