Can't use redis 4.6.0 without connection pool leakage in worker #873
Should note: thanks to Redis (the server) and arq, despite our worker being down for ~10m, our queue kept building, and once the worker was back online it powered through the backlog (short-term CPU spike).
Since you mentioned this issue in #868, I was curious to see what was going on 🙃 Not sure if it's the source of this specific problem, but the following function is problematic IMO: Lines 77 to 83 in 8e9ebe3

A pool is created on each call. Maybe a coincidence, but in 4.6.0 they released the following fix, redis/redis-py#2755, which removes the […]
Completely agree. It's definitely a problematic (erroneous) implementation. I remember writing it when switching to arq from Celery a few months ago, and it was a quick hack at the time to get things up and running. The intent was definitely to create a global connection pool 🤦 No, it almost certainly must have been what saved us up until the […]
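The leak pattern discussed above can be sketched without depending on redis-py. In this stdlib-only illustration, the `Pool` class is a stand-in for redis-py's `ConnectionPool`, and both `create_pool` functions are hypothetical, not the actual `polar.worker` code: building a pool on every call multiplies connections with call count, while memoizing one pool per process keeps it flat.

```python
from functools import lru_cache


class Pool:
    """Stand-in for redis.asyncio.ConnectionPool; counts instances created."""

    instances = 0

    def __init__(self, dsn: str):
        Pool.instances += 1  # each real pool would open its own connections
        self.dsn = dsn


def create_pool_leaky(dsn: str) -> Pool:
    # Anti-pattern: a brand-new pool (and its connections) on every call.
    return Pool(dsn)


@lru_cache(maxsize=None)
def create_pool(dsn: str) -> Pool:
    # One pool per process per DSN; repeat calls reuse the same object.
    return Pool(dsn)
```

With the leaky variant, the server-side client count grows with every task that touches Redis; with the memoized variant it stays bounded by the pool's own connection limit.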
[Graph: Active Redis connections, 2023-07-19 (CET)]
What happened
- Upgraded redis from 4.5.5 to 4.6.0.
- The worker started failing with `ConnectionError: max number of clients reached`, raised from `polar.worker.create_pool`.
- Tried 4.6.0 again after lunch. Definition of insanity... But in my defence, the initial deploy occurred during 1) our cron execution + 2) a few large repositories triggering syncs. So I thought it might have been a timing issue.

Actions
- Rolled back to 4.5.5 for now; will revisit the redis upgrade later (not a priority today).