Overall revamp of connection pool, retries and timeouts #3008
Comments
I'm not sure how to fix these problems without breaking existing users. It seems that many of the problems are intertwined, and a better design and abstraction of connections and connection pools would be the way to go. I understand that the list may look intimidating for the maintainers; I'm not urging you to fix them ASAP, but hoping that this list can serve as a guide to approach the issues from a more holistic view.
For a new user it is really quite hard to understand what's going on with the connection handling. Another couple of suggestions:
```python
async with pool.take() as redis:
    await redis.set("foo", "bar")
```
Just some thoughts...
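For illustration, here is a minimal sketch of how such a `take()` helper could be layered on top of today's asyncio client. The name `take`, the URL, and the pool size are illustrative assumptions, not existing redis-py API:

```python
import contextlib

from redis.asyncio import BlockingConnectionPool, Redis


@contextlib.asynccontextmanager
async def take(pool: BlockingConnectionPool):
    """Yield a client bound to the shared pool; nothing leaks on exit."""
    client = Redis(connection_pool=pool)
    try:
        yield client
    finally:
        # Releases the client but keeps the shared pool open (redis-py >= 5.0.1).
        await client.aclose()


async def main() -> None:
    pool = BlockingConnectionPool.from_url("redis://localhost:6379", max_connections=10)
    async with take(pool) as redis:
        await redis.set("foo", "bar")
    await pool.disconnect()
```

Built-in support along these lines would save every user from rediscovering this wrapper.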
Looking forward to what happens in #3200! 👀
Do you have any clues or guidance on this? I am looking into the same problem, though in my case it's not just Redis, but also Celery, django-redis, python-redis-lock...
With Sentinel it becomes a whole other question, which I cannot seem to find covered in the docs or examples, and even reading the source is a bit confusing. Fundamentally, what I want is a reliable object (or two, if doing read-only) that I can ask to "give me an active connection", but getting to that state is pretty hard... Currently I create a sentinel, which in itself is a bit horrific:
I can then use […]
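The code from this comment did not survive extraction, but a minimal sketch of the pattern being described, using redis-py's asyncio Sentinel API with hypothetical hostnames and a service name of `mymaster`, would look roughly like this:

```python
from redis.asyncio.sentinel import Sentinel

# Hypothetical sentinel endpoints and service name; adjust for your deployment.
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

# Clients that resolve the current master / a replica on each checkout.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)


async def demo() -> None:
    await master.set("foo", "bar")
    print(await replica.get("foo"))
```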
Is it possible to get an update on where, or whether, the above issues sit on the roadmap? Is, e.g., #3200 still intended to be developed?
I'm experiencing many issues (already reported here by other people) related to the (blocking) connection pool in the asyncio version.
I had to roll back and pin my dependency to version 4.5.5 for now. (lablup/backend.ai#1620)
Since there are already many reports, here I would instead offer some high-level design suggestions:

- Errors should be consolidated into `ConnectionError` and/or `TimeoutError` to ease writing user-defined retry mechanisms (a sketch of wiring this up by hand with today's API follows this list).
- Timeouts for normal commands (`GET`, `SET`, ...) and blocking commands (`BLPOP`, `XREAD`, ...) should be treated as different conditions: active error vs. polling.
  - `timeout` cannot exceed client's `socket_timeout` (#2807)
  - A structured timeout configuration, along the lines of `aiohttp.ClientTimeout`, could separate these cases.
- `BlockingConnectionPool` should be the default.
  - `ConnectionPool`'s default `max_connections` should be a more reasonable number.
- Task was destroyed but it is pending! (#2749)
- `redis.client.Redis.__del__` on process exit (#3014)
- There should be a `BlockingSentinelConnectionPool`.
- The `CLIENT SETINFO` mechanism should be generalized. `CLIENT SETINFO` breaks the retry semantics. What about `CLIENT SETNAME` or `CLIENT NO-EVICT on`?
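As referenced in the first bullet above, here is a minimal sketch of what a user currently has to wire up by hand with the asyncio API: an explicit `BlockingConnectionPool` with a bounded `max_connections`, plus a retry policy that treats `ConnectionError`/`TimeoutError` as the retryable surface. The host, port, and numeric values are illustrative assumptions.

```python
from redis.asyncio import BlockingConnectionPool, Redis
from redis.asyncio.retry import Retry
from redis.backoff import ExponentialBackoff
from redis.exceptions import ConnectionError, TimeoutError  # redis-py's own exception types

pool = BlockingConnectionPool(
    host="localhost",
    port=6379,
    max_connections=32,      # bounded, unlike ConnectionPool's effectively unlimited default
    timeout=10,              # wait up to 10s for a free connection before giving up
    socket_timeout=5.0,
    socket_connect_timeout=2.0,
    retry=Retry(ExponentialBackoff(cap=1.0, base=0.05), retries=3),
    retry_on_error=[ConnectionError, TimeoutError],
)

client = Redis(connection_pool=pool)


async def demo() -> None:
    await client.set("foo", "bar")
    print(await client.get("foo"))
    await client.aclose()
    await pool.disconnect()
```

Making something close to this the out-of-the-box behavior is essentially what the bullets above ask for.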