
redis key "conj:xxxxxxx" keeps getting bigger #340


Closed
asleea88 opened this issue Nov 12, 2019 · 4 comments

Comments


asleea88 commented Nov 12, 2019

django-cacheops==4.0.4
redis==5.0.4

I have been suffering from a memory leak while using cacheops.

Redis memory usage keeps growing until it reaches max memory, then it fails.

First, I tried to find big keys with the --bigkeys option.

The biggest key is 'conj:live_live:author_id=userdata: (nil)&type=0&status=1', and below is the output of several redis commands.

> type 'conj:live_live:author_id=userdata: (nil)&type=0&status=1'
set

> ttl "conj:live_live:author_id=userdata: (nil)&type=0&status=1"
(integer) 10~30  (The TTL counts down to about 10 and is repeatedly reset to 30.)

> scard 'conj:live_live:author_id=userdata: (nil)&type=0&status=1'
(integer) 72924584  (keeps increasing)

> memory usage 'conj:live_live:author_id=userdata: (nil)&type=0&status=1'
(integer) 6715400040  (keeps increasing)

> srandmember 'conj:live_live:author_id=userdata: (nil)&type=0&status=1' 10
 1) "q:9924b0478d13b838d7802f3e5c6e53da"
 2) "q:a332243c5d22bdb315bb66abe2dce11a"
 3) "q:fed454994fd4a0ed2d9f04210b2506a3"
 4) "q:23541e0a9449e0ee248cd400359925e4"
 5) "q:98656ce9e43abe88c7b8e69daca55316"
 6) "q:c0edd2904725861709a285fbfb1e4b49"
 7) "q:176118ea8beec7a26b3d67ffd7a03d93"
 8) "q:5b790aad24806d6b8112323da4329f23"
 9) "q:e8d312750de930f79491b01d0cc04732"
10) "q:20ac3d19977cd1d6f76102fda17b4efd"
> get "q:9924b0478d13b838d7802f3e5c6e53da"
(nil)
> get "q:a332243c5d22bdb315bb66abe2dce11a"
(nil)
> get "q:fed454994fd4a0ed2d9f04210b2506a3"
(nil)
> get "q:23541e0a9449e0ee248cd400359925e4"
(nil)
> get "q:98656ce9e43abe88c7b8e69daca55316"
(nil)
> get "q:c0edd2904725861709a285fbfb1e4b49"
(nil)
> get "q:176118ea8beec7a26b3d67ffd7a03d93"
(nil)
> get "q:5b790aad24806d6b8112323da4329f23"
(nil)
> get "q:e8d312750de930f79491b01d0cc04732"
(nil)
> get "q:20ac3d19977cd1d6f76102fda17b4efd"
(nil)
(It seems most of the members are null.)

I don't know which query causes this situation, how to track the issue down further, or even what each key means.
Please give me any suggestions.
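To quantify how many members of that conj set point to dead cache keys, something like this could work. A sketch only, assuming the redis-py client API (`sscan_iter` / `exists`); the function accepts any object implementing those two methods:

```python
# Sketch: audit a cacheops conj set for members whose q:* cache keys
# no longer exist. Pass a redis.Redis instance (or anything with the
# same sscan_iter/exists methods).

def count_stale_members(client, conj_key, batch=1000):
    """Return (alive, dead) counts for the cache keys named in conj_key.

    SSCAN is used instead of SMEMBERS so that a set with ~70M members
    does not block the Redis server while we iterate.
    """
    alive = dead = 0
    for member in client.sscan_iter(conj_key, count=batch):
        if client.exists(member):
            alive += 1
        else:
            dead += 1
    return alive, dead

# Usage against a real server:
#   import redis
#   r = redis.Redis(decode_responses=True)
#   print(count_stale_members(r, "conj:live_live:author_id=userdata: (nil)&type=0&status=1"))
```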

@Suor Suor closed this as completed Nov 26, 2019
@Suor Suor reopened this Nov 26, 2019
Suor commented Nov 26, 2019

There is an unfinished but working PR (#323) with a command to reap those. You may try it.
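Until that PR lands, the dead members could also be pruned by hand. A sketch assuming redis-py, not cacheops' own `reapconjs` command:

```python
# Sketch: remove members of a conj set whose q:* cache keys have
# already expired. Works with a redis.Redis client or any object
# implementing sscan_iter/exists/srem.

def reap_conj(client, conj_key, batch=500):
    """Scan conj_key incrementally and SREM dead members in batches.

    Returns the number of members removed. SSCAN tolerates members
    being removed while the scan is in progress.
    """
    removed = 0
    stale = []
    for member in client.sscan_iter(conj_key, count=batch):
        if not client.exists(member):
            stale.append(member)
        if len(stale) >= batch:
            removed += client.srem(conj_key, *stale)
            stale = []
    if stale:
        removed += client.srem(conj_key, *stale)
    return removed
```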

@ambientlight

@Suor: curious, what does userdata: (nil) stand for? I am also seeing a bunch of conj:core_somemodel:id=userdata: (nil) keys; is this expected?


Suor commented May 19, 2020

This is an empty key.

Suor added a commit that referenced this issue Feb 25, 2023
The idea is that instead of saving all dependent cache keys in a conj set
we put a simple random stamp in those and store a checksum of all
related conj stamps along with the cache data.

This makes cache reads more complicated - `MGET` key + conj keys, then
validate the stamps checksum. However, we no longer need to store
potentially big conj sets, and invalidation becomes faster, including
model-level invalidation.

It also removes the strong link between conj and cache keys, i.e. loss of
conj keys no longer leads to a stale cache; instead we will simply drop
the key on the next read. This opens an easier way for maxmemory and cluster.

So:
- more friendly to `maxmemory`, even assumes that, see #143
- eliminates issues with big conj sets and long invalidation, see #340,
  #350, #444
- `reapconjs` is not needed with it, see #323, #434

Followups:
- docs
- remove `CACHEOPS_LRU` as it's superseded by this generally
- make insideout default or even drop the old ways?
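The stamp/checksum scheme the commit message describes can be sketched like this. All names here are illustrative, not cacheops' actual API; a plain dict stands in for Redis:

```python
import hashlib
import os

# Sketch of the "insideout" idea: each conj key holds a random stamp,
# and cached data stores a checksum of the stamps of every conj key it
# depends on. Losing a conj key just makes the checksum mismatch, so
# the cache entry is dropped on the next read instead of going stale.

store = {}  # stand-in for Redis: key -> value

def _stamp(conj_key):
    # Fetch-or-create the random stamp for a conj key.
    if conj_key not in store:
        store[conj_key] = os.urandom(8).hex()
    return store[conj_key]

def _checksum(conj_keys):
    h = hashlib.md5()
    for k in sorted(conj_keys):
        h.update(_stamp(k).encode())
    return h.hexdigest()

def cache_write(key, data, conj_keys):
    store[key] = (data, sorted(conj_keys), _checksum(conj_keys))

def cache_read(key):
    entry = store.get(key)
    if entry is None:
        return None
    data, conj_keys, saved = entry
    # If any dependent conj stamp changed (or was lost to eviction),
    # the checksum no longer matches and we drop the key.
    if _checksum(conj_keys) != saved:
        del store[key]
        return None
    return data

def invalidate(conj_key):
    # Invalidation is O(1): rotate the stamp, no big set to walk.
    store[conj_key] = os.urandom(8).hex()
```

Note how invalidation no longer touches the dependent cache keys at all, which is why the big conj sets from this issue disappear.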
Suor added a commit that referenced this issue Feb 25, 2023
The idea is that instead of saving all dependent cache keys in conj set
we put a simple random stamp in those and store a checksum of all
related conj stamps along with cache data.

This makes cache reads more complicated - `MGET` key + conj keys,
validate stamps checksum. However, we no longer need to store
potentially big conj sets and invalidation becomes faster, including
model level invalidation.

It also removes strong link between conj and cache keys, i.e. loss of
conj keys no longer leads to a stale cache, instead we will simply drop
the key on next read. This opens easier way for maxmemory and cluster.

So:
- more friendly to `maxmemory`, even assumes that, see #143
- eliminates issues with big conj sets and long invalidation, see #340,
- `reapconjs` is not needed with it, see #323, #434

Followups:
- docs
- remove `CACHEOPS_LRU` as it's superseeded by this generally
- make insideout default or even drop the old ways?
Suor added a commit that referenced this issue Feb 25, 2023
The idea is that instead of saving all dependent cache keys in conj set
we put a simple random stamp in those and store a checksum of all
related conj stamps along with cache data.

This makes cache reads more complicated - `MGET` key + conj keys,
validate stamps checksum. However, we no longer need to store
potentially big conj sets and invalidation becomes faster, including
model level invalidation.

It also removes strong link between conj and cache keys, i.e. loss of
conj keys no longer leads to a stale cache, instead we will simply drop
the key on next read. This opens easier way for maxmemory and cluster.

So:
- more friendly to `maxmemory`, even assumes that, see #143
- eliminates issues with big conj sets and long invalidation, see #340,
  #350, #444
- `reapconjs` is not needed with it, see #323, #434

Followups:
- docs
- remove `CACHEOPS_LRU` as it's superseeded by this generally
- make insideout default or even drop the old ways?
Suor commented Feb 25, 2023

Using CACHEOPS_INSIDEOUT = True is the blessed way to solve this now; see the "Using memory limit" section of the docs.
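A minimal Django settings fragment for that. The `CACHEOPS_REDIS` URL and the per-model rules below are illustrative, not a recommendation:

```python
# settings.py fragment: turn on the insideout scheme so Redis can be
# run with a maxmemory limit and an eviction policy without leaving
# stale cache entries behind. Values are illustrative.
CACHEOPS_REDIS = "redis://localhost:6379/1"
CACHEOPS_INSIDEOUT = True
CACHEOPS = {
    "live.*": {"ops": "all", "timeout": 60 * 15},
}
```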

@Suor Suor closed this as completed Feb 25, 2023