Set default backends path #924
Conversation
…arallelism + fix in onnx test to adjust parallelism.
Codecov Report
@@ Coverage Diff @@
## master #924 +/- ##
==========================================
+ Coverage 81.30% 81.77% +0.46%
==========================================
Files 54 54
Lines 8201 8214 +13
==========================================
+ Hits 6668 6717 +49
+ Misses 1533 1497 -36
…ome configs (COV + slaves)
size_t backends_default_path_len = strlen(dyn_lib_dir_name) + strlen("/backends");
*backends_path = RedisModule_Alloc(backends_default_path_len + 1);
RedisModule_Assert(sprintf(*backends_path, "%s/backends", dyn_lib_dir_name) > 0);
RedisModule_CreateStringPrintf?
Is that better? We don't use HoldString or anything like that here, so I can't see what advantage a RedisModuleString object would have...
src/config/config.c
@@ -150,7 +149,7 @@ int Config_SetIntraOperationParallelism(RedisModuleString *num_threads_string) {
 int Config_SetModelChunkSize(RedisModuleString *chunk_size_string) {
     long long val;
     int result = RedisModule_StringToLongLong(chunk_size_string, &val);
-    if (result != REDISMODULE_OK || val <= 0) {
+    if (result != REDISMODULE_OK || val <= 0 || val > REDISAI_DEFAULT_MODEL_CHUNK_SIZE) {
why the addition of this condition?
We need to limit the chunk_size, since Redis cannot reply with blobs that are larger than 512MB, right?
See redis/redis-doc#1653 (comment). You can configure the server to support bigger values.
@@ -28,6 +28,8 @@ typedef enum { RAI_DEVICE_CPU = 0, RAI_DEVICE_GPU = 1 } RAI_Device;
 #define REDISAI_INFOMSG_MODEL_EXECUTION_TIMEOUT "Setting MODEL_EXECUTION_TIMEOUT parameter to"
 #define REDISAI_INFOMSG_BACKEND_MEMORY_LIMIT "Setting BACKEND_MEMORY_LIMIT parameter to"

+#define REDISAI_DEFAULT_MODEL_CHUNK_SIZE (511 * 1024 * 1024)
why the addition?
To use this constant in Config_SetModelChunkSize
@@ -1424,6 +1429,8 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)

     RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);

+    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Shutdown, RAI_CleanupModule);
What is the minimal Redis version supporting this event? Does it align with the min_redis_version in our RAMP files?
Yes, this API is supported from version 6.0.0, which is the minimal version for RedisAI anyway.
@@ -567,7 +567,7 @@ def test_synchronization(self):

 def launch_redis_and_run_onnx(con, proc_id, pipes):
     my_pipe = pipes[proc_id]
-    port = 6380 + proc_id  # Let every subprocess run on a fresh port.
+    port = 6380 + 30*proc_id  # Let every subprocess run on a fresh port (safe distance for RLTest parallelism).
didn't you remove RLTest parallelism?
Yes... But I don't think that this addition is bad, as we might want to use the parallelism in the future. WDYT?
Set the backends' default path on module load, so that it can be retrieved with the AI.CONFIG GET command. Also, pass the module's path to the flow tests.