Hi, when I call the Lambda function repeatedly, the memory usage increases with each call.
This is how I'm debugging the code:

```js
console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));
```
The first time I call the function, I get this:
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
The second time I call the function, I get the following:
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}
It looks like the statement `const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))` is the one leaking: if you take a look at the "numTensors" property, it increases after each function call.
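As far as I understand, tf.tidy only disposes the intermediate tensors created inside its callback; the tensor returned from the callback is deliberately kept alive, so the caller has to dispose it explicitly once it is no longer needed. A minimal sketch of what I think that looks like (the handleImage wrapper and the predict step are made-up names, not my actual code):

```js
const tf = require("@tensorflow/tfjs-node");

async function imgToTensor(values, height, width) {
  // tf.tidy cleans up intermediates, but the value returned from the
  // callback is kept alive on purpose: the caller owns it.
  return tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
}

// Hypothetical caller: dispose the tensor once it has been consumed,
// otherwise numTensors keeps growing across invocations.
async function handleImage(values, height, width) {
  const tensor = await imgToTensor(values, height, width);
  try {
    // ...use the tensor here, e.g. model.predict(tensor)...
  } finally {
    tensor.dispose(); // releases the memory reported by tf.memory()
  }
}
```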
After 5 Lambda executions my Lambda fails with `Error: Runtime exited with error: signal: killed`.
Is there a way to clean up the resources from the previous Lambda invocation?
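One thing I'm considering (I'm not sure it is the recommended approach) is wrapping each invocation in an explicit engine scope, so that tensors allocated during that invocation and not explicitly kept are released when the scope ends. Rough sketch, where handler and runInference are hypothetical names for my Lambda entry point and per-request work:

```js
const tf = require("@tensorflow/tfjs-node");

// Hypothetical Lambda handler: tensors allocated between startScope() and
// endScope() that are not kept are disposed when the scope closes.
exports.handler = async (event) => {
  tf.engine().startScope();
  try {
    return await runInference(event); // hypothetical per-invocation work
  } finally {
    tf.engine().endScope();
    console.log("after invocation: " + JSON.stringify(tf.memory()));
  }
};
```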
Thanks!
Sure!
I'm still investigating another issue.
Jimp is not properly releasing memory, so on each Lambda execution the total memory consumed grows by 10 MB.
This is the statement causing the issue: `const image = await Jimp.read(imgBuffer)`
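For now my workaround attempt is to keep the Jimp object strictly local to the call and copy out only the raw pixel data, in the hope that the decoder's buffers can be garbage-collected between invocations. This is only a sketch (the decodeToPixels helper is a made-up name), not a confirmed fix:

```js
const Jimp = require("jimp");

// Hypothetical helper: decode the image, copy out the raw RGBA pixels,
// and let the Jimp object go out of scope right after the call.
async function decodeToPixels(imgBuffer) {
  const image = await Jimp.read(imgBuffer);
  const { width, height, data } = image.bitmap; // data is a Node Buffer (RGBA)
  return { width, height, values: Uint8Array.from(data) }; // copy, do not retain `image`
}
```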