@p-robak97 The TensorFlow version in the inference server is only used to build the protofiles that allow communication between the application container and the inference server. You can find the TensorFlow version that is actually in the firmware by looking at the SBOM file that comes with an AXIS OS download. More details here.
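As a minimal sketch of reading that SBOM, assuming it is shipped as SPDX JSON (the file name axis-os-sbom.spdx.json and the SPDX field names are assumptions; adjust to the actual file in the download):

```python
# Hedged sketch: list TensorFlow-related packages from an SPDX JSON SBOM.
# The file name and the SPDX layout ("packages", "name", "versionInfo")
# are assumptions about how the AXIS OS SBOM is shipped.
import json

with open("axis-os-sbom.spdx.json") as f:
    sbom = json.load(f)

for pkg in sbom.get("packages", []):
    if "tensorflow" in pkg.get("name", "").lower():
        print(pkg["name"], pkg.get("versionInfo", "unknown"))
```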
I have successfully exported a YOLOv10 model to a fully int8-quantized TFLite model using ai-edge-torch. I tried to run inference using a modified pose-estimator-with-flask, but I get an error from acap-runtime. In journalctl I then found that the problem seems to be in the CONCATENATION op. I checked locally using the TF Lite Interpreter and the model works fine with tensorflow>=2.14, but I get the same error with older versions of TensorFlow. The error from larod suggests the TensorFlow Lite version there is 2.10.1. I tried to build my own acap-runtime image with TensorFlow 2.14, but this does not resolve my problem: I am still getting the same error.
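For reference, this is roughly the local check, as a minimal sketch (the model path yolov10_int8.tflite and the zero-filled input are placeholders):

```python
# Minimal sketch of the local check: load the int8 TFLite model and run one
# inference with tf.lite.Interpreter. "yolov10_int8.tflite" is a placeholder
# for the exported model file; adjust to your own path.
import numpy as np
import tensorflow as tf

print("TensorFlow:", tf.__version__)  # passes for me with >= 2.14

interpreter = tf.lite.Interpreter(model_path="yolov10_int8.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
# A fully int8-quantized model typically declares an int8 input tensor,
# so build the dummy input from the declared shape and dtype.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```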
Is it possible to upgrade TensorFlow Lite to >=2.14?
Also, I don't understand why the error suggests that the TensorFlow Lite version is 2.10.1, when all the Docker containers have 2.9.
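One way to inspect which ops the exported model actually contains (e.g. to confirm the CONCATENATION op the error points at) is TensorFlow's TFLite model analyzer; a minimal sketch, again with the model file name as a placeholder:

```python
# Minimal sketch: print the op-level structure of the exported TFLite model
# with tf.lite.experimental.Analyzer (available in recent TF releases).
# "yolov10_int8.tflite" is a placeholder for the exported model file.
import tensorflow as tf

tf.lite.experimental.Analyzer.analyze(model_path="yolov10_int8.tflite")
```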
Environment