
tflite to onnx conversion: parse_tflite_graph list index out of range error #2275

Closed
suyash-narain opened this issue Dec 13, 2023 · 1 comment · Fixed by #2278
Labels
- pending on user response: Waiting for more information or validation from user
- question: An issue, pull request, or discussion needs more information

Comments


suyash-narain commented Dec 13, 2023

I am using audio tflite models sourced from https://chromium.googlesource.com/chromiumos/platform2/+/main/ml_benchmark/model_zoo/README.md#audio-models

These consist of the lstm.tflite, seanet_wave.tflite, and seanet_stft.tflite models. I am trying to convert them to ONNX format using tf2onnx. lstm.tflite converts to ONNX without issue, but when converting seanet_wave.tflite or seanet_stft.tflite I get a parse_tflite_graph list index out of range error.

Log below:

```
python3 -m tf2onnx.convert --tflite seanet_stft.tflite --output seanet_stft.onnx --opset 18 --verbose --debug
2023-12-13 22:38:12.411742: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-12-13 22:38:12.413072: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-13 22:38:12.442503: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-13 22:38:12.442825: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-13 22:38:13.007368: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
:128: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
2023-12-13 22:38:14,081 - WARNING - tf2onnx: IMPORTANT Installed protobuf is not cpp accelerated. Conversion will be extremely slow. See #1557
2023-12-13 22:38:14,081 - INFO - tf2onnx: inputs: None
2023-12-13 22:38:14,081 - INFO - tf2onnx: outputs: None
2023-12-13 22:38:14,083 - INFO - tf2onnx.tfonnx: Using tensorflow=2.12.0, onnx=1.15.0, tf2onnx=1.15.1/37820d
2023-12-13 22:38:14,083 - INFO - tf2onnx.tfonnx: Using opset <onnx, 18>
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/convert.py", line 714, in <module>
    main()
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/convert.py", line 273, in main
    model_proto, _ = _convert_common(
                     ^^^^^^^^^^^^^^^^
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/convert.py", line 168, in _convert_common
    g = process_tf_graph(tf_graph, const_node_values=const_node_values,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/tfonnx.py", line 453, in process_tf_graph
    main_g, subgraphs = graphs_from_tflite(tflite_path, input_names, output_names)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/tflite_utils.py", line 153, in graphs_from_tflite
    parse_tflite_graph(tfl_graph, opcodes, model, prefix, tensor_shapes_from_interpreter)
  File "/home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.5/lib/python3.11/site-packages/tf2onnx/tflite_utils.py", line 485, in parse_tflite_graph
    onnx_node = helper.make_node(optype, input_names, output_names, name=output_names[0], **attr)
                                                                         ~~~~~~~~~~~~^^^
IndexError: list index out of range
```

I am using opset 18; I get the same error with any other opset as well.
I referred to issue #2055, which is similar to what I am seeing, but it did not yield a solution. The model I am using also has one CallOnce op, similar to that issue. Is there a way to convert such tflite models?
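For reference, the crash in the traceback can be reproduced in isolation: in tflite_utils.py, `name=output_names[0]` is evaluated before `helper.make_node` is even called, so any op that produces no output tensors (such as CallOnce) makes the converter index into an empty list. A minimal sketch, using a stand-in for `onnx.helper.make_node` so no tf2onnx install is needed:

```python
def make_node(optype, inputs, outputs, name, **attr):
    # Stand-in for onnx.helper.make_node; only the call shape matters here.
    return {"op": optype, "inputs": inputs, "outputs": outputs, "name": name}

# A TFLite op such as CallOnce has no output tensors, so the converter's
# output_names list is empty for that node.
output_names = []

try:
    # Mirrors tflite_utils.py line 485: output_names[0] is evaluated as an
    # argument first, and indexing an empty list raises before make_node runs.
    make_node("CALL_ONCE", [], output_names, name=output_names[0])
except IndexError as exc:
    print(f"IndexError: {exc}")
```

This matches the last two frames of the traceback above: the IndexError comes from the `name=output_names[0]` argument, not from `make_node` itself.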

thanks

@suyash-narain suyash-narain added the question An issue, pull request, or discussion needs more information label Dec 13, 2023
Collaborator

fatcat-z commented Dec 14, 2023

No, this is still blocked.

I can prepare a quick fix for this issue, but the conversion will still fail because of #2059. The root cause of #2059 is that those training ops cannot be converted to any ONNX ops.
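A quick fix along these lines (a hypothetical sketch only, not the actual patch merged in #2278) would sidestep the IndexError by giving zero-output ops a generated node name, though as noted the conversion would still fail afterwards on the unconvertible training ops:

```python
def node_name_for(optype, output_names, op_index):
    # Usual case: name the ONNX node after its first output tensor.
    if output_names:
        return output_names[0]
    # Fallback for ops with no output tensors (e.g. CallOnce):
    # synthesize a unique name from the op type and its position.
    return f"{optype}_{op_index}"

print(node_name_for("CALL_ONCE", [], 3))    # → CALL_ONCE_3
print(node_name_for("ADD", ["out0"], 4))    # → out0
```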
