Handle [] inputs #108

Closed · Tracked by #84 · Fixed by #116

justinchuby opened this issue Jul 24, 2024 · 0 comments
@justinchuby (Owner) commented:
PyTorch ONNX Conversion Error Report

✅ Obtain model graph with `torch.export.export`
✅ Translate the graph into ONNX
❌ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy

Error message

Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 1052, in export
    _isolated.safe_call(
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_isolated.py", line 53, in safe_call
    raise result
onnx.onnx_cpp2py_export.checker.ValidationError: Node (node_Cast_0) has input size 0 not in range [min=1, max=1].
==> Context: Bad node spec for node. Name: node_Cast_0 OpType: Cast
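
The error comes from the checker's schema validation: Cast requires exactly one input, but the exported graph below contains a Cast node with none. A minimal sketch (names mirror the report but are otherwise illustrative) that rebuilds the offending graph with onnx.helper and triggers the same ValidationError:

    import onnx
    from onnx import TensorProto, helper

    # Cast with zero inputs, as emitted in the exported graph; to=7 is TensorProto.INT64.
    cast = helper.make_node(
        "Cast", inputs=[], outputs=["val_0"], name="node_Cast_0", to=TensorProto.INT64
    )
    reshape = helper.make_node(
        "Reshape", inputs=["arg0_1", "val_0"], outputs=["view"], name="node_Reshape_1", allowzero=0
    )
    graph = helper.make_graph(
        [cast, reshape],
        "main_graph",
        inputs=[helper.make_tensor_value_info("arg0_1", TensorProto.INT64, [])],
        outputs=[helper.make_tensor_value_info("view", TensorProto.INT64, [])],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
    # Raises: Node (node_Cast_0) has input size 0 not in range [min=1, max=1].
    onnx.checker.check_model(model)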

Exported program

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "i64[]"):
            # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/fx_consistency_test.py:1662 in forward, code: return self.operator(*args, **self.kwargs)
            view: "i64[]" = torch.ops.aten.view.default(arg0_1, []);  arg0_1 = None
            return (view,)
            
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='view'), target=None)])
Range constraints: {}
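
A small repro sketch (the module and input here are assumptions, reconstructed from the ExportedProgram above) that produces this graph via torch.export.export:

    import torch

    class Model(torch.nn.Module):
        def forward(self, x):
            # Viewing a 0-d tensor with an empty target shape keeps it 0-d,
            # but the exporter still has to materialize the [] shape argument.
            return x.view([])

    exported = torch.export.export(Model(), (torch.tensor(1, dtype=torch.int64),))
    print(exported)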

ONNX model:

<
    ir_version=9,
    opset_imports={'': 18, 'pkg.onnxscript.torch_lib.common': 1},
    producer_name='torch',
    producer_version='2.3.1',
    domain=None,
    model_version=None,
>
graph(
    name=main_graph,
    inputs=(
        %"arg0_1"<INT64,[]>
    ),
    outputs=(
        %"view"<INT64,[]>
    ),
) {
    0 |  # node_Cast_0
         %"val_0"<?,?> ⬅️ ::Cast() {to=7}
    1 |  # node_Reshape_1
         %"view"<INT64,[]> ⬅️ ::Reshape(%"arg0_1", %"val_0") {allowzero=0}
    return %"view"<INT64,[]>
}
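
One possible way to represent the empty target shape (a sketch only; not necessarily the approach taken in #116) is to emit it as an empty INT64 Constant and feed that to Reshape, so no node is left with zero inputs:

    import onnx
    from onnx import TensorProto, helper

    # Constant legitimately takes no inputs; it carries the empty shape as an attribute.
    shape_const = helper.make_node(
        "Constant",
        inputs=[],
        outputs=["val_0"],
        value=helper.make_tensor("empty_shape", TensorProto.INT64, dims=[0], vals=[]),
    )
    reshape = helper.make_node(
        "Reshape", inputs=["arg0_1", "val_0"], outputs=["view"], allowzero=0
    )
    graph = helper.make_graph(
        [shape_const, reshape],
        "main_graph",
        inputs=[helper.make_tensor_value_info("arg0_1", TensorProto.INT64, [])],
        outputs=[helper.make_tensor_value_info("view", TensorProto.INT64, [])],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
    onnx.checker.check_model(model)  # passes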

Analysis

PyTorch ONNX Conversion Analysis

Model Information

The model has 0 parameters and 0 buffers (non-trainable parameters).
Number of parameters per dtype:

defaultdict(<class 'int'>, {})

Number of buffers per dtype:

defaultdict(<class 'int'>, {})

Inputs:

  • arg0_1: TensorMetadata(shape=torch.Size([]), dtype=torch.int64, requires_grad=False, stride=(), memory_format=torch.contiguous_format, is_quantized=False, qparams={})

Outputs:

  • view: TensorMetadata(shape=torch.Size([]), dtype=torch.int64, requires_grad=False, stride=(), memory_format=torch.contiguous_format, is_quantized=False, qparams={})

The FX graph has 3 nodes in total. Number of FX nodes per op:

  • placeholder: 1
  • call_function: 1
  • output: 1

Of the call_function nodes, the counts of operators used are:

  • aten.view.default: 1

ONNX Conversion Information

All operators in the model have registered ONNX decompositions.
