Description
Context
Currently, `aten.convolution` converters do not support `ITensor` biases, which can cause test failures in CI (example), since the new Dynamo compile path primarily uses `ITensor` objects for general tensors throughout computation.
TensorRT/py/torch_tensorrt/fx/converters/acc_ops_converters.py
Lines 202 to 208 in d3a47c4
```python
# and bias being ITensor is not supported in TensorRT api
# right now
if kwargs["bias"] is not None and not isinstance(kwargs["bias"], torch.Tensor):
    raise RuntimeError(
        f"linear {name} has bias of type {type(kwargs['bias'])}, Expect Optional[Tensor]"
    )
bias = to_numpy(kwargs["bias"])  # type: ignore[arg-type]
```
Proposed Solution
Allow `ITensor` biases for `aten.convolution` ops in the same way that the kernel weights can already be `ITensor` objects. See `IConvolutionLayer` for further information on TensorRT convolution layers.
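One possible shape for the fix is a dispatch on the bias type: a static (numpy/torch) bias keeps the current fused path through the convolution layer, while an `ITensor` bias leaves the convolution's bias empty and appends an elementwise SUM afterwards, mirroring how `ITensor` kernel weights are handled. The sketch below is a minimal, hypothetical illustration of that dispatch logic only; `FakeITensor` and `add_convolution_bias` are stand-in names, not part of the TensorRT or Torch-TensorRT API, and the real converter would emit `network.add_elementwise(...)` instead of returning a string.

```python
import numpy as np


class FakeITensor:
    """Stand-in for trt.ITensor: a tensor whose values are only known at runtime."""

    def __init__(self, shape):
        self.shape = shape


def add_convolution_bias(bias):
    """Hypothetical dispatch for the proposed fix (sketch, not the real converter).

    - ITensor bias: leave the conv layer's bias weights empty, reshape the bias
      to (1, C, 1, 1) so it broadcasts over N, H, W, and append an elementwise
      SUM layer after the convolution.
    - numpy bias (or None): fold it into IConvolutionLayer directly, as today.
    """
    if isinstance(bias, FakeITensor):
        # Runtime bias: handled by an explicit add after the convolution.
        return "elementwise_sum"
    elif bias is None or isinstance(bias, np.ndarray):
        # Static bias (or no bias): fused into the convolution layer itself.
        return "fused_conv_bias"
    raise RuntimeError(f"Unsupported bias type: {type(bias)}")
```

The key design point is that the elementwise path keeps the converter fully general (any bias the Dynamo path produces as an `ITensor` still lowers), while the fused path preserves the existing behavior for constant biases.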