
Questions about Conversion of torch model to tensorRT model #17

huangshilong911 opened this issue May 6, 2024 · 1 comment

@huangshilong911

Hi, the following error occurs when converting a torch model to a TensorRT model: TypeError: forward() missing 1 required positional argument: 'multimask_output'.
I trained the model exactly as described in the README.
Could you please help me out with this?

(sam0) jetson@ubuntu:~/Workspace/aicam/ircamera$ python3 convert-sam-trtpth.py
Traceback (most recent call last):
File "convert-sam-trtpth.py", line 14, in
model_trt = torch2trt(model, [x], fp16_mode=True)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8.egg/torch2trt/torch2trt.py", line 558, in torch2trt
outputs = module(*inputs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
TypeError: forward() missing 1 required positional argument: 'multimask_output'
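The traceback shows the root cause: torch2trt calls module(*inputs) with only the tensors passed in the inputs list (torch2trt.py, line 558), so a model whose forward() requires an additional positional argument such as multimask_output fails before conversion even starts. A minimal workaround sketch, assuming a small wrapper module that fixes the flag to a constant; the model name, input shape, and wrapper class below are illustrative, not part of this repository's script:

```python
import torch
from torch2trt import torch2trt


class MultimaskWrapper(torch.nn.Module):
    """Fixes multimask_output to a constant so torch2trt only sees tensor inputs."""

    def __init__(self, model, multimask_output=False):
        super().__init__()
        self.model = model
        self.multimask_output = multimask_output

    def forward(self, *inputs):
        # Forward the tensor inputs and supply the missing flag ourselves.
        return self.model(*inputs, multimask_output=self.multimask_output)


# Hypothetical usage, mirroring the failing call in convert-sam-trtpth.py:
# model = ...  # the trained model, loaded as in the README
# x = torch.ones((1, 3, 1024, 1024)).cuda()          # example input shape (assumed)
# wrapped = MultimaskWrapper(model).eval().cuda()
# model_trt = torch2trt(wrapped, [x], fp16_mode=True)
```

Note that this only removes the missing-argument error; if the model's forward() also expects non-tensor inputs (as the original SAM does), those parts still cannot be traced by torch2trt and would need to be converted piecewise.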

@huangshilong911 closed this as not planned on May 11, 2024
@huangshilong911 (Author)

I describe the problem in more detail at this link:

NVIDIA-AI-IOT/torch2trt#926 (comment)
