small fix: Index validator enable int64 #2642
Conversation
- Repair test case
1b13159 to 5501ace
@@ -43,7 +43,7 @@ def forward(self, x: torch.Tensor, y: torch.Tensor):
 # For the default settings, we can simply call torch.compile
 # with the backend "torch_tensorrt", and run the model on an
 # input to cause compilation, as so:
-optimized_model = torch.compile(model, backend="torch_tensorrt")
+optimized_model = torch.compile(model, backend="torch_tensorrt", dynamic=False)
Any reason why you explicitly set dynamic=False instead of None here?
@@ -61,6 +61,7 @@
 optimized_model = torch.compile(
     model,
     backend="torch_tensorrt",
+    dynamic=False,
^ same question here
@peri044 - when going through the torch_tensorrt API, this is handled in TensorRT/py/torch_tensorrt/_compile.py, lines 251 to 254 (at e38a7f3). When users call torch.compile directly, however, they need to specify dynamic=False to avoid errors.
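For illustration, here is a minimal sketch of the direct torch.compile path described above, where dynamic=False has to be passed explicitly. The toy module, tensor shapes, and variable names are assumptions for the example, not part of this PR.

import torch

class Toy(torch.nn.Module):  # hypothetical module, for illustration only
    def forward(self, x: torch.Tensor, y: torch.Tensor):
        return x + y

model = Toy().eval().cuda()
x = torch.randn(4, 8).cuda()
y = torch.randn(4, 8).cuda()

# Calling torch.compile directly: dynamic=False is specified explicitly,
# matching the change shown in the diffs above.
optimized_model = torch.compile(model, backend="torch_tensorrt", dynamic=False)
out = optimized_model(x, y)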
Description
As noted in Refactor truncate_long_and_double in Dynamo #2590, constant index Tensors are of type int64 before being frozen. This workaround enables valid use cases while Refactor truncate_long_and_double in Dynamo #2590 is being implemented.
Type of change
Checklist: