
Commit 36823e8

fix: Update file and reference naming for new API

1 parent 9a91fb8 commit 36823e8

5 files changed: +30 −24 lines

docsrc/index.rst (+3 −3)
@@ -70,9 +70,9 @@ Tutorials

    tutorials/serving_torch_tensorrt_with_triton
    tutorials/notebooks
-   tutorials/_rendered_examples/dynamo/dynamo_compile_resnet_example
-   tutorials/_rendered_examples/dynamo/dynamo_compile_transformers_example
-   tutorials/_rendered_examples/dynamo/dynamo_compile_advanced_usage
+   tutorials/_rendered_examples/dynamo/torch_compile_resnet_example
+   tutorials/_rendered_examples/dynamo/torch_compile_transformers_example
+   tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage

 Python API Documenation
 ------------------------

examples/dynamo/README.rst (+3 −3)
@@ -1,11 +1,11 @@
-.. _dynamo_compile:
+.. _torch_compile:

 Dynamo / ``torch.compile``
 ----------------------------

 Torch-TensorRT provides a backend for the new ``torch.compile`` API released in PyTorch 2.0. In the following examples we describe
 a number of ways you can leverage this backend to accelerate inference.

-* :ref:`dynamo_compile_resnet`: Compiling a ResNet model using the Dynamo Compile Frontend for ``torch_tensorrt.compile``
+* :ref:`torch_compile_resnet`: Compiling a ResNet model using the Torch Compile Frontend for ``torch_tensorrt.compile``
 * :ref:`torch_compile_transformer`: Compiling a Transformer model using ``torch.compile``
-* :ref:`dynamo_compile_advanced_usage`: Advanced usage including making a custom backend to use directly with the ``torch.compile`` API
+* :ref:`torch_compile_advanced_usage`: Advanced usage including making a custom backend to use directly with the ``torch.compile`` API
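For orientation, the backend this README describes is invoked through the standard `torch.compile` API. A minimal sketch of that usage follows; the toy model and input shape are illustrative, not part of this commit:

    import torch
    import torch_tensorrt  # importing registers the "torch_tensorrt" backend

    # Hypothetical toy model; any nn.Module works the same way
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3),
        torch.nn.ReLU(),
    ).eval().cuda()
    inputs = [torch.randn(1, 3, 224, 224).cuda()]

    # Compile via the torch.compile API with the Torch-TensorRT backend
    optimized_model = torch.compile(model, backend="torch_tensorrt")
    optimized_model(*inputs)  # compilation happens on the first call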

examples/dynamo/dynamo_compile_advanced_usage.py renamed to examples/dynamo/torch_compile_advanced_usage.py (+3 −2)
@@ -1,7 +1,7 @@
 """
-.. _dynamo_compile_advanced_usage:
+.. _torch_compile_advanced_usage:

-Dynamo Compile Advanced Usage
+Torch Compile Advanced Usage
 ======================================================

 This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
@@ -11,6 +11,7 @@
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 import torch
+import torch_tensorrt

 # %%

examples/dynamo/dynamo_compile_resnet_example.py renamed to examples/dynamo/torch_compile_resnet_example.py (+5 −5)
@@ -1,10 +1,10 @@
 """
-.. _dynamo_compile_resnet:
+.. _torch_compile_resnet:

-Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
+Compiling ResNet using the Torch-TensorRT Dynamo Backend
 ==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.compile` workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""

 # %%
 # Imports and Model Definition
@@ -57,8 +57,8 @@
 )

 # %%
-# Equivalently, we could have run the above via the convenience frontend, as so:
-# `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`
+# Equivalently, we could have run the above via the torch.compile frontend, as so:
+# `optimized_model = torch.compile(model, backend="torch_tensorrt", options={"enabled_precisions": enabled_precisions, ...}); optimized_model(*inputs)`

 # %%
 # Inference
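Expanded from the one-line comment rewritten above, a minimal sketch of that invocation in practice; the ResNet setup and precision choice are illustrative, not taken from this commit:

    import torch
    import torch_tensorrt
    import torchvision.models as models

    model = models.resnet18(pretrained=True).half().eval().to("cuda")
    inputs = [torch.randn((1, 3, 224, 224)).to("cuda").half()]
    enabled_precisions = {torch.half}  # run layers in FP16 where supported

    # The one-liner from the updated comment, written out
    optimized_model = torch.compile(
        model,
        backend="torch_tensorrt",
        options={"enabled_precisions": enabled_precisions},
    )
    optimized_model(*inputs)  # first call triggers compilation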

examples/dynamo/dynamo_compile_transformers_example.py renamed to examples/dynamo/torch_compile_transformers_example.py (+16 −11)
@@ -4,7 +4,7 @@
 Compiling a Transformer using torch.compile and TensorRT
 ==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.compile` workflow with `torch.compile` on a transformer-based model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a transformer-based model."""

 # %%
 # Imports and Model Definition
@@ -45,24 +45,29 @@
 torch_executed_ops = {}

 # %%
-# Compilation with `torch_tensorrt.compile`
+# Compilation with `torch.compile`
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+# Define backend compilation keyword arguments
+compilation_kwargs = {
+    "enabled_precisions": enabled_precisions,
+    "debug": debug,
+    "workspace_size": workspace_size,
+    "min_block_size": min_block_size,
+    "torch_executed_ops": torch_executed_ops,
+}
+
 # Build and compile the model with torch.compile, using Torch-TensorRT backend
-optimized_model = torch_tensorrt.compile(
+optimized_model = torch.compile(
     model,
-    ir="torch_compile",
-    inputs=inputs,
-    enabled_precisions=enabled_precisions,
-    debug=debug,
-    workspace_size=workspace_size,
-    min_block_size=min_block_size,
-    torch_executed_ops=torch_executed_ops,
+    backend="torch_tensorrt",
+    options=compilation_kwargs,
 )
+optimized_model(*inputs)

 # %%
 # Equivalently, we could have run the above via the convenience frontend, as so:
-# `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`
+# `torch_tensorrt.compile(model, ir="torch_compile", inputs=inputs, **compilation_kwargs)`

 # %%
 # Inference
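To make the equivalence stated in the final comment concrete, a sketch of both paths side by side; the BERT-style model, input shapes, and settings are illustrative, and the convenience-frontend call is assumed equivalent per the comment in the diff:

    import torch
    import torch_tensorrt
    from transformers import BertModel  # illustrative model choice

    model = BertModel.from_pretrained("bert-base-uncased").eval().to("cuda")
    inputs = [
        torch.randint(0, 2, (1, 14), dtype=torch.int32).to("cuda"),  # input_ids
        torch.randint(0, 2, (1, 14), dtype=torch.int32).to("cuda"),  # attention_mask
    ]

    compilation_kwargs = {
        "enabled_precisions": {torch.float},
        "debug": False,
        "workspace_size": 20 << 30,
        "min_block_size": 3,
        "torch_executed_ops": {},
    }

    # Path 1: torch.compile with the Torch-TensorRT backend, as in the diff above
    optimized_model = torch.compile(model, backend="torch_tensorrt", options=compilation_kwargs)
    optimized_model(*inputs)

    # Path 2: the convenience frontend referenced in the final comment,
    # with the kwargs unpacked rather than passed as an options dict
    trt_model = torch_tensorrt.compile(model, ir="torch_compile", inputs=inputs, **compilation_kwargs)
    trt_model(*inputs)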
