The goal of converter tests is to test individual converters against specific subgraphs.

Module tests are designed to test the compiler against common network architectures and verify the integration of converters together into a single engine.

In addition to the above, we have lowering tests (`//core/lowering`), which test the functionality of lowering passes, and partitioning tests (`//core/partitioning`), which test different cases of torch fallback on test networks; these groups can also be run on their own, as shown below.
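
For example, you can target just the lowering or partitioning tests using Bazel's recursive target pattern. The package paths below are an assumption based on the labels mentioned above and may differ in your checkout:

```
bazel test //core/lowering/... --test_output=errors
bazel test //core/partitioning/... --test_output=errors
```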

You can run the whole test suite with Bazel, but be aware that you may exhaust GPU memory (this may surface as a cuDNN initialization error) if you run the tests naively, so you may need to limit the number of concurrent tests. Also, because test inputs are random, it may make sense to run the tests a few times.

Here are some settings that we usually test with:
```
bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=4 --runs_per_test=5
```

`--runs_per_test` is optional and can be used to check whether numerical issues in the outputs persist across multiple runs.

`--jobs=4` is useful and is sometimes required to prevent too many concurrent processes from using GPU memory and causing CUDA out-of-memory errors.
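
If you still hit out-of-memory failures, these settings can be tightened further. A sketch with illustrative values (tune them for your GPU):

```
bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=1 --runs_per_test=3
```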

### Testing using a pre-built TRTorch library

Currently, the default strategy when we run all the tests (`bazel test //tests`) is to build the testing scripts along with the full TRTorch library (`libtrtorch.so`) from scratch. This can increase testing time and might not be necessary in case you already have a pre-built TRTorch library that you want to link against.

In order to **not** build the entire TRTorch library and only build the test scripts, use the following command:
```
bazel test //tests --compilation_mode=dbg --test_output=summary --define trtorch_src=pre_built --jobs 2
```

The flag `--define trtorch_src=pre_built` signals Bazel to use the pre-compiled library as an external dependency for the tests. The pre-compiled library path is defined as a `local_repository` rule in the root `WORKSPACE` file (https://github.com/NVIDIA/TRTorch/blob/master/WORKSPACE):
```
# External dependency for trtorch if you already have precompiled binaries.
# This is currently used in pytorch NGC container CI testing.
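# NOTE: the rule below is a sketch to illustrate the shape of the
# local_repository entry; the path is a placeholder, so point it at
# your own pre-built TRTorch installation.
local_repository(
    name = "trtorch",
    path = "/path/to/your/prebuilt/trtorch",
)
```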