This is an autogenerated package constructed using BinaryBuilder.jl.
The originating build_tarballs.jl script can be found on Yggdrasil, the community build tree.
If you have any issues, please report them to the Yggdrasil bug tracker.
For more details about JLL packages and how to use them, see the BinaryBuilder.jl documentation.
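As a quick orientation, the sketch below shows the usual way to install and load a JLL package from Julia; is_available() is the standard JLLWrappers helper that reports whether a prebuilt artifact matches the host platform (the helper name is assumed to follow the usual JLL conventions, not something specific to this package).

```julia
using Pkg

# JLL packages are installed like any other registered Julia package; the
# binary artifact matching the host platform is downloaded automatically.
Pkg.add("TensorRT_jll")

using TensorRT_jll

# Standard JLLWrappers helper: reports whether a prebuilt artifact exists for
# this platform (this package only ships aarch64 Linux + CUDA builds).
@show TensorRT_jll.is_available()
```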
The tarballs for TensorRT_jll.jl have been built from these sources:

- compressed archive: https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.5.0/tars/TensorRT-10.5.0.18.Ubuntu-24.04.aarch64-gnu.cuda-12.6.tar.gz (SHA256 checksum: c306bb5d01e496fc20728d3a1a30731f6f1a7c33f92a2726ff2cc8e110906683)
TensorRT_jll.jl is available for the following platforms:

- Linux aarch64 {cuda=12.0, cuda_platform=sbsa, libc=glibc} (aarch64-linux-gnu-cuda_platform+sbsa-cuda+12.0)
The following JLL packages are required by TensorRT_jll.jl:
The code bindings within this package are autogenerated from the following Products (a short usage sketch follows the list):

- LibraryProduct: libnvinfer
- LibraryProduct: libnvinfer_builder_resource
- LibraryProduct: libnvinfer_dispatch
- LibraryProduct: libnvinfer_lean
- LibraryProduct: libnvinfer_plugin
- LibraryProduct: libnvinfer_vc_plugin
- LibraryProduct: libnvonnxparser
- ExecutableProduct: trtexec
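To illustrate how these products surface in Julia, here is a minimal sketch assuming the standard JLLWrappers conventions and that the artifact is available on the host platform: each LibraryProduct becomes a constant holding the library name/path (usable with Libdl or as the library argument of ccall), and the ExecutableProduct trtexec becomes a function returning a Cmd with its environment prepared. The getInferLibVersion call assumes the usual TensorRT C entry point is exported by libnvinfer.

```julia
using TensorRT_jll
using Libdl

# LibraryProduct bindings: constants such as `libnvinfer` hold the library
# name/path and can be passed to Libdl.dlopen or used directly in ccall.
handle = dlopen(TensorRT_jll.libnvinfer)

# Assumption: getInferLibVersion() is the TensorRT C API entry point that
# returns the linked library version encoded as an Int32.
version = ccall((:getInferLibVersion, TensorRT_jll.libnvinfer), Int32, ())
println("TensorRT library version: ", version)

# ExecutableProduct binding: `trtexec()` returns a Cmd with PATH and the
# library search path adjusted (JLLWrappers convention), ready to `run`.
run(`$(TensorRT_jll.trtexec()) --help`)
```

Calling the wrapper function rather than invoking a raw binary path is the usual design choice for JLL executables, since the returned Cmd already carries the environment needed to locate the bundled shared libraries.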