❓ [Question] How do I load a Torch-TensorRT model on multiple GPUs? #2319
Labels: `bug: triaged [verified]` (We can replicate the bug), `component: runtime`, `question` (Further information is requested)
❓ Question
In TorchServe, we have the concept of workers; on a multi-GPU node, each GPU can be assigned to a worker.
I am noticing that the TensorRT model gets loaded on GPU 0 even though we specify the correct GPU ID
for each worker:
torch.jit.load(model_pt_path, map_location=self.device)
How do we load a TensorRT model on a device whose ID is not 0?
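One possible workaround, assuming the embedded TensorRT engine deserializes on the *current* CUDA device (so `map_location` alone may not move it off GPU 0): make the worker's assigned GPU the current device before calling `torch.jit.load`. This is a hedged sketch, not a confirmed fix; `load_on_device` and `device_string` are illustrative helper names.

```python
def device_string(gpu_id: int) -> str:
    """Build the map_location string for a worker's assigned GPU id."""
    return f"cuda:{gpu_id}"

def load_on_device(model_pt_path: str, gpu_id: int):
    """Load a serialized Torch-TensorRT module onto a specific GPU.

    Assumption: the TensorRT engine inside the TorchScript module is
    rebuilt on the current CUDA device at load time, so we switch the
    current device *before* deserialization instead of relying only
    on map_location.
    """
    import torch
    torch.cuda.set_device(gpu_id)  # make cuda:<gpu_id> the current device
    return torch.jit.load(model_pt_path, map_location=device_string(gpu_id))
```

In a TorchServe handler this would be called once per worker during `initialize`, passing the GPU ID the frontend assigned to that worker.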
What you have already tried
I have tried loading a plain TorchScript model; that one loads correctly on all 4 GPUs, using

torch.jit.load(model_pt_path, map_location=self.device)

to load the same model on each of the 4 GPUs.
Environment
How you installed PyTorch (conda, pip, libtorch, source): pip

Additional context