ONNX export Error: RuntimeError: _Map_base::at #37

Open
johnyang-nv opened this issue May 14, 2024 · 2 comments

Comments

@johnyang-nv

I have tried exporting the ONNX file of FB-OCC, but I hit the following error during tracing of the custom op QuickCumsumCuda when calling torch.onnx.export, even though feed-forward inference of the model runs without any issue:

File "/FB-BEV/mmdet3d/ops/bev_pool_v2/bev_pool.py", line 102, in forward_dummy
    x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
RuntimeError: _Map_base::at

This is how the error can be reproduced:

  1. I isolated the custom op QuickCumsumCuda in a separate class, as shown in the following, for ease of reproducibility:
import torch

# Import path taken from the traceback above.
from mmdet3d.ops.bev_pool_v2.bev_pool import QuickCumsumCuda

class Bev_Pool_v2(torch.nn.Module):
    def __init__(self):
        super(Bev_Pool_v2, self).__init__()

    def forward(self, depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths):
        # Run the custom CUDA op, then move the channel dimension to position 1.
        x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
        x = x.permute(0, 4, 1, 2, 3).contiguous()
        return x

    def forward_dummy(self, data):
        # Same computation as forward, but with all inputs packed in a single sequence.
        depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths = data
        x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
        x = x.permute(0, 4, 1, 2, 3).contiguous()
        return x
  2. I generate random inputs and run a forward pass, which does not raise any issue during model inference.
# Random Generations of Inputs
depth = torch.rand(1, 6, 80, 16, 44).cuda()
feat = torch.rand(1, 6, 80, 16, 44).cuda()
ranks_depth = torch.randint(0, 337522, (206988, )).to(dtype=torch.int32).cuda()
ranks_feat = torch.randint(0, 4223, (206988, )).to(dtype=torch.int32).cuda()
ranks_bev = torch.randint(0, 79972, (206988, )).to(dtype=torch.int32).cuda()
bev_feat_shape = (1, 8, 100, 100, 80)
interval_starts = torch.randint(0, 79972, (52815, )).to(dtype=torch.int32).cuda()
interval_lengths = torch.randint(0, 213, (52815, )).to(dtype=torch.int32).cuda()

# Define the model and the input
model = Bev_Pool_v2().eval().cuda()
model.forward = model.forward_dummy
input_ = [depth, feat, ranks_depth, ranks_feat, ranks_bev, 
    bev_feat_shape, interval_starts, interval_lengths]

# Feed the input to the model
model(input_)
print('feed-forward inference is done without errors.')
  3. Yet, the error mentioned above appears when exporting the model:
with torch.no_grad():
    torch.onnx.export(
        model,
        input_,
        'bev_pool_v2_USE.onnx',
        # export_params=True,
        keep_initializers_as_inputs=True,
        do_constant_folding=False,
        verbose=True,
        opset_version=13
    )

Despite exploring various solutions, I have yet to resolve this error.

@chl916185

How can this be solved?
@johnyang-nv

@royalneverwin

You can find the definition of bev_pool_v2's ONNX op in the BEVDet repo. I used it to export the ONNX file of BEVDepth and it works.
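For reference, the general pattern for getting a custom autograd Function past torch.onnx.export is to wrap it in a Function that also defines a symbolic method, so the tracer emits a single custom node instead of stepping into the CUDA kernel. The sketch below is only illustrative and is not the actual definition from the BEVDet repo: the wrapper name BevPoolV2ONNX and the op name custom::bev_pool_v2 are assumptions, and forward simply reuses QuickCumsumCuda.apply with the signature from the code above.

import torch
from mmdet3d.ops.bev_pool_v2.bev_pool import QuickCumsumCuda  # path from the traceback above


class BevPoolV2ONNX(torch.autograd.Function):
    # Illustrative wrapper (not BEVDet's actual code): gives the CUDA op an
    # ONNX symbolic so torch.onnx.export records one custom node instead of
    # failing while tracing into QuickCumsumCuda.

    @staticmethod
    def forward(ctx, depth, feat, ranks_depth, ranks_feat, ranks_bev,
                bev_feat_shape, interval_starts, interval_lengths):
        # Reuse the existing CUDA implementation for the actual values.
        return QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat,
                                     ranks_bev, bev_feat_shape,
                                     interval_starts, interval_lengths)

    @staticmethod
    def symbolic(g, depth, feat, ranks_depth, ranks_feat, ranks_bev,
                 bev_feat_shape, interval_starts, interval_lengths):
        # Emit a single node in a custom domain; the consuming runtime
        # (e.g. a TensorRT plugin) is expected to implement it.
        return g.op('custom::bev_pool_v2', depth, feat, ranks_depth, ranks_feat,
                    ranks_bev, interval_starts, interval_lengths,
                    bev_feat_shape_i=list(bev_feat_shape))

When exporting, this would typically be combined with torch.onnx.export(..., custom_opsets={'custom': 1}) and a matching implementation of the op on the deployment side.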
