RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. #2083

Closed
Yaodada12 opened this issue Dec 15, 2023 · 9 comments
Labels
missing layer type Unable to convert a layer type from the relevant framework PyTorch (not traced)

Comments

@Yaodada12

RuntimeError: PyTorch convert function for op 'intimplicit' not implemented.

@Yaodada12 Yaodada12 added the bug Unexpected behaviour that should be corrected (type) label Dec 15, 2023
@TobyRoseman
Collaborator

I cannot find any documentation about that PyTorch layer. Can you give us some standalone code to reproduce this issue?

@TobyRoseman TobyRoseman added missing layer type Unable to convert a layer type from the relevant framework PyTorch (not traced) and removed bug Unexpected behaviour that should be corrected (type) labels Dec 15, 2023
@artificalaudio

Can you re-open this? This never got solved:

https://github.com/pytorch/pytorch/blob/2e21cb095a04dccb1900367623b4ca59ddcf26a2/torch/csrc/jit/runtime/register_prim_ops.cpp#L193

That's the IntImplicit aten function.

@TobyRoseman
Collaborator

@artificalaudio - What does this op do? Can you give me a simple toy model that uses this op?

@artificalaudio

artificalaudio commented Feb 16, 2024

> @artificalaudio - What does this op do? Can you give me a simple toy model that uses this op?

I've been trying to figure this out. For lack of a better term, it seems like a background operation that casts a tensor to an int? If we take a quick look at the code:

        TORCH_SELECTIVE_SCHEMA("aten::IntImplicit(Tensor a) -> int"),
        [](Stack& stack) {
          at::Tensor a;                                 // the input tensor
          pop(stack, a);                                // take it off the interpreter stack
          checkImplicitTensorToNum(a, /*to int*/ true); // require 0-dim, integral, no grad
          push(stack, a.item<int64_t>());               // push its single value back as an int64
        },

This is mostly beyond me, but as a guess it has something to do with an item() call, and would ensure the result is an int64?
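If that reading is right, a rough Python equivalent (my assumption, based only on the snippet above, not on any docs) would be:

import torch

# Rough Python equivalent of aten::IntImplicit, assuming the reading above:
# take a single-element tensor and return its value as a plain int64.
a = torch.tensor(5)    # a 0-dim integer tensor
assert a.dim() == 0    # checkImplicitTensorToNum rejects non-0-dim tensors
value = int(a.item())  # mirrors a.item<int64_t>() in the C++ snippet
print(value)           # 5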

I can share the code, and the model, in private, where the Core ML conversion fails. Apologies, I can't share publicly yet.

I've searched through the model repo's code, and there's no actual PyTorch operation like that, confusing, right? I found that register_prim_ops code from a PyTorch forum post saying that's where the undocumented operations live, and then found IntImplicit there. This is probably something backend-y to do with getItem, at a guess.

I'm really struggling to find where this operation is coming from to begin with.

It should just be: script/trace the model, convert to Core ML, boom, done, right? I'd really love to convert to Core ML to benchmark performance.

Can the Core ML export give any deeper insight, like a more verbose debug stream, or a way to tell where that operation came from in the first place? (I already have debug=True.)

@artificalaudio

OK, a little more digging: just by searching for IntImplicit in the torch code, I find this:

    if (value_isa_tensor) {
      if (concrete_float) {
        value = graph.insert(aten::FloatImplicit, {value}, {}, loc);
      } else if (concrete_complex) {
        value = graph.insert(aten::ComplexImplicit, {value}, {}, loc);
      } else if (concrete_int) {
        value = graph.insert(aten::IntImplicit, {value}, {}, loc);
      } else if (concrete_number) {
        value = graph.insert(aten::ScalarImplicit, {value}, {}, loc);
      }
    }

https://github.com/pytorch/pytorch/blob/86dedebeafdd7b08d21432cebd7538437d3b7509/torch/csrc/jit/frontend/schema_matching.cpp#L134

The code at the top of that block is:

    // implicit conversions
    if (allow_conversions) {

It's found in torch/csrc/jit/frontend/schema_matching.cpp.

I don't think I'm specifically calling this function in my code. Is this just some part of the JIT machinery?
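If I'm reading schema_matching.cpp right (an assumption on my part), IntImplicit gets inserted wherever a scripted graph passes a Tensor into an argument slot whose schema expects a plain int. A toy sketch that I'd expect to show it, using unsqueeze (which takes an int dim) as an arbitrary example:

import torch

# Hedged toy example: pass a Tensor where the schema wants an int.
# If the schema-matching code above applies, scripting should insert
# aten::IntImplicit to convert `dim` to the int that unsqueeze expects.
@torch.jit.script
def uses_int_implicit(x: torch.Tensor, dim: torch.Tensor) -> torch.Tensor:
    return x.unsqueeze(dim)

print(uses_int_implicit.graph)  # look for aten::IntImplicit in the output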

@TobyRoseman
Collaborator

@artificalaudio - thanks for looking into this. Unfortunately, I'm not able to load external PyTorch models, as this is a security risk. We really need a toy model that uses this op; this will be necessary for creating a unit test. Perhaps you could create a copy of your model, then selectively delete things, till you isolate what is using this op.

@artificalaudio

> @artificalaudio - thanks for looking into this. Unfortunately, I'm not able to load external PyTorch models, as this is a security risk.

This is a totally fair point. I could save the weights as safetensors; isn't there some sort of safety feature baked into that format?

> We really need a toy model that uses this op; this will be necessary for creating a unit test.

Completely understood. I'm not entirely sure how I'd track this down yet.

> Perhaps you could create a copy of your model, then selectively delete things, till you isolate what is using this op.

Interesting idea, I like it, a proper hacker response! I'm not sure I have the patience for that, though!

Thing is, the model is mental to begin with. It's a retrieval-based voice conversion model with a Neural Source Filter head. It transforms HuBERT/wav2vec SSL embeddings into a representation you can synthesise from (like a neural vocoder, but with SSL embeddings instead). So there's a whole whack of stuff in the model: sine generators, TextProjection layers, an encoder/decoder transformer, a posterior encoder, etc.

It's quite a complex and chaotic model to begin with. This is the synthesiser I'm trying to export to CoreML: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/models.py#L810

Lines 962/963 look suspect:

        if skip_head is not None and return_length is not None:
            assert isinstance(skip_head, torch.Tensor)
            assert isinstance(return_length, torch.Tensor)
            head = int(skip_head.item())
            length = int(return_length.item())

https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/models.py#L962

Is that calling an IntImplicit by doing int(skip_head.item())?

I.e., an implicit cast to int, like the C-style int(floatVar), plus the item() call?

At a massive guess, it's probably that, right?

@TobyRoseman
Collaborator

Sorry, I'm not familiar with safe tensors. At any rate, having a giant model that reproduces the issue isn't very helpful.

The int method being the source of intimplicit seems quite possible. However, I'm unable to come up with an example where torch.jit.trace doesn't convert that call to a constant. Are you not calling torch.jit.trace prior to converting your model to Core ML?
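For illustration, here's the kind of minimal sketch I tried (Toy is a made-up module, not your model): tracing evaluates int(skip_head.item()) eagerly with the example input, so the result is baked into the graph as a constant rather than an IntImplicit op.

import torch

# Minimal sketch (Toy is a made-up module): under torch.jit.trace,
# int(skip_head.item()) runs eagerly on the example input, so the traced
# graph contains a constant, not aten::IntImplicit.
class Toy(torch.nn.Module):
    def forward(self, x: torch.Tensor, skip_head: torch.Tensor) -> torch.Tensor:
        head = int(skip_head.item())  # tracing emits a TracerWarning here
        return x[:, head:]

traced = torch.jit.trace(Toy().eval(), (torch.rand(1, 10), torch.tensor(2)))
print(traced.graph)  # the slice start shows up as a baked-in constant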

@artificalaudio

> Sorry, I'm not familiar with safe tensors. At any rate, having a giant model that reproduces the issue isn't very helpful.
>
> The int method being the source of intimplicit seems quite possible. However, I'm unable to come up with an example where torch.jit.trace doesn't convert that call to a constant. Are you not calling torch.jit.trace prior to converting your model to Core ML?

Hey, I'm back on it. I finally have a simple example, sorry for the delay!

import torch
import coremltools as ct
from infer.lib.infer_pack.modules import WN

# Build the WN module from the RVC repo and put it in inference mode.
model = WN(192, 5, dilation_rate=1, n_layers=16, gin_channels=256, p_dropout=0)
model.remove_weight_norm()
model.eval()

# Dummy inputs matching WN's forward(x, x_mask, g) signature.
test_x = torch.rand(1, 192, 200)
test_x_mask = torch.rand(1, 1, 200)
test_g = torch.rand(1, 256, 1)

traced_model = torch.jit.trace(
    model, (test_x, test_x_mask, test_g), check_trace=True
)

x = ct.TensorType(name='x', shape=test_x.shape)
x_mask = ct.TensorType(name='x_mask', shape=test_x_mask.shape)
g = ct.TensorType(name='g', shape=test_g.shape)

# Fails with: RuntimeError: PyTorch convert function for op 'intimplicit' not implemented.
mlmodel = ct.converters.convert(traced_model, inputs=[x, x_mask, g])

WN from:

https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/modules.py#L188

Now I suspect the problem call/line is:

n_channels_tensor = torch.IntTensor([self.hidden_channels])

That's the part that looks like it's casting, or making an IntTensor. That's what I think could be responsible for this IntImplicit operation call.
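Going one step further (a guess on my part): WN passes that tensor into fused_add_tanh_sigmoid_multiply, which in the VITS-style commons.py this repo derives from is decorated with @torch.jit.script and uses n_channels[0], still a Tensor, as a slice bound. That would match the schema-matching path above, and would explain why the op survives even though the outer model is traced: the scripted helper keeps its own graph. A standalone reconstruction of that pattern (my sketch, not necessarily the repo's exact code):

import torch

# Standalone reconstruction (an assumption) of the VITS-style helper:
# because it is scripted rather than traced, n_channels[0] stays a Tensor,
# and using it as a slice bound should make TorchScript insert aten::IntImplicit.
@torch.jit.script
def fused_add_tanh_sigmoid_multiply(
    input_a: torch.Tensor, input_b: torch.Tensor, n_channels: torch.Tensor
) -> torch.Tensor:
    n_channels_int = n_channels[0]  # still a Tensor inside TorchScript
    in_act = input_a + input_b
    t_act = torch.tanh(in_act[:, :n_channels_int, :])     # Tensor as slice end
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])  # Tensor as slice start
    return t_act * s_act

print(fused_add_tanh_sigmoid_multiply.graph)  # look for aten::IntImplicit

If that's right, a possible workaround would be to pass the channel count as a plain Python int (bypassing the scripted helper) before tracing.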
