RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. #2083
I cannot find any documentation about that PyTorch layer. Can you give us some standalone code to reproduce this issue?
Can you re-open this? This never got solved: it's the intImplicit aten function.
@artificalaudio - What does this op do? Can you give me a simple toy model that uses this op?
I've been trying to figure this out. For lack of a better term, it seems like a background operation that casts a tensor to ints? If we just have a quick look at the code:
I'm an idiot and this is all nonsense to me. But as a guess, this has something to do with a getItem call, and it would ensure the value is an int64? I can share code, and the model in private, where the CoreML conversion fails. Apologies, I can't share publicly yet. I've searched through the model repo's code, and there's no actual PyTorch operation like that, confusing right? I found the register prim code from the PyTorch forum saying here's where the undocumented operations are, and then found IntImplicit. This is probably something backend-y to do with getItem, at a guess. I'm really struggling to find where this operation is coming from in the first place. It should just be: script/trace the model, convert to CoreML, boom, done, right? I would really love to convert to CoreML to benchmark performance. Are there any deeper insights the CoreML export can give, like a more verbose debug stream, or a way to tell where that operation came from in the first place? (I have debug=True)
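For reference, here is a minimal, hypothetical sketch of the kind of pattern that seems to produce this implicit Tensor-to-int conversion: a 0-dim tensor used where TorchScript expects a plain int (for example, as a slice index). The module and names are invented for illustration and are not taken from the actual model; whether the op shows up can depend on the PyTorch version.

```python
import torch

class ToySlice(torch.nn.Module):
    def forward(self, x: torch.Tensor, skip_head: torch.Tensor) -> torch.Tensor:
        # skip_head is a 0-dim integer tensor; using it directly as a slice
        # start is the sort of place TorchScript inserts an implicit
        # Tensor -> int conversion during schema matching
        return x[:, skip_head:]

scripted = torch.jit.script(ToySlice())
print(scripted.graph)  # look for IntImplicit among the printed nodes
```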
OK, a little more digging: just by searching for IntImplicit in the torch code I find this:
The code at the top is:
It's found in jit->frontend->schema_matching. I don't think I'm specifically calling this function in my code; is this just some part of the JIT-ed model?
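Not part of the original thread, but one way to hunt for where the op is introduced is to dump the TorchScript graph and search it. A sketch, assuming `model` and its scripted form stand in for the real module; the printed nodes usually carry a `# file.py:line` comment pointing back at the Python source that produced them.

```python
import torch

# `model` is a placeholder for the module being converted
scripted = torch.jit.script(model)

# inlined_graph should include submodule calls, not just the top-level forward
for node in scripted.inlined_graph.nodes():
    if "IntImplicit" in node.kind():
        print(node)
```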
@artificalaudio - thanks for looking into this. Unfortunately, I'm not able to load external PyTorch models, as this is a security risk. We really need a toy model that uses this op; this will be necessary for creating a unit test. Perhaps you could create a copy of your model, then selectively delete things, till you isolate what is using this op.
This is a totally fair point. I could save the weights as a safetensors file; isn't there some sort of safety feature baked into that format?
Completely understood. I'm not entirely sure how I'd track this down yet.
Interesting idea. I like it though, a hacker response! I'm not sure I have the patience for that! Thing is, the model is mental to begin with. It's a retrieval voice conversion model, with the Neural Source Filter head. It transforms HuBERT/wav2vec SSL embeddings into a representation you can synthesise from (like a neural vocoder, but driven by SSL embeddings instead). So there's a whole whack of stuff in the model: sineGenerators, TextProjection layers, an encoder/decoder transformer, a posterior encoder, etc. It's quite a complex and chaotic model to begin with. This is the synthesiser I'm trying to export to CoreML: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/models.py#L810 Lines 962/963 look suspect:
Is that calling an int implicit by doing int(skip_head.item())? Does that cast to int implicitly, like the C-style int(floatVar)? And the item() call? At a massive guess it's probably that, right?
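Not from the thread itself, but a rough, hypothetical repro of the pattern being described: a head offset carried as a tensor and then used to slice, followed by a conversion attempt. All names and shapes here are invented; the real model at the link above is far larger, and whether this toy hits the exact same error depends on the PyTorch and coremltools versions.

```python
import numpy as np
import torch
import coremltools as ct

class HeadSkip(torch.nn.Module):
    def forward(self, z: torch.Tensor, skip_head: torch.Tensor) -> torch.Tensor:
        # the suspected pattern: a head offset arrives as a tensor and is
        # turned into an int (or used directly as a slice index), forcing a
        # Tensor -> int conversion somewhere in the scripted graph
        head = int(skip_head.item())
        return z[:, :, head:]

scripted = torch.jit.script(HeadSkip())

# conversion attempt; in the failing setup this is where
# "PyTorch convert function for op 'intimplicit' not implemented." appears
mlmodel = ct.convert(
    scripted,
    inputs=[
        ct.TensorType(name="z", shape=(1, 192, 200)),
        ct.TensorType(name="skip_head", shape=(1,), dtype=np.int32),
    ],
)
```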
Sorry, I'm not familiar with safetensors. At any rate, having a giant model that reproduces the issue isn't very helpful.
Hey, I'm back on it. I finally have a simple example, sorry for the delay!
WN from: Now I suspect the problem call/line is:
That's the part of the code that looks like it's casting, or making an IntTensor. That's what I think could be responsible for this IntImplicit operation call.
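Nobody posted this in the thread, but a common stopgap in coremltools when a TorchScript op has no converter is to register a handler for it yourself. A rough sketch, assuming the op really is just a Tensor-to-int cast; the handler name must match the lowercased op kind from the error, and this relies on private coremltools helpers that may change between versions.

```python
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def intimplicit(context, node):
    # Treat aten::IntImplicit as a plain cast of the incoming tensor to int32.
    x = _get_inputs(context, node, expected=1)[0]
    context.add(mb.cast(x=x, dtype="int32", name=node.name))
```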
RuntimeError: PyTorch convert function for op 'intimplicit' not implemented.