
TTS is stuck with PyTorch 1.12 #183

Closed
TareHimself opened this issue Sep 26, 2022 · 6 comments
Labels: help wanted (Extra attention is needed)

@TareHimself

Hi, after I send the TTS model some text to convert, it gets stuck for a while and then prints this:

C:\**\PycharmProjects\voice\venv\lib\site-packages\torch\nn\modules\module.py:1130: UserWarning: operator () profile_node %433 : int[] = prim::profile_ivalue(%431) does not have profile information (Triggered internally at ..\torch\csrc\jit\codegen\cuda\graph_fuser.cpp:108.)
  return forward_call(*input, **kwargs)
C:\**\PycharmProjects\voice\venv\lib\site-packages\torch\nn\modules\module.py:1130: UserWarning: concrete shape for linear input & weight are required to decompose into matmul + bias (Triggered internally at ..\torch\csrc\jit\codegen\cuda\graph_fuser.cpp:2077.)
  return forward_call(*input, **kwargs)

The model is https://models.silero.ai/models/tts/en/v3_en.pt at a 48000 sample rate with the en_10 speaker.
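For reference, a minimal sketch of the kind of script that reproduces this, assuming the standard Silero v3 torch.package loading flow and GPU inference (the warnings come from the CUDA graph fuser); the exact code used here isn't shown:

import torch

# download the packaged model and load it via torch.package (the usual Silero v3 flow)
torch.hub.download_url_to_file('https://models.silero.ai/models/tts/en/v3_en.pt', 'v3_en.pt')
model = torch.package.PackageImporter('v3_en.pt').load_pickle('tts_models', 'model')
model.to(torch.device('cuda'))

# the synthesis call that hangs for a while and then prints the fuser warnings above
audio = model.apply_tts(text='Hello world', speaker='en_10', sample_rate=48000)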

@TareHimself TareHimself added the help wanted Extra attention is needed label Sep 26, 2022
@TareHimself TareHimself changed the title ❓ Questions / Help / Support Stuck doing tts Sep 26, 2022
@snakers4
Owner

Hi,

Does this happen with PyTorch 1.12?

@TareHimself
Author

torch 1.12.1+cu116

@snakers4 snakers4 changed the title Stuck doing tts Stuck doing TTS with PyTorch 1.12 Sep 26, 2022
@snakers4 snakers4 changed the title Stuck doing TTS with PyTorch 1.12 TTS is stuck with PyTorch 1.12 Sep 26, 2022
@snakers4
Owner

A quick fix is to use PyTorch 1.11.
This happens only with PyTorch 1.12.

We looked into why this happens quite extensively; it boils down to one particular (mostly standard!) layer in our model.
Surprisingly, this was not the case in 1.9, 1.10, and 1.11, and it also does not happen with other models that use the same layer.
Internally, we decided to wait until 1.13, since we could not find an easy fix and we ran into other compiler issues with 1.12.

Also, doing this may help with 1.11:

torch._C._jit_set_profiling_mode(False)
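
For anyone else landing here, a minimal sketch of where that call goes, assuming the same torch.package loading flow as above; the key point is to disable profiling before the first synthesis call:

import torch

# turn off the JIT profiling executor before the model is run for the first time
torch._C._jit_set_profiling_mode(False)

model = torch.package.PackageImporter('v3_en.pt').load_pickle('tts_models', 'model')
audio = model.apply_tts(text='Hello world', speaker='en_10', sample_rate=48000)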

@TareHimself
Author

Thanks, that worked

@debagos

debagos commented Nov 19, 2022

I had the same problem; adding

torch._C._jit_set_profiling_mode(False)

to my code fixed the issue for me as well.

@snakers4
Owner

> to my code fixed the issue for me as well.

Then it's official: the JIT compiler is 100% to blame here.
