Error immediately when trying to generate #9

Open
Wehrmachtserdbeere opened this issue Jun 20, 2023 · 5 comments

@Wehrmachtserdbeere

I get these errors after installing PyTorch from https://pytorch.org/get-started/locally/.
I had to get it from there because it previously gave me errors about having the CPU build instead of CUDA.

Traceback (most recent call last):
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1346, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1074, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\webui.py", line 212, in generate
    wav = initial_generate(melody_boolean, MODEL, text, melody, msr, continue_file, duration, cf_cutoff, sc_text)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\webui.py", line 143, in initial_generate
    wav = MODEL.generate(descriptions=[text], progress=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 163, in generate
    return self._generate_tokens(attributes, prompt_tokens, progress)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 309, in _generate_tokens
    gen_tokens = self.lm.generate(
                 ^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 490, in generate
    next_token = self._sample_next_token(
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 354, in _sample_next_token
    all_logits = model(
                 ^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 253, in forward
    out = self.transformer(input_, cross_attention_src=cross_attention_input)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 657, in forward
    x = self._apply_layer(layer, x, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 614, in _apply_layer
    return layer(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 508, in forward
    self._sa_block(self.norm1(x), src_mask, src_key_padding_mask))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\transformer.py", line 599, in _sa_block
    x = self.self_attn(x, x, x,
        ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 367, in forward
    x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
         ^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 94, in _dispatch_fw
    return _run_priority_list(
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 69, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 1, 16, 64) (torch.float32)
     key         : shape=(2, 1, 16, 64) (torch.float32)
     value       : shape=(2, 1, 16, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    Operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
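
The dispatch failure above shows every xformers attention backend rejecting the inputs because they are on device=cpu in float32, which suggests the model is still running on a CPU-only torch build. Two quick checks from the same environment the webui uses can confirm which build is actually installed (the second command is the one the error text itself points to); this is a diagnostic sketch, not something confirmed in the thread:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -m xformers.info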
@1aienthusiast
Owner

Try installing torch with:

pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
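
If the old CPU-only wheel is still present, pip may keep it instead of replacing it. A hedged variant of the same fix that removes it first and then checks what got installed (my addition, not part of the maintainer's suggestion):

pip uninstall -y torch
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
pip show torch   # the CUDA nightly's version string should carry a +cu118 suffix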

1aienthusiast self-assigned this Jun 20, 2023
@Wehrmachtserdbeere
Author

Sadly, it didn't work.

Traceback (most recent call last):
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\webui.py", line 15, in <module>
    import torchaudio
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\__init__.py", line 1, in <module>
    from torchaudio import (  # noqa: F401
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\_extension\__init__.py", line 43, in <module>
    _load_lib("libtorchaudio")
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\_extension\utils.py", line 61, in _load_lib
    torch.ops.load_library(path)
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_ops.py", line 790, in load_library
    ctypes.CDLL(path)
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\ctypes\__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 127] The specified procedure could not be found
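
Reading that trace, libtorchaudio fails to load because the installed torchaudio was built against a different torch than the nightly now present, so the DLL cannot resolve the symbols it expects (WinError 127). That is my interpretation, not something confirmed in the thread; one hedged fix is to install torchaudio from the same nightly index so the two builds match:

pip install --pre torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118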

@1aienthusiast
Owner

Python 3.9 is recommended. Have you run

pip install -r requirements.txt

@Wehrmachtserdbeere
Author

Yes, I did. I'm using Python 3.11, as downgrading would wreck my system.

@1aienthusiast
Owner

If downgrading would wreck your system, consider making a conda environment.
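
A minimal sketch of that approach, assuming conda or miniconda is already installed (the environment name is arbitrary):

conda create -n audiocraft python=3.9
conda activate audiocraft
pip install -r requirements.txt
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118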
