
Make CUDA optional - not all platforms with GPUs support Nvidia. #182

Open
axemaster opened this issue May 3, 2023 · 7 comments
Assignees
Labels
enhancement New feature or request

Comments

@axemaster

Could not find a similar issue, but this is a showstopper for Mac (M2 Pro, etc.). Short story: Apple will never support Nvidia again.

The A1111 unofficial worker extension does not have this dependency but broke yesterday for unknown reasons, so I tried to install the official worker. I cannot because of this dependency on Nvidia CUDA toolkit.

Mac is fast enough to contribute to the horde, I have somewhere north of 500k kudos, but right now cannot generate more images. Help?

@db0
Member

db0 commented May 3, 2023

Unfortunately we don't have a macOS developer or a macOS machine to develop on. Do you know how to make this work yourself?

@axemaster
Author

> Unfortunately we don't have a macOS developer or a macOS machine to develop on. Do you know how to make this work yourself?

A1111 uses PyTorch, and the Mac build natively supports Mac GPUs. When I generate an image, my GPU gets pegged.
So I think my request is an option to use PyTorch without requiring CUDA.
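A minimal sketch of what that option could look like: device selection that prefers CUDA but falls back to Apple's MPS backend, then CPU. `pick_device` is an illustrative helper name, not part of the worker, and the try/except lets the sketch run even where PyTorch isn't installed.

```python
# Hypothetical backend-agnostic device selection for PyTorch.
try:
    import torch
except ImportError:  # keeps the sketch runnable without PyTorch
    torch = None


def pick_device() -> str:
    """Return the best available torch device name: cuda > mps > cpu."""
    if torch is None:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    # Apple-silicon GPUs are exposed via the Metal Performance Shaders backend
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


print(pick_device())
```

With this, tensors and models would be moved with `.to(pick_device())`, so no code path ever calls `torch.cuda.*` unconditionally.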

@axemaster
Author

> The A1111 unofficial worker extension does not have this dependency but broke yesterday for unknown reasons

It's working again now, without any comments on any of the issues I opened about it. I'm assuming a Horde server-side API fix, as they apparently broke... something. Still, I'd rather run the official worker, so I'm leaving this open.

@axemaster
Author

The 5-11 announcement implies a new (official?) worker is coming. I hope it does not require CUDA!

@db0
Member

db0 commented May 13, 2023

You can try it out on the comfy branch.

@jug-dev jug-dev mentioned this issue May 26, 2023
@tazlin
Member

tazlin commented May 30, 2023

I have seen a couple of requests to use AMD cards.

@tazlin tazlin added the enhancement New feature or request label May 30, 2023
@FredHappyface

Hey, thank you for this awesome project! I've recently been looking into running the alchemist worker locally. I'm curious what the blocker is for supporting other architectures. Is it with PyTorch? (I can see that there is a CPU-only mode, but installing it - just by updating requirements.txt - results in a big sad.)

Logs
AssertionError: Torch not compiled with CUDA enabled
Exception in thread Thread-7 (_reload_models):
Traceback (most recent call last):
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\loguru\_logger.py", line 1277, in catch_wrapper
    return function(*args, **kwargs)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\worker\bridge_data\framework.py", line 218, in _reload_models
    success = model_manager.load(model)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\hyper.py", line 417, in load
    return model_manager.load(
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\base.py", line 319, in load
    self.ensure_ram_available()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\base.py", line 234, in ensure_ram_available
    vram_headroom = get_torch_free_vram_mb() - UserSettings.get_vram_to_leave_free_mb()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\comfy_horde.py", line 176, in get_torch_free_vram_mb
    return round(_comfy_get_free_memory() / (1024 * 1024))
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\_comfyui\comfy\model_management.py", line 380, in get_free_memory
    dev = get_torch_device()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\_comfyui\comfy\model_management.py", line 141, in get_torch_device
    return torch.cuda.current_device()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
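The traceback above fails because `get_torch_device()` ends in an unconditional `torch.cuda.current_device()` call. A minimal sketch of guarding the VRAM query instead (assuming `free_accelerator_mb` as an illustrative helper name; the real worker delegates to ComfyUI's `model_management`):

```python
# Hypothetical guard for the free-VRAM query on non-CUDA platforms.
try:
    import torch
except ImportError:  # keeps the sketch runnable without PyTorch
    torch = None


def free_accelerator_mb() -> int:
    """Free accelerator memory in MB, or 0 when no CUDA device is usable."""
    if torch is not None and torch.cuda.is_available():
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        return round(free_bytes / (1024 * 1024))
    # MPS and CPU share system RAM; report 0 so callers can skip
    # VRAM-headroom checks rather than crash
    return 0


print(free_accelerator_mb())
```

Callers like `ensure_ram_available()` could then treat a zero result as "no dedicated VRAM to manage" instead of asserting that CUDA exists.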

@tazlin tazlin moved this to Under Consideration in AI-Horde Feature Requests Oct 17, 2023
@tazlin tazlin self-assigned this Oct 17, 2023
@tazlin tazlin moved this from Under Consideration to Todo in AI-Horde Feature Requests Dec 17, 2023
Development

No branches or pull requests

4 participants