[bug]: Fresh install always uses cuda for a ROCm compatible AMD GPU #4211
Comments
I'm having a similar problem using a 7900 XTX. Even when I install Invoke with the AMD (ROCm) option selected, it still fully installs only the Nvidia stack, then launches in CPU-only mode.
Using the zip release installer also installs Nvidia-only dependencies, then launches CPU-only:
The provided pytorch+rocm5.6 package from
The docs recommend installing the 5.4.2 build, and you might have to run it like:
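Neither the package link nor the exact command survived the formatting above. As a rough sketch, the ROCm 5.4.2 route generally meant pointing pip at PyTorch's public rocm5.4.2 wheel index, roughly like this (the package spec and flags are illustrative, not quoted from the docs):

```bash
# Install InvokeAI against the ROCm 5.4.2 PyTorch wheels instead of the default CUDA ones.
# Run inside a fresh virtual environment.
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```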
The 7900 XTX does not work with ROCm 5.4.2.
I suspect the ENV vars are doing the heavy lifting here, though one would need to change the specific values for a 7900 XTX in particular. I'll give this a try:
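For reference, the environment variable usually involved here is HSA_OVERRIDE_GFX_VERSION: RDNA3 cards such as the 7900 XTX generally use 11.0.0, while RDNA2 cards (6800 XT, 6750 XT, etc.) use 10.3.0. A minimal sketch of launching with it set (the launcher name is the standard 3.x web entry point):

```bash
# RDNA3 (7900 XTX / gfx1100): override the GFX version ROCm kernels were built for.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
# RDNA2 cards (6800 XT, 6750 XT, ...) would use 10.3.0 instead:
# export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then launch InvokeAI from the same shell.
invokeai-web
```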
This is not correct. I have installed using this package for text-gen-webui, auto1111, and comfy without any issues.
I have tried to manually modify create_install.sh and the associated files to remove CUDA. I have tried to force a ROCm install. I still haven't figured things out. I wish they had kept the old requirements.txt file. At this point InvokeAI is useless for AMD GPUs. I tried to update installer.py with the following:
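Whatever the installer changes end up looking like, a quick way to confirm which torch build actually landed in the venv is a one-liner like this (generic PyTorch diagnostics, nothing InvokeAI-specific):

```bash
# Run from InvokeAI's activated virtual environment.
# A ROCm build prints a HIP version and "None" for CUDA; a CUDA build is the reverse.
python -c "import torch; print(torch.__version__, 'cuda:', torch.version.cuda, 'hip:', torch.version.hip, 'available:', torch.cuda.is_available())"
```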
Same experience for me with my AMD 6750 XT on Pop!_OS. I tell it I have an AMD card, but it installs CUDA and then launches in CPU mode. Both automatic and manual installation. InvokeAI 3.4.0post2.
@cfbauer, can you open the developer console in the automatic installation (option 7) and then run:
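The exact command wasn't captured in this thread; presumably it was a forced reinstall of the ROCm torch wheels from inside InvokeAI's environment, something along these lines (the index URL is PyTorch's public ROCm 5.6 wheel index; treat the rest as an illustration, not the verbatim instruction):

```bash
# From the developer console (option 7), i.e. inside InvokeAI's venv:
# replace the CUDA builds of torch/torchvision with the ROCm builds.
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6
```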
I also have the same issue on Arch with a 6800 XT. I installed version v3.4.0post2 from the zip, selected AMD GPU during install, yet it launches using the CPU. Running
Upon running
it is properly using my GPU now; however, the following error showed up after running the command. Not sure if it would impact anything, as all the correct ROCm-related packages are installed now. Thanks.
If that error isn't causing issues, I wouldn't worry about it!
I'm getting the same error:
Trying to run Invoke now gives me this:
I also tried installing what I thought was a compatible version of fsspec but still got the error above telling me I had an incompatible version:
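If it helps anyone hitting the same fsspec complaint: pip can report exactly which package pins which version, so the pin can be matched instead of guessed. The version number below is only a placeholder; read the real one off your own output.

```bash
# Inside InvokeAI's venv: show every dependency conflict pip knows about.
pip check

# Then install whatever fsspec version the complaining package actually requires,
# e.g. (placeholder version, substitute the one pip check reports):
pip install "fsspec==2023.10.0"
```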
I may have found the culprit for why AMD GPUs default to cuda. If I find the time, I will test it and report the results back here.
Any update on this?
Yup, and I have no clue what goes wrong or how it does it.
Before running, 7-series AMD cards may need this:
Also of note, the above suggested command runs without errors when the version numbers are tweaked; see the sketch below. Disclaimer: I don't have any idea what I'm doing.
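For concreteness, "tweaking the version numbers" presumably means swapping the ROCm suffix in the wheel index URL, e.g. pointing the same install at the 5.6 wheels instead of 5.4.2. This is a sketch, not a verified recommendation; the export most often cited for 7-series cards is the same HSA_OVERRIDE_GFX_VERSION=11.0.0 shown earlier in the thread.

```bash
# Same idea as the docs-era command, but against the newer ROCm 5.6 wheel index.
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```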
Is there an existing issue for this?
OS
Linux
GPU
amd
VRAM
16GB
What version did you experience this issue on?
v3.0.2rc1
What happened?
I used the install script from the latest release, and selected AMD GPU (with ROCm). The script installs perfectly fine, and then I go to run InvokeAI in the graphical web client, and I get this output:
As you can see, it opens up with cuda AMD Radeon RX 6800 XT. This card works just fine with A1111 and ROCm. I've also edited the invokeai.yaml file, as I saw that xformers was enabled (it isn't available for AMD cards). Here's my current config:
Of course, cuda doesn't work with my card and I get all-black output images.
Screenshots
No response
Additional context
This also breaks on the manual install and runs with cuda.
Contact Details
No response