Is your feature request related to a problem? Please describe.
While Intel GPU acceleration is partially supported via llama.cpp, https://github.com/intel/intel-extension-for-transformers appears to be specifically optimized for running inference on Intel hardware.
For example, to run Stable Diffusion: https://www.intel.com/content/www/us/en/developer/articles/technical/stable-diffusion-with-intel-arc-gpus.html
Describe the solution you'd like
Support for https://github.com/intel/intel-extension-for-transformers
Describe alternatives you've considered
Additional context
#1660 #1689 #1686