Issues: intel/ipex-llm
#12961 · torch depends on cuda-libs when bigdl-core-cpp is installed on Linux · opened Mar 12, 2025 by shichang00
#12959 · flash-moe for DeepSeek portable zip crashes with "Illegal instruction" · user issue · opened Mar 11, 2025 by ParallelDqn
#12955 · ollama-0.5.4-ipex-llm-2.2.0b20250220-win socket address issue · user issue · opened Mar 8, 2025 by respwill
#12950 · Please upgrade to the latest Ollama version to support splitting models between GPU and CPU · user issue · opened Mar 7, 2025 by baoduy
#12946 · deepseek-r1 model cannot stream token generation · user issue · opened Mar 6, 2025 by junruizh2021
#12938 · RuntimeError: "qlinear_forward_xpu" not implemented for 'Byte' · user issue · opened Mar 5, 2025 by tripzero
#12925 · Instantly crashing when trying to run with Ollama · user issue · opened Mar 4, 2025 by westcoastdevv
#12897 · ollama-0.5.4-ipex-llm on A770 16G: DeepSeek-R1:14b / DeepSeek-R1:32b configuration issue · user issue · opened Feb 25, 2025 by XL-Qing
#12889 · Support for Transformers 4.48+ to address security vulnerabilities · user issue · opened Feb 24, 2025 by hkarray
#12873 · Attempting to run vLLM on CPU results in an error almost immediately · user issue · opened Feb 23, 2025 by HumerousGorgon