Issues: FireCoderAI/firecoder
Crash server with NVIDIA after update to latest llama.cpp build
Labels: bug
#34 opened Apr 17, 2024 by gespispace
Issue with server (cannot be found)
Labels: bug, enhancement
#23 opened Mar 4, 2024 by weekendkoder
Local LLM: option to select *.gguf models manually
Labels: enhancement
#21 opened Mar 3, 2024 by weekendkoder
Resume model and server download after an interruption
Labels: enhancement, good first issue
#9 opened Jan 21, 2024 by gespispace
Add a check that the system is capable of running the selected model
Labels: enhancement
#8 opened Jan 21, 2024 by gespispace
Add download speed and ETA
Labels: enhancement, good first issue
#7 opened Jan 21, 2024 by gespispace