
Deploying an LLM on an On-Premises Server so Users Can Access It Locally from Their Work Laptop's Web Browser #934

Open
sanket038 opened this issue Mar 18, 2024 · 3 comments

Comments

@sanket038

Feature request

I have been searching through a lot of websites and watching YouTube videos on how to deploy open-source LLMs locally on a Windows server and then expose them to users, who could interact with the LLM and ask questions from their own laptop's web browser. I believe this could be achieved with OpenLLM; however, I am not sure whether this capability is already included in the library.
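For reference, a minimal sketch of the intended setup, assuming the model is started on the server with something like `openllm start <model-id>` (the exact command depends on the OpenLLM version) and that it exposes an OpenAI-compatible HTTP endpoint reachable from the laptops. The host name, port, and model id below are placeholders:

```python
# Client-side sketch: a work laptop querying an LLM hosted on an
# on-premises server over HTTP. Assumes the server runs OpenLLM
# (or any OpenAI-compatible backend) and is reachable on the LAN.
import requests

SERVER_URL = "http://llm-server.internal:3000/v1/chat/completions"  # placeholder host/port

payload = {
    "model": "mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model id
    "messages": [{"role": "user", "content": "Summarise our leave policy."}],
    "max_tokens": 256,
}

response = requests.post(SERVER_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A browser-based UI would simply make the same POST request from JavaScript, or go through a thin backend that proxies to this endpoint.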

Motivation

No response

Other

No response

@VISWANATH78

Have you found a way, @sanket038? I am also trying to figure out how to host OpenLLM on my work server and then make API calls to it. Any idea how to host OpenLLM from the server? If so, please help me out.

@euroblaze

Try to look at something like Ollama. (And let us know if that's what you seek.)
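If you go the Ollama route, here is a rough sketch of the server/client split, assuming Ollama is installed on the server, `ollama serve` is listening on its default port 11434, and `OLLAMA_HOST=0.0.0.0` is set so other machines can reach it. Host and model names are placeholders:

```python
# Client sketch: any machine on the network calling Ollama's REST API
# on the on-prem server. Assumes a model (e.g. `ollama pull llama3`) was
# already pulled on the server and the port is reachable through the firewall.
import requests

OLLAMA_URL = "http://llm-server.internal:11434/api/chat"  # placeholder host

payload = {
    "model": "llama3",  # any model already pulled on the server
    "messages": [{"role": "user", "content": "Hello from a work laptop"}],
    "stream": False,    # single JSON response instead of a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```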

@VISWANATH78

VISWANATH78 commented Apr 8, 2024

Do you know the steps to link my custom downloaded model with Ollama and then serve it as an API to everyone? I have deployed a chatbot UI and need backend code exposing an API that all members can access, i.e. UIs on multiple devices pinging the same server. @euroblaze, if you have Discord please let me know so we can connect; send the invite link to newtech1106@gmail.com.
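Not a definitive answer, but this is roughly how it usually works with Ollama, assuming the custom model is available as a GGUF file on the server: you register it once with a Modelfile (a file containing `FROM ./my-model.gguf`, then `ollama create my-custom-model -f Modelfile`), start `ollama serve` bound to the network interface, and every UI or device talks to the same HTTP endpoint. Host and model names below are placeholders:

```python
# Shared-backend sketch: many UI clients hitting one Ollama instance that
# serves a custom GGUF model registered via a Modelfile on the server.
# Assumed one-time setup on the server:
#   Modelfile contents:   FROM ./my-model.gguf
#   ollama create my-custom-model -f Modelfile
#   OLLAMA_HOST=0.0.0.0 ollama serve
import requests

API = "http://llm-server.internal:11434/api/generate"  # placeholder host

def ask(prompt: str) -> str:
    """Send one prompt to the shared on-prem model and return the reply."""
    resp = requests.post(
        API,
        json={"model": "my-custom-model", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Ping from one of the chatbot UIs"))
```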
