
Error: Failed to fetch completions: Error processing prompt (see logs with DEBUG>=2): No module named '_posixshmem' #237

Open
Shivp1413 opened this issue Sep 27, 2024 · 2 comments

Comments

@Shivp1413

Issue Description

I am currently running the Exo framework on my Windows device. Initially, I encountered compatibility errors because the main.py file was only supported on macOS. However, after modifying main.py, I was able to successfully execute it on Windows.

While using tinychat to send a question to the model, I encountered the following error message:
Error: Failed to fetch completions: Error processing prompt (see logs with DEBUG>=2): No module named '_posixshmem'
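
The message itself hints at how to get a fuller traceback: raising the DEBUG environment variable before launching. Assuming exo reads DEBUG from the environment as the message implies, on Windows cmd that would be (adjust the launch command to however you start exo):

set DEBUG=2
python main.py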

Analysis

The error arises because _posixshmem, the POSIX shared-memory backend that multiprocessing.shared_memory uses internally, is not available on Windows (Windows builds of Python back shared memory with _winapi instead). Windows-compatible alternatives for shared state include multiprocessing.sharedctypes and multiprocessing.Manager.
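
As a quick diagnostic (a sketch of my own, not exo code), the following probes which of the two backends the running interpreter actually ships; seeing _posixshmem missing while _winapi is present would confirm the platform mismatch:

```python
# Probe for the two extension modules that back multiprocessing.shared_memory:
# _posixshmem on POSIX builds of Python, _winapi on Windows builds.
import importlib.util

for mod in ("_posixshmem", "_winapi"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'available' if spec else 'missing'}")
```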

Proposed Solution

To resolve this issue, I need to locate the files in the Exo codebase that use multiprocessing.shared_memory and replace that usage with Windows-compatible modules like multiprocessing.Manager.
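
A rough sketch of what such a replacement could look like, with a hypothetical make_shared_buffer helper standing in for exo's real call sites (which I have not yet located):

```python
from multiprocessing import Manager

def make_shared_buffer(size: int):
    # Hypothetical replacement: a Manager-backed list stands in for a raw
    # SharedMemory byte buffer. Manager spawns a server process and hands
    # out proxy objects, which work on Windows as well as on POSIX.
    manager = Manager()
    buf = manager.list([0] * size)
    return manager, buf

if __name__ == "__main__":
    manager, buf = make_shared_buffer(8)
    buf[0] = 42          # visible to any process holding the proxy
    print(list(buf))     # [42, 0, 0, 0, 0, 0, 0, 0]
    manager.shutdown()
```

Note that Manager proxies go through IPC on every access, so this would trade the zero-copy speed of shared_memory for portability.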

Steps to locate the file (which I have already tried, without success):

Use command-line search:

  • Search for shared_memory or _posixshmem references in the Exo directory:

On Windows:

findstr /s /i "shared_memory _posixshmem" *.py
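
On Unix-like shells (e.g. WSL or Git Bash), the equivalent search would be:

grep -rinE "shared_memory|_posixshmem" --include="*.py" .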
@Shivp1413
Author

I made changes in the main.py file, and now it's working on Windows as well. But I get the following error:
Error: Failed to fetch completions: Error processing prompt (see logs with DEBUG>=2): No module named '_posixshmem'
main.zip

@AlexCheema
Contributor

Thanks for the detailed issue!

Right now Windows isn't natively supported. Once we have llama.cpp support (#183) / PyTorch support (#139), we can support this.

Here is the main issue for windows native support: #186
