Documents Fail to Upload in Dashboard, Causing Chat and Search to Fail #1855
Can you share the r2r container's logs? My initial thought is that a "fast_llm" is likely missing from your config. I'm also pretty sure the embedding provider doesn't need the "ollama/" prefix here, since you're not using LiteLLM for embeddings. You may want to try copying this config and replacing the models as needed: https://github.com/SciPhi-AI/R2R/blob/main/py/core/configs/ollama.toml
Hi @NolanTrem, thank you for your suggestions. I updated the config file as advised, but the issue persists. The document now stays in an "augmenting / pending" status (screenshot attached), and I no longer see the 500 error as before. However, the document doesn't seem to progress beyond this state. Here are the logs from the r2r container:
Additionally, I successfully tested the model directly through LiteLLM:

```python
from litellm import completion

# Stream a short completion from the local Ollama server
response = completion(
    model="ollama/deepseek-r1:14b",
    messages=[{"content": "respond in 20 words. who are you?", "role": "user"}],
    api_base="http://localhost:11434",
    stream=True,
)
print(response)
for chunk in response:
    print(chunk['choices'][0]['delta'])
```

Output:
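Since the completion path works but ingestion stalls, it may also be worth testing the embedding model directly, as document ingestion has to embed chunks before the status can advance. A minimal diagnostic sketch against Ollama's documented /api/embeddings endpoint, independent of R2R:

```python
import requests

# Ask the local Ollama server to embed a test string with the same
# model that R2R's [embedding] section points at.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "mxbai-embed-large", "prompt": "hello world"},
    timeout=60,
)
resp.raise_for_status()
vector = resp.json()["embedding"]
print(len(vector))  # should print 1024, matching base_dimension in the config
```

If this call hangs or fails, the stalled ingestion is likely an Ollama connectivity problem rather than an R2R one.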
For reference, here’s the updated config:

```toml
[agent]
system_instruction_name = "rag_agent"
tool_names = ["local_search"]
[agent.generation_config]
model = "ollama/deepseek-r1:14b"
[completion]
provider = "litellm"
concurrent_request_limit = 1
fast_llm = "ollama/deepseek-r1:14b" # used inside R2R for `fast` completions, like document summaries
[completion.generation_config]
model = "ollama/deepseek-r1:14b"
temperature = 0.1
top_p = 1
max_tokens_to_sample = 1_024
stream = false
add_generation_kwargs = { }
[embedding]
provider = "ollama"
base_model = "mxbai-embed-large"
base_dimension = 1_024
batch_size = 128
add_title_as_prefix = true
concurrent_request_limit = 2
[database]
provider = "postgres"
[database.graph_creation_settings]
graph_entity_description_prompt = "graphrag_entity_description"
entity_types = [] # if empty, all entities are extracted
relation_types = [] # if empty, all relations are extracted
fragment_merge_count = 4 # number of fragments to merge into a single extraction
max_knowledge_relationships = 100
max_description_input_length = 65536
generation_config = { model = "ollama/deepseek-r1:14b" } # and other params; model used for relationship extraction
automatic_deduplication = false
[database.graph_enrichment_settings]
community_reports_prompt = "graphrag_community_reports"
max_summary_input_length = 65536
generation_config = { model = "ollama/deepseek-r1:14b" } # and other params, model used for node description and graph clustering
leiden_params = {}
[database.graph_search_settings]
generation_config = { model = "ollama/deepseek-r1:14b" }
[orchestration]
provider = "simple"
[ingestion]
vision_img_model = "ollama/llama3.2-vision"
vision_pdf_model = "ollama/llama3.2-vision"
chunks_for_document_summary = 16
document_summary_model = "ollama/deepseek-r1:14b"
automatic_extraction = false
[ingestion.extra_parsers]
pdf = "zerox" Any further insights or suggestions would be greatly appreciated! Best regards, |
I have the same error with an equivalent config (same [agent], [agent.generation_config], [completion], [completion.generation_config], [embedding], [database], [database.graph_creation_settings], [database.graph_enrichment_settings], [database.graph_search_settings], [orchestration], [ingestion], and [ingestion.extra_parsers] sections).
What OS are you running? Can you try changing the Ollama base URL from localhost to host.docker.internal?
You might need to play around with your network settings—Linux can be a bit weird around this. It's actually good that your Ollama isn't in a container—those things run prohibitively slowly. This SO post might have some good things to try: https://stackoverflow.com/questions/48546124/what-is-the-linux-equivalent-of-host-docker-internal Let me know if any of these work for you—I'd love to add something around this to our docs to prevent others from experiencing the same issue!
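For anyone hitting this, a quick way to check which base URL actually reaches Ollama is to probe its /api/tags endpoint (Ollama's standard route for listing installed models) from each candidate address. A small sketch; note that to reflect what R2R sees, it has to run inside the r2r container (e.g. via docker exec), not on the host:

```python
import requests

# Probe each candidate Ollama base URL and report which is reachable.
for base in ("http://localhost:11434", "http://host.docker.internal:11434"):
    try:
        resp = requests.get(f"{base}/api/tags", timeout=5)
        resp.raise_for_status()
        models = [m["name"] for m in resp.json().get("models", [])]
        print(f"{base}: reachable, models = {models}")
    except requests.RequestException as exc:
        print(f"{base}: unreachable ({exc})")
```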
Description:
I'm experiencing an issue where documents fail to upload under the Documents section in the dashboard. As a result, both chat and search functionalities are not working. I will share screenshots of the upload failure and error messages.
Environment:
[ERROR glean_core::metrics::ping] Invalid reason code active for ping usage-reporting
Config File (my_r2r_local_llm.toml):

Steps to Reproduce:
Screenshots:
Document upload failure:
Chat failure:
Launching R2R:
Please let me know if you need any additional information or logs.
Thank you for your assistance!