
Documents Fail to Upload in Dashboard, Causing Chat and Search to Fail #1855

Open
anammari opened this issue Jan 22, 2025 · 6 comments

@anammari

Description:
I'm experiencing an issue where documents fail to upload under the Documents section of the dashboard. As a result, neither chat nor search works. Screenshots of the upload failure and error messages are attached below.

Environment:

  • LLMs Used: Ollama with Docker (DeepSeek R1 and mxbai-embed-large)
  • Docker Containers: Seem to be working based on the logs shared.
  • Error Message: [ERROR glean_core::metrics::ping] Invalid reason code active for ping usage-reporting

Config File (my_r2r_local_llm.toml):

[completion]
provider = "litellm"
  [completion.generation_config]
  model = "ollama/deepseek-r1:14b"

[embedding]
provider = "ollama"
base_model = "ollama/mxbai-embed-large:latest"
base_dimension = 1_024
batch_size = 32
add_title_as_prefix = true

Steps to Reproduce:

  1. Attempt to upload documents under the Documents section in the dashboard.
  2. Observe the failure and error messages.

Screenshots:
Document upload failure:

[Screenshots: document upload failure and error messages]

Chat failure:

[Screenshot: chat failure]

Launching R2R:

(r2r) ahmad@ahmad-All-Series:~/PD/r2r$ r2r serve --config-path=my_r2r_local_llm.toml --docker
/home/ahmad/.pyenv/versions/3.11.8/envs/r2r/lib/python3.11/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
Warning: encountered ImportError: `No module named 'numpy'`, likely due to core dependencies not being installed. This will not affect your use of SDK, but use of `r2r serve` may not be available.
Spinning up an R2R deployment...
Running on 0.0.0.0:7272, with docker=True
Using image: ragtoriches/prod:3.3.29
Pulling Docker images...
Calling `docker-compose -f /home/ahmad/.pyenv/versions/3.11.8/envs/r2r/lib/python3.11/site-packages/cli/utils/../../r2r/compose.yaml --project-name r2r --profile postgres pull`
WARN[0000] The "HUGGINGFACE_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_CLIENT_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_CLIENT_SECRET" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_REDIRECT_URI" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_CLIENT_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_CLIENT_SECRET" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_REDIRECT_URI" variable is not set. Defaulting to a blank string. 
[+] Pulling 41/41
 ✔ postgres Pulled                                                                                                                                                                                                                     124.7s 
   ✔ a480a496ba95 Pull complete                                                                                                                                                                                                         71.9s 
   ✔ 894a87c2a602 Pull complete                                                                                                                                                                                                         72.0s 
   ✔ 5c683167ebb4 Pull complete                                                                                                                                                                                                         72.2s 
   ✔ a4a2ff601989 Pull complete                                                                                                                                                                                                         72.3s 
   ✔ f9f18b35445d Pull complete                                                                                                                                                                                                         72.9s 
   ✔ 4341d9f4d10b Pull complete                                                                                                                                                                                                         73.0s 
   ✔ d75b4dfa7494 Pull complete                                                                                                                                                                                                         73.0s 
   ✔ 79531d2c07af Pull complete                                                                                                                                                                                                         73.1s 
   ✔ 38d735e5fe5b Pull complete                                                                                                                                                                                                        119.3s 
   ✔ 021a1a38fd6a Pull complete                                                                                                                                                                                                        119.3s 
   ✔ f79edff05c77 Pull complete                                                                                                                                                                                                        119.4s 
   ✔ 09219b44bd7a Pull complete                                                                                                                                                                                                        119.4s 
   ✔ 9b2fb85f538d Pull complete                                                                                                                                                                                                        119.4s 
   ✔ 1a6cb584f284 Pull complete                                                                                                                                                                                                        119.5s 
   ✔ 4f41ac91f783 Pull complete                                                                                                                                                                                                        119.6s 
   ✔ 1329a6ef5c19 Pull complete                                                                                                                                                                                                        119.7s 
 ✔ r2r-dashboard Pulled                                                                                                                                                                                                                 64.5s 
   ✔ 1f3e46996e29 Pull complete                                                                                                                                                                                                         29.8s 
   ✔ 82c0e88b37ba Pull complete                                                                                                                                                                                                         55.7s 
   ✔ 3f4550e9f92d Pull complete                                                                                                                                                                                                         55.8s 
   ✔ a0178999f9f2 Pull complete                                                                                                                                                                                                         55.8s 
   ✔ 5ce5af33f9f6 Pull complete                                                                                                                                                                                                         55.9s 
   ✔ f847f69240a0 Pull complete                                                                                                                                                                                                         56.7s 
   ✔ da50bdfd33c4 Pull complete                                                                                                                                                                                                         56.8s 
   ✔ adb939d588a9 Pull complete                                                                                                                                                                                                         56.8s 
   ✔ 6fc9c1f6c0b6 Pull complete                                                                                                                                                                                                         59.2s 
   ✔ 0321f7387ecd Pull complete                                                                                                                                                                                                         59.4s 
   ✔ 796a5f34e567 Pull complete                                                                                                                                                                                                         59.4s 
 ✔ r2r Pulled                                                                                                                                                                                                                          172.0s 
   ✔ af302e5c37e9 Pull complete                                                                                                                                                                                                         25.7s 
   ✔ 499051ac08b3 Pull complete                                                                                                                                                                                                         26.0s 
   ✔ 52a8eea45d12 Pull complete                                                                                                                                                                                                         27.0s 
   ✔ bc54bb88b02b Pull complete                                                                                                                                                                                                         27.0s 
   ✔ 450804f88d33 Pull complete                                                                                                                                                                                                         27.6s 
   ✔ 4f4fb700ef54 Pull complete                                                                                                                                                                                                         59.5s 
   ✔ e7adf2105ecf Pull complete                                                                                                                                                                                                         28.3s 
   ✔ 4087764aa42a Pull complete                                                                                                                                                                                                        166.5s 
   ✔ eb4cf681e804 Pull complete                                                                                                                                                                                                        166.6s 
   ✔ 1bae7045eddc Pull complete                                                                                                                                                                                                        166.9s 
   ✔ bc89093e901e Pull complete                                                                                                                                                                                                        167.0s 
Starting Docker Compose setup...
Calling `docker-compose -f /home/ahmad/.pyenv/versions/3.11.8/envs/r2r/lib/python3.11/site-packages/cli/utils/../../r2r/compose.yaml --project-name r2r --profile postgres up -d`
WARN[0000] The "HUGGINGFACE_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_CLIENT_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_CLIENT_SECRET" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GOOGLE_REDIRECT_URI" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_CLIENT_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_CLIENT_SECRET" variable is not set. Defaulting to a blank string. 
WARN[0000] The "GITHUB_REDIRECT_URI" variable is not set. Defaulting to a blank string. 
[+] Running 5/5
 ✔ Network r2r_r2r-network        Created                                                                                                                                                                                                0.1s 
 ✔ Volume "postgres_data"         Created                                                                                                                                                                                                0.0s 
 ✔ Container r2r-postgres-1       Started                                                                                                                                                                                                7.0s 
 ✔ Container r2r-r2r-1            Started                                                                                                                                                                                                5.7s 
 ✔ Container r2r-r2r-dashboard-1  Started                                                                                                                                                                                                6.9s 
Waiting for all services to become healthy...
Navigating to R2R application at http://localhost:7273.
(r2r) ahmad@ahmad-All-Series:~/PD/r2r$ [ERROR glean_core::metrics::ping] Invalid reason code active for ping usage-reporting

Please let me know if you need any additional information or logs.

Thank you for your assistance!

@NolanTrem
Collaborator

Can you share the r2r container's logs? Run `docker ps` to find the container ID, then `docker logs --follow <r2r container id>`.

My initial thought is that a "fast_llm" entry is likely missing from your config. The embedding model also probably doesn't need the "ollama/" prefix here, since your embedding provider isn't LiteLLM. You may want to try copying this config and replacing the models as needed: https://github.com/SciPhi-AI/R2R/blob/main/py/core/configs/ollama.toml
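
For illustration, a minimal sketch of just those two changes (the rest of your config can stay as-is):

[completion]
provider = "litellm"
fast_llm = "ollama/deepseek-r1:14b" # fast model used internally, e.g. for document summaries

  [completion.generation_config]
  model = "ollama/deepseek-r1:14b"

[embedding]
provider = "ollama"
base_model = "mxbai-embed-large" # no "ollama/" prefix when the provider is already ollama
base_dimension = 1_024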

@anammari
Author

Hi @NolanTrem

Thank you for your suggestions. I updated the config file as advised, but the issue persists. The document now stays in an "augmenting / pending" status (screenshot attached), and I no longer see the 500 error from before; however, the document doesn't progress beyond this state.

[Screenshot: document stuck in "augmenting / pending" status]

Here are the logs from the r2r container:

2025-01-22 10:42:05 - INFO - Environment R2R_CONFIG_NAME: 
2025-01-22 10:42:05 - INFO - Environment R2R_CONFIG_PATH: /home/ahmad/PD/r2r/my_r2r_local_llm.toml
2025-01-22 10:42:05 - INFO - Environment R2R_PROJECT_NAME: r2r_default
2025-01-22 10:42:05 - INFO - Environment R2R_POSTGRES_HOST: postgres
2025-01-22 10:42:05 - INFO - Environment R2R_POSTGRES_DBNAME: postgres
2025-01-22 10:42:05 - INFO - Environment R2R_POSTGRES_PORT: 5432
2025-01-22 10:42:05 - INFO - Environment R2R_POSTGRES_PASSWORD: postgres
2025-01-22 10:42:05 - INFO - Environment R2R_PROJECT_NAME: None
2025-01-22 10:42:05 - INFO - Started server process [7]
2025-01-22 10:42:05 - INFO - Waiting for application startup.
2025-01-22 10:42:05 - INFO - Initializing EmbeddingProvider with config app=AppConfig(project_name=None, default_max_documents_per_user=10000, default_max_chunks_per_user=10000000, default_max_collections_per_user=5000, default_max_upload_size=2147483648, max_upload_size_by_type={'txt': 2147483648, 'md': 2147483648, 'tsv': 2147483648, 'csv': 2147483648, 'xml': 2147483648, 'html': 2147483648, 'doc': 2147483648, 'docx': 2147483648, 'ppt': 2147483648, 'pptx': 2147483648, 'xls': 2147483648, 'xlsx': 2147483648, 'odt': 2147483648, 'pdf': 2147483648, 'eml': 2147483648, 'msg': 2147483648, 'p7s': 2147483648, 'bmp': 2147483648, 'heic': 2147483648, 'jpeg': 2147483648, 'jpg': 2147483648, 'png': 2147483648, 'tiff': 2147483648, 'epub': 2147483648, 'rtf': 2147483648, 'rst': 2147483648, 'org': 2147483648}) extra_fields={} provider='ollama' base_model='mxbai-embed-large' base_dimension=1024 rerank_model=None rerank_url=None batch_size=128 prefixes=None add_title_as_prefix=True concurrent_request_limit=2 max_retries=3 initial_backoff=1 max_backoff=64.0 quantization_settings=VectorQuantizationSettings(quantization_type=<VectorQuantizationType.FP32: 'FP32'>) rerank_dimension=None rerank_transformer_type=None.
2025-01-22 10:42:05 - INFO - Using Ollama API base URL: http://host.docker.internal:11434
2025-01-22 10:42:05 - INFO - Initializing CompletionProvider with config: app=AppConfig(project_name=None, default_max_documents_per_user=10000, default_max_chunks_per_user=10000000, default_max_collections_per_user=5000, default_max_upload_size=2147483648, max_upload_size_by_type={'txt': 2147483648, 'md': 2147483648, 'tsv': 2147483648, 'csv': 2147483648, 'xml': 2147483648, 'html': 2147483648, 'doc': 2147483648, 'docx': 2147483648, 'ppt': 2147483648, 'pptx': 2147483648, 'xls': 2147483648, 'xlsx': 2147483648, 'odt': 2147483648, 'pdf': 2147483648, 'eml': 2147483648, 'msg': 2147483648, 'p7s': 2147483648, 'bmp': 2147483648, 'heic': 2147483648, 'jpeg': 2147483648, 'jpg': 2147483648, 'png': 2147483648, 'tiff': 2147483648, 'epub': 2147483648, 'rtf': 2147483648, 'rst': 2147483648, 'org': 2147483648}) extra_fields={} provider='litellm' generation_config=GenerationConfig(model='ollama/deepseek-r1:14b', temperature=0.1, top_p=1.0, max_tokens_to_sample=1024, stream=False, functions=None, tools=None, add_generation_kwargs={}, api_base=None, response_format=None) concurrent_request_limit=1 fast_llm='ollama/deepseek-r1:14b' max_retries=3 initial_backoff=1.0 max_backoff=64.0
2025-01-22 10:42:05 - INFO - Initializing BcryptCryptoProvider
2025-01-22 10:42:05 - INFO - Initializing DatabaseProvider with config app=AppConfig(project_name=None, default_max_documents_per_user=10000, default_max_chunks_per_user=10000000, default_max_collections_per_user=5000, default_max_upload_size=2147483648, max_upload_size_by_type={'txt': 2147483648, 'md': 2147483648, 'tsv': 2147483648, 'csv': 2147483648, 'xml': 2147483648, 'html': 2147483648, 'doc': 2147483648, 'docx': 2147483648, 'ppt': 2147483648, 'pptx': 2147483648, 'xls': 2147483648, 'xlsx': 2147483648, 'odt': 2147483648, 'pdf': 2147483648, 'eml': 2147483648, 'msg': 2147483648, 'p7s': 2147483648, 'bmp': 2147483648, 'heic': 2147483648, 'jpeg': 2147483648, 'jpg': 2147483648, 'png': 2147483648, 'tiff': 2147483648, 'epub': 2147483648, 'rtf': 2147483648, 'rst': 2147483648, 'org': 2147483648}) extra_fields={} provider='postgres' user=None password=None host=None port=None db_name=None project_name=None postgres_configuration_settings=None default_collection_name='Default' default_collection_description='Your default collection.' collection_summary_system_prompt='default_system' collection_summary_task_prompt='default_collection_summary' enable_fts=False batch_size=256 kg_store_path=None graph_enrichment_settings=KGEnrichmentSettings(force_kg_enrichment=False, graphrag_communities='graphrag_communities', max_summary_input_length=65536, generation_config=GenerationConfig(model='ollama/deepseek-r1:14b', temperature=0.1, top_p=1.0, max_tokens_to_sample=1024, stream=False, functions=None, tools=None, add_generation_kwargs=None, api_base=None, response_format=None), leiden_params={}) graph_creation_settings=KGCreationSettings(clustering_mode='local', graphrag_relationships_extraction_few_shot='graphrag_relationships_extraction_few_shot', graph_entity_description_prompt='graphrag_entity_description', entity_types=[], relation_types=[], chunk_merge_count=4, max_knowledge_relationships=100, max_description_input_length=65536, generation_config=GenerationConfig(model='ollama/deepseek-r1:14b', temperature=0.1, top_p=1.0, max_tokens_to_sample=1024, stream=False, functions=None, tools=None, add_generation_kwargs=None, api_base=None, response_format=None), automatic_deduplication=False) graph_search_settings=GraphSearchSettings(generation_config=GenerationConfig(model='ollama/deepseek-r1:14b', temperature=0.1, top_p=1.0, max_tokens_to_sample=1024, stream=False, functions=None, tools=None, add_generation_kwargs=None, api_base=None, response_format=None), graphrag_map_system='graphrag_map_system', graphrag_reduce_system='graphrag_reduce_system', max_community_description_length=65536, max_llm_queries_for_global_search=250, limits={}, enabled=True) limits=LimitSettings(global_per_min=300, route_per_min=None, monthly_limit=10000) route_limits={'/v3/retrieval/search': LimitSettings(global_per_min=None, route_per_min=120, monthly_limit=None), '/v3/retrieval/rag': LimitSettings(global_per_min=None, route_per_min=30, monthly_limit=None)} user_limits={}.
2025-01-22 10:42:05 - INFO - Connecting to Postgres via TCP/IP
2025-01-22 10:42:05 - INFO - Initializing `PostgresDatabaseProvider`.
2025-01-22 10:42:05 - INFO - Connecting with 921 connections to `asyncpg.create_pool`.
2025-01-22 10:42:06 - INFO - Successfully connected to Postgres database and created connection pool.
2025-01-22 10:42:06 - INFO - Creating table, if not exists: r2r_default.documents
2025-01-22 10:42:06 - INFO - Loading saved prompt: default_collection_summary
2025-01-22 10:42:06 - INFO - Loading saved prompt: vision_img
2025-01-22 10:42:06 - INFO - Loading saved prompt: default_rag
2025-01-22 10:42:06 - INFO - Loading saved prompt: graphrag_reduce_system
2025-01-22 10:42:06 - INFO - Loading saved prompt: graphrag_entity_description
2025-01-22 10:42:06 - INFO - Loading saved prompt: graphrag_relationships_extraction_few_shot
2025-01-22 10:42:06 - INFO - Loading saved prompt: default_system
2025-01-22 10:42:06 - INFO - Loading saved prompt: rag_agent
2025-01-22 10:42:06 - INFO - Loading saved prompt: rag_fusion
2025-01-22 10:42:06 - INFO - Loading saved prompt: default_summary
2025-01-22 10:42:06 - INFO - Loading saved prompt: hyde
2025-01-22 10:42:06 - INFO - Loading saved prompt: rag_context
2025-01-22 10:42:06 - INFO - Loading saved prompt: graphrag_map_system
2025-01-22 10:42:06 - INFO - Loading saved prompt: chunk_enrichment
2025-01-22 10:42:06 - INFO - Loading saved prompt: graphrag_communities
2025-01-22 10:42:06 - INFO - Loading saved prompt: vision_pdf
2025-01-22 10:42:06 - INFO - Loading prompts from /app/core/database/prompts
2025-01-22 10:42:06 - INFO - Loading default prompt: graphrag_map_system from /app/core/database/prompts/graphrag_map_system.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: graphrag_relationships_extraction_few_shot from /app/core/database/prompts/graphrag_relationships_extraction_few_shot.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: default_system from /app/core/database/prompts/default_system.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: default_collection_summary from /app/core/database/prompts/default_collection_summary.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: vision_img from /app/core/database/prompts/vision_img.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: default_rag from /app/core/database/prompts/default_rag.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: graphrag_reduce_system from /app/core/database/prompts/graphrag_reduce_system.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: graphrag_entity_description from /app/core/database/prompts/graphrag_entity_description.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: graphrag_communities from /app/core/database/prompts/graphrag_communities.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: hyde from /app/core/database/prompts/hyde.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: rag_context from /app/core/database/prompts/rag_context.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: rag_agent from /app/core/database/prompts/rag_agent.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: rag_fusion from /app/core/database/prompts/rag_fusion.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: vision_pdf from /app/core/database/prompts/vision_pdf.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: default_summary from /app/core/database/prompts/default_summary.yaml.
2025-01-22 10:42:06 - INFO - Loading default prompt: chunk_enrichment from /app/core/database/prompts/chunk_enrichment.yaml.
2025-01-22 10:42:06 - INFO - Initializing text splitter with method: ChunkingStrategy.RECURSIVE
2025-01-22 10:42:07 - INFO - R2RIngestionProvider initialized with config: app=AppConfig(project_name=None, default_max_documents_per_user=10000, default_max_chunks_per_user=10000000, default_max_collections_per_user=5000, default_max_upload_size=2147483648, max_upload_size_by_type={'txt': 2147483648, 'md': 2147483648, 'tsv': 2147483648, 'csv': 2147483648, 'xml': 2147483648, 'html': 2147483648, 'doc': 2147483648, 'docx': 2147483648, 'ppt': 2147483648, 'pptx': 2147483648, 'xls': 2147483648, 'xlsx': 2147483648, 'odt': 2147483648, 'pdf': 2147483648, 'eml': 2147483648, 'msg': 2147483648, 'p7s': 2147483648, 'bmp': 2147483648, 'heic': 2147483648, 'jpeg': 2147483648, 'jpg': 2147483648, 'png': 2147483648, 'tiff': 2147483648, 'epub': 2147483648, 'rtf': 2147483648, 'rst': 2147483648, 'org': 2147483648}) extra_fields={} provider='r2r' excluded_parsers=['mp4'] chunking_strategy=<ChunkingStrategy.RECURSIVE: 'recursive'> chunk_size=1024 chunk_enrichment_settings=ChunkEnrichmentSettings(enable_chunk_enrichment=False, n_chunks=2, generation_config=GenerationConfig(model='openai/gpt-4o-mini', temperature=0.1, top_p=1.0, max_tokens_to_sample=1024, stream=False, functions=None, tools=None, add_generation_kwargs=None, api_base=None, response_format=None)) extra_parsers={'pdf': 'zerox'} audio_transcription_model='openai/whisper-1' vision_img_prompt_name='vision_img' vision_img_model='ollama/llama3.2-vision' vision_pdf_prompt_name='vision_pdf' vision_pdf_model='ollama/llama3.2-vision' skip_document_summary=False document_summary_system_prompt='default_system' document_summary_task_prompt='default_summary' chunks_for_document_summary=16 document_summary_model='ollama/deepseek-r1:14b' parser_overrides={} automatic_extraction=False chunk_overlap=512 separator=None
2025-01-22 10:42:07 - INFO - Default admin user already exists.
2025-01-22 10:42:07 - INFO - Initializing an `GraphStoragePipe` to store knowledge graph extractions in a graph database.
2025-01-22 10:42:07 - INFO - Initalizing an `QueryTransformPipe` pipe.
2025-01-22 10:42:07 - INFO - Initalizing an `QueryTransformPipe` pipe.
2025-01-22 10:42:08 - INFO - Scheduler started
2025-01-22 10:42:08 - INFO - Application startup complete.
2025-01-22 10:42:08 - INFO - Uvicorn running on http://0.0.0.0:7272 (Press CTRL+C to quit)
2025-01-22 10:42:08 - INFO - 127.0.0.1:52180 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:14 - INFO - 127.0.0.1:34244 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:16 - ERROR - Error reading version from pyproject.toml: No package metadata was found for r2r
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "POST /v3/users/# HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "OPTIONS /v3/users/me HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "GET /v3/users/me HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "OPTIONS /v3/system/settings HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "GET /v3/system/settings HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49906 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49906 - "OPTIONS /v3/system/status HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "OPTIONS /v3/system/status HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49906 - "GET /v3/system/status HTTP/1.1" 200
2025-01-22 10:42:16 - INFO - 172.23.0.1:49900 - "GET /v3/system/status HTTP/1.1" 200
2025-01-22 10:42:19 - INFO - 172.23.0.1:49906 - "OPTIONS /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:42:19 - INFO - 172.23.0.1:49906 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:42:20 - INFO - 127.0.0.1:34266 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:24 - INFO - 172.23.0.1:49906 - "OPTIONS /v3/documents/62602c37-b59f-507a-b678-193b68116b36 HTTP/1.1" 200
2025-01-22 10:42:24 - INFO - 172.23.0.1:49906 - "DELETE /v3/documents/62602c37-b59f-507a-b678-193b68116b36 HTTP/1.1" 200
2025-01-22 10:42:24 - INFO - 172.23.0.1:49906 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:42:26 - INFO - 172.23.0.1:49906 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:42:27 - INFO - 127.0.0.1:58992 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:33 - INFO - 127.0.0.1:59002 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:35 - INFO - 172.23.0.1:57154 - "OPTIONS /v3/documents HTTP/1.1" 200
2025-01-22 10:42:35 - INFO - 172.23.0.1:57168 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:42:36 - INFO - Initializing text splitter with method: ChunkingStrategy.RECURSIVE
2025-01-22 10:42:39 - INFO - 127.0.0.1:54408 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:45 - INFO - 127.0.0.1:34790 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:51 - INFO - 127.0.0.1:34804 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:42:57 - INFO - 127.0.0.1:52172 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:01 - INFO - 172.23.0.1:58156 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:43:03 - INFO - 127.0.0.1:47044 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:09 - INFO - 127.0.0.1:47046 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:15 - INFO - 127.0.0.1:36994 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:21 - INFO - 127.0.0.1:37000 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:27 - INFO - 127.0.0.1:51460 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:33 - INFO - 127.0.0.1:48334 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:39 - INFO - 127.0.0.1:48350 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:44 - INFO - 172.23.0.1:40758 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:43:45 - INFO - 127.0.0.1:56046 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:51 - INFO - 127.0.0.1:56058 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:43:58 - INFO - 127.0.0.1:46526 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:04 - INFO - 127.0.0.1:44196 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:10 - INFO - 127.0.0.1:44206 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:16 - INFO - 127.0.0.1:52088 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:22 - INFO - 127.0.0.1:52104 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:28 - INFO - 127.0.0.1:33982 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:34 - INFO - 127.0.0.1:59526 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:40 - INFO - 127.0.0.1:59538 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:46 - INFO - 127.0.0.1:47742 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:52 - INFO - 127.0.0.1:47746 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:44:58 - INFO - 127.0.0.1:37076 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:04 - INFO - 127.0.0.1:46266 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:10 - INFO - 127.0.0.1:46276 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:16 - INFO - 127.0.0.1:58170 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:23 - INFO - 127.0.0.1:58182 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:29 - INFO - 127.0.0.1:42630 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:35 - INFO - 127.0.0.1:56862 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:41 - INFO - 127.0.0.1:56878 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:47 - INFO - 127.0.0.1:43886 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:53 - INFO - 127.0.0.1:49350 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:45:59 - INFO - 127.0.0.1:49364 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:05 - INFO - 127.0.0.1:41038 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:11 - INFO - 127.0.0.1:41044 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:17 - INFO - 127.0.0.1:58102 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:23 - INFO - 127.0.0.1:58462 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:29 - INFO - 127.0.0.1:58466 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:32 - INFO - 172.23.0.1:50254 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:46:35 - INFO - 127.0.0.1:42780 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:42 - INFO - 127.0.0.1:42784 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:44 - INFO - 172.23.0.1:44552 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:46:48 - INFO - 127.0.0.1:52362 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:46:54 - INFO - 127.0.0.1:42694 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:00 - INFO - 127.0.0.1:42708 - "GET /v3/health HTTP/1.1" 200

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

2025-01-22 10:47:05 - WARNING - Request failed (attempt 1): litellm.APIConnectionError: OllamaException - All connection attempts failed
2025-01-22 10:47:06 - INFO - 127.0.0.1:46254 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:12 - INFO - 127.0.0.1:46264 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:18 - INFO - 127.0.0.1:48118 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:24 - INFO - 127.0.0.1:59594 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:30 - INFO - 127.0.0.1:59602 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:36 - INFO - 127.0.0.1:33050 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:42 - INFO - 127.0.0.1:33066 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:44 - INFO - 172.23.0.1:43170 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:47:48 - INFO - 127.0.0.1:59884 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41486 - "OPTIONS /v3/conversations?offset=0&limit=500 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41494 - "OPTIONS /v3/documents?offset=0&limit=100 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41510 - "OPTIONS /v3/collections?offset=0&limit=100 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41486 - "GET /v3/conversations?offset=0&limit=500 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41510 - "GET /v3/collections?offset=0&limit=100 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 172.23.0.1:41494 - "GET /v3/documents?offset=0&limit=100 HTTP/1.1" 200
2025-01-22 10:47:54 - INFO - 127.0.0.1:54088 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:47:59 - INFO - 172.23.0.1:41512 - "OPTIONS /v3/conversations HTTP/1.1" 200
2025-01-22 10:47:59 - INFO - 172.23.0.1:41512 - "POST /v3/conversations HTTP/1.1" 200
2025-01-22 10:47:59 - INFO - 172.23.0.1:41512 - "OPTIONS /v3/retrieval/agent HTTP/1.1" 200
2025-01-22 10:47:59 - INFO - 172.23.0.1:41512 - "POST /v3/retrieval/agent HTTP/1.1" 200
2025-01-22 10:48:00 - INFO - 127.0.0.1:54104 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:06 - INFO - 127.0.0.1:40114 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:13 - INFO - 127.0.0.1:40118 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:19 - INFO - 127.0.0.1:54488 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:25 - INFO - 127.0.0.1:42912 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:31 - INFO - 127.0.0.1:42914 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:37 - INFO - 127.0.0.1:34718 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:43 - INFO - 127.0.0.1:41092 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:49 - INFO - 127.0.0.1:41096 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:48:55 - INFO - 127.0.0.1:57120 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:01 - INFO - 127.0.0.1:57124 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:07 - INFO - 127.0.0.1:36120 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:12 - INFO - 172.23.0.1:58448 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:49:13 - INFO - 127.0.0.1:60594 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:14 - INFO - 172.23.0.1:58448 - "GET /v3/documents?offset=0&limit=1000 HTTP/1.1" 200
2025-01-22 10:49:19 - INFO - 127.0.0.1:60604 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:25 - INFO - 127.0.0.1:53900 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:31 - INFO - 127.0.0.1:53912 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:38 - INFO - 127.0.0.1:48106 - "GET /v3/health HTTP/1.1" 200
2025-01-22 10:49:44 - INFO - 127.0.0.1:47810 - "GET /v3/health HTTP/1.1" 200

Additionally, I tested litellm and Ollama directly from Python on the host, and they work fine. Here's the test script and its output:

from litellm import completion

# Note: this script runs on the host, where localhost:11434 reaches the local
# Ollama server directly (R2R itself runs inside Docker).
response = completion(
    model="ollama/deepseek-r1:14b",
    messages=[{"content": "respond in 20 words. who are you?", "role": "user"}],
    api_base="http://localhost:11434",
    stream=True,
)
print(response)  # the raw stream wrapper object
for chunk in response:
    print(chunk["choices"][0]["delta"])

Output:

(r2r) ahmad@ahmad-All-Series:~/PD/r2r$ python test_litellm_ollama.py 
/home/ahmad/.pyenv/versions/3.11.8/envs/r2r/lib/python3.11/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
<litellm.litellm_core_utils.streaming_handler.CustomStreamWrapper object at 0x759f7cc96650>
Delta(content='<think>', role='assistant', function_call=None, tool_calls=None, audio=None)
Delta(content='\n', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content='Okay', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=',', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' so', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' the', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' user', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' is', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' asking', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' me', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' to', role=None, function_call=None, tool_calls=None, audio=None)
Delta(content=' respond', role=None, function_call=None, tool_calls=None, audio=None)

For reference, here’s the updated toml config I’m using:

[agent]
system_instruction_name = "rag_agent"
tool_names = ["local_search"]

  [agent.generation_config]
  model = "ollama/deepseek-r1:14b"

[completion]
provider = "litellm"
concurrent_request_limit = 1
fast_llm = "ollama/deepseek-r1:14b" # used inside R2R for `fast` completions, like document summaries

  [completion.generation_config]
  model = "ollama/deepseek-r1:14b"
  temperature = 0.1
  top_p = 1
  max_tokens_to_sample = 1_024
  stream = false
  add_generation_kwargs = { }

[embedding]
provider = "ollama"
base_model = "mxbai-embed-large"
base_dimension = 1_024
batch_size = 128
add_title_as_prefix = true
concurrent_request_limit = 2

[database]
provider = "postgres"

  [database.graph_creation_settings]
    graph_entity_description_prompt = "graphrag_entity_description"
    entity_types = [] # if empty, all entities are extracted
    relation_types = [] # if empty, all relations are extracted
    fragment_merge_count = 4 # number of fragments to merge into a single extraction
    max_knowledge_relationships = 100
    max_description_input_length = 65536
    generation_config = { model = "ollama/deepseek-r1:14b" } # and other params; model used for relationship extraction
    automatic_deduplication = false

  [database.graph_enrichment_settings]
    community_reports_prompt = "graphrag_community_reports"
    max_summary_input_length = 65536
    generation_config = { model = "ollama/deepseek-r1:14b" } # and other params, model used for node description and graph clustering
    leiden_params = {}

  [database.graph_search_settings]
    generation_config = { model = "ollama/deepseek-r1:14b" }


[orchestration]
provider = "simple"


[ingestion]
vision_img_model = "ollama/llama3.2-vision"
vision_pdf_model = "ollama/llama3.2-vision"
chunks_for_document_summary = 16
document_summary_model = "ollama/deepseek-r1:14b"
automatic_extraction = false

  [ingestion.extra_parsers]
    pdf = "zerox"

Any further insights or suggestions would be greatly appreciated!

Best regards,
Ahmad

@luozhy88

luozhy88 commented Jan 22, 2025

I have the same error.

[Screenshots: upload failure and error messages]

[agent]
system_instruction_name = "rag_agent"
tool_names = ["local_search"]

[agent.generation_config]
model = "ollama/llama3.1"

[completion]
provider = "litellm"
concurrent_request_limit = 1
fast_llm = "ollama/llama3.1" # used inside R2R for fast completions, like document summaries

[completion.generation_config]
model = "ollama/llama3.1"
temperature = 0.1
top_p = 1
max_tokens_to_sample = 1_024
stream = false
add_generation_kwargs = { }

[embedding]
provider = "ollama"
base_model = "mxbai-embed-large"
base_dimension = 1_024
batch_size = 128
add_title_as_prefix = true
concurrent_request_limit = 2

[database]
provider = "postgres"

[database.graph_creation_settings]
graph_entity_description_prompt = "graphrag_entity_description"
entity_types = [] # if empty, all entities are extracted
relation_types = [] # if empty, all relations are extracted
fragment_merge_count = 4 # number of fragments to merge into a single extraction
max_knowledge_relationships = 100
max_description_input_length = 65536
generation_config = { model = "ollama/llama3.1" } # and other params; model used for relationship extraction
automatic_deduplication = false

[database.graph_enrichment_settings]
community_reports_prompt = "graphrag_community_reports"
max_summary_input_length = 65536
generation_config = { model = "ollama/llama3.1" } # and other params, model used for node description and graph clustering
leiden_params = {}

[database.graph_search_settings]
generation_config = { model = "ollama/llama3.1" }

[orchestration]
provider = "simple"

[ingestion]
vision_img_model = "ollama/llama3.2-vision"
vision_pdf_model = "ollama/llama3.2-vision"
chunks_for_document_summary = 16
document_summary_model = "ollama/llama3.1"
automatic_extraction = false

[ingestion.extra_parsers]
pdf = "zerox"

[Screenshot attached]

@NolanTrem
Collaborator

What OS are you running? Can you try changing the Ollama address from localhost to host.docker.internal?
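
As a quick connectivity check (a sketch, assuming the default container name from the compose setup and that curl is available in the image), you can hit Ollama's tag-listing endpoint from inside the r2r container:

docker exec -it r2r-r2r-1 curl http://host.docker.internal:11434/api/tags

If that fails while the same request succeeds on the host, the container simply can't reach Ollama.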

@luozhy88

luozhy88 commented Jan 23, 2025

Thanks for your reply!
I am using CentOS Linux.

[Screenshot attached]

My Ollama is installed directly on CentOS (not in a container). Does this have an impact?

[Screenshot attached]

@NolanTrem
Collaborator

You might need to play around with your network settings; Linux can be a bit finicky here. It's actually good that your Ollama isn't in a container, since containerized Ollama runs prohibitively slowly.

This SO post might have some good things to try: https://stackoverflow.com/questions/48546124/what-is-the-linux-equivalent-of-host-docker-internal
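
On Linux, the usual fix from that thread is to map host.docker.internal to the host gateway yourself. A sketch of the relevant compose fragment (assuming Docker 20.10 or newer, where host-gateway is supported):

services:
  r2r:
    extra_hosts:
      - "host.docker.internal:host-gateway"

Alternatively, making Ollama listen on all interfaces (OLLAMA_HOST=0.0.0.0) and pointing R2R at the host's IP can also work.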

Let me know if any of these work for you; I'd love to add something about this to our docs to prevent others from hitting the same issue!
