
add model metadata #96

Merged
merged 6 commits into from
Mar 22, 2024

Conversation

@ccurme (Collaborator) commented Mar 22, 2024

#93

vercel bot commented Mar 22, 2024

langchain-extract ✅ Ready — preview deployed, updated Mar 22, 2024 5:12pm (UTC)

@ccurme ccurme changed the title (WIP) add model metadata add model metadata Mar 22, 2024
@ccurme ccurme requested a review from eyurtsev March 22, 2024 15:02
@@ -191,6 +192,11 @@ async def extract_entire_document(
model_name=DEFAULT_MODEL,
)
texts = text_splitter.split_text(content)
if len(texts) > MAX_CHUNK_COUNT:
ccurme (Collaborator, Author) commented on this line:

@eyurtsev what's your opinion on

  1. raising an error, as we do here;
  2. truncating (e.g., proceeding with the first N chunks) and propagating that information back to the user. If we did that, would we need to add metadata to the extraction response? Let me know what you think (we can do this in a separate PR too).
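The two options being weighed could be sketched as follows. This is a hypothetical illustration, not the repo's actual code: the `guard_chunks` helper, the `truncate` flag, and the `MAX_CHUNK_COUNT` value are all assumptions for the sake of the example.

```python
MAX_CHUNK_COUNT = 100  # assumed limit; the real value lives in the repo's config


def guard_chunks(texts: list[str], truncate: bool = False) -> list[str]:
    """Illustrates the two options discussed above.

    Option 1 (default): raise an error when the document splits into too
    many chunks, as the PR does.
    Option 2 (truncate=True): proceed with the first N chunks; the caller
    would then need to surface the truncation to the user, e.g., via
    metadata on the extraction response.
    """
    if len(texts) <= MAX_CHUNK_COUNT:
        return texts
    if truncate:
        # Option 2: keep the first N chunks and let the caller report it.
        return texts[:MAX_CHUNK_COUNT]
    # Option 1: fail fast.
    raise ValueError(
        f"Document produced {len(texts)} chunks, "
        f"exceeding the limit of {MAX_CHUNK_COUNT}."
    )
```

The trade-off: option 1 is simple and unambiguous, while option 2 returns partial results but requires extending the response schema so the user knows extraction was incomplete.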

@ccurme ccurme merged commit 0da6b2b into main Mar 22, 2024
8 checks passed
@ccurme ccurme deleted the cc/update_configuration branch March 22, 2024 19:21