
fix: patch #1401 #1406

Merged 2 commits into main from cpacker-patch-1 on May 23, 2024
Conversation

@cpacker (Collaborator) commented May 23, 2024

https://platform.openai.com/docs/guides/text-generation/json-mode

JSON mode is only supported by gpt-4o, gpt-4-turbo, and gpt-3.5-turbo.

Patch for #1401.
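For context, a minimal sketch of what a JSON-mode request looks like against the OpenAI API, per the docs linked above (the model choice and prompt are illustrative; per the docs, JSON mode also requires the word "JSON" to appear somewhere in the messages):

```python
# Minimal JSON-mode request using the official openai Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # must be one of the JSON-mode-capable models
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List two primary colors."},
    ],
)
print(resp.choices[0].message.content)  # e.g. {"colors": ["red", "blue"]}
```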

@cpacker merged commit e8eed7b into main on May 23, 2024
7 checks passed
@cpacker deleted the cpacker-patch-1 branch on May 23, 2024 at 03:27
@lenaxia (Contributor) commented May 23, 2024

            if "gpt-4o" in llm_config.model or "gpt-4-turbo" in llm_config.model or "gpt-3.5-turbo" in llm_config.model:
                data.response_format = {"type": "json_object"}

This defeats the point of my original change. If the model MUST be one of the OpenAI ones, then response_format won't be set for other models, namely open-source models.

I am using MemGPT with LocalAI (https://github.com/go-skynet/LocalAI/), and those models must receive response_format={"type": "json_object"} in order to properly return JSON to MemGPT. For instance, Llama 3 and Hermes 2 Pro both require this to work with MemGPT.

Because MemGPT relies entirely on function calls, response_format should be enabled by default; otherwise models won't return valid JSON. If you prefer it to be toggleable, I can put it behind an env var.
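A short sketch of the setup described above, assuming a LocalAI server exposing its OpenAI-compatible API at localhost:8080 (the base URL and model name are placeholders for a local deployment):

```python
# Sketch: pointing the OpenAI client at a LocalAI server. Without
# response_format={"type": "json_object"}, local models tend to wrap
# their output in prose, which breaks MemGPT's JSON parsing.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="llama3",  # placeholder: whatever model name LocalAI serves
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": "Return a JSON object with a 'status' key."}],
)
print(resp.choices[0].message.content)
```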

@cpacker (Collaborator, Author) commented May 23, 2024

            if "gpt-4o" in llm_config.model or "gpt-4-turbo" in llm_config.model or "gpt-3.5-turbo" in llm_config.model:
                data.response_format = {"type": "json_object"}

This defeats the point of my original change. If the model MUST be one of the openai ones, then response_format won't be set for other models, namely open source models.

I am using MemGPT with LocalAI (https://github.com/go-skynet/LocalAI/) and the models must receive the response_format={"type": "json_object"} in order to properly return JSON to memgpt. For instance, llama3, or Hermes 2 Pro all require this to work with memgpt.

Because MemGPT relies 100% on function calls it should be enabled by default because otherwise models won't return valid JSON. If you prefer it to be toggleable then I can put it behind an env var.

Ah OK, sorry, I misread the first PR. The MemGPT CI tests use GPT-4, so with the prior PR the tests were failing (since JSON mode isn't supported on GPT-4). Probably the easiest way to support both your workflow and have the default OpenAI GPT-4 setup pass is to use an env var like you mentioned. Happy to merge a PR for that ASAP (or to consider other ideas too).
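One way the env-var toggle discussed here could look, building on the snippet quoted above; MEMGPT_JSON_MODE and the surrounding structure are hypothetical, not an existing MemGPT setting:

```python
# Hypothetical env-var toggle; MEMGPT_JSON_MODE is illustrative, not an
# actual MemGPT option. llm_config and data are as in the quoted snippet.
import os

force_json = os.getenv("MEMGPT_JSON_MODE", "").lower() in ("1", "true", "yes")
openai_json_models = ("gpt-4o", "gpt-4-turbo", "gpt-3.5-turbo")

# Default behavior keeps CI (plain GPT-4) passing; the env var opts
# open-source backends like LocalAI back in.
if force_json or any(m in llm_config.model for m in openai_json_models):
    data.response_format = {"type": "json_object"}
```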

@lenaxia (Contributor) commented May 23, 2024

Sounds good, I'll get a PR opened shortly. I'm not in a huge rush, as I'm working on another PR to create an OpenAI-compatible completions endpoint to make integration easier, so I've got json_object set in my branch for now.

mattzh72 pushed a commit that referenced this pull request Oct 9, 2024