Calls to `$openai->chat` with large `messages` and `max_tokens` values (~4K tokens each, ~8K tokens total) are often timing out: the PHP script that makes the call waits for a response, then exits after 10 minutes without receiving one, sometimes receiving the response "Error: Gateway timeout." Calling the same script from a web browser fails even earlier, before any response to the endpoint call is received.
This does not happen every time, but it has occurred almost every time `$openai->chat` is called with a large context.
Is there:
1. An alternative way to request a large-context completion that is less likely to fail in this manner?
2. A way to set a request timeout, that is, to terminate the endpoint request with an explicit error response if the OpenAI endpoint does not respond within a specified amount of time?
3. A way to keep the API call and/or the calling PHP script alive longer?
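As a sketch of what questions (2) and (3) could look like at the HTTP layer, independent of the library: cURL's `CURLOPT_TIMEOUT` aborts a hung request client-side with an explicit error code, and `set_time_limit(0)` keeps a CLI script alive past PHP's execution limit. `postWithTimeout` is a hypothetical helper, not part of any library, and the URL below is a deliberately non-routable address used only to demonstrate the failure path:

```php
<?php
// Hypothetical helper (not part of any library): POST with an explicit
// client-side timeout, returning a structured error instead of hanging.

set_time_limit(0); // keep the calling CLI script alive indefinitely

function postWithTimeout(string $url, string $body, int $timeoutSeconds): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $body,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => $timeoutSeconds, // time allowed to connect
        CURLOPT_TIMEOUT        => $timeoutSeconds, // total time for the request
    ]);
    $response = curl_exec($ch);
    $errno    = curl_errno($ch);
    $error    = curl_error($ch);
    curl_close($ch);

    if ($errno === CURLE_OPERATION_TIMEDOUT) {
        // Explicit, client-side timeout signal, as asked for in question (2).
        return ['ok' => false, 'error' => "client timeout after {$timeoutSeconds}s"];
    }
    return ['ok' => $errno === 0, 'error' => $errno ? $error : null, 'body' => $response];
}

// Demonstration against a blackholed address: fails within a couple of
// seconds instead of hanging for 10 minutes.
$result = postWithTimeout('https://10.255.255.1/v1/chat/completions', '{}', 2);
var_dump($result['ok']); // bool(false)
```

The same cURL options can be applied to whatever HTTP layer the library uses, if it exposes them.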
Does the `$client->setTimeout(x)` functionality described in #31 for the completions endpoint address question (2) posed in the opening post of this thread, for the completion, chat, and all other endpoints?
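For reference, a sketch of how `setTimeout` would be used, assuming the orhanerday/open-ai client (the guards make the snippet a no-op when the package or an API key is absent, and the model name is illustrative); whether the timeout applies uniformly to every endpoint is a question for the maintainers:

```php
<?php
// Sketch, assuming the orhanerday/open-ai client: setTimeout() should make
// the underlying HTTP request abort client-side after the given number of
// seconds instead of hanging until a gateway timeout.

$autoload = __DIR__ . '/vendor/autoload.php';
if (is_file($autoload)) {
    require $autoload;
}

if (class_exists(\Orhanerday\OpenAi\OpenAi::class) && getenv('OPENAI_API_KEY')) {
    $open_ai = new \Orhanerday\OpenAi\OpenAi(getenv('OPENAI_API_KEY'));
    $open_ai->setTimeout(120); // abort the request after 120 seconds

    $result = $open_ai->chat([
        'model'      => 'gpt-3.5-turbo', // illustrative model name
        'messages'   => [['role' => 'user', 'content' => 'Hello']],
        'max_tokens' => 16,
    ]);
    echo $result, "\n";
} else {
    echo "orhanerday/open-ai not installed or OPENAI_API_KEY not set; skipping.\n";
}
```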
It appears that completions that time out in this manner do not return any data to the function call. Is there a way to have calls that time out like this return a specific message indicating that a library timeout occurred (as opposed to a timeout on OpenAI's side)?
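One way to surface that distinction in application code, as a sketch: `classifyResult` below is a hypothetical helper (not part of the library) that maps a call's raw return value to an explicit status string, treating an empty or `false` result as a library-side timeout and a non-JSON body such as "Error: Gateway timeout." as a gateway-side failure:

```php
<?php
// Sketch: distinguishing a client-side (library/cURL) timeout, which yields
// no response body at all, from an OpenAI-side error response.
// classifyResult() is a hypothetical helper, not part of any library.

function classifyResult($raw): string
{
    if ($raw === false || $raw === null || $raw === '') {
        // The request never completed: cURL gave up or was aborted.
        return 'library timeout: no response received from endpoint';
    }
    $decoded = json_decode($raw, true);
    if ($decoded === null) {
        // A non-JSON body, e.g. a plain-text "Gateway timeout" page from the proxy.
        return 'gateway error: non-JSON response: ' . substr($raw, 0, 80);
    }
    if (isset($decoded['error'])) {
        return 'api error: ' . ($decoded['error']['message'] ?? 'unknown');
    }
    return 'ok';
}

// Examples:
echo classifyResult(false), "\n";                    // library timeout: no response received from endpoint
echo classifyResult('Error: Gateway timeout.'), "\n"; // gateway error: non-JSON response: Error: Gateway timeout.
echo classifyResult('{"choices":[{"message":{"content":"hi"}}]}'), "\n"; // ok
```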
Would it be helpful to start a thread in this GitHub repo where we can all share our experiences with OpenAI timeout issues, to help collectively track API performance? Sort of like a "down detector" for OpenAI.
Describe the bug
To Reproduce
Code snippets
No response
OS
Linux
PHP version
PHP 7.6
Library version
openai v3