
More robust chunking #318

Closed
chitalian opened this issue Mar 6, 2024 · 2 comments

Comments

@chitalian

In some environments, a server-sent event is not received as a single chunk; instead, the event is split across two (or more) network chunks.

Right now, the TypeScript SDK assumes each chunk is formatted as

event: ____
data: ____

However, this is not always the case. Can we add a more robust chunking algorithm that waits for all the parts of a chunk?
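To make the failure mode concrete, here is a hypothetical illustration (the event name and payload are made up for this example) of a single SSE event arriving as two network chunks:

```typescript
// Hypothetical example: one SSE event split across two network chunks.
// The `data:` payload is only valid JSON once both chunks are joined.
const chunk1 = 'event: content_block_delta\ndata: {"type":"content_bl';
const chunk2 = 'ock_delta","text":"Hi"}\n\n';

// A parser that treats each chunk as a complete event would try to
// JSON.parse the truncated payload in chunk1 and throw; only the
// concatenation of both chunks contains a parseable event.
const reassembled = chunk1 + chunk2;
```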

Proposed fix

Within `_createMessage` (`protected async _createMessage(`), we can wait for the whole chunk to arrive before emitting the next stream event.
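A minimal sketch of what such buffering could look like (an illustration under stated assumptions, not the SDK's actual implementation): accumulate raw text across chunks and emit an event only once its terminating blank line has arrived.

```typescript
// Sketch of a buffering SSE decoder. Incoming chunks are appended to an
// internal buffer; complete events (terminated by a blank line, "\n\n")
// are parsed out and returned, while any trailing partial event stays
// buffered until the rest of it arrives.
type SSEEvent = { event: string; data: string };

class SSEDecoder {
  private buffer = "";

  // Feed one network chunk; returns all events completed so far.
  feed(chunk: string): SSEEvent[] {
    this.buffer += chunk;
    const events: SSEEvent[] = [];
    let sep: number;
    while ((sep = this.buffer.indexOf("\n\n")) !== -1) {
      const raw = this.buffer.slice(0, sep);
      this.buffer = this.buffer.slice(sep + 2);
      let event = "";
      const data: string[] = [];
      for (const line of raw.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      }
      events.push({ event, data: data.join("\n") });
    }
    return events;
  }
}
```

With this shape, `feed` returns an empty array for a chunk that ends mid-event, and the event is emitted on a later call once its closing blank line arrives.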

Additional notes

The OpenAI Python and TypeScript packages handle this.

The community-run Ruby OpenAI package runs into this issue: alexrudall/ruby-openai#411

@RobertCraigie
Collaborator

Hey @chitalian can you share an example of something that breaks the SDK streaming logic? The SDK should handle events that come through multiple chunks.

Maybe you ran into #292? (which will be fixed in the next release)

@rattrayalex
Collaborator

This should now be fixed; please try the latest release and provide details if you continue to see issues.
