Update the context window and max output token of gpt-3.5-turbo
#121
Conversation
`gpt-3.5-turbo` currently points to `gpt-3.5-turbo-0125`, which has a 16385-token context window and a maximum of 4096 output tokens. Reference: https://platform.openai.com/docs/models/gpt-3-5-turbo
Walkthrough
This update raises the token limits for the `gpt-3.5-turbo` model within the extension, increasing the `context` value from 4096 to 16385 while keeping the `max` value unchanged at 4096. The change lets the extension take advantage of the model's larger context window. Additionally, a comment regarding future updates for `gpt-35-turbo` has been removed, keeping the file focused on current model specifications.
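The change described above likely amounts to a one-entry update in a token-limit table. The following is a minimal sketch, assuming `MODEL_TOKEN_LIMITS` in `src/renderer/types.ts` maps model names to objects with `context` and `max` fields (the interface name `TokenLimits` is hypothetical; only the constant name and field names appear in the review below):

```typescript
// Hypothetical shape of the token-limit table in src/renderer/types.ts.
// Field names `context` and `max` come from the review comments; the
// interface name is an assumption for illustration.
interface TokenLimits {
  context: number; // total context window, in tokens
  max: number;     // maximum output (completion) tokens
}

const MODEL_TOKEN_LIMITS: Record<string, TokenLimits> = {
  // Before this PR, `context` was 4096; gpt-3.5-turbo-0125 supports 16385.
  "gpt-3.5-turbo": { context: 16385, max: 4096 },
};
```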
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- src/renderer/types.ts (1 hunks)
Additional comments not posted (1)
src/renderer/types.ts (1)
Lines 151-152: Update confirmed: increased `context` value for `gpt-3.5-turbo`.
The `context` value has been correctly updated to 16385, aligning with the PR objective. The `max` value remains unchanged at 4096. Ensure that other parts of the codebase referencing this constant are consistent with this update.
Verification successful
Update confirmed: increased `context` value for `gpt-3.5-turbo`.
The `context` value has been correctly updated to 16385, aligning with the PR objective. The `max` value remains unchanged at 4096. The references to `MODEL_TOKEN_LIMITS` and `gpt-3.5-turbo` are consistent across the codebase.
src/renderer/types.ts
src/helpers.ts
src/renderer/components/TokenCountPopup.tsx
src/renderer/components/ModelSelect.tsx
src/renderer/components/QuestionInputField.tsx
Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify consistency of the updated context value for 'gpt-3.5-turbo' across the codebase.

# Test: Search for references to 'MODEL_TOKEN_LIMITS' and 'gpt-3.5-turbo'.
# Expect: consistency with the updated context value.
rg --type ts -A 5 $'MODEL_TOKEN_LIMITS' | rg 'gpt-3.5-turbo'
```

Length of output: 57
Script:

```shell
#!/bin/bash
# Description: Verify consistency of the updated context value for 'gpt-3.5-turbo' across the codebase.

# Test: Search for all references to 'gpt-3.5-turbo' across the codebase.
rg --type ts 'gpt-3.5-turbo'

# Test: Search for all references to 'MODEL_TOKEN_LIMITS' across the codebase.
rg --type ts 'MODEL_TOKEN_LIMITS'
```

Length of output: 3912
Looks good, thank you.
I know we'll drop it at some point, but `gpt-3.5-turbo` currently points to `gpt-3.5-turbo-0125` with a 16385-token context window and a maximum of 4096 output tokens. It'll be good to make sure all the details are correct and not confusing to the users.
Reference:
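The distinction the comment draws between the context window and the output cap matters in practice: the input prompt and the reserved completion must both fit in the same 16385-token window. A small illustrative helper (not from the extension's codebase; `promptBudget` is a hypothetical name) shows the arithmetic:

```typescript
// Illustrative only: given a model's total context window and its maximum
// output token count, compute how many tokens remain for the input prompt
// once the full completion size is reserved.
function promptBudget(context: number, maxOutput: number): number {
  return context - maxOutput;
}

// For gpt-3.5-turbo-0125: 16385-token window, 4096 max output tokens,
// leaving 12289 tokens of input budget if the full output is reserved.
const budget = promptBudget(16385, 4096);
```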