Update RunRequest class to accept new create run parameters #360

Open
wants to merge 1 commit into base: main

Conversation

saluzafa

@saluzafa saluzafa commented Jun 21, 2024

Q A
Bug fix? no
New feature? yes
BC breaks? no
Related Issue

Describe your change

Added properties to RunRequest class to reflect new parameters introduced in Assistant run API.
See https://platform.openai.com/docs/api-reference/runs/createRun

List of added parameters:

  • temperature
  • top_p
  • max_prompt_tokens
  • max_completion_tokens
  • truncation_strategy
  • response_format

What problem is this fixing?

It allows using the new run API parameters.
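To illustrate the shape of the new parameters, here is a self-contained sketch. The class below is a stand-in, not the library's actual `RunRequest`; field names follow the PR, and the types (`Double` for the sampling parameters, `Long` for the token limits) follow the review discussion further down:

```kotlin
// Stand-in for the library's RunRequest, showing the new optional parameters.
// This is an illustrative sketch only, not the actual library class.
data class RunRequestSketch(
    val assistantId: String,
    val temperature: Double? = null,      // sampling temperature, between 0 and 2
    val topP: Double? = null,             // nucleus sampling probability mass
    val maxPromptTokens: Long? = null,    // run ends `incomplete` if exceeded
    val maxCompletionTokens: Long? = null,
)

fun main() {
    // Hypothetical assistant id; all new parameters remain optional.
    val request = RunRequestSketch(
        assistantId = "asst_abc123",
        temperature = 0.2,
        topP = 1.0,
        maxPromptTokens = 2048,
    )
    println(request.temperature)
}
```

Because every new field defaults to `null`, existing call sites keep compiling unchanged, which matches the "BC breaks? no" claim above.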

added parameters:
- temperature
- top_p
- max_prompt_tokens
- max_completion_tokens
- truncation_strategy
- response_format
@saluzafa
Author

Hello @aallam ! :)

A small contribution to this amazing library, thank you for creating it!

I didn't update the CHANGELOG.md file because I wasn't sure which version number to use; please let me know what you'd prefer :).

Cheers!

Owner

@aallam aallam left a comment


Thank you for your contribution! 🙌
I've added an Unreleased section in the changelog :)

/**
 * What sampling temperature to use, between 0 and 2.
 * Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
 */
@SerialName("temperature") val temperature: Int? = null,
Owner

Suggested change
@SerialName("temperature") val temperature: Int? = null,
@SerialName("temperature") val temperature: Double? = null,

I believe this should be a floating-point number

/**
 * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
 * So 0.1 means only the tokens comprising the top 10% probability mass are considered.
 */
@SerialName("top_p") val topP: Int? = null,
Owner

Suggested change
@SerialName("top_p") val topP: Int? = null,
@SerialName("top_p") val topP: Double? = null,

Same here :)

/**
 * If the run exceeds the number of prompt tokens specified, the run will end with status `incomplete`.
 * See `incomplete_details` for more info.
 */
@SerialName("max_prompt_tokens") val maxPromptTokens: Int? = null,
Owner

Suggested change
@SerialName("max_prompt_tokens") val maxPromptTokens: Int? = null,
@SerialName("max_prompt_tokens") val maxPromptTokens: Long? = null,

I would suggest using Longs here

/**
 * If the run exceeds the number of completion tokens specified, the run will end with status `incomplete`.
 * See `incomplete_details` for more info.
 */
@SerialName("max_completion_tokens") val maxCompletionTokens: Int? = null,
Owner

Suggested change
@SerialName("max_completion_tokens") val maxCompletionTokens: Int? = null,
@SerialName("max_completion_tokens") val maxCompletionTokens: Long? = null,

Same suggestion here

Comment on lines +14 to +20
@SerialName("type") val type: String,

/**
* The number of most recent messages from the thread when constructing the context for the run.
*/
@SerialName("last_messages") val lastMessages: Int? = null
)
Owner

Maybe we can improve a little bit the API here, with something like this:

Suggested change
@SerialName("type") val type: String,
/**
* The number of most recent messages from the thread when constructing the context for the run.
*/
@SerialName("last_messages") val lastMessages: Int? = null
)
@SerialName("type") val type: TruncationStrategyType,
/**
* The number of most recent messages from the thread when constructing the context for the run.
*/
@SerialName("last_messages") val lastMessages: Int? = null
)
/**
 * The truncation strategy to use for the thread.
 */
@JvmInline
@Serializable
public value class TruncationStrategyType(public val value: String) {
    public companion object {
        public val Auto: TruncationStrategyType = TruncationStrategyType("auto")
        public val LastMessages: TruncationStrategyType = TruncationStrategyType("last_messages")
    }
}
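The suggested value class can be exercised in isolation. The sketch below omits the `@Serializable`/`@SerialName` annotations (which require the kotlinx.serialization compiler plugin) so it compiles against the plain Kotlin stdlib; the names mirror the suggestion above:

```kotlin
// Stand-alone sketch of the suggested TruncationStrategyType value class.
// The kotlinx.serialization annotations from the PR are omitted so this
// compiles with only the Kotlin standard library.
@JvmInline
value class TruncationStrategyType(val value: String) {
    companion object {
        val Auto = TruncationStrategyType("auto")
        val LastMessages = TruncationStrategyType("last_messages")
    }
}

// Illustrative holder mirroring the suggested TruncationStrategy shape:
// the wrapper keeps call sites type-safe while carrying a plain string value.
data class TruncationStrategy(
    val type: TruncationStrategyType,
    val lastMessages: Int? = null,
)

fun main() {
    val strategy = TruncationStrategy(TruncationStrategyType.LastMessages, lastMessages = 10)
    println(strategy.type.value)
}
```

Compared with a raw `String`, the inline class costs nothing at runtime on the JVM but prevents callers from passing arbitrary values like `"last_message"` (a typo) without an explicit wrap.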

@aallam aallam force-pushed the main branch 2 times, most recently from ba88d2a to 91f971a Compare July 20, 2024 09:50