Commit 84faee6

added some notes on addressing lazy behavior (#1902)
1 parent 4157b8e commit 84faee6

File tree: 1 file changed (+12, -0 lines)


examples/o-series/o3o4-mini_prompting_guide.ipynb

Lines changed: 12 additions & 0 deletions
@@ -163,6 +163,18 @@
 "Validate arguments against the format before sending the call; if you are unsure, ask for clarification instead of guessing.\n",
 "```\n",
 "\n",
+"3. Another note on lazy behavior\n",
+"We are aware of rare instances of lazy behavior from o3, such as stating it does not have enough time to complete a task, promising to follow up separately, or giving terse answers even when explicitly prompted to provide more detail. We have found that the following steps help ameliorate this behavior:\n",
+"\n",
+" a. Start a new conversation for unrelated topics:\n",
+" When switching to a new or unrelated topic, begin a fresh conversation thread rather than continuing in the same context. This helps the model focus on the current subject and prevents it from being influenced by previous, irrelevant context, which can sometimes lead to incomplete or lazy responses. For example, if you were previously discussing code debugging and now want to ask about documentation best practices, which does not require previous conversation context, start a new conversation to ensure clarity and focus.\n",
+"\n",
+" b. Discard irrelevant past tool calls/outputs when the list gets too long, and summarize them as context in the user message:\n",
+" If the conversation history contains a long list of previous tool calls or outputs that are no longer relevant, remove them from the context. Instead, provide a concise summary of the important information as part of the user message. This keeps the context manageable and ensures the model has access to only the most pertinent information. For instance, if you have a lengthy sequence of tool outputs, you can summarize the key results and include only that summary in your next message.\n",
+"\n",
+" c. We are constantly improving our models and expect to have this issue addressed in future versions.\n",
+"\n",
+"\n",
 "### Avoid Chain of Thought Prompting\n",
 "Since these models are reasoning models and produce an internal chain of thought, they do not have to be explicitly prompted to plan and reason between tool calls. Therefore, a developer should not try to induce additional reasoning before each function call by asking the model to plan more extensively. Asking a reasoning model to reason more may actually hurt performance.\n",
 "\n",

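The sketch below is a rough illustration of steps (a) and (b) from the diff above, assuming a Chat Completions-style list of message dicts. The helper names (`prune_history`, `summarize_tool_outputs`) and the pruning threshold are illustrative assumptions, not part of the guide or the OpenAI SDK.

```python
def is_tool_related(msg):
    """True for assistant tool-call turns and for tool-output messages."""
    return msg.get("role") == "tool" or bool(msg.get("tool_calls"))


def summarize_tool_outputs(tool_messages, max_chars=200):
    """Collapse tool-output messages into one short plain-text summary."""
    lines = [
        f"- {m.get('name', 'tool')}: {str(m.get('content', ''))[:max_chars]}"
        for m in tool_messages
    ]
    return "Summary of earlier tool results:\n" + "\n".join(lines)


def prune_history(messages, max_tool_messages=6):
    """Once tool-related turns exceed max_tool_messages, drop them all and
    splice a single summary into the history as a user message."""
    tool_related = [m for m in messages if is_tool_related(m)]
    if len(tool_related) <= max_tool_messages:
        return messages  # history is still short enough; leave it untouched

    summary = summarize_tool_outputs(
        [m for m in tool_related if m.get("role") == "tool"]
    )
    kept = [m for m in messages if not is_tool_related(m)]
    # Place the summary just before the latest user turn so the model still
    # sees the key results without the full tool-call transcript.
    kept.insert(max(len(kept) - 1, 0), {"role": "user", "content": summary})
    return kept


# Step (a): for an unrelated topic, start from a fresh history rather than
# appending to the old thread.
new_conversation = [
    {"role": "user", "content": "What are good practices for API documentation?"}
]
```

One possible usage pattern is to call `prune_history(messages)` before each new request; exactly which turns to keep versus summarize will depend on the task.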