Collection of Basic Prompt Templates for Various Chat LLMs | Chat LLM 的基础提示模板集合
Motivation: The basic prompt template significantly affects the effectiveness of instruction following. Models of different architectures may use different prompt templates during training. However, these templates can be hard to track down: sometimes they are embedded in example code, hidden in GitHub issues, or mentioned only in official blog posts...
Note
Chat Markup Language (ChatML) is the mainstream format, e.g. in HuggingFace's transformers and OpenAI's tiktoken. With recent versions of transformers, a model's own chat template can usually be rendered directly from its tokenizer, as sketched below.
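A minimal sketch, assuming transformers >= 4.34 (which introduced apply_chat_template) and using the Llama-3.1 checkpoint listed further below as an example:

```python
from transformers import AutoTokenizer

# Render the model's own chat template instead of hard-coding the strings.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the Llama-3 template listed below
```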
(Alphabetical order by architecture)
template = """<reserved_195>{query}<reserved_196>"""
- References:
- Model Site: https://huggingface.co/baichuan-inc
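A hedged usage sketch: fill the template and generate with plain transformers (the checkpoint name, dtype, device placement, and generation settings here are assumptions; adapt them to your setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

template = """<reserved_195>{query}<reserved_196>"""
model_name = "baichuan-inc/Baichuan-13B-Chat"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer(template.format(query="What is the capital of France?"), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```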
template = """<|system|>
You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.
<|user|>
{query}
<|assistant|>
"""
- References:
- Model Site: https://huggingface.co/THUDM/chatglm3-6b
- Note: Even if no special template is provided, instruction following still works very well.
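For multi-turn chats, a reasonable approach is to repeat the <|user|>/<|assistant|> blocks per turn; a minimal sketch (the helper and its exact turn layout are illustrative assumptions, not an official API):

```python
def build_chatglm3_prompt(history, query,
                          system="You are ChatGLM3, a large language model trained by Zhipu.AI."):
    """history is a list of (user, assistant) pairs; returns the prompt string."""
    prompt = f"<|system|>\n{system}\n"
    for user_turn, assistant_turn in history:
        prompt += f"<|user|>\n{user_turn}\n<|assistant|>\n{assistant_turn}\n"
    prompt += f"<|user|>\n{query}\n<|assistant|>\n"
    return prompt

print(build_chatglm3_prompt([("Hi!", "Hello! How can I help you?")],
                            "What is the capital of France?"))
```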
template = """<bos><start_of_turn>user
{query}<end_of_turn>
<start_of_turn>model"""
- References:
- Model Site: https://huggingface.co/google/gemma-2b-it
- Note: Gemma does not have a system field; a common workaround is to fold system-style instructions into the user turn, as sketched below.
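A minimal sketch of that workaround (the helper name and the separator between instruction and query are assumptions):

```python
GEMMA_TEMPLATE = """<bos><start_of_turn>user
{query}<end_of_turn>
<start_of_turn>model"""

def gemma_prompt(query, system_instruction=None):
    # Fold the "system" text into the user turn, since Gemma has no system field.
    if system_instruction:
        query = f"{system_instruction}\n\n{query}"
    return GEMMA_TEMPLATE.format(query=query)

print(gemma_prompt("What is the capital of France?", system_instruction="Answer concisely."))
```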
template = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{query}<|im_end|>
<|im_start|>assistant
"""
- References:
- Model Site: https://huggingface.co/internlm/internlm-chat-20b
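Since the system text is just part of the string, it is easy to parameterize; a small sketch of a helper for this ChatML-style layout (the helper itself is illustrative, and the same layout is shared by several models in this list):

```python
CHATML_TEMPLATE = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{query}<|im_end|>
<|im_start|>assistant
"""

def chatml_prompt(query, system="You are a helpful assistant."):
    return CHATML_TEMPLATE.format(system=system, query=query)

print(chatml_prompt("What is the capital of France?", system="You are a concise assistant."))
```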
template = """<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>> {query} [/INST] Sure, I'd be happy to help. Here is the answer:"""
- References:
- Model Site: https://huggingface.co/meta-llama/Llama-2-13b-chat
- Note: Pre-filling the assistant's reply with "Sure, I'd be happy to help. Here is the answer:" avoids verbose model outputs.
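For multi-turn conversations, the commonly cited Llama-2 convention wraps each user turn in [INST] ... [/INST] and each completed exchange in <s> ... </s>; a hedged sketch of such a prompt builder (the exact spacing is an assumption and may need adjustment):

```python
SYSTEM = "You are a helpful, respectful and honest assistant."

def build_llama2_prompt(history, query, system=SYSTEM):
    """history is a list of (user, assistant) pairs; returns the prompt string."""
    prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user_turn, assistant_turn) in enumerate(history):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user_turn} [/INST] {assistant_turn} </s>"
    if history:
        prompt += "<s>[INST] "
    prompt += f"{query} [/INST]"
    return prompt

print(build_llama2_prompt([("Hi!", "Hello! How can I help?")], "What is the capital of France?"))
```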
template = """Instruct: {query}\nOutput:"""
# or template = """Alice: {query}\nBob:"""
- References:
- Model Site: https://huggingface.co/microsoft/phi-2
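Because this is a plain completion-style format, the model may continue with a new Instruct: (or Alice:) turn after its answer; a hedged post-processing sketch that truncates at the next turn marker:

```python
def extract_phi2_answer(completion, markers=("\nInstruct:", "\nAlice:", "<|endoftext|>")):
    # Keep only the text before the model starts a new turn.
    for marker in markers:
        idx = completion.find(marker)
        if idx != -1:
            completion = completion[:idx]
    return completion.strip()

raw = "Paris is the capital of France.\nInstruct: Name another country."
print(extract_phi2_answer(raw))  # -> "Paris is the capital of France."
```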
template = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{query}<|im_end|>
<|im_start|>assistant
"""
- References:
- Model Site: https://huggingface.co/Qwen/Qwen-7B-Chat
- Note: Qwen may output special tokens such as <|im_end|> or <|endoftext|>; you can remove the text after them, or use the stop parameter, e.g.:
# Using the vLLM interface
import json

# Render the full ChatML prompt first (template defined above); the query text is illustrative.
query = template.format(query="What is the capital of France?")

payload = json.dumps({
    "prompt": query,
    "n": 1,
    "stop": ["<|endoftext|>", "<|im_end|>"],
})
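To send the request, POST the payload to a running vLLM API server; the host, port, and /generate endpoint below follow vLLM's demo api_server and are assumptions to adapt to your deployment:

```python
import requests

response = requests.post(
    "http://localhost:8000/generate",  # assumed vLLM api_server address
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(response.json())  # generation stops at the stop strings above
```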
template = """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
- References:
- Model Site: https://huggingface.co/01-ai/Yi-34B-Chat
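This variant also parameterizes the system message; a trivial usage sketch:

```python
template = """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

text = template.format(
    system_message="You are a helpful assistant.",
    prompt="What is the capital of France?",
)
print(text)
```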
template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
{query}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
- References: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/#llama-3-instruct
- Model Site: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- Note:
- Newlines (0x0A) are part of the prompt format; for clarity in the example, they have been represented as actual new lines.
- The model expects the assistant header at the end of the prompt to start completing it.
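Generation should stop at <|eot_id|> as well as the regular EOS token; a hedged transformers sketch (loading details such as dtype and device_map are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
What is the capital of France?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
# add_special_tokens=False: the prompt already contains <|begin_of_text|>.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

# Stop at either the model's EOS token or the end-of-turn token.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(**inputs, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```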