System Prompts — How to Write Effective System Messages
Introduction
System prompts define how LLMs behave. This guide covers writing effective system instructions that produce consistent, high-quality outputs.
- Basic System Prompt
- Effective System Prompt Structure
- Role-Based System Prompts
- Constraint-Based System Prompts
- Format Specification in System Prompts
- Persona-Based System Prompts
- Guardrail System Prompts
- Multi-Step Instruction System Prompts
- Testing System Prompts
- System Prompt Best Practices
- Conclusion
- FAQ
Basic System Prompt
from openai import OpenAI

client = OpenAI()

# Without a system prompt
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ]
)

# With a system prompt
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are an expert quantum physicist explaining concepts to high school students."
        },
        {"role": "user", "content": "Explain quantum computing"}
    ]
)
Effective System Prompt Structure
system_prompt = """You are a professional customer support specialist.
Your characteristics:
- Knowledgeable about all products and services
- Empathetic and patient with customers
- Professional but friendly tone
- Solution-oriented
Your guidelines:
- Always acknowledge the customer's issue
- Provide clear, concise solutions
- Offer follow-up support if needed
- Use simple language avoiding jargon
Output format:
1. Acknowledgment
2. Solution
3. Follow-up offer"""
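In practice a structured prompt like this is paired with each user message; a small helper (the function name here is my own, not part of any SDK) keeps that wiring in one place:

```python
def build_messages(system_prompt: str, user_query: str) -> list:
    """Pair a reusable system prompt with a single user query."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "You are a professional customer support specialist.",
    "My order arrived damaged.",
)
```

The returned list can be passed straight to `client.chat.completions.create(...)` as its `messages` argument.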
Role-Based System Prompts
roles = {
    "technical_writer": """You are a technical writer specializing in API documentation.
Write clear, precise documentation with examples.
Include: description, parameters, return values, example code.""",
    "code_reviewer": """You are an experienced code reviewer.
Evaluate code for: correctness, performance, readability, security.
Provide constructive feedback with specific suggestions.""",
    "data_analyst": """You are a data analyst with 10 years' experience.
Analyze data for patterns, trends, and insights.
Always verify assumptions and provide confidence levels.""",
    "marketing_expert": """You are a creative marketing strategist.
Develop compelling messages that resonate with the target audience.
Consider brand voice, audience psychology, and call-to-action."""
}

def get_response_for_role(role: str, prompt: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": roles[role]},
            {"role": "user", "content": prompt}
        ]
    )
Constraint-Based System Prompts
constrained_system = """You are a helpful assistant with these constraints:
- Output exactly 3 paragraphs
- Use simple vocabulary (8th grade level)
- Include at least one example
- No jargon or technical terms
- End with a question to engage the reader"""
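Constraints like these are valuable precisely because some of them can be checked mechanically. A rough validator (my own sketch, not part of any library) for the paragraph-count and closing-question rules:

```python
def check_constraints(output: str) -> dict:
    """Spot-check two mechanical constraints: exactly 3 paragraphs, ends with a question."""
    paragraphs = [p for p in output.split("\n\n") if p.strip()]
    return {
        "three_paragraphs": len(paragraphs) == 3,
        "ends_with_question": output.rstrip().endswith("?"),
    }

sample = "First idea.\n\nSecond idea with an example.\n\nWhat do you think?"
result = check_constraints(sample)
```

Vocabulary-level and jargon constraints are harder to verify automatically; those usually need human review or a readability metric.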
Format Specification in System Prompts
json_system = """You are a JSON generator. Always respond with valid JSON only.
Format: {"response": "answer", "confidence": 0-100, "sources": []}
Rules:
- Confidence reflects certainty level
- Sources are relevant references
- No additional text outside JSON
- Escape special characters properly"""
import json

def get_json_response(query: str):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": json_system},
            {"role": "user", "content": query}
        ]
    )
    return json.loads(response.choices[0].message.content)
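Models occasionally wrap the JSON in a markdown code fence despite instructions to the contrary, which makes a bare `json.loads` raise. A defensive parse step (the fence-stripping heuristic is my own, not an OpenAI feature) tolerates that failure mode:

```python
import json

def parse_json_reply(raw: str) -> dict:
    """Parse a model reply as JSON, tolerating ```json fences around it."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

reply = '```json\n{"response": "42", "confidence": 90, "sources": []}\n```'
data = parse_json_reply(reply)
```

Newer API versions also offer structured-output modes that constrain the model to valid JSON; where available, those are more reliable than prompt instructions alone.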
Persona-Based System Prompts
personas = {
    "pirate": "You are a pirate. Speak like a pirate with nautical references.",
    "shakespeare": "You speak in Shakespearean English with dramatic flair.",
    "scientist": "You are a skeptical scientist. Require evidence for claims.",
    "comedian": "You are a stand-up comedian. Make responses funny and engaging."
}
Guardrail System Prompts
guardrail_system = """You are a helpful assistant with these safety guidelines:
DO NOT:
- Provide instructions for illegal activities
- Create content that's sexually explicit
- Discriminate based on protected characteristics
- Help with academic dishonesty
INSTEAD:
- Politely decline and explain why
- Suggest legal alternatives if applicable
- Offer to help with related legitimate tasks"""
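Regression-testing a guardrail prompt requires some way to recognize a refusal. A crude keyword heuristic (a placeholder of my own; serious evaluations typically use a classifier or an LLM judge) is enough to catch obvious regressions:

```python
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able", "i won't")

def looks_like_refusal(output: str) -> bool:
    """Heuristically flag outputs that decline a request."""
    lowered = output.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)
```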
Multi-Step Instruction System Prompts
instruction_system = """You are a task executor. Follow these steps for every request:
1. UNDERSTAND: Clarify what's being asked
2. PLAN: Outline your approach
3. EXECUTE: Provide the solution
4. VERIFY: Check your work
5. EXPLAIN: Describe your reasoning
Format your response with clear headings for each step."""
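Because the prompt demands a heading per step, the reply can be split back into sections downstream. A sketch (my own parsing convention) that assumes each heading appears on its own line:

```python
STEPS = ["UNDERSTAND", "PLAN", "EXECUTE", "VERIFY", "EXPLAIN"]

def split_steps(output: str) -> dict:
    """Group response lines under the most recent step heading."""
    sections, current = {}, None
    for line in output.splitlines():
        heading = line.strip().rstrip(":").upper()
        if heading in STEPS:
            current = heading
            sections[current] = []
        elif current:
            sections[current].append(line)
    return {step: "\n".join(lines).strip() for step, lines in sections.items()}

reply = "UNDERSTAND:\nYou want a sum.\nPLAN:\nAdd the numbers."
parsed = split_steps(reply)
```

Parsers like this are fragile against free-form output, which is exactly why the system prompt pins down the heading format first.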
Testing System Prompts
def test_system_prompt(system_prompt: str, test_cases: list):
    """Test effectiveness of a system prompt."""
    results = []
    for test in test_cases:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": test["prompt"]}
            ]
        )
        output = response.choices[0].message.content
        # Check if output meets expectations; a missing check passes by default
        meets_format = test.get("check_format", lambda _: True)(output)
        is_accurate = test.get("check_accuracy", lambda _: True)(output)
        results.append({
            "prompt": test["prompt"],
            "format_ok": meets_format,
            "accuracy": is_accurate
        })
    return results
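A test case is just a prompt plus validator callables. A sample set (the checks are illustrative, tied to the support-prompt template shown earlier) that could be passed to `test_system_prompt`; the validators can also be exercised directly against a canned output, without any API call:

```python
test_cases = [
    {
        "prompt": "Summarize our refund policy.",
        # The acknowledgment/solution/follow-up template implies at least 3 lines
        "check_format": lambda out: len(out.strip().splitlines()) >= 3,
        "check_accuracy": lambda out: "refund" in out.lower(),
    },
]

canned = (
    "I understand your concern.\n"
    "Our refund policy allows returns within 30 days.\n"
    "Is there anything else I can help with?"
)
format_ok = test_cases[0]["check_format"](canned)
accuracy_ok = test_cases[0]["check_accuracy"](canned)
```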
System Prompt Best Practices
- Be specific: Define exact behavior
- Provide examples: Show desired output
- Set constraints: What NOT to do
- Use clear language: Avoid ambiguity
- Test iteratively: Refine based on results
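These practices suggest assembling prompts from labeled parts rather than editing one monolithic string. A minimal builder (my own convention, not a standard) makes each part easy to refine and test independently:

```python
def build_system_prompt(role: str, guidelines: list, constraints: list) -> str:
    """Compose a system prompt from a role, guidelines, and explicit constraints."""
    lines = [role, "", "Guidelines:"]
    lines += [f"- {g}" for g in guidelines]
    lines += ["", "Do not:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_system_prompt(
    "You are a customer support specialist.",
    ["Acknowledge the issue first", "Use simple language"],
    ["Share internal policies", "Promise refunds you cannot verify"],
)
```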
Conclusion
System prompts are fundamental to LLM behavior shaping. Invest time in crafting clear, specific system instructions for best results.
FAQ
Q: Do system prompts always work? A: No. Adherence varies by model; larger, newer models generally follow system instructions more reliably than older ones. Always test with the specific model you deploy.
Q: Can system prompts override user input? A: Not reliably. System prompts steer default behavior, but adversarial user input (prompt injection) can sometimes override them, so do not rely on a system prompt alone as a security boundary.
Q: Should I use system prompts for production? A: Yes, always use system prompts in production for consistency and control.