LLM Prompt Engineering — Advanced Techniques

Sanjeev Sharma
4 min read


Introduction

Prompt engineering is an art and science. Crafting the right prompts dramatically improves LLM outputs. This guide covers proven techniques used by AI professionals.

Basic Principles

Clarity: Be specific about what you want
Context: Provide relevant background
Format: Specify the output format
Examples: Show the desired behavior

# Bad prompt
"Tell me about AI"

# Good prompt
"""Explain artificial intelligence in 2-3 paragraphs.
Focus on: definition, applications, limitations.
Use simple language suitable for a high school student."""

Chain-of-Thought Prompting

Asking LLMs to show reasoning improves accuracy:

# Without CoT
prompt = "What is 15 * 8 * 3?"

# With CoT — ask the question first, then show the reasoning format you want
prompt = """What is 15 * 8 * 3?
Let's think through this step by step:
15 * 8 = 120
120 * 3 = 360
So the answer is 360."""
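For zero-shot CoT, a simple cue appended to any question is often enough. A minimal sketch (`with_cot` is a hypothetical helper name, not a library function):

```python
def with_cot(question: str) -> str:
    """Append a zero-shot chain-of-thought cue to any question."""
    return f"{question}\nLet's think through this step by step."

prompt = with_cot("What is 15 * 8 * 3?")
```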

Few-Shot Learning

Provide examples of desired output:

prompt = """Classify sentiment as positive, negative, or neutral.

Examples:
"I love this!" -> positive
"This is terrible" -> negative
"The product is okay" -> neutral

Now classify: "Amazing quality and fast shipping!"
"""
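In practice, few-shot prompts like this are usually assembled programmatically from a list of labeled examples. A sketch (`build_few_shot_prompt` is a hypothetical helper, not a library function):

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = [task, "", "Examples:"]
    for text, label in examples:
        lines.append(f'"{text}" -> {label}')
    lines += ["", f'Now classify: "{query}"']
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify sentiment as positive, negative, or neutral.",
    [("I love this!", "positive"), ("This is terrible", "negative")],
    "Amazing quality and fast shipping!",
)
```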

System Prompts

Set context and behavior:

system_prompt = """You are an expert Python programmer.
Provide only working code.
Include docstrings for all functions.
Use modern Python 3.10+ features."""

user_prompt = "Write a function to check if a string is a palindrome"
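In the OpenAI-style chat format, the system and user prompts travel as separate messages in one list, with the system message setting behavior and the user message carrying the task:

```python
system_prompt = (
    "You are an expert Python programmer. "
    "Provide only working code with docstrings."
)
user_prompt = "Write a function to check if a string is a palindrome"

# The system message sets behavior; the user message carries the task.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```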

Role-Playing Prompts

Have LLMs adopt personas:

prompt = """You are a venture capital investor with 20 years experience.
Evaluate this startup pitch: [pitch]
Provide feedback on market opportunity, team, and financial projections."""

Structured Output

Force consistent formatting:

prompt = """Extract information from this text and return as JSON.

JSON format:
{
  "name": "string",
  "age": "number",
  "skills": ["string", "string"]
}

Text: John is 30 years old. He knows Python and JavaScript."""
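Models sometimes wrap the JSON in prose or code fences, so it helps to extract the object before parsing. A defensive sketch (`parse_json_response` is a hypothetical helper; the greedy regex assumes a single top-level object in the response):

```python
import json
import re

def parse_json_response(text: str) -> dict:
    """Pull the first JSON object out of a model response,
    tolerating surrounding prose or markdown fences."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

raw = 'Sure! Here is the JSON: {"name": "John", "age": 30, "skills": ["Python", "JavaScript"]}'
data = parse_json_response(raw)
```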

Negative Prompting

Tell LLM what NOT to do:

prompt = """Write a product description.
DON'T use clichés like "revolutionary" or "game-changing".
DON'T make unsupported claims.
DO focus on specific features and benefits."""

Temperature and Parameters

Control randomness:

from openai import OpenAI

client = OpenAI()

# For deterministic outputs (classification, extraction)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Classify this sentiment"}],
    temperature=0  # Most deterministic
)

# For creative outputs (writing, brainstorming)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a story"}],
    temperature=1.0  # More creative
)

Iterative Refinement

Improve outputs through feedback:

def refine_output(initial_prompt, feedback):
    """Refine output based on feedback."""
    refined_prompt = f"""{initial_prompt}

User feedback: {feedback}

Please revise your response addressing the feedback."""

    return refined_prompt
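A usage sketch of the refinement step (the helper is repeated here so the snippet runs standalone; in a real loop the revised prompt would be sent back to the model):

```python
def refine_output(initial_prompt: str, feedback: str) -> str:
    """Refine output based on feedback."""
    return (
        f"{initial_prompt}\n\n"
        f"User feedback: {feedback}\n\n"
        "Please revise your response addressing the feedback."
    )

prompt = "Summarize the report in three bullet points."
revised = refine_output(prompt, "Too long - keep each bullet under 15 words.")
```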

Prompt Patterns

Expert Pattern

You are an expert in [domain].
[Task description]
[Specific requirements]

Situation-Complication-Resolution

Situation: [Background]
Complication: [Problem]
Resolution needed: [Desired outcome]

Template Pattern

Use this template for all responses:
[Structure defined]

Now: [Actual task]
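Patterns like these map naturally onto template strings, so the structure is fixed and only the slots vary. A sketch of the Expert pattern (the filled-in values are illustrative):

```python
EXPERT_PATTERN = (
    "You are an expert in {domain}.\n"
    "{task}\n"
    "{requirements}"
)

prompt = EXPERT_PATTERN.format(
    domain="database design",
    task="Review this schema for a bookings app.",
    requirements="Flag normalization issues and missing indexes.",
)
```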

Advanced Techniques

Priming: Start with successful examples
Anchoring: Mention relevant concepts first
Decomposition: Break complex tasks into steps
Self-consistency: Sample multiple outputs and take the consensus
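The self-consistency step reduces to a majority vote over sampled answers. A minimal sketch (`self_consistency` is a hypothetical helper name; in practice the answers would come from repeated API calls at a nonzero temperature):

```python
from collections import Counter

def self_consistency(answers: list[str]) -> str:
    """Return the most common answer across multiple sampled outputs."""
    return Counter(answers).most_common(1)[0][0]

majority = self_consistency(["360", "360", "350"])
```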

Measuring Prompt Quality

def evaluate_prompt_quality(responses: list[str]) -> dict:
    """Evaluate a non-empty batch of responses to the same prompt."""
    lengths = [len(r) for r in responses]
    return {
        "consistency": len(set(responses)) == 1,  # identical responses = stable prompt
        "length_variance": max(lengths) / max(min(lengths), 1),  # guard against empty strings
        "relevance": evaluate_relevance(responses),  # evaluate_relevance: supply your own scorer
    }

Common Pitfalls

  1. Too vague: Specify exactly what you want
  2. Too long: Keep prompts concise
  3. Contradictory: Ensure instructions don't conflict
  4. Ambiguous: One interpretation only
  5. Low-quality examples: Good examples matter

Conclusion

Prompt engineering is a skill that improves with practice. The best prompts are clear, specific, and provide context.

FAQ

Q: How much do prompts matter? A: Significantly. A well-crafted prompt can be the difference between a vague, unusable answer and a precise one.

Q: Should I use system prompts? A: Yes, they set context and improve consistency.

Q: How do I know if my prompt is good? A: Test it multiple times and iterate based on results.


Written by Sanjeev Sharma
Full Stack Engineer · E-mopro