Appendix 03: Google Gemini Prompting Guide Analysis

Deep analysis of Google’s official prompting guides for Gemini 2.0 Flash and Gemini 3 models with extracted techniques and examples
Author: Dario Airoldi

Published: January 20, 2026

This appendix provides a comprehensive analysis of Google’s official prompting documentation, extracting key techniques, patterns, and recommendations for Gemini 2.0 Flash and Gemini 3 models.

Guide Version: This analysis is based on Google AI documentation as of 2026-01-20. Official guides may have been updated since this analysis. Always verify with the official documentation.

📊 Model Overview

Gemini Model Family

| Model | Best For | Context Window | Key Characteristic |
|---|---|---|---|
| Gemini 2.0 Flash | Fast inference, multimodal | 1M+ tokens | Quick responses, visual reasoning |
| Gemini 2.5 Flash | Balanced performance | 1M+ tokens | Improved instruction following |
| Gemini 3 | Advanced reasoning, agentic | Context varies | Strong planning and execution |
| Gemini 3 Pro | Complex specialized tasks | Large | Highest capability in family |

Core Philosophy

Google’s prompting guidance emphasizes:

“Prompt engineering is iterative. These guidelines and templates are starting points. Experiment and refine based on your specific use cases and observed model responses.”

Fundamental Approach

Prompt design is the process of creating natural language requests that elicit accurate, high-quality responses. The key is providing clear and specific instructions with appropriate structure.

📝 Core Prompting Principles

Clear and Specific Instructions

Every effective prompt should include:

| Component | Purpose | Example |
|---|---|---|
| Input | What the model should process | Questions, tasks, entities, partial content |
| Constraints | Limitations on the response | Length limits, format restrictions |
| Response Format | How output should be structured | Tables, lists, JSON, paragraphs |

Input Types

Gemini supports four primary input types:

| Type | Description | Example |
|---|---|---|
| Question | Direct inquiry | “What’s a good name for a flower shop?” |
| Task | Action to perform | “Create a list of camping essentials” |
| Entity | Object to process | “Classify these items as large or small” |
| Completion | Partial content to continue | “The capital of France is…” |

Input Examples

Question Input:

What's a good name for a flower shop that specializes in selling 
bouquets of dried flowers? Create a list of 5 options with just the names.

Task Input:

Give me a simple list of just the things that I must bring on a camping trip. 
The list should have 5 items.

Entity Input:

Classify the following items as [large, small]:
Elephant
Mouse
Snail

Constraints

Always specify limitations explicitly:

Summarize this text in one sentence:
Text: A quantum computer exploits quantum mechanical phenomena to perform 
calculations exponentially faster than any modern traditional computer...

Response Format

Control output structure through explicit instructions:

# System instruction
All questions should be answered comprehensively with details, 
unless the user requests a concise response specifically.

Or through format specifications:

  • “Format as a markdown table”
  • “Return only the code, no explanations”
  • “Output as valid JSON”
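When a prompt demands machine-readable output such as JSON, it is worth validating the response before using it downstream. A minimal sketch (the `extract_json` helper is illustrative, not part of any Gemini SDK):

```python
import json

def extract_json(response_text: str) -> dict:
    """Parse a model response that is expected to contain a JSON object.

    Models sometimes wrap JSON in markdown code fences even when asked
    not to, so an optional fence is stripped before parsing.
    """
    text = response_text.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1]      # drop the opening fence line
        text = text.rsplit("```", 1)[0]    # drop the closing fence
    return json.loads(text)

# A fenced response still parses cleanly.
order = extract_json('```json\n{"cheeseburger": 1, "fries": 1}\n```')
```

If parsing fails, one common recovery is to re-send the original format instruction together with the parse error as a repair prompt.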

🎯 Zero-Shot vs Few-Shot Prompting

When to Use Each

| Approach | Best For | Token Cost |
|---|---|---|
| Zero-shot | Clear, simple tasks | Lower |
| Few-shot | Format enforcement, pattern learning | Higher |

Recommendation: “We recommend that you always include few-shot examples in your prompts. Prompts without few-shot examples are likely to be less effective.”

Zero-Shot Example

Please choose the best explanation to the question:

Question: How is snow formed?
Explanation1: Snow is formed when water vapor in the air freezes into ice
crystals in the atmosphere, which can combine and grow into snowflakes...
Explanation2: Water vapor freezes into ice crystals forming snow.
Answer:

Few-Shot Example

Below are some examples showing a question, explanation, and answer format:

Question: Why is the sky blue?
Explanation1: The sky appears blue because of Rayleigh scattering...
Explanation2: Due to Rayleigh scattering effect.
Answer: Explanation2

Question: What is the cause of earthquakes?
Explanation1: Sudden release of energy in the Earth's crust.
Explanation2: Earthquakes happen when tectonic plates suddenly slip...
Answer: Explanation1

Now, answer the following question given the example format above:

Question: How is snow formed?
Explanation1: Snow is formed when water vapor freezes into ice crystals...
Explanation2: Water vapor freezes into ice crystals forming snow.
Answer:

Best Practices for Few-Shot

1. Optimal Number of Examples

  • Start with 2-5 examples
  • Add more only if needed
  • Warning: Too many examples can cause overfitting

2. Positive Patterns Over Anti-Patterns

Anti-pattern (avoid):

Don't end haikus with a question:
Haiku are fun
A short and simple poem
Don't you enjoy them?

Positive pattern (use):

Always end haikus with an assertion:
Haiku are fun
A short and simple poem
A joy to write

3. Consistent Formatting

Ensure identical structure across all examples:

  • Same delimiters
  • Same whitespace patterns
  • Same tag usage
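Consistent formatting is easiest to guarantee when the examples are assembled programmatically. A minimal sketch (the helper name and the `Input:`/`Output:` prefixes are illustrative choices, not a Gemini requirement):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt in which every example shares the same
    prefixes and the same blank-line delimiter."""
    blocks = [instruction]
    for example_input, example_output in examples:
        blocks.append(f"Input: {example_input}\nOutput: {example_output}")
    # The final block is left open for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(
    "Classify the text as one of: large, small.",
    [("Rhino", "large"), ("Mouse", "small")],
    "Elephant",
)
```

Because every example flows through the same f-string, delimiters, whitespace, and prefixes cannot drift apart as examples are added or edited.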

📐 Structured Prompting with Tags

Gemini responds well to both XML tags and Markdown for structure.

XML Style

<role>
You are a senior solution architect.
</role>

<constraints>
- No external libraries allowed.
- Python 3.11+ syntax only.
</constraints>

<task>
Design a caching layer for the provided API.
</task>

<output_format>
Return a single code block.
</output_format>

Markdown Style

# Identity
You are a senior solution architect.

# Constraints
- No external libraries allowed.
- Python 3.11+ syntax only.

# Output format
Return a single code block.

Prefix Patterns

Prefixes help the model understand semantic boundaries:

| Prefix Type | Purpose | Example |
|---|---|---|
| Input prefix | Mark input sections | “English:”, “Text:” |
| Output prefix | Guide output format | “JSON:”, “The answer is:” |
| Example prefix | Label few-shot examples | “Example 1:” |

Prefix Example

Classify the text as one of the following categories.
- large
- small

Text: Rhino
The answer is: large

Text: Mouse
The answer is: small

Text: Snail
The answer is: small

Text: Elephant
The answer is:

✂️ Completion Strategy (Prefilling)

The completion strategy controls output format by providing partial output for the model to continue.

Basic Completion

Create an outline for an essay about hummingbirds.

I. Introduction
*

The model will continue the established pattern.

Format Control

Valid fields are cheeseburger, hamburger, fries, and drink.

Order: Give me a cheeseburger and fries
Output:

{ "cheeseburger": 1, "fries": 1 }


Order: I want two burgers, a drink, and fries.
Output:

The model will match the JSON format from the example.

Benefits

  • Enforces specific output structures
  • Reduces need for explicit format instructions
  • Shows rather than tells the desired format
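In its simplest form, prefilling is just appending the partial output to the prompt text. A minimal sketch (the helper is illustrative):

```python
def with_prefill(prompt: str, partial_output: str) -> str:
    """Seed the response by ending the prompt with partial output
    for the model to continue."""
    return f"{prompt}\n\n{partial_output}"

seeded = with_prefill(
    "Create an outline for an essay about hummingbirds.",
    "I. Introduction\n*",
)
```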

📚 Context and Grounding

When to Add Context

Include context when the model needs:

  • Proprietary information not in training data
  • Specific constraints on what to reference
  • Domain-specific knowledge for the task

Context Example: Generic vs Grounded

Without Context (Generic Response)

What should I do to fix my disconnected wifi? 
The light on my Google Wifi router is yellow and blinking slowly.

Response: Generic troubleshooting steps…

With Context (Grounded Response)

Answer the question using the text below. Respond with only the text provided.

Question: What should I do to fix my disconnected wifi? 
The light on my Google Wifi router is yellow and blinking slowly.

Text:
Color: Slowly pulsing yellow
What it means: There is a network error.
What to do:
Check that the Ethernet cable is connected to both your router and your modem 
and both devices are turned on. You might need to unplug and plug in each 
device again.

Color: Fast blinking yellow
What it means: You are holding down the reset button...
[Additional reference text]

Response: Specific answer from the provided text.

Best Practice

Use the instruction: “Respond with only the text provided” to prevent the model from adding information beyond the given context.

🔀 Breaking Down Complex Prompts

For complex tasks, decompose into manageable components.

Three Decomposition Strategies

1. Break Down Instructions

Instead of one complex prompt, create separate prompts for distinct tasks:

Prompt A: Extract entities from document
Prompt B: Classify entities by type
Prompt C: Generate summary based on entities

2. Chain Prompts

Sequential processing where each output feeds the next:

Step 1 Output → Step 2 Input → Step 2 Output → Step 3 Input → Final
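The chaining pattern can be sketched as a loop in which each model call receives the previous output. `call_model` is a stand-in for a real API call, and the `{input}` slot in each template is an illustrative convention:

```python
def run_chain(call_model, step_templates, initial_input):
    """Run prompt templates sequentially, feeding each output into the next."""
    data = initial_input
    for template in step_templates:
        data = call_model(template.format(input=data))
    return data

# Offline demonstration with a toy stand-in for the model.
echo_model = lambda prompt: prompt
final = run_chain(
    echo_model,
    ["Extract entities from: {input}", "Summarize: {input}"],
    "source document text",
)
```

Keeping each step as a separate template also makes intermediate outputs available for logging and debugging, which a single monolithic prompt cannot offer.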

3. Aggregate Responses

Parallel processing with aggregation:

          ┌─── Prompt A (first half) ──► Output A ───┐
Document ─┤                                           ├──► Aggregate
          └─── Prompt B (second half) ─► Output B ───┘
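The aggregation pattern maps naturally onto parallel execution. A sketch, again using a stand-in `call_model` and illustrative `{input}` templates:

```python
from concurrent.futures import ThreadPoolExecutor

def map_reduce_prompts(call_model, chunks, map_template, reduce_template):
    """Prompt each chunk in parallel, then aggregate the partial outputs."""
    with ThreadPoolExecutor() as pool:
        partials = list(
            pool.map(lambda chunk: call_model(map_template.format(input=chunk)), chunks)
        )
    # A final call merges all partial outputs into one response.
    return call_model(reduce_template.format(input="\n".join(partials)))
```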

⚙️ Model Parameters

Key Parameters

| Parameter | Purpose | Recommendation |
|---|---|---|
| max_output_tokens | Limit response length | Set based on expected output |
| temperature | Randomness in generation | 0 = deterministic, 1 = creative |
| topK | Tokens considered per step | Lower = more focused |
| topP | Cumulative probability threshold | 0.95 default |
| stop_sequences | When to stop generating | Define end markers |
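As a concrete reference, the parameters above might be collected into a request configuration like the following. The snake_case names are one common convention; exact naming differs between SDKs (e.g. `topK` in REST vs `top_k` in Python), so check the SDK you use:

```python
# Illustrative configuration for a deterministic, bounded response.
generation_config = {
    "max_output_tokens": 1024,     # cap on response length
    "temperature": 0.0,            # deterministic output for factual tasks
    "top_k": 40,                   # only the 40 most likely tokens per step
    "top_p": 0.95,                 # nucleus sampling threshold (default)
    "stop_sequences": ["<END>"],   # stop generating at this marker
}
```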

Temperature Guidelines

| Value | Use Case |
|---|---|
| 0 | Factual responses, code generation |
| 0.5 | Balanced creativity |
| 1.0 | Creative writing, brainstorming |

⚠️ Gemini 3 Warning: “When using Gemini 3 models, we strongly recommend keeping the temperature at its default value of 1.0. Changing the temperature (setting it below 1.0) may lead to unexpected behavior.”

Things to Avoid

  • ❌ Don’t rely on models to generate factual information
  • ❌ Use with care on math and logic problems
  • ❌ Avoid setting non-default temperature for Gemini 3

🧠 Gemini 3 Specific Techniques

Gemini 3 models are designed for advanced reasoning and instruction following.

Core Principles for Gemini 3

| Principle | Description |
|---|---|
| Be precise and direct | State goals clearly, avoid persuasive language |
| Use consistent structure | XML or Markdown; pick one and stick with it |
| Define parameters | Explain ambiguous terms explicitly |
| Control verbosity | Default is direct; request detail if needed |
| Handle multimodal coherently | Treat text, images, audio as equal inputs |
| Prioritize critical instructions | Essential rules at the beginning |
| Structure for long contexts | Context first, questions last |
| Anchor context | “Based on the information above…” |

Gemini 3 Flash Specific Tips

Current Day Accuracy

For time-sensitive user queries that require up-to-date information, you
MUST follow the provided current time (date and year) when formulating
search queries in tool calls. Remember it is 2025 this year.

Knowledge Cutoff Awareness

Your knowledge cutoff date is January 2025.

Grounding Performance

You are a strictly grounded assistant limited to the information provided in
the User Context. In your answers, rely **only** on the facts that are
directly mentioned in that context. You must **not** access or utilize your
own knowledge or common sense to answer. Do not assume or infer from the
provided facts; simply report them exactly as they appear. Your answer must
be factual and fully truthful to the provided text, leaving absolutely no
room for speculation or interpretation.

Enhancing Reasoning and Planning

Explicit Planning

Before providing the final answer, please:
1. Parse the stated goal into distinct sub-tasks.
2. Check if the input information is complete.
3. Create a structured outline to achieve the goal.

Self-Critique

Before returning your final response, review your generated output against 
the user's original constraints.
1. Did I answer the user's *intent*, not just their literal words?
2. Is the tone authentic to the requested persona?

Complete Gemini 3 Template

System Instruction

<role>
You are Gemini 3, a specialized assistant for [Insert Domain].
You are precise, analytical, and persistent.
</role>

<instructions>
1. **Plan**: Analyze the task and create a step-by-step plan.
2. **Execute**: Carry out the plan.
3. **Validate**: Review your output against the user's task.
4. **Format**: Present the final answer in the requested structure.
</instructions>

<constraints>
- Verbosity: [Specify Low/Medium/High]
- Tone: [Specify Formal/Casual/Technical]
</constraints>

<output_format>
Structure your response as follows:
1. **Executive Summary**: [Short overview]
2. **Detailed Response**: [The main content]
</output_format>

User Prompt

<context>
[Insert relevant documents, code snippets, or background info here]
</context>

<task>
[Insert specific user request here]
</task>

<final_instruction>
Remember to think step-by-step before answering.
</final_instruction>
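The bracketed placeholders in the template lend themselves to simple programmatic filling. A minimal sketch (the helper and the failure check are illustrative):

```python
def fill_template(template: str, values: dict) -> str:
    """Substitute [Placeholder] slots and fail loudly if any remain."""
    for placeholder, value in values.items():
        template = template.replace(f"[{placeholder}]", value)
    # Catch slots that were forgotten rather than silently sending them.
    if "[Insert" in template or "[Specify" in template:
        raise ValueError("template still contains unfilled placeholders")
    return template

system_role = fill_template(
    "You are Gemini 3, a specialized assistant for [Insert Domain].",
    {"Insert Domain": "cloud cost optimization"},
)
```

Failing on unfilled placeholders is cheap insurance: a literal `[Insert Domain]` reaching the model tends to produce confused or generic responses.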

🤖 Agentic Workflow Optimization

For deep agentic workflows, Gemini provides specific guidance on controlling behavior.

Behavioral Dimensions to Configure

1. Reasoning and Strategy

| Aspect | What to Configure |
|---|---|
| Logical decomposition | How thoroughly to analyze constraints |
| Problem diagnosis | Depth of root cause analysis |
| Information exhaustiveness | Balance between thoroughness and speed |

2. Execution and Reliability

| Aspect | What to Configure |
|---|---|
| Adaptability | Stick to plan vs pivot on new data |
| Persistence | Retry attempts before giving up |
| Risk assessment | Distinguish read vs write operations |

3. Interaction and Output

| Aspect | What to Configure |
|---|---|
| Ambiguity handling | Assume vs ask for clarification |
| Verbosity | Explain actions vs silent execution |
| Precision | Exact figures vs estimates |

Agentic System Instruction Template

You are a very strong reasoner and planner. Use these critical instructions 
to structure your plans, thoughts, and responses.

Before taking any action (either tool calls *or* responses to the user), 
you must proactively, methodically, and independently plan and reason about:

1) Logical dependencies and constraints: Analyze the intended action against:
    1.1) Policy-based rules, mandatory prerequisites, and constraints.
    1.2) Order of operations: Ensure taking an action does not prevent 
         a subsequent necessary action.
    1.3) Other prerequisites (information and/or actions needed).
    1.4) Explicit user constraints or preferences.

2) Risk assessment: What are the consequences of taking the action? 
   Will the new state cause any future issues?
    2.1) For exploratory tasks (like searches), missing *optional* parameters 
         is a LOW risk. Prefer calling the tool with available information.

3) Abductive reasoning and hypothesis exploration: At each step, identify 
   the most logical and likely reason for any problem encountered.
    3.1) Look beyond immediate or obvious causes.
    3.2) Hypotheses may require additional research.

4) Outcome evaluation and adaptability: Does the previous observation 
   require any changes to your plan?
    4.1) If initial hypotheses are disproven, generate new ones.

5) Information availability: Incorporate all applicable sources:
    5.1) Using available tools and their capabilities
    5.2) All policies, rules, checklists, and constraints
    5.3) Previous observations and conversation history
    5.4) Information only available by asking the user

6) Precision and Grounding: Ensure reasoning is extremely precise.
    6.1) Verify claims by quoting exact applicable information.

7) Completeness: Ensure all requirements are exhaustively incorporated.
    7.1) Resolve conflicts using priority order.
    7.2) Avoid premature conclusions.

8) Persistence and patience: Do not give up unless all reasoning is exhausted.
    8.1) On *transient* errors, you *must* retry.
    8.2) On other errors, change strategy, don't repeat failed calls.

9) Inhibit your response: Only take action after all reasoning is completed.
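Points 8.1 and 8.2 above (retry transient failures; change strategy on everything else) map directly onto a small retry wrapper on the client side. A sketch; `TransientError` is an illustrative marker class, not a Gemini SDK type:

```python
import time

class TransientError(Exception):
    """Illustrative marker for retryable failures such as timeouts."""

def call_with_retry(action, max_attempts=3, backoff_seconds=0.0):
    """Retry transient errors with linear backoff (point 8.1); let any
    other exception propagate so the caller can change strategy (8.2)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * attempt)
```

Non-transient errors deliberately escape the loop unchanged: repeating an identical failed call, as point 8.2 notes, only burns attempts without new information.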

🔧 Practical Examples

Example 1: Order Parsing with Completion

Valid fields are cheeseburger, hamburger, fries, and drink.

Order: Give me a cheeseburger and fries
Output:

{ "cheeseburger": 1, "fries": 1 }


Order: I want two burgers, a drink, and fries.
Output:

Example 2: Multiple Choice Classification

Multiple choice problem: Which of the following options describes 
the book The Odyssey?

Options:
• thriller
• sci-fi
• mythology
• biography

Example 3: Grounded Troubleshooting

Answer the question using the text below. Respond with only the text provided.

Question: What should I do to fix my disconnected wifi? 
The light on my Google Wifi router is yellow and blinking slowly.

Text:
Color: Slowly pulsing yellow
What it means: There is a network error.
What to do:
Check that the Ethernet cable is connected to both your router and 
your modem and both devices are turned on. You might need to unplug 
and plug in each device again.

Example 4: Gemini 3 Structured Task

<role>
You are a code review specialist.
</role>

<constraints>
- Focus on security vulnerabilities only
- Python 3.11+ best practices
- Output severity ratings
</constraints>

<context>
<code language="python">
def process_user_input(user_data):
    query = f"SELECT * FROM users WHERE name = '{user_data}'"
    return db.execute(query)
</code>
</context>

<task>
Identify security vulnerabilities in the code above.
</task>

<output_format>
| Vulnerability | Severity | Line | Recommendation |
</output_format>

⚠️ Common Pitfalls

Pitfall 1: Inconsistent Example Formatting

Wrong:

Example 1: hello -> greeting
Example 2:
  input: goodbye
  output: farewell

Correct:

Example 1:
Input: hello
Output: greeting

Example 2:
Input: goodbye
Output: farewell

Pitfall 2: Anti-Pattern Examples

Wrong:

DON'T do this: [bad example]

Correct:

ALWAYS do this: [good example]

Pitfall 3: Changing Gemini 3 Temperature

Wrong:

from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3",
    contents=...,
    config=types.GenerateContentConfig(
        temperature=0.2,  # May cause loops or degraded performance
    ),
)

Correct:

from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3",
    contents=...,
    config=types.GenerateContentConfig(
        temperature=1.0,  # Keep default for Gemini 3
    ),
)

Pitfall 4: Missing Context Anchoring

Wrong:

<documents>
[Large context block]
</documents>
What is the main theme?

Correct:

<documents>
[Large context block]
</documents>

Based on the information above, what is the main theme?

Pitfall 5: Mixed Tag Styles

Wrong:

<role>
You are a helpful assistant.
</role>

# Instructions
- Be concise

<output>
Return JSON
</output>

Correct:

<role>
You are a helpful assistant.
</role>

<instructions>
- Be concise
</instructions>

<output>
Return JSON
</output>

📚 References

Interactive Resources

  • 📘 Google AI Studio [Official] — Interactive prompt development environment.

  • 📘 Prompt Gallery [Official] — Sample prompts showcasing key concepts.