How to Manage Information Flow During Prompt Orchestrations

Master information flow between prompts, agents, instructions, tools, and MCP servers with strategies for robustness, effectiveness, and token efficiency
Author: Dario Airoldi

Published: January 25, 2026

When orchestrating complex AI workflows, information flow is everything.
The most sophisticated prompt architecture fails if critical context doesn’t reach the right component at the right time.
This article provides a comprehensive guide to managing information flow across all GitHub Copilot customization components—prompts, agents, instructions, tools, MCP servers, and prompt-snippets.

You’ll learn how to design communication strategies that balance robustness (information doesn’t get lost), effectiveness (the right information reaches the right place), and token efficiency (minimal context window consumption).


🎯 Why information flow matters

The context problem

Every interaction with an AI model operates within a context window—a limited space where all relevant information must coexist.
In simple single-prompt scenarios, managing context is straightforward.
But in multi-agent orchestrations with handoffs, tool calls, and MCP integrations, information flow becomes the critical success factor.

The three failure modes

| Failure Mode | Symptom | Root Cause |
|---|---|---|
| Context Loss | Agent doesn't know about earlier decisions | Handoff didn't transfer critical context |
| Context Bloat | Responses become slow, expensive, or confused | Too much irrelevant context accumulated |
| Context Conflict | Agent receives contradictory instructions | Multiple sources provide conflicting guidance |

Underpinning all three failure modes is a deeper phenomenon: context rot.

Context rot: why context management is urgent

Context rot is the progressive degradation of model accuracy as the context window grows longer. It’s not a hypothetical risk—it’s a well-documented phenomenon with measurable benchmarks.

Research by Liu et al. (2023) in “Lost in the Middle: How Language Models Use Long Contexts” demonstrated that language models struggle to use information placed in the middle of long contexts. Burke Holland’s VS Code deep-dive (February 2026) quantified the practical impact:

  • At 32,000 tokens, accuracy can drop from 88% to 30%—even for capable models like Claude 3.5 Sonnet
  • VS Code limits context windows at certain thresholds specifically to maintain performance
  • Earlier instructions progressively lose influence as the conversation grows

This means that every additional token in the context window actively degrades the model’s ability to follow your earlier instructions. The three failure modes above—Context Loss, Context Bloat, and Context Conflict—are all accelerated by context rot.

Practical mitigations:

  • Start new chat sessions frequently to reset the context window
  • Clear context between workflow phases rather than carrying forward an ever-growing history
  • Transfer only essential information during handoffs (see the patterns below)
  • Use the right-sized model for each task—smaller models hit context rot thresholds faster

For strategies to reduce token consumption and slow context rot, see How to Optimize Token Consumption During Prompt Orchestrations.

The goal

Design information flow that ensures:

  1. Right information reaches each component
  2. Minimal tokens consumed for context transfer
  3. No critical data lost during transitions
  4. Clear priority when multiple sources provide guidance

🏗️ Information flow architecture

The context window as communication bus

Think of the context window as a shared communication bus where all components must place their messages.
Every prompt, instruction, tool result, and MCP response competes for space in this limited channel.

┌─────────────────────────────────────────────────────────────────────────┐
│                        CONTEXT WINDOW (128K-200K tokens)                │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
│ │  SYSTEM       │ │  INSTRUCTIONS │ │  USER         │ │  ASSISTANT    │ │
│ │  PROMPT       │ │  (auto-       │ │  MESSAGE      │ │  RESPONSE     │ │
│ │  (agent def)  │ │  injected)    │ │  (prompt +    │ │  (+ tool      │ │
│ │               │ │               │ │  context)     │ │  results)     │ │
│ └───────────────┘ └───────────────┘ └───────────────┘ └───────────────┘ │
│         ▲                 ▲                 ▲                 ▲         │
│         │                 │                 │                 │         │
│   Agent File       Instruction Files   Prompt File +      Tool Calls + │
│   (.agent.md)      (.instructions.md)  Prompt-Snippets   MCP Results   │
└─────────────────────────────────────────────────────────────────────────┘

Component roles in information flow

Before designing communication strategies, understand who contributes what to the context window.

The tables below split components into customization files (what you author) and tool sources (where capabilities come from). This distinction matters for three reasons:

  1. Some components disappear after one use (short persistence) — Prompt files are “single execution”: once the task completes, their content is gone. If the next agent needs that information, you must explicitly transfer it during handoff.

  2. Some components are auto-included, others require action — Agent files and instruction files (System → Model) are injected automatically by VS Code. Prompt files and snippet files (User → Model) require you to invoke or reference them explicitly.

  3. Tool results consume tokens (bidirectional tools) — When the model calls a tool and receives results, those results are added to the context window. Large file reads or search results can bloat context quickly.

Customization components (what you create)

The table below describes GitHub Copilot customization files and how context information flows into the model. For each component:

  • Direction tells you how it reaches the model: User → Model means you include it explicitly; System → Model means VS Code auto-injects it
  • Persistence tells you how long it stays active — and whether you need to transfer it during handoffs

| Component | What It Provides | Direction | Persistence |
|---|---|---|---|
| Prompt File | Task-specific instructions + variables | User → Model | Single execution |
| Agent File | Persona + default tools + behavior | System → Model | Session-wide |
| Instruction File | Coding standards + context rules | System → Model (auto) | Pattern-matched |
| Prompt-Snippet | Reusable context fragments | User → Model | On-demand |

Tool sources (where capabilities come from)

Unlike customization components (which provide instructions), tool sources provide actions the model can execute. The table below shows where these capabilities come from and their token impact. For each source:

  • Direction tells you how data flows: Model ↔︎ X means bidirectional (results add to context, consuming tokens); X → Model means one-way pull (lighter on tokens)

| Source | What It Provides | Direction | Persistence |
|---|---|---|---|
| Built-in Tools | VS Code actions (read_file, semantic_search, create_file) | Model ↔︎ VS Code | Per-invocation |
| MCP Server Tools | Custom actions via MCP protocol | Model ↔︎ Server | Connection lifetime |
| MCP Resources | External data access (configs, live data) | Server → Model | On-demand |
| MCP Prompts | Reusable templates from server | Server → Model | On-demand |

Key distinction: Tools are callable functions. MCP servers are containers that provide tools, resources, and prompts via the MCP protocol. Built-in tools come from VS Code; custom tools come from MCP servers you configure.

Now that you understand what each component contributes and when it’s active, the next sections explore how information flows between them—the communication pathways that connect prompts to agents, agents to tools, and phases to each other.

Information flow diagram

┌────────────────────────────────────────────────────────────────────────────┐
│                           ORCHESTRATOR PROMPT                              │
│                     (coordinates workflow phases)                          │
└──────────────────────────────┬─────────────────────────────────────────────┘
                               │
         ┌─────────────────────┼─────────────────────┐
         │                     │                     │
         ▼                     ▼                     ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│ AGENT A         │   │ AGENT B         │   │ AGENT C         │
│ (researcher)    │──►│ (builder)       │──►│ (validator)     │
│                 │   │                 │   │                 │
│ ┌─────────────┐ │   │ ┌─────────────┐ │   │ ┌─────────────┐ │
│ │Instructions │ │   │ │Instructions │ │   │ │Instructions │ │
│ │(auto-inject)│ │   │ │(auto-inject)│ │   │ │(auto-inject)│ │
│ └─────────────┘ │   │ └─────────────┘ │   │ └─────────────┘ │
│                 │   │                 │   │                 │
│ ┌─────────────┐ │   │ ┌─────────────┐ │   │ ┌─────────────┐ │
│ │Tools        │ │   │ │Tools        │ │   │ │Tools        │ │
│ │ • search    │ │   │ │ • create    │ │   │ │ • read      │ │
│ │ • fetch     │ │   │ │ • edit      │ │   │ │ • validate  │ │
│ └─────────────┘ │   │ └─────────────┘ │   │ └─────────────┘ │
│                 │   │                 │   │                 │
│ ┌─────────────┐ │   │ ┌─────────────┐ │   │ ┌─────────────┐ │
│ │MCP Servers  │ │   │ │MCP Servers  │ │   │ │MCP Servers  │ │
│ │ • iqpilot   │ │   │ │             │ │   │ │ • linter    │ │
│ └─────────────┘ │   │ └─────────────┘ │   │ └─────────────┘ │
└─────────────────┘   └─────────────────┘   └─────────────────┘
         │                     │                     │
         │    ┌────────────────┤                     │
         │    │                │                     │
         ▼    ▼                ▼                     ▼
    ┌──────────────────────────────────────────────────────┐
    │              HANDOFF CONTEXT                          │
    │  (conversation history, reports, file references)     │
    └──────────────────────────────────────────────────────┘

📡 Communication pathways

Prompt → Agent communication

When a prompt file references an agent via the agent field, information flows in a specific priority order.

Information priority chain

Prompt > Agent > Default — This is the fundamental rule for all configuration inheritance.

| Configuration | Priority | Example |
|---|---|---|
| Prompt tools | Highest | Prompt specifies `tools: ['fetch', 'codebase']` |
| Agent tools | Medium | Agent defines `tools: ['semantic_search', 'read_file']` |
| Default tools | Lowest | Built-in Copilot tools |
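
The Prompt > Agent > Default rule can be sketched as a small resolution function. This is an illustration only, not VS Code's actual implementation; the dict shapes and the `DEFAULT_CONFIG` values are assumptions:

```python
# Hypothetical sketch of the Prompt > Agent > Default resolution rule.
# The dict shapes are illustrative, not VS Code's internal format.

DEFAULT_CONFIG = {"tools": ["read_file", "semantic_search"], "model": "gpt-4o"}

def resolve_config(prompt_frontmatter, agent_frontmatter):
    """Return the effective config: prompt overrides agent, agent overrides defaults."""
    resolved = {}
    for key in ("tools", "model"):
        # Walk sources from highest to lowest priority; first hit wins.
        for source in (prompt_frontmatter, agent_frontmatter, DEFAULT_CONFIG):
            if source.get(key) is not None:
                resolved[key] = source[key]
                break
    return resolved

prompt = {"tools": ["codebase", "grep"], "model": None}
agent = {"tools": ["semantic_search", "read_file"], "model": "claude-sonnet-4"}

print(resolve_config(prompt, agent))
# tools come from the prompt (highest priority), model falls back to the agent
```

Note that the prompt's `model: None` falls through to the agent's model: omitting a field defers to the next source rather than clearing it.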

How prompt invokes agent

# review-security.prompt.md
---
name: security-review
agent: security-specialist    # ← References agent
tools: ['codebase', 'grep']   # ← Overrides agent's default tools
model: claude-sonnet-4        # ← Can override agent's model too
---

Review the following code for security vulnerabilities:

${selection}

Focus on: ${input:focus:What security aspect should I focus on?}

What flows from prompt to agent

| Data | Flow Mechanism | Notes |
|---|---|---|
| Task instructions | Prompt body → User message | The main task description |
| Variable values | Substituted at runtime | ${selection}, ${file}, ${input:...} |
| Tool restrictions | tools field override | Limits agent's available tools |
| Model selection | model field override | Changes from agent's default model |

What agent provides

| Data | Source | Notes |
|---|---|---|
| Persona/behavior | Agent body (Markdown) | System message defining role |
| Default tools | Agent tools field | Available unless prompt overrides |
| Handoff options | Agent handoffs field | Buttons for workflow transitions |

Agent → Agent (handoffs)

Handoffs are the critical juncture where information can be lost or bloated.
VS Code provides a binary handoff mechanism: either transfer the full conversation (send: true) or let the user control the handoff (send: false).

Handoff configuration

# builder.agent.md
---
name: builder
tools: ['create_file', 'replace_string_in_file']
handoffs:
  - label: "Validate Build"
    agent: validator
    send: true                  # ← Full conversation transfers
    prompt: |
      Validate the file created above.
      
      **Focus on** (from conversation above):
      - File structure correctness
      - Required sections present
      
      **Expected output format**:
      Validation report with PASS/FAIL status.
---

What flows during handoff

| send Setting | Data Transferred | Token Impact |
|---|---|---|
| send: true | Entire conversation history | High—all prior phases included |
| send: false | Nothing automatically | User manually pastes context |

Key Limitation: VS Code does not support selective context filtering.
You cannot specify “send only Phase 3 output” directly in the handoff configuration.

Handoff data flow diagram

┌─────────────────────────────────────────────────────────────────┐
│  AGENT A (Builder)                                              │
│                                                                 │
│  Conversation History:                                          │
│  ├── User request (Phase 1)           ~200 tokens               │
│  ├── Research report (Phase 2)        ~2,500 tokens             │
│  ├── Architecture spec (Phase 3)      ~1,500 tokens             │
│  └── Built file content (Phase 4)     ~2,000 tokens             │
│                                        ─────────                 │
│                                        ~6,200 tokens TOTAL       │
└─────────────────────────────────┬───────────────────────────────┘
                                  │
                    ┌─────────────┴─────────────┐
                    │     send: true            │
                    │                           │
                    ▼                           ▼
┌─────────────────────────────┐   ┌─────────────────────────────┐
│  AGENT B (Validator)        │   │  AGENT B (Validator)        │
│  send: true                 │   │  send: false                │
│                             │   │                             │
│  Receives:                  │   │  Receives:                  │
│  ALL 6,200 tokens           │   │  Only handoff prompt        │
│  + handoff prompt           │   │  (~100 tokens)              │
│                             │   │                             │
│  Total: ~6,300 tokens       │   │  User must paste context    │
└─────────────────────────────┘   └─────────────────────────────┘

Instructions integration

Instructions are automatically injected based on file patterns—you don’t explicitly invoke them.

How instructions flow

  1. Pattern matching: VS Code evaluates applyTo glob patterns against current file context
  2. Automatic injection: Matching instructions are added to the system prompt
  3. Cumulative application: Multiple matching instructions combine (no guaranteed order)
# react-guidelines.instructions.md
---
applyTo: "**/*.tsx,**/*.jsx"
description: "React component coding standards"
---

## React Component Guidelines

- Use functional components with TypeScript
- Define props interface before component
- Use React.FC<Props> for typing
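
The pattern-matching step above can be approximated in a few lines. This sketch uses Python's `fnmatch` as a rough stand-in for VS Code's glob engine (the real engine handles edge cases like `**/` matching root-level files), and the instruction-index shape is invented for illustration:

```python
from fnmatch import fnmatch

def matching_instruction_files(file_path, instruction_index):
    """Return instruction files whose applyTo globs match the current file.

    instruction_index maps filename -> applyTo string (comma-separated globs).
    fnmatch is a rough approximation of VS Code's glob matching.
    """
    matches = []
    for name, apply_to in instruction_index.items():
        patterns = [p.strip() for p in apply_to.split(",")]
        if any(fnmatch(file_path, p) for p in patterns):
            matches.append(name)
    return matches

index = {
    "react-guidelines.instructions.md": "**/*.tsx,**/*.jsx",
    "python-standards.instructions.md": "**/*.py",
}

print(matching_instruction_files("src/components/App.tsx", index))
# -> ['react-guidelines.instructions.md']
```

All matching files are collected, which mirrors the "cumulative application" behavior: nothing here imposes an ordering between matches.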

What instructions provide

| Data | Flow Mechanism | Visibility |
|---|---|---|
| Coding standards | System prompt injection | Silent (not shown in chat) |
| Tool suggestions | #tool:<tool-name> references | Guides tool selection |
| Best practices | Markdown content | Applied to all matching files |

Instructions don’t control

| Aspect | Why Not | Alternative |
|---|---|---|
| Tool restrictions | Instructions can't limit tools | Use agent tools field |
| Handoff behavior | No handoff configuration | Use agent handoffs field |
| Model selection | No model field | Use prompt or agent model field |

Tool invocations and results

Tools represent bidirectional communication between the model and external systems.

Tool invocation flow

┌──────────────────┐     Request      ┌──────────────────┐
│                  │ ───────────────► │                  │
│   AI MODEL       │                  │   TOOL           │
│   (Copilot)      │                  │   (e.g., fetch)  │
│                  │ ◄─────────────── │                  │
└──────────────────┘     Result       └──────────────────┘

What flows in tool invocation

| Direction | Data | Format |
|---|---|---|
| Request (Model → Tool) | Function name + arguments | JSON Schema validated |
| Result (Tool → Model) | Structured output | Text, JSON, or multi-part content |

Tool result handling

Tool results are added to the conversation context and consume tokens.
Large tool results (like full file reads or extensive search results) can quickly bloat the context.

## Tool Result Impact on Context

| Tool | Typical Result Size | Context Impact |
|------|--------------------|--------------------|
| `read_file` (50 lines) | ~500 tokens | Low |
| `read_file` (500 lines) | ~5,000 tokens | High |
| `semantic_search` (10 results) | ~2,000 tokens | Medium |
| `fetch_webpage` | ~3,000-10,000 tokens | Very High |
| `grep_search` (20 matches) | ~1,500 tokens | Medium |

Best practices for tool result handling

  1. Request specific line ranges for read_file instead of entire files
  2. Limit search results using maxResults parameter
  3. Summarize large results in intermediary reports before handoffs
  4. Use #tool: references to suggest but not mandate tool use
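
A pre-flight size check makes these practices concrete. The sketch below uses the common 4-characters-per-token rule of thumb (a heuristic, not an exact tokenizer) and a budget value chosen for illustration:

```python
# Rough pre-flight check before keeping a large tool result in context.
# The 4-chars-per-token ratio is a common heuristic, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def should_summarize(tool_result: str, budget: int = 2000) -> bool:
    """Flag results that would bloat context beyond the phase budget."""
    return estimate_tokens(tool_result) > budget

big_read = "x" * 40_000    # e.g. a 500-line file read
small_read = "x" * 1_000   # e.g. a 50-line excerpt

print(should_summarize(big_read))    # True  -> summarize before handoff
print(should_summarize(small_read))  # False -> safe to keep inline
```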

Terminal command boundaries

The run_in_terminal tool requires special attention because it can execute arbitrary shell commands.

⚠️ Warning: Always define explicit boundaries for agents with run_in_terminal access to prevent unintended system modifications.

Safe patterns:

# In agent instructions:
## Terminal Usage Boundaries

**ALLOWED commands**:
- `npm install`, `npm run build`, `npm test`
- `dotnet build`, `dotnet test`
- `git status`, `git diff`

**NEVER execute**:
- `rm -rf`, `del /s`, or any recursive deletion
- Commands that modify system configuration
- Commands that access credentials or secrets

Best practice: Create separate agents with restricted tool access—don’t give run_in_terminal to agents that don’t need it.


VS Code tasks and terminal commands

VS Code tasks (defined in .vscode/tasks.json) are not directly accessible to agents or prompts.
However, agents can achieve similar automation outcomes through other mechanisms.

| Approach | Use Case | Tool Required |
|---|---|---|
| Terminal commands | Run build scripts, npm/dotnet commands | run_in_terminal |
| MCP server actions | Execute complex build workflows | Custom MCP tool |
| Pre-workflow setup | Ensure environment is ready | User runs task manually |

Note: If your workflow requires build automation, consider creating an MCP server that wraps your build commands with proper error handling and status reporting.


MCP server communication

MCP (Model Context Protocol) enables external server communication with tools, resources, and prompts.

MCP communication flow

┌─────────────────────────────────────────────────────────────────┐
│  VS CODE (MCP Host)                                             │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │  MCP Client                                              │   │
│  │  (manages connection)                                    │   │
│  └──────────────────────────────┬──────────────────────────┘   │
│                                 │                               │
└─────────────────────────────────┼───────────────────────────────┘
                                  │ stdio or HTTP
                                  │
                                  ▼
┌─────────────────────────────────────────────────────────────────┐
│  MCP SERVER (e.g., IQPilot)                                     │
│                                                                 │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐          │
│  │   TOOLS      │  │  RESOURCES   │  │   PROMPTS    │          │
│  │  • validate  │  │  • config    │  │  • templates │          │
│  │  • analyze   │  │  • cache     │  │              │          │
│  └──────────────┘  └──────────────┘  └──────────────┘          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

What flows with MCP

| Primitive | Flow Direction | Data |
|---|---|---|
| Tools | Request → Execution → Result | Function calls with JSON Schema args |
| Resources | Request → Data | File contents, configurations, live data |
| Prompts | Discovery → Template | Reusable prompt templates |

MCP configuration in agents

# specialist.agent.md (for github-copilot target)
---
name: specialist
target: github-copilot
mcp-servers:
  - name: iqpilot
    command: dotnet
    args: ["run", "--project", "src/IQPilot"]
---

MCP result handling

MCP results follow the same pattern as tool results—they’re added to context and consume tokens.
Unlike built-in tools, MCP servers can return rich, multi-part content:

{
  "content": [
    { "type": "text", "text": "Analysis complete..." },
    { "type": "resource", "uri": "file:///report.md" },
    { "type": "image", "data": "base64...", "mimeType": "image/png" }
  ]
}
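
One way to keep multi-part results token-friendly is to inline only the text parts and replace resources and images with lightweight references. The sketch below is an illustration: the part shapes mirror the JSON example above, but the flattening function itself is invented, not part of any MCP SDK:

```python
# Sketch: flatten an MCP multi-part result into context-friendly text,
# keeping resources/images as short references instead of inlining them.
# Part shapes mirror the JSON example above; the function is illustrative.

def flatten_mcp_content(parts):
    lines = []
    for part in parts:
        if part["type"] == "text":
            lines.append(part["text"])
        elif part["type"] == "resource":
            lines.append(f"[resource: {part['uri']}]")    # reference, not contents
        elif part["type"] == "image":
            lines.append(f"[image: {part['mimeType']}]")  # never inline base64
    return "\n".join(lines)

result = [
    {"type": "text", "text": "Analysis complete..."},
    {"type": "resource", "uri": "file:///report.md"},
    {"type": "image", "data": "base64...", "mimeType": "image/png"},
]
print(flatten_mcp_content(result))
```

The key saving is the image part: a base64 PNG can cost thousands of tokens, while the reference line costs a handful.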

Prompt-snippet inclusion

Prompt-snippets are reusable context fragments that you include on-demand using file references.

How to include snippets

# In your prompt file or chat message:

Analyze this code following our security standards.

#file:.github/prompt-snippets/security-checklist.md

Review the code: ${selection}

What snippets provide

| Use Case | Example Snippet | Benefit |
|---|---|---|
| Checklists | security-checklist.md | Consistent validation criteria |
| Domain glossaries | api-terminology.md | Shared vocabulary |
| Architecture summaries | system-overview.md | Context without full docs |
| Output templates | report-format.md | Consistent output structure |

Snippet organization

.github/prompt-snippets/
├── checklists/
│   ├── security-review.md
│   ├── code-quality.md
│   └── documentation.md
├── context/
│   ├── architecture-overview.md
│   ├── api-patterns.md
│   └── coding-standards-summary.md
└── templates/
    ├── validation-report.md
    └── implementation-plan.md

Snippets vs instructions

| Aspect | Prompt-Snippets | Instructions |
|---|---|---|
| Activation | Manual (#file:...) | Automatic (applyTo pattern) |
| Visibility | Explicit in prompt | Silent injection |
| Use case | Task-specific context | Universal coding standards |
| Token control | Full control (include only when needed) | Always included for matching files |

📊 Communication strategies comparison

Strategy comparison matrix

| Strategy | Token Efficiency | Robustness | Complexity | Best For |
|---|---|---|---|---|
| Full Context (send: true) | ⚠️ Low | ✅ High | ✅ Low | Simple 2-3 phase workflows |
| Progressive Summarization | ✅ Medium | ✅ Medium-High | ⚠️ Medium | Multi-phase workflows (3-5 phases) |
| File-Based Isolation | ✅ High | ⚠️ Medium | ⚠️ Medium | Specialized single-purpose agents |
| User-Mediated Handoff | ✅ Maximum | ⚠️ Variable | ⚠️ High (user effort) | Maximum control needed |
| Structured Report Passing | ✅ Medium-High | ✅ High | ⚠️ Medium | Complex orchestrations |

Token efficiency analysis

Cumulative token cost by strategy

Phase 1 (200) ────► Phase 2 (2,500) ────► Phase 3 (4,000) ────► Phase 4 (6,000) ────► Phase 5 (8,000)

FULL CONTEXT (send: true on each handoff):
├── P1→P2:    200 + 2,300 =  2,500 tokens
├── P2→P3:  2,500 + 1,500 =  4,000 tokens
├── P3→P4:  4,000 + 2,000 =  6,000 tokens
├── P4→P5:  6,000 + 2,000 =  8,000 tokens
│
└── TOTAL INPUT: ~20,500 tokens across all phases

PROGRESSIVE SUMMARIZATION:
├── P1→P2:    200 + 2,300 =  2,500 tokens
├── P2→P3:    500 + 1,500 =  2,000 tokens (summarized P2)
├── P3→P4:    600 + 2,000 =  2,600 tokens (summarized P3)
├── P4→P5:    700 + 2,000 =  2,700 tokens (summarized P4)
│
└── TOTAL INPUT: ~9,800 tokens across all phases (52% reduction)

FILE-BASED ISOLATION:
├── P1→P2:    200 + 2,300 =  2,500 tokens
├── P2→P3:    300 (file ref) = 300 tokens
├── P3→P4:    300 (file ref) = 300 tokens
├── P4→P5:    300 (file ref) = 300 tokens
│
└── TOTAL INPUT: ~3,400 tokens across all phases (83% reduction)
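
The cumulative figures above are simple arithmetic and can be reproduced directly. Each handoff is modeled as a (carried context, new phase output) token pair taken from the breakdowns:

```python
# Reproduce the cumulative-input figures above. Each tuple is
# (carried_context_tokens, new_phase_output_tokens) for one handoff.

def total_input(handoffs):
    return sum(carried + produced for carried, produced in handoffs)

full_context = [(200, 2300), (2500, 1500), (4000, 2000), (6000, 2000)]
progressive = [(200, 2300), (500, 1500), (600, 2000), (700, 2000)]
file_based = [(200, 2300), (300, 0), (300, 0), (300, 0)]

for name, plan in [("full", full_context), ("progressive", progressive), ("file", file_based)]:
    print(name, total_input(plan))
# full 20500, progressive 9800, file 3400
```

The structure of the numbers explains the compounding: under full context, the carried term grows with every phase, while summarization and file references keep it roughly constant.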

Robustness comparison

Information preservation by strategy

| Strategy | Goal Preserved | Scope Preserved | Critical Data | Recovery if Lost |
|---|---|---|---|---|
| Full Context | ✅ Always | ✅ Always | ✅ Always | N/A (everything present) |
| Progressive Summary | ✅ Explicit in summary | ✅ If included | ⚠️ Depends on summary quality | Re-summarize from previous output |
| File-Based | ⚠️ Must be in file | ⚠️ Must be in file | ⚠️ Must be in file | Read file for recovery |
| User-Mediated | ⚠️ User must include | ⚠️ User must include | ⚠️ User controls | User re-pastes |

Risk mitigation

## Reliability Checksum Pattern

Before each handoff, validate that critical data survives:

- [ ] **Goal Preservation**: Refined goal from Phase 1 still intact?
- [ ] **Scope Boundaries**: IN/OUT scope still clear?
- [ ] **Tool Requirements**: Tool list carried forward?
- [ ] **Critical Constraints**: Boundaries included in handoff?
- [ ] **Success Criteria**: Validation criteria defined?

**If any checkbox fails**: Re-inject missing context before handoff.
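
The checksum can also run as a mechanical gate if phase output is collected into a structured payload. A minimal sketch, assuming a payload dict whose field names mirror the checklist (they are illustrative, not a VS Code API):

```python
# Sketch of the checksum as a pre-handoff gate. Field names mirror the
# checklist above and are illustrative, not part of any real API.

REQUIRED_FIELDS = ["goal", "scope", "tools", "constraints", "success_criteria"]

def handoff_checksum(payload: dict) -> list:
    """Return the critical fields missing (absent or empty) from the payload."""
    return [f for f in REQUIRED_FIELDS if not payload.get(f)]

payload = {
    "goal": "Build the validator agent",
    "scope": {"in": ["structure checks"], "out": ["style rules"]},
    "tools": ["read_file"],
    "constraints": [],            # empty -> flagged as missing
    "success_criteria": "PASS/FAIL report",
}
print(handoff_checksum(payload))  # -> ['constraints']
```

An empty result means the handoff may proceed; anything else names exactly the context to re-inject first.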

🔧 Implementation patterns

Pattern 1: Full context handoff

Best for: Simple 2-3 phase workflows where token cost is acceptable.

# orchestrator.prompt.md
---
name: simple-workflow
agent: orchestrator
handoffs:
  - label: "Build Component"
    agent: builder
    send: true       # Full context transfers
    prompt: |
      Create the component based on the requirements discussed above.
      
      Implement all features from the specification.
---

Pros:

  • ✅ No context loss risk
  • ✅ Simple to implement
  • ✅ Agent has complete history

Cons:

  • ⚠️ High token cost
  • ⚠️ Context bloat in long workflows
  • ⚠️ May exceed context window

Pattern 2: Progressive summarization

Best for: Multi-phase workflows (3-5 phases) requiring context but controlling tokens.

# At end of each phase, instruct agent to produce a summary:

## Phase Completion Instructions

When completing this phase, produce a **PHASE SUMMARY** block:

\`\`\`markdown
## Phase {N} Summary

**Key Outputs**:
- [Output 1]: {1-sentence description}
- [Output 2]: {1-sentence description}

**Critical Data** (preserve in next phase):
- Goal: "{quote the refined goal}"
- Scope: IN=[list], OUT=[list]

**For Next Phase**:
- {Specific instruction 1}
- {Specific instruction 2}
\`\`\`

Handoff using summary:

handoffs:
  - label: "Continue to Build"
    agent: builder
    send: true
    prompt: |
      Build from the **Phase 2 Summary** above.
      
      **FOCUS ON** the "Key Outputs" and "Critical Data" sections.
      **IGNORE** the detailed research process—only the summary matters.

Pattern 3: File-based isolation

Best for: Specialized agents that need minimal context.

Step 1: Write context to file

# research-agent.agent.md
---
name: researcher
tools: ['read_file', 'write_file', 'semantic_search']
---

## Output Requirements

After completing research, write findings to:
`.copilot/temp/{timestamp}-research-spec.md`

Include:
- All key patterns discovered
- Template recommendations
- Critical constraints
- Success criteria

This file will be the ONLY input for the builder agent.

Step 2: Handoff references file

handoffs:
  - label: "Build from Specification"
    agent: builder
    send: false      # Don't send conversation
    prompt: |
      Read the build specification from `.copilot/temp/latest-spec.md`
      and create the file according to that specification.
      
      Ignore any other context—the spec file contains everything you need.

File location convention:

  • Path: .copilot/temp/ (add to .gitignore)
  • Naming: {ISO-timestamp}-{phase}-{topic}.md
  • Cleanup: Delete after workflow completes or after 24 hours
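
The naming and cleanup convention above can be sketched in a few lines. This is one possible implementation of the convention, not a tool the workflow provides; the timestamp format is an assumption (colons are avoided for filesystem safety):

```python
# Sketch of the .copilot/temp naming and 24-hour cleanup convention.
# The exact timestamp format is an assumption; colons are avoided so
# names stay valid on all filesystems.
import time
from pathlib import Path

TEMP_DIR = Path(".copilot/temp")

def spec_path(phase: str, topic: str) -> Path:
    stamp = time.strftime("%Y-%m-%dT%H-%M-%S")   # ISO-like, filesystem-safe
    return TEMP_DIR / f"{stamp}-{phase}-{topic}.md"

def cleanup(max_age_hours: float = 24) -> None:
    """Delete temp specs older than the retention window."""
    cutoff = time.time() - max_age_hours * 3600
    for f in TEMP_DIR.glob("*.md"):
        if f.stat().st_mtime < cutoff:
            f.unlink()

print(spec_path("research", "auth-flow"))
```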

Pattern 4: User-mediated handoff

Best for: Maximum control and token efficiency when user effort is acceptable.

handoffs:
  - label: "Validate Prompt"
    agent: validator
    send: false      # User decides what to share

Workflow:

  1. Orchestrator completes Phase 3, produces specification
  2. User sees: “Ready to validate. Click ‘Validate Prompt’ to continue.”
  3. User clicks handoff button
  4. New chat session opens with validator agent
  5. User pastes only the specification (not full history)
  6. Validator works with minimal context

Pattern 5: Structured report passing

Best for: Complex orchestrations requiring explicit data contracts.

Define data contract

# In orchestrator instructions:

## Phase-to-Phase Data Contracts

### Phase 2 → Phase 3
**Researcher** must output:
- Pattern list (3-5 items with file references)
- Template recommendation
- Anti-patterns to avoid

**Architect** expects to receive:
- Pattern summary (not full analysis)
- Recommended template path
- Constraint list

### Phase 3 → Phase 4
**Architect** must output:
- Specification document
- Tool requirements (explicit list)
- Success criteria (measurable)

**Builder** expects to receive:
- Specification (complete)
- Tool list (validated against available tools)
- Validation criteria

Handoff template

handoffs:
  - label: "Build from Specification"
    agent: builder
    send: true
    prompt: |
      {Primary Task Statement - 1 sentence}
      
      **CONTEXT FROM PREVIOUS PHASE** (reference above):
      - Specification: See Phase 3 output
      - Tools required: {list from spec}
      
      **YOUR SPECIFIC INPUTS**:
      - Template: {path from architect}
      - Constraints: {list from architect}
      
      **EXPECTED OUTPUT FORMAT**:
      - Created file path
      - Implementation notes
      - Validation readiness status
      
      **SUCCESS CRITERIA**:
      - All specification requirements met
      - File passes linting
      - Ready for validation phase

⚠️ Common pitfalls and solutions

Pitfall 1: Context loss during handoffs

Problem: Critical information from earlier phases gets lost.

❌ Bad example:

handoffs:
  - label: "Test"
    agent: test-specialist
    prompt: "Write tests"

Problem: Test specialist doesn’t know what was implemented or what requirements to validate.

✅ Solution:

handoffs:
  - label: "Generate Tests"
    agent: test-specialist
    send: true
    prompt: |
      Create comprehensive tests for the implementation discussed above.
      
      **Validate requirements** from the research phase.
      **Cover edge cases** documented during planning.
      **Reference implementation** from the builder's output.

Pitfall 2: Token bloat from tool results

Problem: Large tool results consume excessive context.

❌ Bad example:

Read the entire file for context.
#tool:read_file path="/path/to/large-file.ts"

✅ Solution:

Read only the relevant section (lines 50-100).
#tool:read_file path="/path/to/large-file.ts" startLine=50 endLine=100

Pitfall 3: Instruction conflicts

Problem: Multiple instruction files provide contradictory guidance.

Scenario:

  • python-standards.instructions.md says: “Use 4-space indentation”
  • team-conventions.instructions.md says: “Use 2-space indentation”

✅ Solution:

  1. Use specific applyTo patterns to avoid overlaps
  2. Create hierarchy with base instructions and specific overrides
  3. Document precedence in a parent instruction file
# base-standards.instructions.md
---
applyTo: "**/*"
---

## Base Standards
These are default standards. More specific instruction files override these.

Pitfall 4: MCP timeout and error handling

Problem: MCP server calls can fail or timeout, breaking workflows.

✅ Solution: Design prompts with fallback instructions:

## MCP Tool Usage

Try to use #tool:mcp_iqpilot_validate to validate the article.

**If the tool is unavailable or times out**:
- Perform manual validation using the checklist below
- Note that MCP validation was skipped
- Recommend running MCP validation later

### Manual Validation Checklist
- [ ] YAML frontmatter present and valid
- [ ] Required sections (intro, conclusion, references)
- [ ] References properly classified

💡 Best practices and guidelines

Token budget management

Estimate before handoff

| Workflow Phase | Typical Output | Token Estimate |
|----------------|----------------|----------------|
| Requirements gathering | User request + clarifications | 200-500 tokens |
| Research/discovery | Pattern analysis, recommendations | 2,000-4,000 tokens |
| Architecture/planning | Specifications, designs | 1,500-3,000 tokens |
| Building/implementation | Created code/content | 1,000-5,000 tokens |
| Validation | Check results, issues | 500-1,500 tokens |
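These per-phase ranges can be turned into a rough forecast of the total handoff load. A minimal sketch that sums the midpoint of each range from the table above (the phase names are shorthand, not a required schema):

```python
# (low, high) token estimate per workflow phase.
PHASE_ESTIMATES = {
    "requirements": (200, 500),
    "research": (2000, 4000),
    "architecture": (1500, 3000),
    "build": (1000, 5000),
    "validation": (500, 1500),
}

def forecast_total(estimates: dict) -> int:
    """Sum the midpoint of each (low, high) range."""
    return sum((low + high) // 2 for low, high in estimates.values())

print(f"expected handoff total: ~{forecast_total(PHASE_ESTIMATES)} tokens")
```

A forecast like this tells you early whether the cumulative handoff output will fit comfortably in the context window or needs summarization between phases.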

Set phase budgets

## Phase Budget Guidelines

| Phase | Target Max | Action if Exceeded |
|-------|-----------|-------------------|
| Research | 3,000 tokens | Summarize to 1,000 tokens before handoff |
| Architecture | 2,000 tokens | Split into multiple specs if larger |
| Build | 4,000 tokens | Use file-based output, reference by path |
| Validation | 1,500 tokens | Compress to issues-only report |
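A budget gate at each phase boundary can enforce these limits mechanically. A sketch assuming you already have a token estimate for the phase output; the budgets and actions mirror the table above:

```python
# Target max and remediation action per phase, from the budget table.
PHASE_BUDGETS = {
    "research": (3000, "summarize to 1,000 tokens before handoff"),
    "architecture": (2000, "split into multiple specs"),
    "build": (4000, "use file-based output, reference by path"),
    "validation": (1500, "compress to issues-only report"),
}

def check_budget(phase: str, estimated_tokens: int) -> str:
    """Compare an estimated output size against the phase's target max."""
    limit, action = PHASE_BUDGETS[phase]
    if estimated_tokens <= limit:
        return "within budget"
    return f"over budget by {estimated_tokens - limit} tokens: {action}"

print(check_budget("research", 2400))
print(check_budget("build", 5200))
```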

Testing information flow

Information flow checklist

Before deploying a multi-agent workflow:

  1. Trace critical data path: Follow the most important data point through every phase
  2. Verify handoff completeness: Ensure each handoff prompt references required prior outputs
  3. Test with minimal input: Run workflow with shortest possible valid input
  4. Measure token consumption: Track tokens at each phase boundary
  5. Simulate failures: What happens if a tool fails? If an agent can’t complete?
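Steps 1 and 2 of the checklist can be automated against recorded phase outputs. A minimal sketch: trace one critical data point (here, the goal statement) through each phase's report and flag where it drops out. The reports and the goal string are illustrative:

```python
# Hypothetical per-phase reports from a workflow run.
phase_reports = {
    "requirements": "Goal: migrate auth to OAuth2. Scope: backend only.",
    "research": "Goal: migrate auth to OAuth2. Candidate libraries listed.",
    "architecture": "Proposed design: token service behind API gateway.",
    "build": "Implemented token service per design.",
}

def trace(critical: str, reports: dict) -> list:
    """Return the phases whose report no longer carries the critical data."""
    return [phase for phase, text in reports.items() if critical not in text]

dropped = trace("migrate auth to OAuth2", phase_reports)
print("context lost at:", dropped)
```

Here the goal survives the research handoff but disappears from the architecture phase onward, which is exactly the context-loss failure mode this checklist is designed to catch.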

Debug logging pattern

## Phase {N} Completion

### Debug Information
- **Phase started**: {timestamp}
- **Phase completed**: {timestamp}
- **Tokens consumed** (estimate): {count}
- **Tools invoked**: {list}
- **MCP calls**: {list}

### Context Verification
- Goal from Phase 1: "{quote or 'MISSING'}"
- Scope boundaries: "{quote or 'MISSING'}"
- Critical constraints: "{quote or 'MISSING'}"

### Handoff Readiness
- [ ] Output format matches contract
- [ ] Required data preserved
- [ ] No unresolved blockers
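The Context Verification section of this template is machine-checkable: any field still reading MISSING should block the handoff. A sketch, assuming the report has been parsed into a dict (field names and values are illustrative):

```python
# Parsed Context Verification fields from a hypothetical phase report.
completion_report = {
    "goal": "Migrate auth to OAuth2",
    "scope": "MISSING",
    "constraints": "No downtime during rollout",
}

def handoff_blockers(report: dict) -> list:
    """Names of context fields that were not carried forward."""
    return [field for field, value in report.items() if value == "MISSING"]

blockers = handoff_blockers(completion_report)
if blockers:
    print("block handoff, missing:", blockers)
```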

🎯 Conclusion

Key takeaways

Managing information flow in prompt orchestrations requires understanding:

  1. The context window is your communication bus — All components share this limited space
  2. Handoffs are the critical junctures — Design them explicitly with data contracts
  3. Token efficiency compounds — Small savings per phase add up significantly
  4. Robustness requires explicit preservation — Critical data must be actively carried forward
  5. Each component has a specific role — Use the right mechanism for each type of data

Decision framework

| Scenario | Recommended Strategy |
|----------|----------------------|
| Simple 2-3 phase workflow | Full context (`send: true`) |
| Multi-phase (3-5 phases) | Progressive summarization |
| Long workflow (5+ phases) | File-based isolation |
| Maximum control needed | User-mediated handoff |
| Complex data contracts | Structured report passing |
| Mix of above | Combine strategies per phase |

Next steps

To apply these patterns:

  1. Audit existing workflows for context loss or bloat
  2. Implement a reliability checksum before each handoff
  3. Measure token consumption and set budgets
  4. Design explicit data contracts between agents
  5. Test failure scenarios and add fallback instructions

For related topics, see:

Execution contexts note: Information flow differs across VS Code’s three execution contexts (Local, Background, Cloud). Local agents modify workspace directly; Background agents use isolated Git worktrees; Cloud agents create separate branches and PRs. See article 12 appendix for details.


📚 References

Official documentation

VS Code: Customize Chat to Your Workflow 📘 [Official]
Comprehensive overview of VS Code Copilot customization options including custom instructions, prompt files, agents, and MCP integration. Essential reading for understanding how all customization types work together.

VS Code: Custom Agents Documentation 📘 [Official]
Official documentation on creating custom agents in VS Code, including handoff configuration and tool restrictions. Primary reference for agent file structure.

Model Context Protocol: Architecture Overview 📘 [Official]
Official MCP specification covering the client-server architecture, primitives (tools, resources, prompts), and communication protocols. Essential for understanding MCP server integration.

Model Context Protocol: Server Concepts 📘 [Official]
Detailed documentation on MCP server primitives and how they provide context to AI applications. Covers tool discovery, execution, and result handling.

Verified community resources

GitHub Blog: How to Write a Great AGENTS.md 📗 [Verified Community]
Analysis of 2,500+ repositories with AGENTS.md files, extracting best practices for agent instructions. Valuable for understanding real-world patterns.

Internal references

Handoffs Pattern for Multi-Agent Orchestration 📒 [Internal]
Repository context file defining handoff patterns, intermediary report formats, and anti-patterns. Primary internal reference for orchestration design.

How to Create a Prompt Orchestrating Multiple Agents 📒 [Internal]
Detailed article on orchestrator design with phase-based coordination and information exchange protocols. Covers implementation patterns and common mistakes.