# How to Structure Content for GitHub Copilot Agent Files
Custom agents enable you to create specialized AI personas tailored to specific development roles and tasks.
Agent files (.agent.md) define the behavior, available tools, and instructions for these personas, creating reusable configurations that can be quickly activated in chat.
This article explores how to structure agent files effectively and explains how they interact with prompts and instructions to create powerful, composable AI workflows.
## Table of Contents
- 🤖 Understanding Custom Agents
- 🔄 How Agents Differ from Prompts and Instructions
- 📋 Agent File Structure
- 🎯 Designing Agent Personas
- 🔗 Agent Interactions and Handoffs
- ⚙️ Tool Configuration for Agents
- 🧩 Composing Agents with Prompts and Instructions
- 💡 Decision Framework: When to Use Each File Type
- 🎯 Conclusion
- 📚 References
# 🤖 Understanding Custom Agents
Custom agents are AI personas that you can switch between in GitHub Copilot Chat. Each agent has:
- A specific role or expertise (e.g., security reviewer, planner, implementer)
- Restricted tool access tailored to its function
- Specialized instructions for how it should operate
- Optional handoff capabilities to transition between agents
Unlike prompt files that are invoked on-demand for specific tasks, agents provide ongoing contextual behavior.
When you switch to a custom agent, all subsequent chat interactions adopt that agent’s persona, tools, and guidelines until you switch to a different agent.
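For instance, a minimal agent file might look like this (a sketch based on the file structure described later in this article; the persona and file contents are illustrative):

```markdown
---
description: Reviews code for security issues
name: security-reviewer
tools: ['codebase', 'search', 'fetch']
---

You are a security-focused code reviewer. Identify vulnerabilities
and propose fixes, but never edit files directly.
```

Once selected in the agent picker, every subsequent message in the chat is answered in this persona with only these three tools available.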
## Why Use Custom Agents?
Task-appropriate capabilities: Different tasks require different tools. A planning agent might only need read-only tools for research and analysis to prevent accidental code changes, while an implementation agent would need full editing capabilities.
Specialized behavior: Custom agents provide specialized instructions that define how the AI should operate. For instance, a planning agent could instruct the AI to collect project context and generate a detailed implementation plan, while a code review agent might focus on identifying security vulnerabilities and suggesting improvements.
Workflow orchestration: Through handoffs, agents can create guided sequential workflows that transition between specialized personas with suggested next steps, giving developers control to review and approve each stage.
## Availability

- **VS Code**: Custom agents are available from version 1.106+
- **Visual Studio**: Not currently supported (but see `AGENTS.md` for Copilot Coding Agent instructions)
- **Storage locations**:
  - Workspace: `.github/agents/*.agent.md` (shared with the team via Git)
  - User profile: personal agents available across all workspaces
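A workspace that shares agents with the team might look like this (an illustrative layout; the two agent file names are hypothetical):

```text
.github/
└── agents/
    ├── planner.agent.md
    └── security-reviewer.agent.md
```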
# 🔄 How Agents Differ from Prompts and Instructions
Understanding the distinction between these three file types is crucial for effective organization:
## Comparison Table

| Aspect | Prompt Files (`.prompt.md`) | Agent Files (`.agent.md`) | Instruction Files (`.instructions.md`) |
|---|---|---|---|
| Purpose | Define specific tasks/workflows | Define AI personas with persistent behavior | Define context-specific rules |
| Activation | On-demand via `/promptName` | Switched to via agent picker | Automatic when conditions match |
| Scope | Single execution | Ongoing chat session context | All operations matching `applyTo` pattern |
| Reusability | Task-specific: one prompt = one job | Role-specific: one agent = multiple tasks | Context-specific: always active when relevant |
| Tool Control | Can specify tools for this prompt | Defines default tools for this persona | No tool control (relies on agent/prompt) |
| Can Reference | Agents (via `agent` field), tools, instructions | Tools, instructions (via Markdown links) | Tools (via `#tool:` syntax) |
| Best For | "Generate React form", "Review API security" | "Planner persona", "Security reviewer role" | "Python coding standards", "C# conventions" |
## Conceptual Model
Think of these file types as layers in a composable system:
```text
┌─────────────────────────────────────────────────────────┐
│ PROMPT FILE                                             │
│ "Generate React form component"                         │
│ • References agent: planner                             │
│ • Adds task-specific tools: ['fetch']                   │
│ • Invokes with: /create-react-form                      │
└────────────────┬────────────────────────────────────────┘
                 │
                 │ Uses agent configuration
                 ▼
┌─────────────────────────────────────────────────────────┐
│ AGENT FILE                                              │
│ "Planner" persona                                       │
│ • Default tools: ['search', 'codebase', 'usages']       │
│ • Behavior: Generate plans, no code edits               │
│ • Can handoff to: implementation agent                  │
└────────────────┬────────────────────────────────────────┘
                 │
                 │ Reads relevant instructions
                 ▼
┌─────────────────────────────────────────────────────────┐
│ INSTRUCTION FILES                                       │
│ • "react-guidelines.instructions.md" (applyTo: **/*.tsx)│
│ • "api-standards.instructions.md" (applyTo: **/api/**)  │
│ • "security-rules.instructions.md" (applyTo: **)        │
└─────────────────────────────────────────────────────────┘
```
## Information Flow
When you invoke a prompt that references an agent:

- The **prompt** provides task-specific instructions and may override tools
- The **agent** provides the persona, default tools, and behavioral guidelines
- **Instructions** are automatically included based on file context (`applyTo` patterns)
- **Tool priority**: prompt tools > agent tools > default agent tools
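As a sketch of how the priority rule plays out (the file names here are illustrative): when a prompt that references an agent also declares its own `tools`, the prompt's list is the one used for that run.

```yaml
# planner.agent.md (agent defaults)
tools: ['search', 'codebase', 'usages']

# plan-feature.prompt.md (references the agent, overrides tools)
agent: planner
tools: ['search', 'codebase', 'usages', 'fetch'] # used for this prompt's run
```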
# 📋 Agent File Structure
Agent files use the `.agent.md` extension and follow this structure:

```markdown
---
description: Brief description shown in agent picker
name: Agent name (defaults to filename if omitted)
argument-hint: Placeholder text in chat input (optional)
tools: ['tool1', 'tool2', 'toolset-name']
model: Preferred model (e.g., Claude Sonnet 4, GPT-4o)
target: vscode | github-copilot
handoffs:
  - label: Button text for handoff
    agent: target-agent-name
    prompt: Message to send to target agent
    send: false # auto-submit if true
---

# Agent Instructions

Your agent's system instructions in Markdown format.

Define:
- The agent's role and expertise
- How it should approach tasks
- What output format to produce
- Any constraints or guidelines

You can reference other files using Markdown links.
Reference tools using #tool:<tool-name> syntax.
```

## YAML Frontmatter Fields
### Core Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `description` | String | No | Brief description shown as placeholder text in chat input |
| `name` | String | No | Agent name (defaults to filename without extension) |
| `argument-hint` | String | No | Hint text shown in chat input field |
| `tools` | Array | No | List of tools/tool sets available to this agent |
| `model` | String | No | Preferred AI model (e.g., "Claude Sonnet 4", "GPT-4o") |
| `target` | String | No | `vscode` (local) or `github-copilot` (remote) |
### Advanced Fields

| Field | Type | Description |
|---|---|---|
| `mcp-servers` | Array | MCP server configurations (for `github-copilot` target only) |
| `handoffs` | Array | Workflow transitions to other agents |
| `handoffs.label` | String | Button text displayed for handoff |
| `handoffs.agent` | String | Target agent identifier |
| `handoffs.prompt` | String | Message to send when handing off |
| `handoffs.send` | Boolean | Auto-submit prompt (default: `false`) |
## Body Content
The agent file body contains the agent’s instructions in Markdown format. This is where you:
- Define the persona: “You are a senior security engineer…”
- Specify behavior: “Always prioritize security over convenience…”
- Describe output format: “Generate a report with the following sections…”
- Set constraints: “Never modify files directly; propose changes only…”
You can reference:

- Other files via Markdown links: `[Review Guidelines](../instructions/review-process.md)`
- Tools via `#tool:` syntax: `Use #tool:codebase to search for similar patterns`
# 🎯 Designing Agent Personas
Effective agent design starts with clear persona definition. Here’s a framework for creating specialized agents:
## The Five Elements of Agent Persona
### 1. Role Definition

Clearly state the agent's expertise and responsibility:

```markdown
You are a **security-focused code reviewer** specializing in identifying vulnerabilities in web applications.
```

### 2. Behavioral Guidelines
Define how the agent approaches tasks:
```markdown
Your approach:
- Prioritize identifying security vulnerabilities (XSS, SQL injection, authentication bypass)
- Consider both immediate threats and long-term security architecture
- Provide actionable remediation steps with code examples
- Flag compliance issues (OWASP Top 10, CWE)
```

### 3. Output Format
Specify the structure of responses:
```markdown
Format your review as:

## Summary
Brief overview of findings (2-3 sentences)

## Critical Issues
- [Issue]: Description and impact
- **Remediation**: Step-by-step fix

## Recommendations
- Best practices to prevent similar issues
```

### 4. Constraints
Set boundaries on what the agent should or shouldn’t do:
```markdown
Constraints:
- Do NOT modify code directly; propose changes for review
- Do NOT skip security checks for "convenience"
- ALWAYS verify authentication and authorization logic
- Ask for clarification if security context is ambiguous
```

### 5. Tool Usage
Explain how the agent should leverage available tools:
```markdown
Available tools:
- Use #tool:codebase to find similar security patterns
- Use #tool:fetch to check CVE databases for known vulnerabilities
- Use #tool:search to locate related test files
```

## Example: Complete Agent Persona
Here’s a full example of a well-structured planning agent:
````markdown
---
description: Generate implementation plans for features and refactoring
name: Planner
tools: ['fetch', 'githubRepo', 'search', 'usages', 'codebase']
model: Claude Sonnet 4
handoffs:
  - label: Start Implementation
    agent: agent
    prompt: Implement the plan outlined above.
    send: false
---

# Planning Agent

You are a **senior solution architect** tasked with creating detailed, actionable implementation plans.

## Your Role

Generate comprehensive plans that guide developers through complex features or refactoring tasks. Focus on clarity, completeness, and feasibility.

## Approach

1. **Understand Context**
   - Use #tool:codebase to explore existing architecture
   - Use #tool:search to find related components
   - Use #tool:fetch to research best practices and patterns

2. **Analyze Requirements**
   - Identify functional and non-functional requirements
   - Flag potential risks or challenges
   - Note dependencies on other systems/components

3. **Design Solution**
   - Break down work into logical, testable phases
   - Identify reusable patterns and components
   - Consider error handling, logging, and monitoring

4. **Document Plan**
   - Create clear, numbered implementation steps
   - Specify files to create/modify for each step
   - Include test strategy for each component

## Output Format

```markdown
# Overview
[Brief description of the feature/refactoring and its purpose]

# Requirements
- Functional: [What the system must do]
- Non-Functional: [Performance, security, scalability considerations]
- Dependencies: [External systems, libraries, or prerequisites]

# Architecture Changes
[High-level architectural modifications, if any]

# Implementation Steps

## Phase 1: [Phase Name]
- Step: [Detailed description]
- Files: [Files to create/modify]
- Changes: [What changes to make]
- Tests: [What to test]

## Phase 2: [Next Phase]
…

# Testing Strategy
- Unit tests: [What to cover]
- Integration tests: [What scenarios]
- Manual verification: [What to check]

# Risks and Mitigations
- Risk: [Potential issue]
- Mitigation: [How to address it]
```

## Constraints

- Do NOT write implementation code; describe what should be implemented
- Do NOT skip important steps for brevity; completeness is critical
- ALWAYS consider backward compatibility and migration paths
- Ask clarifying questions if requirements are ambiguous

## Handoff

When the plan is complete and approved, use the "Start Implementation" handoff to transition to the implementation agent with full context.
````
# 🔗 Agent Interactions and Handoffs
**Handoffs** are one of the most powerful features of custom agents. They enable orchestrated, multi-step workflows where control transitions between specialized agents.
## How Handoffs Work
When an agent completes its task, handoff buttons appear in the chat interface. Clicking a handoff button:
1. Switches to the target agent
2. Pre-fills the chat input with the specified prompt
3. Optionally auto-submits the prompt (if `send: true`)
4. Carries forward the conversation context
### Handoff Configuration
```yaml
handoffs:
  - label: Start Implementation       # Button text
    agent: agent                      # Target agent (built-in or custom)
    prompt: Implement the plan above. # Message to send
    send: false                       # Manual submission (user reviews first)
```
## Common Handoff Patterns
### 1. Plan → Implement
Planning agent generates a detailed plan, then hands off to implementation agent:
```yaml
# planner.agent.md
handoffs:
  - label: Start Implementation
    agent: agent
    prompt: Implement the plan outlined above, starting with Phase 1.
    send: false
```

Workflow:
- User asks for implementation plan
- Planner agent generates detailed steps
- User reviews plan
- User clicks “Start Implementation”
- Implementation agent begins coding
### 2. Implement → Review
Implementation agent writes code, then hands off to review agent:
```yaml
# implementer.agent.md
handoffs:
  - label: Review Changes
    agent: security-reviewer
    prompt: Review the changes made in this session for security vulnerabilities.
    send: true # Auto-submit since context is clear
```

### 3. Write Failing Tests → Implement
Test-first approach: generate failing tests, review them, then implement to make them pass:
```yaml
# test-writer.agent.md
handoffs:
  - label: Implement to Pass Tests
    agent: agent
    prompt: Implement the code changes needed to make these tests pass.
    send: false
```

### 4. Multi-Stage Refinement
Create a chain of specialized agents:
Research → Plan → Implement → Review → Document
Each agent hands off to the next, building on previous context.
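A hedged sketch of the first two links in such a chain (the agent file names are hypothetical; `send: false` keeps the user in control at each stage):

```yaml
# researcher.agent.md
handoffs:
  - label: Draft a Plan
    agent: planner
    prompt: Turn the research findings above into an implementation plan.
    send: false

# planner.agent.md
handoffs:
  - label: Start Implementation
    agent: implementer
    prompt: Implement the plan above, starting with Phase 1.
    send: false
```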
## Handoff Best Practices
| Practice | Rationale |
|---|---|
| Use `send: false` for critical transitions | Gives the user a chance to review before proceeding |
| Use `send: true` for routine transitions | Streamlines workflow when the next step is obvious |
| Include context in the prompt | "Implement the plan above" references specific context |
| Name handoffs clearly | "Start Implementation" is clearer than "Next Step" |
| Chain related agents | Reviewer → Documenter → Deployer creates a logical flow |
# ⚙️ Tool Configuration for Agents
Tools define what actions an agent can perform. Proper tool selection ensures agents have the right capabilities without unnecessary access.
## Understanding Tool Priority

When multiple sources define tools, this priority order applies:

1. Prompt file `tools` field (highest priority)
2. Referenced agent's `tools` field
3. Default tools for the current agent mode
Example:

```yaml
# Agent defines default tools
# security-reviewer.agent.md
tools: ['codebase', 'search', 'fetch']

# Prompt can override/extend
# api-security-audit.prompt.md
agent: security-reviewer
tools: ['codebase', 'search', 'fetch', 'githubRepo'] # Adds githubRepo
```

## Tool Categories
### Built-in Tools

| Tool | Purpose | Read-Only |
|---|---|---|
| `codebase` | Semantic code search | ✓ |
| `editor` | File read/write operations | ✗ |
| `filesystem` | Directory navigation, file queries | ✓ |
| `fetch` | Retrieve web content | ✓ |
| `web_search` | Internet search | ✓ |
| `search` | Workspace text search | ✓ |
| `usages` | Find code usages/references | ✓ |
| `problems` | Get errors/warnings | ✓ |
| `changes` | View git changes | ✓ |
### Tool Sets

Predefined groups of related tools:

| Tool Set | Included Tools | Use Case |
|---|---|---|
| `#edit` | `editor`, `filesystem` | Code modification |
| `#search` | `codebase`, `search`, `usages` | Code discovery |
| `#reader` | `codebase`, `problems`, `changes`, `usages` | Context gathering |
### MCP Tools

Tools from Model Context Protocol servers (e.g., `@github`, `@azure`):

```yaml
tools: ['codebase', '@github/*'] # Include all tools from GitHub MCP server
```

## Tool Selection Strategy
### Planning Agent (Read-Only Focus)

```yaml
tools: ['fetch', 'githubRepo', 'search', 'usages', 'codebase']
```

Rationale: needs to gather information but shouldn't modify code accidentally.

### Implementation Agent (Full Access)

```yaml
tools: ['editor', 'filesystem', 'codebase', 'search']
```

Rationale: needs file editing capabilities plus context awareness.

### Review Agent (Read + External Research)

```yaml
tools: ['codebase', 'search', 'fetch', 'web_search']
```

Rationale: reads code, searches for patterns, fetches security databases/documentation.

### Testing Agent (Read + Execute)

```yaml
tools: ['codebase', 'editor', 'terminal']
```

Rationale: creates test files and runs tests via the terminal.
## Restricting Tools for Safety
Limit tools to prevent unintended actions:
```yaml
# Security reviewer should NOT edit files
tools: ['codebase', 'search', 'fetch'] # Excludes 'editor'

# Planner should NOT execute commands
tools: ['codebase', 'search', 'usages'] # Excludes 'terminal'
```

# 🧩 Composing Agents with Prompts and Instructions
The true power of agents emerges when combined with prompts and instructions. Here’s how to compose them effectively:
## Pattern 1: Prompt References Agent
A prompt can reference a custom agent to inherit its tools and behavior:
```markdown
# api-security-audit.prompt.md
---
name: api-security-audit
description: Perform comprehensive security audit of REST API
agent: security-reviewer # Use security-reviewer agent
tools: ['codebase', 'fetch', 'githubRepo'] # Override with specific tools
---

Perform a security audit of the API in ${selection}.

Focus on:
- Authentication and authorization
- Input validation
- SQL injection vulnerabilities
- XSS risks

Use #tool:fetch to check OWASP guidelines.
Use #tool:githubRepo to find similar secure implementations.
```

**Result**: the prompt gets security-reviewer's behavior plus its own overridden tools.
## Pattern 2: Agent References Instructions
An agent can reference instruction files for reusable guidelines:
```markdown
---
name: security-reviewer
tools: ['codebase', 'search', 'fetch']
---

# Security Reviewer Agent

You are a security expert. Follow the guidelines in:
- [Security Best Practices](../instructions/security-standards.instructions.md)
- [OWASP Compliance Checklist](../instructions/owasp-checklist.instructions.md)

Your task is to identify vulnerabilities and propose fixes.
```

**Result**: the agent inherits detailed security rules from the instruction files.
## Pattern 3: Automatic Instruction Application
Instructions apply automatically based on applyTo patterns:
```markdown
# python-security.instructions.md
---
applyTo: "**/*.py"
---

# Python Security Guidelines

- Always use parameterized queries, never string concatenation for SQL
- Validate all user input with type checking
- Use the `secrets` module for tokens; never hardcode credentials
```

When the security-reviewer agent analyzes a `.py` file, these instructions are automatically included.
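Conceptually, `applyTo` works like glob matching against the file's workspace path. As an illustration only (a simplified sketch in Python, not VS Code's actual matcher), you can model which instruction files become active for a given file:

```python
from fnmatch import fnmatch

def apply_to_matches(pattern: str, path: str) -> bool:
    """Simplified check: would an applyTo glob cover this file path?

    Conceptual sketch only. A leading `**/` may match zero directories,
    so we also try the pattern with that prefix stripped (plain fnmatch
    would otherwise insist on a slash being present).
    """
    if fnmatch(path, pattern):
        return True
    return pattern.startswith("**/") and fnmatch(path, pattern[3:])

# Instruction files and their applyTo patterns (from the examples above)
instruction_files = {
    "react-guidelines.instructions.md": "**/*.tsx",
    "api-standards.instructions.md": "**/api/**",
    "security-rules.instructions.md": "**",
}

# Which instruction files would be pulled in for one analyzed file?
active = [name for name, pat in instruction_files.items()
          if apply_to_matches(pat, "src/api/users.py")]
print(active)
```

For `src/api/users.py` this selects the API standards and the catch-all security rules, but not the React guidelines.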
## Pattern 4: Workflow Composition
Combine agents, prompts, and instructions for complete workflows:
```text
1. User invokes: /plan-feature
   → Uses planner agent
   → Includes project-architecture.instructions.md (auto-applied)
   → Generates implementation plan

2. User clicks handoff: "Start Implementation"
   → Switches to implementation agent
   → Includes language-specific instructions (auto-applied)
   → Begins coding

3. User clicks handoff: "Review Changes"
   → Switches to security-reviewer agent
   → Includes security-standards.instructions.md (auto-applied)
   → Reviews for vulnerabilities
```
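Step 1 above could be backed by a prompt file along these lines (a minimal sketch; the file name and wording are hypothetical):

```markdown
# plan-feature.prompt.md
---
name: plan-feature
description: Generate an implementation plan for the selected code
agent: planner # inherits the planner persona and its read-only tools
---

Create a detailed implementation plan for the feature described in ${selection}.
```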
## Reusability Example
**Scenario**: multiple prompts use the same agent with different focuses.

```markdown
# Agent: test-specialist.agent.md
---
tools: ['codebase', 'editor', 'search']
---

You are a testing expert. Generate comprehensive test suites.
```

```markdown
# Prompt 1: unit-tests.prompt.md
---
agent: test-specialist
---

Generate unit tests for ${selection} with 100% coverage.
```

```markdown
# Prompt 2: integration-tests.prompt.md
---
agent: test-specialist
---

Generate integration tests for the API workflow in ${file}.
```

Both prompts reuse test-specialist's tools and behavior but define different tasks.
# 💡 Decision Framework: When to Use Each File Type
Use this decision tree to determine the right file type:
## Should this be a Prompt, Agent, or Instruction?

Start here: what are you defining?
```text
┌─────────────────────────────────────────────────────────────┐
│ QUESTION: What am I trying to define?                       │
└───────────────────┬─────────────────────────────────────────┘
                    │
        ┌───────────┴───────────┐
        │                       │
    ┌───▼────┐             ┌────▼───┐
    │  Task  │             │ Rules  │
    └───┬────┘             └────┬───┘
        │                       │
┌───────▼────────────┐   ┌──────▼───────────────────┐
│ Is it invoked      │   │ Does it apply to         │
│ on-demand for a    │   │ specific file types/     │
│ specific job?      │   │ contexts automatically?  │
└───────┬────────────┘   └──────┬───────────────────┘
        │                       │
    YES │                   YES │
        │                       │
┌───────▼────────────┐   ┌──────▼───────────────────┐
│ Does it define     │   │ → INSTRUCTION FILE       │
│ persistent         │   │   (.instructions.md)     │
│ behavior for       │   └──────────────────────────┘
│ multiple tasks?    │
└───────┬────────────┘
        │
    YES │ NO
        │  └─────────────┐
        │                │
┌───────▼────────────┐   │
│ → AGENT FILE       │   │
│   (.agent.md)      │   │
└────────────────────┘   │
                  ┌──────▼─────────────┐
                  │ → PROMPT FILE      │
                  │   (.prompt.md)     │
                  └────────────────────┘
```
## Detailed Decision Criteria
### Use a Prompt File when:

- ✅ Defining a specific task that users invoke on-demand
- ✅ The task has a clear start and end point
- ✅ Users need to provide input/context when invoking
- ✅ The task is standalone (doesn't require a persistent persona)
- ✅ Different users will invoke it for different scenarios

Examples:

- Generate a React form component
- Perform a security audit of an API
- Create a migration script
- Write documentation for a function
### Use an Agent File when:

- ✅ Defining an AI persona/role (not a single task)
- ✅ Behavior should persist across multiple interactions
- ✅ Multiple prompts will use this persona
- ✅ You need to restrict tool access for this role
- ✅ You want to enable handoffs to/from this agent

Examples:

- Security Reviewer (persona for any security task)
- Planner (generates plans for various features)
- Test Specialist (creates tests for different components)
- Documentation Expert (writes any type of documentation)
### Use an Instruction File when:

- ✅ Defining rules that apply automatically
- ✅ Rules are specific to file types, languages, or directories
- ✅ Guidelines should influence ALL work in that context
- ✅ Rules are orthogonal to task/persona (apply everywhere)
- ✅ You want both prompts and agents to follow these rules

Examples:

- Python coding standards (applies to all `.py` files)
- API design guidelines (applies to the `api/` directory)
- Security requirements (applies everywhere: `**`)
- React component conventions (applies to `.tsx` files)
## Common Scenarios
| Scenario | Solution |
|---|---|
| "Generate unit tests for this function" | Prompt file (`generate-unit-tests.prompt.md`) |
| "I need a testing expert persona" | Agent file (`test-specialist.agent.md`) |
| "All Python code should use type hints" | Instruction file (`python-standards.instructions.md`, `applyTo: **/*.py`) |
| "Create a workflow: plan → implement → review" | Three agents with handoffs (`planner.agent.md` → `implementer.agent.md` → `reviewer.agent.md`) |
| "Security audit for APIs" | Prompt references security agent (`api-audit.prompt.md` with `agent: security-reviewer`) |
| "C# naming conventions" | Instruction file (`csharp-standards.instructions.md`, `applyTo: **/*.cs`) |
## Anti-Patterns to Avoid
- ❌ Don't create an agent for every single task
  - ✅ Do create agents for roles/personas, prompts for tasks
- ❌ Don't duplicate rules across prompts
  - ✅ Do extract common rules into instruction files
- ❌ Don't create instructions for task-specific workflows
  - ✅ Do use prompts for task workflows, instructions for universal rules
- ❌ Don't create a prompt that just switches to an agent
  - ✅ Do let users switch agents directly; prompts should add task-specific context
- ❌ Don't hardcode tool lists in every prompt
  - ✅ Do define default tools in agents, overriding in prompts only when needed
# 🎯 Conclusion
Custom agents, prompts, and instructions form a powerful, composable system for customizing GitHub Copilot. By understanding each file type’s purpose and how they interact, you can create sophisticated AI workflows that provide:
- Specialized personas through agents that define persistent roles and behaviors
- Task-specific workflows through prompts that invoke targeted operations
- Context-aware guidelines through instructions that apply automatically
- Orchestrated processes through handoffs that transition between agents
Key Principles:
- Separation of Concerns: Prompts define tasks, agents define personas, instructions define rules
- Composability: Prompts reference agents, agents reference instructions, creating flexible combinations
- Tool Priority: Prompts > Agents > Defaults, enabling precise control
- Reusability: One agent serves multiple prompts; one instruction applies everywhere
- Handoffs: Agents coordinate multi-step workflows with user review points
By following the decision framework and composition patterns in this article, you can build a library of reusable, maintainable AI customizations that enhance your team’s productivity.
# 📚 References

## Official GitHub Copilot Documentation

- **Custom Agents in VS Code** [📘 Official]
  Comprehensive guide to creating custom agents with `.agent.md` files. Covers agent file structure, YAML frontmatter options, handoffs for workflow orchestration, tool configuration, and how agents differ from chat modes. Essential reading for understanding agent personas and multi-step workflows.
- **Use Prompt Files in VS Code** [📘 Official]
  Documentation on creating reusable prompt files with the `.prompt.md` extension. Explains how prompts reference agents via the `agent` field, tool priority order, variable substitution, and the relationship between prompts and agents. Critical for understanding how prompts and agents compose.
- **Use Custom Instructions in VS Code** [📘 Official]
  Guide to creating instruction files with the `.instructions.md` extension. Covers the `applyTo` frontmatter field for automatic application based on file patterns, how to reference instructions from prompts and agents, and best practices for context-specific rules. Important for understanding how instructions provide universal guidelines.
- **Use Tools in Chat** [📘 Official]
  Comprehensive reference for GitHub Copilot tools, including built-in tools, MCP tools, and tool sets. Explains the tool approval process, how to reference tools with `#tool:` syntax, and tool configuration in agents and prompts. Essential for understanding agent capabilities and tool priority.

## Community Resources

- **Awesome GitHub Copilot - Custom Agents Examples** [📒 Community]
  Community-curated collection of custom agents, prompts, and instructions. Browse real-world examples of agent personas, handoff workflows, and composition patterns. Valuable for learning from practical implementations.