How to Create MCP Servers for GitHub Copilot

Learn how to build custom Model Context Protocol (MCP) servers that extend GitHub Copilot with new tools, resources, and integrations using TypeScript, C#, or Python.
Author: Dario Airoldi
Published: January 21, 2026


The Model Context Protocol (MCP) enables you to extend GitHub Copilot with custom tools, resources, and integrations that go far beyond what's possible with prompts, agents, or skills alone.
While the previous articles in this series covered consuming Copilot customizations, this article focuses on creating your own MCP servers: building the server-side components that provide new capabilities to Copilot Chat.

This article covers MCP server architecture, implementation patterns in TypeScript, C#, and Python, and best practices for debugging and deployment.


🎯 Understanding MCP servers

What is the Model Context Protocol?

MCP (Model Context Protocol) is an open standard that defines how AI assistants communicate with external tools and data sources.
Think of it as a universal adapter: rather than building custom integrations for each AI assistant, you build one MCP server that works with any MCP-compatible client.

Key concepts

| Concept | Description |
| --- | --- |
| MCP Server | A process that provides tools, resources, and prompts to AI clients |
| MCP Client | An AI assistant (like Copilot) that connects to servers to access capabilities |
| Tools | Functions the AI can call to perform actions (query databases, call APIs, etc.) |
| Resources | Data sources the AI can read (files, configurations, live data) |
| Prompts | Reusable prompt templates exposed by the server |
| Transport | Communication channel (stdio, SSE/HTTP) between client and server |

MCP vs other customization types

| Feature | MCP Servers | Skills | Agents | Prompts |
| --- | --- | --- | --- | --- |
| Purpose | Add tools and data sources | Bundle workflows with resources | Define AI personas | Define reusable tasks |
| Scope | Unlimited (any integration) | File-based workflows | Chat session behavior | Single task execution |
| Language | Any (TypeScript, C#, Python, etc.) | Markdown only | Markdown only | Markdown only |
| Complexity | High (full programming) | Medium (folder structure) | Low (single file) | Low (single file) |
| Capabilities | Call APIs, query DBs, execute code | Read files, run scripts | Control tools, handoff | Variable substitution |
| Cross-platform | ✅ Any MCP client | ✅ VS Code, CLI, coding agent | ❌ VS Code only | ❌ VS Code only |

When to build an MCP server

✅ Build an MCP server when you need to:

  • Query external systems: databases, APIs, internal services
  • Perform complex computations: data processing, analysis, transformations
  • Access live data: real-time metrics, monitoring, dashboards
  • Enforce business logic: validation rules, compliance checks
  • Integrate proprietary tools: internal tooling, legacy systems
  • Share capabilities across projects: reusable tooling for teams

❌ Don't build an MCP server when:

  • A prompt file can accomplish the task
  • You only need to define coding standards (use instruction files)
  • You only need to bundle scripts with instructions (use skills)
  • The existing built-in tools or community servers meet your needs

πŸ—οΈ MCP server architecture

Server lifecycle

┌─────────────────────────────────────────────────────────────────┐
│  1. INITIALIZATION                                              │
│  ├── Client discovers server (from mcp.json configuration)      │
│  ├── Client spawns server process                               │
│  └── Server sends capabilities (tools, resources, prompts)      │
└────────────────┬────────────────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────────────────┐
│  2. CAPABILITY NEGOTIATION                                      │
│  ├── Client: "What can you do?"                                 │
│  ├── Server: Lists tools with JSON Schema definitions           │
│  └── Server: Lists resources and prompt templates               │
└────────────────┬────────────────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────────────────┐
│  3. RUNTIME OPERATION                                           │
│  ├── Client sends tool invocation requests                      │
│  ├── Server executes tool logic                                 │
│  ├── Server returns structured results                          │
│  └── (Repeat for each tool call)                                │
└────────────────┬────────────────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────────────────┐
│  4. SHUTDOWN                                                    │
│  └── Client terminates server process                           │
└─────────────────────────────────────────────────────────────────┘

Transport options

MCP supports two primary transport mechanisms:

| Transport | Use case | Pros | Cons |
| --- | --- | --- | --- |
| stdio | Local servers | Simple, secure, fast | Single client only |
| SSE/HTTP | Remote servers | Multiple clients, network accessible | Requires authentication |

For GitHub Copilot integration, stdio is the default and recommended transport. The client spawns your server as a subprocess and communicates via stdin/stdout.
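The framing for stdio is deliberately simple: each JSON-RPC message is one line of JSON terminated by a newline. The SDKs handle this for you; the hand-rolled sketch below is illustrative only, not the SDK API.

```typescript
// Illustrative only: the SDKs implement stdio framing for you.
// Each JSON-RPC message is serialized as a single line of JSON
// terminated by "\n", written to stdout / read from stdin.

// Serialize one message for the wire.
function encodeMessage(message: object): string {
  return JSON.stringify(message) + "\n";
}

// Split an incoming chunk into complete messages, keeping any
// trailing partial line so the caller can buffer it.
function decodeChunk(chunk: string): { messages: object[]; rest: string } {
  const lines = chunk.split("\n");
  const rest = lines.pop() ?? ""; // last element is always the incomplete tail
  const messages = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
  return { messages, rest };
}

const wire = encodeMessage({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} });
const { messages, rest } = decodeChunk(wire);
```

The buffering matters in practice: stdin delivers arbitrary chunks, so a message may arrive split across reads.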

Message format

MCP uses JSON-RPC 2.0 for all communication:

// Tool invocation request (client → server)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "table": "users",
      "filter": "active = true"
    }
  }
}

// Tool result response (server → client)
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 42 active users..."
      }
    ]
  }
}
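When a call fails at the protocol level (for example, a missing required argument), the server returns the standard JSON-RPC error shape instead of a result. The specific code and message below are illustrative:

```json
// Error response (server → client)
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params: missing required argument 'table'"
  }
}
```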

SDK implementations compared

A critical question when building MCP servers is: How do the different language implementations compare? This section clarifies the execution model and capabilities across TypeScript, C#, and Python SDKs.

Process model: out-of-process, not in-process

All MCP servers run as separate processes; they are never loaded in-process into the AI client (like VS Code or Claude Desktop). This is a fundamental architectural decision:

┌──────────────────────────────────────────────────────────────────┐
│  MCP HOST (e.g., VS Code, Claude Desktop)                        │
│  ┌────────────────┐  ┌────────────────┐  ┌────────────────┐      │
│  │  MCP Client 1  │  │  MCP Client 2  │  │  MCP Client 3  │      │
│  └───────┬────────┘  └───────┬────────┘  └───────┬────────┘      │
└──────────│───────────────────│───────────────────│───────────────┘
           │                   │                   │
      stdio/HTTP          stdio/HTTP          stdio/HTTP
           │                   │                   │
           ▼                   ▼                   ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  MCP Server      │  │  MCP Server      │  │  MCP Server      │
│  (TypeScript)    │  │  (Python)        │  │  (C#/.NET)       │
│  SEPARATE PROCESS│  │  SEPARATE PROCESS│  │  SEPARATE PROCESS│
└──────────────────┘  └──────────────────┘  └──────────────────┘

Why out-of-process?

| Benefit | Explanation |
| --- | --- |
| Language independence | Write servers in any language; the client doesn't care |
| Isolation | Server crashes don't affect the host application |
| Security | Servers run with their own permissions and sandboxing |
| Scalability | Multiple servers can run simultaneously |
| Updates | Update servers independently of the client |

Communication: transport is the same

Regardless of which SDK you use, communication with the client uses identical protocols:

| Transport | How it works | Language support |
| --- | --- | --- |
| stdio | Client spawns server as subprocess, communicates via stdin/stdout | ✅ All SDKs |
| Streamable HTTP | Server runs as HTTP endpoint, client connects via POST/SSE | ✅ All SDKs |

Key insight: The client doesn't know or care what language your server is written in. A TypeScript client can connect to a Python server, and a C# host can spawn a TypeScript server; the JSON-RPC 2.0 messages are identical.

Capability parity: same MCP primitives everywhere

All official SDKs implement the same MCP specification and expose the same primitives:

| Capability | TypeScript | C# (.NET) | Python | Notes |
| --- | --- | --- | --- | --- |
| Tools | ✅ | ✅ | ✅ | Functions AI can call |
| Resources | ✅ | ✅ | ✅ | Read-only data sources |
| Prompts | ✅ | ✅ | ✅ | Reusable templates |
| Sampling | ✅ | ✅ | ✅ | Request LLM completions from client |
| Elicitation | ✅ | ✅ | ✅ | Request user input |
| Notifications | ✅ | ✅ | ✅ | Real-time updates to client |
| Progress reporting | ✅ | ✅ | ✅ | Long-running operation status |
| Structured output | ✅ | ✅ | ✅ | Typed tool responses (spec 2025-06-18) |

Bottom line: You can expose the exact same functionality regardless of which SDK you choose.

SDK-specific differences: ecosystem and ergonomics

While capabilities are identical, each SDK has unique strengths based on its language ecosystem:

| Aspect | TypeScript | C# (.NET) | Python |
| --- | --- | --- | --- |
| Runtime | Node.js | .NET 8.0+ | Python 3.10+ |
| Package manager | npm | NuGet | pip/uv |
| Async model | Promises, async/await | Task-based async | asyncio |
| Type safety | Optional (TypeScript) | Strong (compile-time) | Optional (type hints) |
| Startup time | Fast (< 100ms) | Medium (cold start ~200ms) | Fast (< 100ms) |
| Memory footprint | Medium | Higher (CLR overhead) | Lower |
| Best for | Web integrations, npm ecosystem | Enterprise, existing .NET codebases | AI/ML, data science |

Ergonomic differences:

// TypeScript: decorator-free, functional style
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // Handle tool call
});

// C#: attribute-based, dependency injection
[McpServerTool, Description("Tool description")]
public string MyTool(string input) => $"Result: {input}";

# Python: decorator-based, FastMCP convenience layer
@mcp.tool()
def my_tool(input: str) -> str:
    return f"Result: {input}"

Choosing your implementation language

Use this decision framework:

| If you need… | Choose… | Because… |
| --- | --- | --- |
| Quick prototyping | Python (FastMCP) | Minimal boilerplate, decorator syntax |
| npm ecosystem access | TypeScript | Direct integration with npm packages |
| Enterprise integration | C# (.NET) | Strong typing, dependency injection, ASP.NET integration |
| AI/ML capabilities | Python | Rich libraries (PyTorch, transformers, etc.) |
| Existing .NET codebase | C# (.NET) | Seamless integration with existing services |
| Maximum type safety | C# (.NET) or TypeScript | Compile-time error checking |
| Smallest deployment | Python or TypeScript | No CLR overhead |

Remember: The choice is about developer experience and ecosystem, not capabilities. All three can do everything MCP supports.

🔧 Building tools

Tools are the primary way MCP servers provide functionality to AI clients. Each tool has:

  1. Name: unique identifier (e.g., query_database)
  2. Description: what the tool does (helps the AI decide when to use it)
  3. Input schema: JSON Schema defining required and optional parameters
  4. Handler: function that executes when the tool is called

Tool definition anatomy

// TypeScript example - tool definition
{
  name: "validate_yaml",
  description: "Validates a YAML string and returns any syntax errors",
  inputSchema: {
    type: "object",
    properties: {
      content: {
        type: "string",
        description: "The YAML content to validate"
      },
      strict: {
        type: "boolean",
        description: "Enable strict validation mode",
        default: false
      }
    },
    required: ["content"]
  }
}
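To make the anatomy concrete, here is a hand-rolled sketch of the validation step that schema implies: checking required keys and primitive types before the handler runs. Production servers typically delegate this to a real JSON Schema validator such as Ajv; the checkArguments helper below is purely illustrative.

```typescript
// Purely illustrative: real servers use a JSON Schema validator (e.g. Ajv).
// Checks required keys and primitive types against the tool's inputSchema.

type ToolInputSchema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};

function checkArguments(
  schema: ToolInputSchema,
  args: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const expected = schema.properties[key]?.type;
    // typeof matches "string" / "boolean" / "number"; full JSON Schema has more types.
    if (expected && typeof value !== expected) {
      errors.push(`argument '${key}' should be ${expected}`);
    }
  }
  return errors;
}

const schema: ToolInputSchema = {
  type: "object",
  properties: { content: { type: "string" }, strict: { type: "boolean" } },
  required: ["content"],
};

const ok = checkArguments(schema, { content: "key: value", strict: false }); // []
const bad = checkArguments(schema, { strict: "yes" }); // two errors
```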

Tool design best practices

1. Write descriptive descriptions

The AI uses your description to decide when to call your tool. Be specific about:

  • What the tool does
  • When to use it (and when not to)
  • What inputs it expects
  • What outputs it returns
// ❌ Vague description
description: "Validates content"

// ✅ Specific description
description: "Validates YAML syntax and structure. Use this when checking " +
             "configuration files, CI/CD workflows, or any YAML content for " +
             "syntax errors. Returns detailed error messages with line numbers."

2. Design focused, single-purpose tools

Each tool should do one thing well. If you're tempted to add an action parameter, split into multiple tools instead.

// ❌ Multi-purpose tool
{
  name: "file_operations",
  inputSchema: {
    properties: {
      action: { enum: ["read", "write", "delete", "list"] },
      path: { type: "string" }
    }
  }
}

// ✅ Focused tools
{ name: "read_file", ... }
{ name: "write_file", ... }
{ name: "delete_file", ... }
{ name: "list_directory", ... }

3. Use proper JSON Schema for validation

Leverage JSON Schema features to ensure valid inputs:

inputSchema: {
  type: "object",
  properties: {
    email: {
      type: "string",
      format: "email",
      description: "User's email address"
    },
    age: {
      type: "integer",
      minimum: 0,
      maximum: 150
    },
    role: {
      type: "string",
      enum: ["admin", "user", "guest"]
    }
  },
  required: ["email"],
  additionalProperties: false
}

4. Return structured, actionable results

// ❌ Unstructured result
return { content: [{ type: "text", text: "Error occurred" }] };

// ✅ Structured, actionable result
return {
  content: [{
    type: "text",
    text: JSON.stringify({
      success: false,
      error: {
        code: "VALIDATION_FAILED",
        message: "Invalid YAML syntax",
        line: 42,
        column: 15,
        suggestion: "Check for missing colon after key name"
      }
    }, null, 2)
  }]
};
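One way to keep results consistent is a small helper shared by every tool. The sketch below follows the error payload convention shown above; the success/code/suggestion fields are this article's convention, not part of MCP.

```typescript
// Sketch of a shared helper so every tool reports errors the same way.
// The success/code/suggestion fields are an example convention, not MCP itself.

interface ToolResult {
  content: { type: "text"; text: string }[];
  isError?: boolean;
}

function errorResult(code: string, message: string, suggestion?: string): ToolResult {
  return {
    isError: true,
    content: [{
      type: "text",
      text: JSON.stringify(
        { success: false, error: { code, message, suggestion } },
        null,
        2
      ),
    }],
  };
}

const result = errorResult(
  "VALIDATION_FAILED",
  "Invalid YAML syntax",
  "Check for missing colon after key name"
);
```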

📦 Building resources

Resources provide read-only data that the AI can access. Unlike tools (which perform actions), resources expose information.

Resource types

| Type | URI pattern | Example |
| --- | --- | --- |
| Static | Fixed URI | config://settings |
| Template | Parameterized URI | file://{path} |
| Dynamic | Generated at runtime | metrics://cpu-usage |

Resource definition

{
  uri: "config://validation-rules",
  name: "Validation Rules",
  description: "Current validation configuration and thresholds",
  mimeType: "application/json"
}
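For template resources, the server must map a concrete request URI back to the template's parameters. The SDKs ship their own template handling; the matchTemplate helper below is hypothetical and only illustrates the mechanics.

```typescript
// Hypothetical helper: matches a template URI like "file://{path}"
// against a concrete request URI. The SDKs provide real template handling.

function matchTemplate(
  template: string,
  uri: string
): Record<string, string> | null {
  // Turn each {param} into a named capture group, e.g.
  // "file://{path}" becomes ^file://(?<path>.+)$
  const pattern = template.replace(
    /\{(\w+)\}/g,
    (_match: string, name: string) => `(?<${name}>.+)`
  );
  const match = new RegExp(`^${pattern}$`).exec(uri);
  // Static URIs have no groups; an empty object still signals a match.
  return match ? { ...(match.groups ?? {}) } : null;
}

const params = matchTemplate("file://{path}", "file://src/index.ts");
// params: { path: "src/index.ts" }
```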

💻 Implementation: TypeScript

TypeScript/Node.js is the most common language for MCP servers, with excellent SDK support.

Setup

# Create new project
mkdir my-mcp-server && cd my-mcp-server
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk
npm install -D typescript @types/node

# Initialize TypeScript
npx tsc --init

Basic server structure

// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Create server instance
const server = new Server(
  {
    name: "my-mcp-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "greet",
        description: "Generates a greeting message for the given name",
        inputSchema: {
          type: "object",
          properties: {
            name: {
              type: "string",
              description: "Name of the person to greet",
            },
          },
          required: ["name"],
        },
      },
    ],
  };
});

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "greet") {
    const personName = args?.name as string;
    return {
      content: [
        {
          type: "text",
          text: `Hello, ${personName}! Welcome to MCP.`,
        },
      ],
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

// Start server with stdio transport
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP server running on stdio");
}

main().catch(console.error);

Build and run

// package.json
{
  "name": "my-mcp-server",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

💻 Implementation: C# (.NET)

C# provides strong typing and excellent performance for MCP servers. The official SDK is maintained in collaboration with Microsoft.

Setup

# Create new console project
dotnet new console -n MyMcpServer
cd MyMcpServer

# Add MCP SDK (official package)
dotnet add package ModelContextProtocol --prerelease
dotnet add package Microsoft.Extensions.Hosting

Basic server structure (attribute-based)

The recommended approach uses attributes and dependency injection:

// Program.cs
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using ModelContextProtocol.Server;
using System.ComponentModel;

var builder = Host.CreateApplicationBuilder(args);

// Configure logging to stderr (required for stdio transport)
builder.Logging.AddConsole(options =>
{
    options.LogToStandardErrorThreshold = LogLevel.Trace;
});

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();  // Auto-discovers tools in assembly

await builder.Build().RunAsync();

// Define tools using attributes
[McpServerToolType]
public static class GreetingTools
{
    [McpServerTool, Description("Generates a greeting message for the given name")]
    public static string Greet(
        [Description("Name of the person to greet")] string name)
        => $"Hello, {name}! Welcome to MCP.";

    [McpServerTool, Description("Generates a farewell message")]
    public static string Farewell(
        [Description("Name of the person")] string name,
        [Description("Optional custom message")] string? message = null)
        => message ?? $"Goodbye, {name}! See you soon.";
}

Advanced: dependency injection and server access

Tools can access services and the MCP server instance:

[McpServerToolType]
public class DataTools
{
    [McpServerTool, Description("Query the database and summarize results")]
    public static async Task<string> QueryDatabase(
        McpServer server,           // Injected: access to MCP server
        HttpClient httpClient,      // Injected: from DI container
        [Description("The query to execute")] string query,
        CancellationToken cancellationToken)
    {
        // Use httpClient from dependency injection
        var data = await httpClient.GetStringAsync($"/api/query?q={query}", cancellationToken);
        
        // Use server to make sampling requests back to client
        var summary = await server.AsSamplingChatClient()
            .GetResponseAsync($"Summarize: {data}", cancellationToken);
        
        return summary;
    }
}

Alternative: low-level handler approach

For fine-grained control over the protocol:

using ModelContextProtocol;
using ModelContextProtocol.Protocol;
using ModelContextProtocol.Server;
using System.Text.Json;

McpServerOptions options = new()
{
    ServerInfo = new Implementation { Name = "MyServer", Version = "1.0.0" },
    Handlers = new McpServerHandlers
    {
        ListToolsHandler = (request, ct) =>
            ValueTask.FromResult(new ListToolsResult
            {
                Tools = [
                    new Tool
                    {
                        Name = "greet",
                        Description = "Generates a greeting message",
                        InputSchema = JsonSerializer.Deserialize<JsonElement>("""
                        {
                            "type": "object",
                            "properties": {
                                "name": { "type": "string", "description": "Name to greet" }
                            },
                            "required": ["name"]
                        }
                        """),
                    }
                ]
            }),

        CallToolHandler = (request, ct) =>
        {
            if (request.Params?.Name == "greet")
            {
                var name = request.Params.Arguments?["name"]?.ToString() ?? "World";
                return ValueTask.FromResult(new CallToolResult
                {
                    Content = [new TextContentBlock { Text = $"Hello, {name}!", Type = "text" }]
                });
            }
            throw new McpProtocolException($"Unknown tool: '{request.Params?.Name}'", McpErrorCode.InvalidRequest);
        }
    }
};

await using var server = McpServer.Create(new StdioServerTransport("MyServer"), options);
await server.RunAsync();

💻 Implementation: Python

Python offers quick prototyping and access to rich AI/ML libraries. The SDK includes FastMCP, a high-level API that minimizes boilerplate.

Setup

# Using uv (recommended)
uv init mcp-server-demo
cd mcp-server-demo
uv add "mcp[cli]"

# Or with pip
pip install "mcp[cli]"

Basic server structure (FastMCP, recommended)

FastMCP provides a decorator-based API for minimal boilerplate:

# server.py
from mcp.server.fastmcp import FastMCP

# Create server instance
mcp = FastMCP("my-mcp-server")

@mcp.tool()
def greet(name: str) -> str:
    """Generates a greeting message for the given name.
    
    Args:
        name: Name of the person to greet
    """
    return f"Hello, {name}! Welcome to MCP."

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

@mcp.resource("config://settings")
def get_settings() -> str:
    """Get application settings."""
    return '{"theme": "dark", "language": "en"}'

@mcp.prompt()
def summarize(content: str) -> str:
    """Generate a summarization prompt."""
    return f"Please summarize the following content:\n\n{content}"

if __name__ == "__main__":
    mcp.run()  # Uses stdio transport by default

Run with:

python server.py
# Or with uv
uv run server.py

Advanced: context access and progress reporting

Tools can access the MCP context for logging, progress, and client interaction:

from mcp.server.fastmcp import FastMCP, Context
from mcp.server.session import ServerSession

mcp = FastMCP("progress-demo")

@mcp.tool()
async def long_task(
    task_name: str, 
    ctx: Context[ServerSession, None],
    steps: int = 5
) -> str:
    """Execute a long-running task with progress updates."""
    await ctx.info(f"Starting: {task_name}")
    
    for i in range(steps):
        progress = (i + 1) / steps
        await ctx.report_progress(
            progress=progress,
            total=1.0,
            message=f"Step {i + 1}/{steps}"
        )
        await ctx.debug(f"Completed step {i + 1}")
    
    return f"Task '{task_name}' completed"

Alternative: low-level server

For full control over the protocol:

import asyncio
from mcp.server.lowlevel import Server
from mcp.server.stdio import stdio_server
import mcp.types as types

server = Server("my-mcp-server")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="greet",
            description="Generates a greeting message",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Name to greet"}
                },
                "required": ["name"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "greet":
        person_name = arguments.get("name", "World")
        return [types.TextContent(type="text", text=f"Hello, {person_name}!")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())

Running with HTTP transport

For remote deployments, use Streamable HTTP:

if __name__ == "__main__":
    # Streamable HTTP (recommended for production)
    mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)

βš™οΈ Configuration and registration

VS Code configuration

MCP servers are registered in VS Code's settings.json or workspace .vscode/mcp.json:

// .vscode/mcp.json (workspace-level, recommended)
{
  "servers": {
    "my-server": {
      "type": "stdio",
      "command": "node",
      "args": ["${workspaceFolder}/mcp-servers/my-server/dist/index.js"]
    }
  }
}
// settings.json (user-level)
{
  "github.copilot.chat.mcp.servers": {
    "my-server": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/my-server/dist/index.js"]
    }
  }
}

Configuration options

| Option | Description | Example |
| --- | --- | --- |
| type | Transport type | "stdio" or "sse" |
| command | Executable to run | "node", "python", "dotnet" |
| args | Command arguments | ["dist/index.js"] |
| env | Environment variables | { "API_KEY": "..." } |
| cwd | Working directory | "${workspaceFolder}" |

Environment variables

Never hardcode secrets in your server code. Use environment variables:

// .vscode/mcp.json
{
  "servers": {
    "my-server": {
      "type": "stdio",
      "command": "node",
      "args": ["dist/index.js"],
      "env": {
        "DATABASE_URL": "${env:DATABASE_URL}",
        "API_KEY": "${env:MY_API_KEY}"
      }
    }
  }
}
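On the server side, read those variables at startup and fail fast if one is missing, so misconfiguration surfaces immediately rather than mid-tool-call. A minimal TypeScript sketch; requireEnv is a hypothetical helper, and the variable name matches the configuration above.

```typescript
// Sketch: fail fast at startup on missing configuration.
// requireEnv is a hypothetical helper, not part of any SDK.

function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(
      `Missing required environment variable: ${name}. ` +
        `Set it in the "env" block of .vscode/mcp.json.`
    );
  }
  return value;
}

// Call this before connecting the transport:
process.env.DATABASE_URL = "postgres://localhost:5432/demo"; // demo value only
const databaseUrl = requireEnv("DATABASE_URL");
```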

🧪 Testing and debugging

MCP Inspector

The MCP Inspector is the primary debugging tool for MCP servers:

# Run your server through the Inspector (no global install needed)
npx @modelcontextprotocol/inspector node dist/index.js

The inspector provides:

  • Live message inspection (requests/responses)
  • Tool testing interface
  • Resource browser
  • Protocol validation

Logging strategies

Since MCP uses stdio for communication, never write to stdout for debugging. Use stderr instead:

// TypeScript
console.error("Debug: Processing request...");

// Python
import sys
print("Debug: Processing request...", file=sys.stderr)

// C#
Console.Error.WriteLine("Debug: Processing request...");

Unit testing tools

// TypeScript example with Jest
describe("greet tool", () => {
  it("returns greeting with name", async () => {
    const result = await handleToolCall("greet", { name: "Alice" });
    
    expect(result.content[0].text).toContain("Hello, Alice");
  });

  it("handles missing name gracefully", async () => {
    await expect(handleToolCall("greet", {}))
      .rejects.toThrow("name is required");
  });
});

Integration testing

// Test full server lifecycle
describe("MCP Server", () => {
  let server: Server;
  let transport: TestTransport;

  beforeEach(async () => {
    transport = new TestTransport();
    server = createServer();
    await server.connect(transport);
  });

  it("lists tools correctly", async () => {
    const response = await transport.sendRequest({
      method: "tools/list",
      params: {}
    });
    
    expect(response.tools).toHaveLength(1);
    expect(response.tools[0].name).toBe("greet");
  });
});

🚀 Deployment patterns

Local development

.vscode/
├── mcp.json              # Server configuration
└── settings.json         # Enable MCP

mcp-servers/
└── my-server/
    ├── src/
    │   └── index.ts
    ├── dist/             # Compiled output
    ├── package.json
    └── tsconfig.json

Workspace distribution

Share servers with your team by including them in the repository:

.copilot/
└── mcp-servers/
    └── my-server/
        ├── dist/         # Pre-compiled binaries
        └── package.json

.vscode/
└── mcp.json              # Points to .copilot/mcp-servers/

Publishing to MCP Registry

For public distribution, submit your server to the MCP Registry:

  1. Ensure your server follows the MCP specification
  2. Add comprehensive documentation
  3. Include example configurations
  4. Submit a pull request to the registry repository

⚠️ Common pitfalls

1. Writing to stdout

CRITICAL: MCP uses stdout for protocol communication. Any debug output to stdout corrupts the protocol.

// ❌ Breaks MCP protocol
console.log("Debug message");

// ✅ Use stderr for debugging
console.error("Debug message");

2. Synchronous blocking

Long-running operations block the entire server. Always use async patterns:

// ❌ Blocks server
function processData(data: string): string {
  // Heavy synchronous processing
  return heavyComputation(data);
}

// ✅ Non-blocking
async function processData(data: string): Promise<string> {
  return await heavyComputationAsync(data);
}

3. Missing error handling

Unhandled errors crash the server. Wrap tool handlers in try-catch:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    const result = await handleTool(request.params);
    return result;
  } catch (error) {
    return {
      content: [{
        type: "text",
        text: `Error: ${error.message}`
      }],
      isError: true
    };
  }
});

4. Vague tool descriptions

The AI relies on descriptions to decide when to use tools. Be specific:

// ❌ AI doesn't know when to use this
description: "Processes data"

// ✅ Clear use case
description: "Validates JSON configuration files against a predefined schema. " +
             "Use when checking config.json, settings.json, or similar files. " +
             "Returns detailed validation errors with line numbers."

5. Overly complex input schemas

Keep inputs simple. If you need many parameters, consider splitting into multiple tools:

// ❌ Too many parameters
inputSchema: {
  properties: {
    source: { type: "string" },
    destination: { type: "string" },
    format: { type: "string" },
    compression: { type: "boolean" },
    encryption: { type: "boolean" },
    encryptionKey: { type: "string" },
    // ... many more
  }
}

// ✅ Focused tool with sensible defaults
inputSchema: {
  properties: {
    source: { type: "string" },
    destination: { type: "string" }
  },
  required: ["source", "destination"]
}

πŸ’‘ Decision framework

Use this flowchart to decide if you need an MCP server:

Need to extend Copilot capabilities?
β”‚
β”œβ”€ Just coding standards/rules?
β”‚  └─ Use instruction files (.instructions.md)
β”‚
β”œβ”€ Reusable task with user input?
β”‚  └─ Use prompt files (.prompt.md)
β”‚
β”œβ”€ Persistent AI persona with tool restrictions?
β”‚  └─ Use agent files (.agent.md)
β”‚
β”œβ”€ Cross-platform workflow with scripts/templates?
β”‚  └─ Use skill files (SKILL.md)
β”‚
β”œβ”€ Need to call external APIs or databases?
β”‚  └─ Build an MCP server βœ…
β”‚
β”œβ”€ Need real-time data access?
β”‚  └─ Build an MCP server βœ…
β”‚
β”œβ”€ Need complex business logic?
β”‚  └─ Build an MCP server βœ…
β”‚
└─ Need to integrate proprietary systems?
   └─ Build an MCP server βœ…

🧩 MCP Apps β€” Rich UI in chat (experimental)

MCP servers can go beyond text responses by returning interactive HTML UIs directly in the chat panel. The @anthropic-ai/ext-apps package enables this through a ui:// resource scheme, turning text-only tool responses into full interactive experiences.

Status: Experimental β€” API may change. Enable in VS Code: Settings β†’ search β€œMCP apps” β†’ toggle on the experimental feature.

Architecture

An MCP app consists of three parts bundled together:

Component Purpose Technology
HTML file UI layout and structure Standard HTML + CSS (e.g., Pico CSS)
TypeScript file Client-side logic and server communication DOM APIs + @anthropic-ai/ext-apps
MCP server Tool registration and resource handling @modelcontextprotocol/sdk

Vite with vite-plugin-singlefile bundles the HTML and TypeScript into a single self-contained HTML file that the chat client renders inline.

Project structure

my-mcp-app/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ index.ts          # MCP server (tools + resource registration)
β”‚   β”œβ”€β”€ mcpapp.html       # UI layout
β”‚   └── mcpapp.ts         # Client-side logic
β”œβ”€β”€ package.json
β”œβ”€β”€ tsconfig.json
└── vite.config.ts
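A minimal vite.config.ts for this layout might look like the following sketch. It assumes the plugin is published on npm as vite-plugin-singlefile (exporting `viteSingleFile`) and that the entry HTML is src/mcpapp.html; adjust paths to your project:

```typescript
// vite.config.ts β€” sketch, assuming src/mcpapp.html as the entry point
import { defineConfig } from "vite";
import { viteSingleFile } from "vite-plugin-singlefile";

export default defineConfig({
  // Inline all JS/CSS into one self-contained HTML file
  plugins: [viteSingleFile()],
  build: {
    rollupOptions: { input: "src/mcpapp.html" },
    outDir: "dist",
  },
});
```

The server then reads the bundled dist/mcpapp.html at runtime when serving the ui:// resource.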

Required dependencies

{
  "dependencies": {
    "@modelcontextprotocol/sdk": "latest",
    "@anthropic-ai/ext-apps": "latest",
    "zod": "latest"
  },
  "devDependencies": {
    "vite": "latest",
    "vite-plugin-singlefile": "latest",
    "cross-env": "latest"
  }
}

How it works

  1. The MCP server declares a ui:// resource using the ext-apps framework
  2. A tool handler returns a reference to the UI resource instead of plain text
  3. The chat client renders the HTML/CSS/JS inline in the conversation
  4. The app can call server tools, and the server can respond backβ€”bidirectional communication

Registering a UI resource

In the server’s index.ts, register the app resource and a tool that triggers it:

import fs from "node:fs";
import { createApp } from "@anthropic-ai/ext-apps";

// Register the UI resource
server.resource("ui://my-app", async () => {
  const html = fs.readFileSync("dist/mcpapp.html", "utf8");
  return {
    contents: [{ uri: "ui://my-app", mimeType: "text/html", text: html }]
  };
});

// Register a tool that shows the UI
server.tool("show-my-app", "Show the interactive control panel", {}, async () => {
  return {
    content: [{ type: "resource", resource: { uri: "ui://my-app" } }]
  };
});

Client-side tool calls

In the app’s TypeScript file, use callServerTool() to invoke server tools from the UI:

// mcpapp.ts
import { callServerTool } from "@anthropic-ai/ext-apps";

document.getElementById("submit")?.addEventListener("click", async () => {
  const name = (document.getElementById("name") as HTMLInputElement).value;
  const result = await callServerTool("hello", { name });
  document.getElementById("output")!.textContent = result;
});

The promise pattern β€” forcing chat to wait

By default, when a tool returns a UI, the chat finishes immediatelyβ€”it doesn’t wait for user interaction. The promise pattern solves this by returning an unresolved promise that blocks until the user submits input.

How it works

  1. Declare a promise variable at the top of the server file
  2. Return await promise from the tool handlerβ€”the chat blocks
  3. Create an app-only tool (invisible to chat) that the UI calls on submit
  4. The app-only tool’s handler resolves the promise, unblocking the chat

// Server-side: index.ts
import figlet from "figlet";
import { z } from "zod";

let resolvePromise: (value: string) => void;

// Tool visible to chat β€” shows the form
server.tool("show-get-name", "Show name input form", {}, async () => {
  const promise = new Promise<string>((resolve) => {
    resolvePromise = resolve;
  });

  // Return UI resource β€” chat renders the form
  // Await the promise β€” chat blocks until resolved
  return {
    content: [
      { type: "resource", resource: { uri: "ui://get-name" } },
      { type: "text", text: await promise }  // Blocks here
    ]
  };
});

// App-only tool β€” invisible to chat, only the MCP app can call it
server.tool(
  "submit-name",
  "Submit name from form",
  { name: z.string() },
  async ({ name }) => {
    const greeting = figlet.textSync(`Hello, ${name}!`);
    resolvePromise(greeting);  // Unblocks the chat
    return { content: [{ type: "text", text: greeting }] };
  },
  { visibility: "mcp-app-only" }  // Hidden from chat
);

App-only tool visibility

Tools with visibility: "mcp-app-only" don’t appear in the chat’s tool list but remain callable from MCP apps. This pattern:

  • Keeps the chat’s tool list clean and focused
  • Prevents unintended invocations by the model
  • Creates a clear separation between user-facing and internal tools

When to use MCP Apps

Use case Example
Disambiguation Show a form when the user hasn’t specified enough parameters
Interactive controls Color pickers, configuration panels, light controllers
Data visualization Charts, graphs, and interactive dashboards
Multi-step workflows Forms requiring user choices before proceeding
Rich content Org charts, diagrams, media previews

MCP Apps will appear wherever AI shows upβ€”not just VS Code. Expect rich interfaces in web, mobile, and other IDE environments as the standard matures.

For a practical walkthrough building an MCP app from scratch, see Burke Holland β€” MCP Apps.

🎯 Conclusion

Building MCP servers unlocks the full potential of GitHub Copilot by enabling:

  • Custom tool integrations β€” Connect to any API, database, or service
  • Real-time data access β€” Provide live metrics, logs, and status information
  • Business logic enforcement β€” Implement validation, compliance, and domain rules
  • Cross-platform compatibility β€” Work with any MCP-compatible AI assistant

Key takeaways:

  1. Start simple β€” Build one tool, test it thoroughly, then expand
  2. Write clear descriptions β€” The AI uses them to decide when to call your tools
  3. Use stderr for debugging β€” Never write to stdout
  4. Handle errors gracefully β€” Return structured error messages
  5. Test with MCP Inspector β€” Verify protocol compliance before deployment

MCP servers are the most powerful way to extend Copilot, but they’re also the most complex. Use the decision framework above to ensure you’re choosing the right customization type for your needs.

πŸ“š References

Model Context Protocol Specification πŸ“˜ [Official]
The official MCP specification defining protocol messages, transports, and capabilities. Essential reading for understanding the protocol internals.

MCP Architecture Overview πŸ“˜ [Official]
Comprehensive guide to MCP architecture covering participants, layers, lifecycle management, and the data/transport separation.

MCP Transports Documentation πŸ“˜ [Official]
Detailed specification of stdio and Streamable HTTP transports, including message framing, session management, and security considerations.

MCP TypeScript SDK πŸ“˜ [Official]
Official TypeScript/Node.js SDK for building MCP servers and clients. Includes examples and type definitions.

MCP Python SDK πŸ“˜ [Official]
Official Python SDK for building MCP servers. Includes FastMCP high-level API and low-level protocol access.

MCP C# SDK πŸ“˜ [Official]
Official C#/.NET SDK maintained in collaboration with Microsoft. Supports attribute-based tools and ASP.NET Core integration.

MCP SDKs Overview πŸ“˜ [Official]
Complete list of official SDKs (TypeScript, Python, Go, Kotlin, Swift, Java, C#, Ruby, Rust, PHP) with links to documentation.

MCP Inspector πŸ“˜ [Official]
Debugging tool for MCP servers. Essential for development and troubleshooting.

MCP Server Registry πŸ“˜ [Official]
Community registry of published MCP servers. Browse for inspiration or submit your own.

VS Code MCP Documentation πŸ“˜ [Official]
VS Code-specific documentation for configuring and using MCP servers with GitHub Copilot.

Burke Holland β€” MCP Apps Demo πŸŽ₯ [Verified Community]
Demonstrates building MCP servers that return rich HTML UIs in chat using @anthropic-ai/ext-apps. Burke Holland is a Senior Cloud Advocate at Microsoft.

@anthropic-ai/ext-apps (npm) πŸ“˜ [Official]
Official Anthropic package for creating MCP resource-based UI applications.