Compositions

MCP servers become powerful when combined with other Claude Code primitives. Hooks add guardrails. Skills provide workflow orchestration. Subagents scope tool access. CLAUDE.md encodes usage guidance. Memory persists cross-session knowledge. Each composition multiplies the capability of the others.

MCP + Hooks

MCP tools appear in hook events under the mcp__<server>__<tool> naming pattern: the GitHub server's create_pull_request tool, for example, matches as mcp__github__create_pull_request. PreToolUse, PostToolUse, and PermissionRequest hooks apply identically to MCP and built-in tools, which makes hooks the primary mechanism for enforcing safety constraints on external tool usage.

Validate All Calls to a Specific Server

Log every GitHub MCP operation to stderr for visibility:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp__github__.*",
        "hooks": [
          {
            "type": "command",
            "command": "echo \"GitHub tool called: $(jq -r '.tool_name')\" >&2"
          }
        ]
      }
    ]
  }
}

Block Dangerous Database Operations

Prevent destructive SQL through a PreToolUse hook on database tools:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp__db__.*",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/validate-db-query.sh"
          }
        ]
      }
    ]
  }
}

The validation script inspects the tool input and exits with code 2 to block execution:

#!/bin/bash
INPUT=$(cat)
QUERY=$(echo "$INPUT" | jq -r '.tool_input.query // empty')
 
# -w matches whole words only, so column names like "last_updated" are not mistaken for UPDATE
if echo "$QUERY" | grep -iqwE "(DROP|DELETE|TRUNCATE|ALTER|UPDATE|INSERT)"; then
  echo "Blocked: destructive SQL operation not allowed" >&2
  exit 2
fi
 
exit 0

Exit code 0 allows the call. Exit code 2 blocks it. Any other exit code is treated as a hook error but does not block execution.
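The contract is easy to exercise before wiring the script into a hook: pipe a sample PreToolUse payload (the same JSON shape Claude Code writes to the hook's stdin) into the script and check the exit code. A minimal sketch, using a copy of the validation script written to /tmp (shown here with grep -w so SQL keywords only match as whole words):

```shell
# Copy of the validation script at a throwaway path for local testing.
cat > /tmp/validate-db-query.sh <<'SCRIPT'
#!/bin/bash
INPUT=$(cat)
QUERY=$(echo "$INPUT" | jq -r '.tool_input.query // empty')

# -w avoids matching substrings such as "last_updated"
if echo "$QUERY" | grep -iqwE "(DROP|DELETE|TRUNCATE|ALTER|UPDATE|INSERT)"; then
  echo "Blocked: destructive SQL operation not allowed" >&2
  exit 2
fi

exit 0
SCRIPT
chmod +x /tmp/validate-db-query.sh

# Destructive query: exits 2, so the hook would block the call
status=0
echo '{"tool_name":"mcp__db__query","tool_input":{"query":"DROP TABLE users"}}' \
  | /tmp/validate-db-query.sh 2>/dev/null || status=$?
echo "exit=$status"   # exit=2

# Read-only query: exits 0, call allowed
status=0
echo '{"tool_name":"mcp__db__query","tool_input":{"query":"SELECT 1"}}' \
  | /tmp/validate-db-query.sh || status=$?
echo "exit=$status"   # exit=0
```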

Audit All MCP Tool Usage

Write a structured log entry for every MCP tool invocation:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "mcp__.*",
        "hooks": [
          {
            "type": "command",
            "command": "jq -c '{timestamp: now | todate, tool: .tool_name, input: .tool_input}' >> ~/.claude/mcp-audit.log"
          }
        ]
      }
    ]
  }
}

This produces a newline-delimited JSON log. Parse it with jq for analysis, feed it into monitoring tools, or review it manually after agent sessions.
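As a sketch (assuming the log path and the "tool" field written by the PostToolUse hook above), per-tool call counts fall out of a short jq pipeline:

```shell
# Summarize the NDJSON audit log written by the PostToolUse hook.
LOG="$HOME/.claude/mcp-audit.log"

# Most-called MCP tools first
jq -r '.tool' "$LOG" | sort | uniq -c | sort -rn

# Last five calls to any database server, inputs included
jq -c 'select(.tool | startswith("mcp__db"))' "$LOG" | tail -5
```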

Granular Filtering with the if Field

The if field (v2.1.85+) adds pattern matching within a hook matcher, enabling fine-grained control:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "if": "Bash(git push *)",
            "command": ".claude/hooks/require-ci-pass.sh"
          }
        ]
      }
    ]
  }
}

MCP + Skills

Skills become MCP workflow orchestrators. Since skill content loads into Claude's context when invoked, skills can guide Claude through multi-step tool sequences with explicit ordering, decision points, and error handling.

Incident Investigation Skill

---
name: investigate-incident
description: Investigate a production incident using monitoring data
---
 
# Incident Investigation Workflow
 
When investigating an incident:
 
1. Use `mcp__datadog__get_logs` to pull recent error logs for the affected service
2. Use `mcp__datadog__get_metrics` to check error rates and latency
3. Use `mcp__github__search_repositories` to find recent deployments
4. Use `mcp__github__list_commits` to identify what changed
5. Correlate the deployment timeline with the error spike
6. Summarize findings with root cause and recommended action
 
Always check the last 2 hours of logs unless the user specifies a different window.
If error rates spiked at a specific time, narrow the git log to commits deployed
within 30 minutes before that timestamp.

Skills reference MCP tools by their fully qualified names. Claude reads these references as concrete instructions, not suggestions. The skill turns a complex multi-tool investigation into a repeatable workflow.

Deploy Review Skill (Multi-Server)

A skill that coordinates across GitHub, Datadog, and a database server to validate a deployment:

# .claude/skills/deploy-review/SKILL.md
---
name: deploy-review
description: Post-deploy health check using monitoring and CI data
---
 
# Post-Deploy Review
 
After a deployment completes, run this checklist:
 
1. **Deployment context** (GitHub):
   - `mcp__github__list_commits` — last 5 commits on main
   - `mcp__github__get_pull_request` — merged PR details
 
2. **Error monitoring** (Datadog):
   - `mcp__datadog__get_metrics` — error rate for the deployed service (last 30min)
   - `mcp__datadog__get_logs` — error logs since deployment timestamp
   - Compare error rate to the 24h baseline
 
3. **Data integrity** (Database):
   - `mcp__db__query` — run `SELECT count(*) FROM health_checks WHERE status = 'failing' AND checked_at > now() - interval '30 minutes'`
   - Flag any new failing health checks
 
4. **Verdict**:
   - If error rate > 2x baseline OR new failing health checks: recommend rollback
   - Otherwise: deployment healthy, summarize key metrics

MCP + Subagents

Subagents have three mechanisms for accessing MCP tools, each with different scoping implications.

Inherit from Parent

When the tools field is omitted, the subagent inherits all tools including MCP:

---
name: data-analyst
description: Analyses data from connected databases
# tools omitted = inherits all parent tools, including MCP
---

Inline Server Definitions (Scoped to Subagent)

Define MCP servers directly in the subagent's mcpServers field. These servers exist only for the subagent, keeping their tool descriptions out of the parent's context window entirely.

---
name: browser-tester
description: Tests features in a real browser using Playwright
mcpServers:
  - playwright:
      type: stdio
      command: npx
      args: ["-y", "@playwright/mcp@latest"]
  - github
---
 
Use the Playwright tools to navigate, screenshot, and interact with pages.

This is the composition with the highest impact on context management. A server with 15 tools and verbose descriptions loads into the subagent's context, not the parent's. The parent delegates and receives results without paying the token cost.

Explicit Tool Restriction

Use mcp:server:tool syntax to grant access to specific tools only:

---
name: database-assistant
description: Queries and analyses the application database
tools:
  - mcp:shared-db:query
  - mcp:shared-db:list_tables
  - mcp:shared-db:describe_table
  - Read
---

This is least-privilege for subagents. The database assistant can query and inspect schemas but cannot access GitHub, Datadog, or any other connected server.

Limitation: Plugin subagents do NOT support hooks, mcpServers, or permissionMode frontmatter fields. These are silently ignored for security reasons.

MCP + CLAUDE.md

Document MCP tool usage guidance directly in your project CLAUDE.md. This surfaces as persistent context that survives compaction and guides Claude on when, how, and when not to use each server.

## MCP Tools
 
### Database (mcp__db)
- Use `mcp__db__query` for read-only SQL against the analytics database
- Always include LIMIT clauses (max 1000 rows)
- Never run queries estimated to take longer than 30 seconds
- Prefer aggregate queries over fetching raw data
 
### GitHub (mcp__github)
- Use for PR reviews, issue management, and deployment checks
- Always check CI status before approving PRs
- Search for related issues before creating new ones
 
### Datadog (mcp__datadog)
- Use for investigating production errors and performance issues
- Default to 1-hour lookback unless user specifies otherwise
- Correlate logs with metrics when investigating incidents

The specificity matters. "Use for read-only SQL" tells Claude nothing useful. "Always include LIMIT clauses (max 1000 rows)" is a concrete constraint that prevents runaway queries.

MCP + Memory

Claude Code's auto-memory saves notes about tool patterns, useful queries, and workflow insights to ~/.claude/projects/<project>/memory/. The first 200 lines (or 25KB) load at every session start. Across sessions, Claude remembers which MCP queries produced useful results and which tool sequences were effective.

Dedicated Memory Servers

For structured, searchable cross-session persistence beyond auto-memory:

# Reference knowledge-graph memory server
claude mcp add --transport stdio memory -- npx -y @modelcontextprotocol/server-memory
 
# Mem0 for vector-based memory
claude mcp add --transport stdio mem0 -- npx -y mem0-mcp-server

These complement auto-memory by storing structured data that can be queried. A knowledge graph server stores entities and relationships. A vector memory server stores and retrieves by semantic similarity.

Configure both in .mcp.json for automatic session-start availability:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-memory"],
      "env": {
        "MEMORY_DIR": "${HOME}/.claude/memory/knowledge-graph"
      }
    },
    "mem0": {
      "command": "npx",
      "args": ["-y", "mem0-mcp-server"],
      "env": {
        "MEM0_API_KEY": "${MEM0_API_KEY}",
        "MEM0_ORG_ID": "${MEM0_ORG_ID}",
        "MEM0_PROJECT_ID": "${MEM0_PROJECT_ID}"
      }
    }
  }
}

Multi-Server Coordination

Running multiple MCP servers requires no special configuration beyond listing them in .mcp.json. The mcp__<server>__<tool> naming pattern rules out namespace collisions between servers.

{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    },
    "datadog": {
      "type": "http",
      "url": "https://mcp.datadoghq.com"
    },
    "db-analytics": {
      "command": "npx",
      "args": ["-y", "@bytebase/dbhub", "--dsn", "${ANALYTICS_DB_URL}"]
    },
    "db-users": {
      "command": "npx",
      "args": ["-y", "@bytebase/dbhub", "--dsn", "${USERS_DB_URL}"]
    }
  }
}

Two instances of the same server package (like DBHub above) coexist cleanly: the server name differentiates them. Claude sees mcp__db-analytics__query and mcp__db-users__query as separate tools.
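Those same names can then be targeted in permission rules, so each instance is governed separately. A hedged sketch of a settings.json fragment (the tool names query and list_tables are assumptions about what DBHub exposes):

```json
{
  "permissions": {
    "allow": [
      "mcp__db-analytics__query",
      "mcp__db-users__list_tables"
    ],
    "deny": [
      "mcp__db-users__query"
    ]
  }
}
```

Here the analytics instance stays queryable while direct queries against the users database are denied.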

Enterprise Control

For organization-wide deployment, managed-mcp.json and policy-based restrictions control which servers are available:

{
  "allowedMcpServers": [
    { "serverName": "github" },
    { "serverUrl": "https://mcp.company.com/*" },
    { "serverCommand": ["npx", "-y", "@company/approved-mcp"] }
  ],
  "deniedMcpServers": [
    { "serverUrl": "https://*.untrusted.com/*" }
  ]
}

Deny rules always win over allow rules. A server matching both lists is denied.

MCP + Dynamic Credentials

The headersHelper field in .mcp.json delegates header generation to a script. This is the integration point for secrets managers, short-lived tokens, and credential rotation.

#!/bin/bash
# .claude/scripts/get-auth-headers.sh
# headersHelper for internal MCP servers
# Receives CLAUDE_CODE_MCP_SERVER_NAME and CLAUDE_CODE_MCP_SERVER_URL as env vars
 
# Example: fetch a short-lived token from Vault
TOKEN=$(vault kv get -field=token secret/mcp/${CLAUDE_CODE_MCP_SERVER_NAME} 2>/dev/null)
 
if [ -z "$TOKEN" ]; then
  # Fallback: use 1Password CLI
  TOKEN=$(op read "op://Engineering/MCP-${CLAUDE_CODE_MCP_SERVER_NAME}/token" 2>/dev/null)
fi
 
if [ -z "$TOKEN" ]; then
  echo '{}' # Empty headers — server will reject, Claude sees the error
  exit 0
fi
 
# Output must be a JSON object of headers to stdout
cat <<EOF
{
  "Authorization": "Bearer ${TOKEN}",
  "X-MCP-Client": "claude-code",
  "X-Request-ID": "$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || echo 'no-uuid')"
}
EOF

Wire it into the server config:

{
  "mcpServers": {
    "internal-api": {
      "type": "http",
      "url": "https://mcp.internal.example.com",
      "headersHelper": ".claude/scripts/get-auth-headers.sh"
    }
  }
}

The script runs fresh on every connection attempt; its output is not cached. It must complete within 10 seconds. If it fails or returns invalid JSON, the connection attempt fails with a descriptive error.
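The helper's contract (env vars in, a JSON object of headers on stdout) can be smoke-tested outside Claude Code. A sketch using a stub in place of the Vault/1Password-backed script above; the contract, not the secret fetching, is what is being checked:

```shell
# Stub standing in for the real get-auth-headers.sh.
cat > /tmp/stub-headers-helper.sh <<'EOF'
#!/bin/bash
# A real helper would use $CLAUDE_CODE_MCP_SERVER_NAME to look up a token
echo '{"Authorization": "Bearer test-token", "X-MCP-Client": "claude-code"}'
EOF
chmod +x /tmp/stub-headers-helper.sh

# Invoke it the way Claude Code would, then validate the output shape
CLAUDE_CODE_MCP_SERVER_NAME=internal-api \
CLAUDE_CODE_MCP_SERVER_URL=https://mcp.internal.example.com \
  /tmp/stub-headers-helper.sh \
  | jq -e 'type == "object" and has("Authorization")' >/dev/null \
  && echo "helper emits a valid header object"
```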