MCP Tools
The Model Context Protocol (MCP) tools provide a comprehensive suite of 87 specialized tools for AI orchestration, swarm coordination, neural processing, and system management. These tools are integrated with Claude Code through the mcp__claude-flow__ namespace.
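Each tool is invoked with that prefix. As a minimal sketch (reusing the benchmark suite name shown in the usage examples below):
// Invoke a tool through the MCP namespace
await mcp__claude-flow__benchmark_run({
  suite: "swarm-coordination"
});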
- Swarm Management (16 tools)
- Neural & AI (15 tools)
- Memory & Persistence (10 tools)
- Performance & Analytics (10 tools)
- GitHub Integration (6 tools)
- Dynamic Agent Architecture (6 tools)
- Workflow & Automation (8 tools)
- System Utilities (16 tools)
- Usage Examples
- Best Practices
- Tool Composition Patterns
Initialize swarm with topology and configuration.
Parameters:
- topology: hierarchical|mesh|ring|star (required)
- strategy: auto (default)
- maxAgents: 8 (default)
Create specialized AI agents.
Parameters:
- type: coordinator|researcher|coder|analyst|architect|tester|reviewer|optimizer|documenter|monitor|specialist (required)
- name: string (optional)
- swarmId: string (optional)
- capabilities: array (optional)
Orchestrate complex task workflows.
Parameters:
- task: string (required)
- strategy: parallel|sequential|adaptive|balanced
- priority: low|medium|high|critical
- dependencies: array
Monitor swarm health and performance.
Parameters:
- swarmId: string (optional)
List active agents & capabilities.
Parameters:
- swarmId: string (optional)
Agent performance metrics.
Parameters:
- agentId: string (optional)
Real-time swarm monitoring.
Parameters:
- swarmId: string (optional)
- interval: number (optional)
Auto-optimize swarm topology.
Parameters:
- swarmId: string (optional)
Distribute tasks efficiently.
Parameters:
- tasks: array (optional)
- swarmId: string (optional)
Sync agent coordination.
Parameters:
- swarmId: string (optional)
Auto-scale agent count.
Parameters:
- swarmId: string (optional)
- targetSize: number (optional)
Gracefully shut down a swarm.
Parameters:
- swarmId: string (required)
Check task execution status.
Parameters:
- taskId: string (required)
Get task completion results.
Parameters:
- taskId: string (required)
Execute tasks in parallel.
Parameters:
- tasks: array (required)
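As a minimal sketch of parallel execution (the task-object fields here are an assumption; the distributed-training pattern later in this page uses { type, model } entries):
// Run independent tasks concurrently (task fields are illustrative)
await mcp__claude-flow__parallel_execute({
  tasks: [
    { type: "lint", target: "src/" },
    { type: "test", target: "tests/" }
  ]
});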
Batch processing.
Parameters:
- items: array (required)
- operation: string (required)
Check neural network status.
Parameters:
- modelId: string (optional)
Train neural patterns with WASM SIMD acceleration.
Parameters:
- pattern_type: coordination|optimization|prediction (required)
- training_data: string (required)
- epochs: 50 (default)
Analyze cognitive patterns.
Parameters:
- action: analyze|learn|predict (required)
- operation: string (optional)
- outcome: string (optional)
- metadata: object (optional)
Make AI predictions.
Parameters:
- modelId: string (required)
- input: string (required)
Load pre-trained models.
Parameters:
- modelPath: string (required)
Save trained models.
Parameters:
- modelId: string (required)
- path: string (required)
WASM SIMD optimization.
Parameters:
- operation: string (optional)
Run neural inference.
Parameters:
- modelId: string (required)
- data: array (required)
Pattern recognition.
Parameters:
- data: array (required)
- patterns: array (optional)
Cognitive behavior analysis.
Parameters:
- behavior: string (required)
Adaptive learning.
Parameters:
- experience: object (required)
Compress neural models.
Parameters:
- modelId: string (required)
- ratio: number (optional)
Create model ensembles.
Parameters:
- models: array (required)
- strategy: string (optional)
Transfer learning.
Parameters:
- sourceModel: string (required)
- targetDomain: string (required)
AI explainability.
Parameters:
- modelId: string (required)
- prediction: object (required)
Store/retrieve persistent memory with TTL and namespacing.
Parameters:
- action: store|retrieve|list|delete|search (required)
- key: string (conditional)
- value: string (conditional)
- namespace: default (default)
- ttl: number (optional)
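For example, a minimal retrieval sketch (the key and namespace match the store example under Usage Examples; the shape of the returned value is an assumption):
// Retrieve a previously stored value from the "development" namespace
const context = await mcp__claude-flow__memory_usage({
  action: "retrieve",
  key: "project-context",
  namespace: "development"
});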
Search memory with patterns.
Parameters:
- pattern: string (required)
- namespace: string (optional)
- limit: 10 (default)
Cross-session persistence.
Parameters:
- sessionId: string (optional)
Namespace management.
Parameters:
- namespace: string (required)
- action: string (required)
Backup memory stores.
Parameters:
- path: string (optional)
Restore from backups.
Parameters:
- backupPath: string (required)
Compress memory data.
Parameters:
- namespace: string (optional)
Sync across instances.
Parameters:
- target: string (required)
Manage coordination cache.
Parameters:
- action: string (required)
- key: string (conditional)
Analyze memory usage.
Parameters:
- timeframe: string (optional)
Generate performance reports with real-time metrics.
Parameters:
- format: summary|detailed|json (default: summary)
- timeframe: 24h|7d|30d (default: 24h)
Identify performance bottlenecks.
Parameters:
- component: string (optional)
- metrics: array (optional)
Analyze token consumption.
Parameters:
- operation: string (optional)
- timeframe: 24h (default)
Performance benchmarks.
Parameters:
- suite: string (optional)
Collect system metrics.
Parameters:
- components: array (optional)
Analyze performance trends.
Parameters:
- metric: string (required)
- period: string (optional)
Cost and resource analysis.
Parameters:
- timeframe: string (optional)
Quality assessment.
Parameters:
- target: string (required)
- criteria: array (optional)
Error pattern analysis.
Parameters:
- logs: array (optional)
Usage statistics.
Parameters:
- component: string (optional)
Repository analysis.
Parameters:
- repo: string (required)
- analysis_type: code_quality|performance|security
Pull request management.
Parameters:
- repo: string (required)
- action: review|merge|close (required)
- pr_number: number (conditional)
Issue tracking & triage.
Parameters:
- repo: string (required)
- action: string (required)
Release coordination.
Parameters:
- repo: string (required)
- version: string (required)
Workflow automation.
Parameters:
- repo: string (required)
- workflow: object (required)
Automated code review.
Parameters:
- repo: string (required)
- pr: number (required)
Create dynamic agents.
Parameters:
- agent_type: string (required)
- capabilities: array (optional)
- resources: object (optional)
Match capabilities to tasks.
Parameters:
- task_requirements: array (required)
- available_agents: array (optional)
Resource allocation.
Parameters:
- resources: object (required)
- agents: array (optional)
Agent lifecycle management.
Parameters:
- agentId: string (required)
- action: string (required)
Inter-agent communication.
Parameters:
- from: string (required)
- to: string (required)
- message: object (required)
Consensus mechanisms.
Parameters:
- agents: array (required)
- proposal: object (required)
Create custom workflows.
Parameters:
- name: string (required)
- steps: array (required)
- triggers: array (optional)
Execute predefined workflows.
Parameters:
- workflowId: string (required)
- params: object (optional)
Export workflow definitions.
Parameters:
- workflowId: string (required)
- format: string (optional)
Setup automation rules.
Parameters:
- rules: array (required)
Create CI/CD pipelines.
Parameters:
- config: object (required)
Manage task scheduling.
Parameters:
- action: string (required)
- schedule: object (optional)
Setup event triggers.
Parameters:
- events: array (required)
- actions: array (required)
Manage workflow templates.
Parameters:
- action: string (required)
- template: object (optional)
Run SPARC development modes.
Parameters:
- mode: dev|api|ui|test|refactor (required)
- task_description: string (required)
- options: object (optional)
Execute terminal commands.
Parameters:
- command: string (required)
- args: array (optional)
Configuration management.
Parameters:
- action: string (required)
- config: object (optional)
Feature detection.
Parameters:
- component: string (optional)
Security scanning.
Parameters:
- target: string (required)
- depth: string (optional)
Create system backups.
Parameters:
- destination: string (optional)
- components: array (optional)
System restoration.
Parameters:
- backupId: string (required)
Log analysis & insights.
Parameters:
- logFile: string (required)
- patterns: array (optional)
System diagnostics.
Parameters:
- components: array (optional)
System health monitoring.
Parameters:
- components: array (optional)
Create state snapshots.
Parameters:
- name: string (optional)
Restore execution context.
Parameters:
- snapshotId: string (required)
Multi-repo sync coordination.
Parameters:
- repos: array (required)
Repository metrics.
Parameters:
- repo: string (required)
Fault tolerance & recovery.
Parameters:
- agentId: string (required)
- strategy: string (optional)
Performance optimization.
Parameters:
- target: string (required)
- metrics: array (optional)
// Initialize swarm with hierarchical topology
await mcp__claude-flow__swarm_init({
topology: "hierarchical",
strategy: "auto",
maxAgents: 8
});
// Spawn specialized agents
await mcp__claude-flow__agent_spawn({
type: "coordinator",
name: "MainCoordinator",
capabilities: ["task-distribution", "monitoring"]
});
await mcp__claude-flow__agent_spawn({
type: "coder",
name: "ImplementationAgent",
capabilities: ["typescript", "react", "nodejs"]
});
await mcp__claude-flow__agent_spawn({
type: "tester",
name: "QualityAgent",
capabilities: ["unit-testing", "integration-testing"]
});
// Orchestrate task workflow
await mcp__claude-flow__task_orchestrate({
task: "Implement user authentication system",
strategy: "parallel",
priority: "high",
dependencies: ["database-setup", "api-design"]
});
// Train coordination patterns
await mcp__claude-flow__neural_train({
pattern_type: "coordination",
training_data: JSON.stringify({
scenarios: [
{ agents: 3, tasks: 10, completion_time: 45 },
{ agents: 5, tasks: 20, completion_time: 60 }
]
}),
epochs: 100
});
// Make predictions
const prediction = await mcp__claude-flow__neural_predict({
modelId: "coordination-model",
input: JSON.stringify({ agents: 4, tasks: 15 })
});
// Analyze patterns
await mcp__claude-flow__neural_patterns({
action: "analyze",
operation: "task-distribution",
outcome: "optimal",
metadata: { efficiency: 0.92 }
});
// Store project context
await mcp__claude-flow__memory_usage({
action: "store",
key: "project-context",
value: JSON.stringify({
name: "Auth System",
stack: ["TypeScript", "React", "Node.js"],
requirements: ["JWT", "OAuth2", "2FA"]
}),
namespace: "development",
ttl: 86400 // 24 hours
});
// Search for related memories
const results = await mcp__claude-flow__memory_search({
pattern: "auth*",
namespace: "development",
limit: 20
});
// Create memory backup
await mcp__claude-flow__memory_backup({
path: "./backups/memory-backup-2025-01-25.db"
});
// Analyze repository
await mcp__claude-flow__github_repo_analyze({
repo: "myorg/myproject",
analysis_type: "code_quality"
});
// Create automated workflow
await mcp__claude-flow__github_workflow_auto({
repo: "myorg/myproject",
workflow: {
name: "CI/CD Pipeline",
triggers: ["push", "pull_request"],
jobs: {
test: {
steps: ["checkout", "setup-node", "install", "test"]
},
deploy: {
needs: ["test"],
steps: ["build", "deploy"]
}
}
}
});
// Manage pull request
await mcp__claude-flow__github_pr_manage({
repo: "myorg/myproject",
action: "review",
pr_number: 123
});
// Generate performance report
const report = await mcp__claude-flow__performance_report({
format: "detailed",
timeframe: "7d"
});
// Analyze bottlenecks
await mcp__claude-flow__bottleneck_analyze({
component: "task-orchestration",
metrics: ["latency", "throughput", "cpu-usage"]
});
// Run benchmarks
await mcp__claude-flow__benchmark_run({
suite: "swarm-coordination"
});
// Analyze trends
await mcp__claude-flow__trend_analysis({
metric: "response-time",
period: "30d"
});
- Choose tools based on task requirements
- Use specialized agents for specific domains
- Combine tools for complex workflows
- Initialize swarms with appropriate topology
- Monitor swarm health regularly
- Scale agents based on workload
- Gracefully shut down swarms when done
- Use namespaces to organize data
- Set appropriate TTL values
- Regular backups for critical data
- Compress old data to save space
- Monitor performance metrics continuously
- Identify and address bottlenecks early
- Use parallel execution for independent tasks
- Optimize neural models for inference
- Implement fault tolerance strategies (see the sketch after this list)
- Monitor error patterns
- Use consensus mechanisms for critical decisions
- Maintain execution snapshots for recovery
- Regular security scans
- Secure inter-agent communication
- Validate inputs and outputs
- Monitor for anomalous behavior
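A minimal fault-tolerance sketch composed from the tools above (the error key, namespace, and recovery steps are illustrative assumptions, not a prescribed recipe):
// Orchestrate a task; on failure, record the error context and look for bottlenecks
try {
  await mcp__claude-flow__task_orchestrate({
    task: "Implement user authentication system",
    strategy: "adaptive",
    priority: "high"
  });
} catch (error) {
  // Persist the failure context for later analysis (24h TTL)
  await mcp__claude-flow__memory_usage({
    action: "store",
    key: "last-orchestration-error",
    value: JSON.stringify({ message: String(error) }),
    namespace: "diagnostics",
    ttl: 86400
  });
  // Check whether a known bottleneck explains the failure
  await mcp__claude-flow__bottleneck_analyze({
    component: "task-orchestration"
  });
}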
// 1. Initialize development swarm
await swarm_init({ topology: "mesh" });
// 2. Spawn SPARC agents
await agent_spawn({ type: "specification" });
await agent_spawn({ type: "pseudocode" });
await agent_spawn({ type: "architecture" });
await agent_spawn({ type: "refinement" });
// 3. Execute SPARC workflow
await sparc_mode({
mode: "dev",
task_description: "Build authentication system"
});
// 4. Monitor progress
await swarm_monitor({ interval: 5000 });
// 1. Create agent swarm for distributed training
await swarm_init({ topology: "star", maxAgents: 16 });
// 2. Distribute training data
await daa_resource_alloc({
resources: {
data: training_dataset,
compute: "gpu-cluster"
}
});
// 3. Train in parallel
await parallel_execute({
tasks: [
{ type: "train", model: "pattern-1" },
{ type: "train", model: "pattern-2" },
{ type: "train", model: "pattern-3" }
]
});
// 4. Create ensemble
await ensemble_create({
models: ["pattern-1", "pattern-2", "pattern-3"],
strategy: "weighted-average"
});
// 1. Setup workflow automation
await workflow_create({
name: "CI/CD Pipeline",
steps: [
{ action: "checkout", type: "git" },
{ action: "test", type: "parallel" },
{ action: "build", type: "sequential" },
{ action: "deploy", type: "conditional" }
],
triggers: ["push", "pull_request"]
});
// 2. Monitor execution
await workflow_execute({
workflowId: "ci-cd-pipeline",
params: { branch: "main" }
});
// 3. Analyze results
await performance_report({ format: "json" });
// 1. Initialize multi-repo swarm
const repos = ["frontend", "backend", "shared-libs"];
await github_sync_coord({ repos });
// 2. Spawn repository agents
for (const repo of repos) {
await agent_spawn({
type: "repo-architect",
name: `${repo}-manager`,
capabilities: ["sync", "merge", "release"]
});
}
// 3. Coordinate releases
await github_release_coord({
repo: "main-project",
version: "2.0.0"
});
// 1. Initialize cognitive system
await neural_patterns({
action: "learn",
operation: "initialize-cognitive-system"
});
// 2. Continuous learning loop
await learning_adapt({
experience: {
type: "user-interaction",
feedback: "positive",
context: "task-completion"
}
});
// 3. Pattern recognition
await pattern_recognize({
data: interaction_logs,
patterns: ["success", "failure", "optimization"]
});
// 4. Adapt behavior
await cognitive_analyze({
behavior: "task-execution-strategy"
});
- Direct Invocation: Call tools directly using the mcp__claude-flow__ prefix
- Batch Operations: Combine multiple tools in a single operation
- Async Handling: All tools support async/await patterns
- Error Recovery: Built-in error handling and recovery mechanisms
// In Claude Code conversation
User: "Create a swarm to analyze and optimize our codebase"
// Claude Code response with MCP tools
await mcp__claude-flow__swarm_init({
topology: "hierarchical",
maxAgents: 12
});
const agents = await Promise.all([
mcp__claude-flow__agent_spawn({ type: "code-analyzer" }),
mcp__claude-flow__agent_spawn({ type: "performance-benchmarker" }),
mcp__claude-flow__agent_spawn({ type: "security-manager" }),
mcp__claude-flow__agent_spawn({ type: "optimizer" })
]);
await mcp__claude-flow__task_orchestrate({
task: "Comprehensive codebase analysis and optimization",
strategy: "adaptive",
priority: "high"
});
const report = await mcp__claude-flow__performance_report({
format: "detailed",
timeframe: "24h"
});
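A possible follow-up, sketched here with an illustrative key and namespace, is to persist the report so later sessions can compare against it:
// Store the analysis report for cross-session reference (7-day TTL)
await mcp__claude-flow__memory_usage({
  action: "store",
  key: "codebase-analysis-report",
  value: JSON.stringify(report),
  namespace: "analysis",
  ttl: 604800
});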
The MCP Tools suite provides 87 specialized tools for AI orchestration, swarm management, and system optimization. By understanding each tool's capabilities and following the best practices above, you can build sophisticated AI-powered systems that leverage distributed intelligence, adaptive learning, and efficient resource management.
Remember to:
- Choose the right tools for your specific use case
- Compose tools effectively for complex workflows
- Monitor performance and optimize continuously
- Implement proper error handling and recovery
- Keep security considerations in mind
For the latest updates and additional documentation, refer to the Claude Flow repository and MCP specification.