# MCP Integration
This page documents TMI's planned integration with the Model Context Protocol (MCP).
## Coming Soon
MCP integration is planned for a future release of TMI. This integration will enable AI assistants and LLM-powered tools to interact with TMI threat models through a standardized protocol. Whether TMI should also support invoking external MCP servers is under consideration.
## Implementation Status

- **Current Status:** Not yet implemented
- **Planned Timeline:** TBD
- **Tracking:**
## Learn More About MCP
The Model Context Protocol is an open standard for connecting AI assistants to data sources and tools.
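As a concrete illustration, an MCP server advertises its tools with a name, a description, and a JSON Schema describing the tool's inputs. A hypothetical TMI tool might be described like this (the tool name and parameters below are illustrative only, not a committed design for TMI's integration):

```json
{
  "name": "list_threats",
  "description": "List the threats recorded in a TMI threat model",
  "inputSchema": {
    "type": "object",
    "properties": {
      "threat_model_id": {
        "type": "string",
        "description": "ID of the threat model to query"
      }
    },
    "required": ["threat_model_id"]
  }
}
```

An AI assistant connected to such a server could discover this tool and call it with a threat model ID, receiving structured threat data it can reason over.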
**Resources:**
## Get Involved
Interested in MCP integration for TMI?
- **Share your use case**: Open a GitHub discussion describing how you would use MCP with TMI, or add your comments to the issues above
- **Contribute**: Help design and implement the MCP integration
- **Stay informed**: Watch the TMI repository for updates
**GitHub Discussions:**
## Alternative Integrations
While MCP integration is pending, you can integrate AI tools with TMI using:
- **REST API**: Use TMI's REST API to build custom AI integrations (example: TMI-TF Terraform analyzer)
  - See REST-API-Reference for API documentation
  - Use generated SDKs for your language
- **Webhooks**: Receive real-time notifications for AI processing
  - See Webhook-Integration for setup
  - Process threat model changes with AI services
- **Addons**: Build AI-powered addons using the addon system
  - See Addon-System for details
  - Invoke AI analysis on-demand
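For the webhook route, a receiver can acknowledge TMI events immediately and queue AI analysis in the background. The sketch below uses only the Python standard library; the event field names (`event`, `threat_model_id`) are assumptions for illustration, so check the Webhook-Integration page for the actual payload shape:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def should_analyze(event):
    """Return True for event types worth sending to an AI pipeline.
    These event type names are assumptions, not documented TMI payloads."""
    return event.get("event") in {"threat_model.updated", "threat.created"}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON event body
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        if should_analyze(event):
            # Hand off to your AI step here, e.g. an analysis helper
            # like the one shown in the example below
            print(f"queueing analysis for {event.get('threat_model_id')}")
        # Acknowledge quickly; do heavy AI work asynchronously
        self.send_response(204)
        self.end_headers()

# To run the receiver (blocks the current thread):
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Returning 204 before doing any AI work keeps the webhook delivery fast and avoids timeouts on the sending side.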
## Example: AI Integration via API
While waiting for MCP support, here's how you can integrate AI tools today:
```python
import requests
import openai

# Configuration
TMI_API = 'https://your-tmi-server'
TMI_TOKEN = 'your-tmi-token'
OPENAI_KEY = 'your-openai-key'

def analyze_threat_model_with_ai(threat_model_id):
    """Use AI to analyze a threat model"""
    # Fetch threat model from TMI
    response = requests.get(
        f'{TMI_API}/threat_models/{threat_model_id}',
        headers={'Authorization': f'Bearer {TMI_TOKEN}'}
    )
    threat_model = response.json()

    # Fetch associated threats (nested under the threat model)
    response = requests.get(
        f'{TMI_API}/threat_models/{threat_model_id}/threats',
        headers={'Authorization': f'Bearer {TMI_TOKEN}'}
    )
    threats = response.json()

    # Format for AI analysis
    context = f"""
Threat Model: {threat_model['name']}
Description: {threat_model.get('description', 'N/A')}

Identified Threats:
{format_threats(threats)}

Please analyze this threat model and suggest:
1. Any missing threats
2. Improvements to existing threats
3. Prioritization recommendations
"""

    # Call OpenAI
    client = openai.OpenAI(api_key=OPENAI_KEY)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a security expert analyzing threat models."},
            {"role": "user", "content": context}
        ]
    )
    analysis = response.choices[0].message.content
    return analysis

def format_threats(threats):
    """Format threats for AI analysis"""
    return '\n'.join(
        f"- {t['name']} (Severity: {t.get('severity', 'N/A')})"
        for t in threats
    )
```
This approach works today and provides AI-assisted threat modeling capabilities while we work on native MCP integration.
## Related Documentation
- REST-API-Reference - TMI REST API
- Webhook-Integration - Webhook system
- Addon-System - Addon system for extensions
- API-Integration - API integration guide
- Extending-TMI - Extending TMI capabilities