Introduction
Technical documentation exists in a state of perpetual lag. Your codebase evolves faster than your architecture diagrams, leaving developers to reverse-engineer systems from scattered README files and outdated Confluence pages. The gap between code reality and visual representation creates friction: onboarding takes longer, architecture reviews become guesswork, and knowledge lives only in people's heads rather than in shareable diagrams.
The problem compounds when you have multiple documentation sources. Some teams keep API specs in one format, internal architecture notes in another, and deployment topology scattered across wiki pages. Extracting coherent information from these sources and converting it into visual diagrams requires hours of manual work, and that's before you account for the ongoing maintenance burden.
This workflow solves that problem by automating the entire process. We'll combine document analysis, AI reasoning, and diagram generation to produce architecture diagrams directly from your existing codebase documentation. The workflow runs without manual handoff, meaning you can trigger it whenever your documentation updates and get fresh diagrams automatically.
The Automated Workflow
The core workflow follows this sequence: extract and analyse your documentation files; send them to an AI model for architecture reasoning; generate diagram code; render that code into visual diagrams. We'll use bhava-ai for codebase documentation extraction, chat-with-pdf-by-copilotus to process documentation files, and Mintlify to generate and render the final diagrams. For orchestration, I'll show you the n8n approach since it handles file operations well, though Zapier and Make work equally well for simpler setups.
Why n8n for this workflow
n8n excels at file handling and long-running processes. This workflow involves extracting documentation, processing it through APIs, generating code, and rendering output, which typically takes 30-60 seconds per run. Zapier's timeout limits become restrictive here, whereas n8n runs on your infrastructure and handles longer operations gracefully. Make (Integromat) works well too, but n8n's node library for code execution and file manipulation is superior.
Step 1: Trigger and documentation extraction
Set up an n8n workflow with either a Cron trigger (for scheduled daily runs) or a Webhook trigger (to run when documentation updates). When triggered, the first node calls the bhava-ai API to extract documentation from your codebase repository.
POST https://api.bhava-ai.com/v1/extract
Headers:
Authorization: Bearer YOUR_BHAVA_API_KEY
Content-Type: application/json
Body:
{
  "repository_url": "https://github.com/yourorg/yourrepo",
  "documentation_paths": [
    "docs/",
    "README.md",
    "architecture/"
  ],
  "include_comments": true,
  "output_format": "text"
}
Bhava-ai returns structured documentation in JSON format, extracting both standalone documentation files and inline code comments that describe architecture decisions. Store this response in an n8n variable for the next step.
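The exact shape of bhava-ai's response isn't shown here, so treat the field names below as assumptions. Assuming it returns a `documents` array of `{ path, content }` objects, a short n8n Code node can flatten the response into one prompt-ready string:

```javascript
// Flatten the (assumed) bhava-ai response into one string for the
// Claude prompt. The { documents: [{ path, content }] } shape is an
// assumption -- adjust the keys to match the actual API response.
function flattenDocs(response) {
  return (response.documents || [])
    .map((doc) => "## " + doc.path + "\n\n" + doc.content)
    .join("\n\n");
}

// In an n8n Code node you would read the previous node's output:
// const docs = flattenDocs($input.first().json);
// return [{ json: { extractedDocs: docs } }];
```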
Step 2: Documentation processing with PDF chat
Not all documentation comes in standard formats. If your team uses PDFs for architecture specifications (common for formal design documents), the chat-with-pdf-by-copilotus node processes these files. Create a separate node that handles PDF uploads or references.
POST https://api.chatpdf.copilotus.com/v1/files/upload
Headers:
Authorization: Bearer YOUR_COPILOTUS_API_KEY
Body: (multipart/form-data)
file: <your-documentation.pdf>
After uploading, send a structured prompt to extract relevant architecture information:
POST https://api.chatpdf.copilotus.com/v1/chats
Headers:
Authorization: Bearer YOUR_COPILOTUS_API_KEY
Content-Type: application/json
Body:
{
  "file_id": "file_from_previous_response",
  "messages": [
    {
      "role": "user",
      "content": "Extract the following information from this document: system components, data flows, external dependencies, deployment environments, and security boundaries. Format as structured JSON with clear keys for each section."
    }
  ]
}
The response provides structured architecture information extracted from your PDF documentation.
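Models sometimes wrap the requested JSON in prose or a code fence, so parse the reply defensively rather than feeding it straight into the next node. A minimal sketch (the reply format is an assumption):

```javascript
// Pull the first {...} span out of a model reply that may wrap the
// JSON in prose or a ```json fence, and parse it defensively.
function parseStructuredReply(text) {
  const match = text.match(/\{[\s\S]*\}/); // first "{" to last "}"
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch (err) {
    return null; // malformed JSON: let the caller decide how to retry
  }
}
```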
Step 3: AI reasoning with Claude
This is where the magic happens. We pass both the extracted documentation (from bhava-ai) and the processed PDF content (from chat-with-pdf) to Claude, via the Anthropic API, for analysis and diagram generation. Claude generates Mermaid diagram syntax, which Mintlify can then render.
Create an n8n Code node that calls Claude through the Anthropic API:
POST https://api.anthropic.com/v1/messages
Headers:
x-api-key: YOUR_ANTHROPIC_API_KEY
anthropic-version: 2023-06-01
Content-Type: application/json
Body:
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 4096,
  "messages": [
    {
      "role": "user",
      "content": "Based on the following documentation, generate a comprehensive technical architecture diagram in Mermaid syntax. Include all system components, data flows, external dependencies, and deployment boundaries.\n\nDocumentation:\n" + extractedDocs + "\n\nFormatting requirements:\n1. Use graph TD for top-down layout\n2. Label each component with its technology stack\n3. Show data flow direction with arrows\n4. Use subgraphs for logical boundaries (e.g., microservices, deployment zones)\n5. Include a legend explaining symbols\n\nReturn ONLY the Mermaid code block, wrapped in triple backticks."
    }
  ]
}
Claude returns Mermaid syntax like this example:
graph TD
    Client["Client Applications"]
    LB["Load Balancer"]
    API1["API Service 1<br/>Node.js + Express"]
    API2["API Service 2<br/>Python + FastAPI"]
    Cache["Redis Cache"]
    DB["PostgreSQL<br/>Primary"]
    DBReplica["PostgreSQL<br/>Read Replica"]
    Queue["RabbitMQ"]
    Worker["Background Workers<br/>Node.js"]

    subgraph "Production Environment"
        LB
        API1
        API2
        Cache
        DB
        DBReplica
        Queue
        Worker
    end

    Client -->|HTTPS| LB
    LB -->|Route| API1
    LB -->|Route| API2
    API1 -->|Query/Cache| Cache
    API2 -->|Query/Cache| Cache
    API1 -->|Read/Write| DB
    API2 -->|Read| DBReplica
    API1 -->|Publish| Queue
    Queue -->|Consume| Worker
    Worker -->|Write| DB
    DB -->|Replicate| DBReplica
Store this Mermaid output in an n8n variable for the next step.
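Because the prompt asks Claude to wrap its answer in triple backticks, strip the fence before storing the result; Mintlify should receive raw Mermaid syntax, not a nested code block. A small helper, assuming the reply contains at most one fenced block:

```javascript
// Strip the ```mermaid fence from Claude's reply so only raw Mermaid
// syntax is stored for the Mintlify step. Falls back to the trimmed
// reply when no fence is present.
function extractMermaid(reply) {
  const match = reply.match(/```(?:mermaid)?\s*\n([\s\S]*?)```/);
  return match ? match[1].trim() : reply.trim();
}
```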
Step 4: Diagram rendering with Mintlify
Mintlify specialises in documentation generation and includes diagram rendering capabilities. Use the Mintlify API to create a new documentation page or update an existing one with your generated diagram:
POST https://api.mintlify.com/v1/docs
Headers:
Authorization: Bearer YOUR_MINTLIFY_API_KEY
Content-Type: application/json
Body:
{
  "project_id": "your_project_id",
  "page": {
    "title": "Technical Architecture",
    "path": "architecture/technical-overview",
    "content": "# System Architecture\n\nGenerated: " + new Date().toISOString() + "\n\nThis diagram represents the current technical architecture based on our codebase documentation.\n\n```mermaid\n" + mermaidCode + "\n```",
    "metadata": {
      "auto_generated": true,
      "source": "bhava_ai_documentation_extraction",
      "regenerate_frequency": "daily"
    }
  }
}
Mintlify processes the Mermaid syntax and renders it as an interactive diagram within your documentation site. The diagram is now live and accessible to your entire team.
Step 5: Error handling and notifications
Add n8n error handlers to catch failures at each step. Set up a Slack notification node that triggers if any API call fails:
// Inside an n8n Code node: rethrow with context so the node's error
// output (or a global Error Trigger workflow) fires the Slack node
try {
  // Your API calls here
} catch (error) {
  // Rethrowing fails the node, which triggers your Slack notification
  throw new Error(
    "Diagram generation failed at " + new Date().toISOString() +
    ": " + error.message
  );
}
Configure the Slack node to send:
{
  "channel": "#architecture-alerts",
  "text": "Architecture diagram generation failed",
  "attachments": [
    {
      "color": "danger",
      "fields": [
        {"title": "Error", "value": "{{error.message}}", "short": false},
        {"title": "Time", "value": "{{error.timestamp}}", "short": true},
        {"title": "Next attempt", "value": "Check logs and retry manually", "short": true}
      ]
    }
  ]
}
Putting it together in n8n
Your workflow nodes should connect in this order:
- Cron Trigger (Daily at 9 AM)
- Bhava-ai Extract node
- Chat-with-PDF Process node (runs on a parallel branch alongside the Bhava-ai node)
- Claude Analysis node (joined by a Merge node so it waits for both branches to complete)
- Mintlify Update node
- Slack notification node (on success)
- Error handler node (catches failures from any previous step)
The parallel processing of Bhava-ai and Chat-with-PDF means your workflow runs faster; both documentation sources are processed simultaneously rather than sequentially.
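Where the two branches merge, a short Code node can stitch both sources into one prompt body for the Claude step. A sketch, assuming each branch produced a plain text string (the section headings are my own labels, not anything the APIs return):

```javascript
// Combine the two documentation branches into a single prompt body,
// skipping any branch that produced no text (e.g. no PDFs this run).
function combineSources(bhavaDocs, pdfDocs) {
  const sections = [];
  if (bhavaDocs) sections.push("## Codebase documentation\n\n" + bhavaDocs);
  if (pdfDocs) sections.push("## PDF specifications\n\n" + pdfDocs);
  return sections.join("\n\n");
}
```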
The Manual Alternative
If you prefer more control over each step or need to validate output before publishing, you can run this workflow with manual approval gates. Insert an n8n "Wait for Webhook" node after the Claude Code step. This pauses execution and sends you a message with the generated Mermaid code. You can review it, make edits in your text editor, then approve the publish action via a simple HTTP POST.
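The approval POST itself is trivial. As a sketch, the request might look like this; the resume URL is a placeholder (n8n shows the real one when the Wait node pauses), and the body fields are my own naming, not an n8n convention:

```javascript
// Build the HTTP request that resumes the paused n8n workflow.
// resumeUrl is the Wait node's resume webhook; the payload shape
// (approved, mermaid) is a hypothetical contract for this workflow.
function buildApprovalRequest(resumeUrl, approved, editedMermaid) {
  return {
    url: resumeUrl,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ approved: approved, mermaid: editedMermaid }),
  };
}

// Send it with fetch(req.url, req) from any script once you've
// reviewed (and optionally edited) the generated Mermaid code.
```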
This approach trades speed for safety. Generate diagrams on demand (rather than on a schedule), review them for accuracy, and only publish when satisfied. It's ideal for teams with strict documentation standards or when your codebase structure changes frequently in ways that confuse AI models.
Alternatively, generate the diagrams and publish them to a draft environment (separate Mintlify workspace) rather than production. This lets team members comment and suggest changes before going live.
Pro Tips
Rate limits and throttling
Each API has rate limits. Bhava-ai allows 100 requests per hour on their free tier, whilst Anthropic's API throttles based on your billing tier. Space your scheduled workflow runs at least one hour apart. If you trigger diagrams manually, use n8n's batching options and Wait nodes to throttle requests and avoid accidental cascade failures.
Cost optimisation
Claude 3.5 Sonnet costs roughly £0.0024 per 1K input tokens and £0.012 per 1K output tokens (rates change, so check current pricing). A typical documentation extraction (20-30 KB of text) generates roughly 5,000-8,000 input tokens plus 2,000 output tokens for Mermaid code. One diagram costs a few pence, so running this daily costs £1-2 monthly, entirely negligible compared to your infrastructure costs.
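As a sanity check on those numbers, the arithmetic is straightforward; the per-1K rates below are approximations baked in as assumptions, not official pricing:

```javascript
// Rough per-diagram cost in GBP. Rates are approximate Claude 3.5
// Sonnet pricing converted to sterling -- treat them as assumptions.
function diagramCostGBP(inputTokens, outputTokens) {
  const INPUT_RATE = 0.0024;  // GBP per 1K input tokens
  const OUTPUT_RATE = 0.012;  // GBP per 1K output tokens
  return (inputTokens / 1000) * INPUT_RATE +
         (outputTokens / 1000) * OUTPUT_RATE;
}

// A typical run: 8,000 input + 2,000 output tokens -> about 4p,
// or roughly £1.30 per month when run daily.
```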
Regenerating existing diagrams
Store the diagram metadata (generation timestamp, source commit hash, documentation version) alongside the Mermaid code. This lets you track when diagrams become stale. If your documentation hasn't changed in the past week, skip regeneration to save API calls.
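A minimal staleness check along those lines, assuming you stored the metadata under `sourceCommit` and `generatedAt` keys (my naming, not a fixed schema):

```javascript
// Regenerate only when the docs commit changed or the last diagram
// is over a week old. meta = { sourceCommit, generatedAt } as stored
// alongside the Mermaid code; field names are an assumption.
function shouldRegenerate(meta, currentCommit, now) {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
  if (meta.sourceCommit !== currentCommit) return true;
  return (now || Date.now()) - Date.parse(meta.generatedAt) > WEEK_MS;
}
```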
Handling large codebases
Extracting documentation from massive repositories takes time. Bhava-ai's documentation_paths parameter lets you specify only the directories relevant to architecture. Focus on docs/architecture, API specifications, and deployment configurations rather than extracting every README in every subfolder.
Validation and drift detection
After Claude generates the diagram, add a follow-up prompt asking it to validate consistency. Does every documented component appear in the diagram? Are there loops that suggest circular dependencies? This extra validation call costs only a few hundred tokens but catches errors before they reach your documentation.
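You can also run a cheap, purely local drift check before spending tokens on a validation prompt: compare the component names extracted in Step 2 against the generated Mermaid code.

```javascript
// Cheap local drift check: which documented component names never
// appear in the generated Mermaid code? Anything returned here is
// worth flagging before the diagram is published.
function missingComponents(componentNames, mermaidCode) {
  return componentNames.filter((name) => !mermaidCode.includes(name));
}
```

A simple substring match will miss renamed components, so treat an empty result as "no obvious drift" rather than proof of completeness.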
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| bhava-ai | Pro | £30 | 3,000 monthly requests; covers daily diagrams plus manual runs |
| chat-with-pdf-by-copilotus | Professional | £20 | Unlimited PDF uploads; 500 chat sessions monthly |
| Mintlify | Team | £49 | Up to 50 team members; unlimited pages and custom domains |
| Claude API (Anthropic) | Pay-as-you-go | £2-5 | Roughly £0.0024 per 1K input tokens; one diagram ~£0.04 |
| n8n (self-hosted) | Free | £0 | Or use n8n Cloud Pro at £20/month if avoiding self-hosting |
| Total | — | £101-124 | All-inclusive for small to medium teams |
The workflow pays for itself when you consider the alternative: one senior engineer spending 2-3 hours weekly manually updating diagrams. At typical London salaries, that's roughly £150-200 monthly in labour costs.