Most development teams I talk to have the same problem: they've rolled out multiple AI coding assistants across their organisation, but nobody knows what they're actually spending. One engineer is using Claude Code in their editor, another is burning through Copilot tokens at breakneck speed, and a third has somehow justified Cursor to the finance team. By the end of the quarter, you're staring at three separate invoices and wondering which tool actually delivered value and where you can trim costs without upsetting your developers.

The challenge is that these tools operate in silos. BurnRate can tell you what individual developers are spending, Windsurf shows you usage patterns within the IDE, and MutableAI gives you a broader view of development acceleration. But without connecting them, you're still manually exporting reports, cross-referencing invoices, and piecing together a cost picture that's always two weeks out of date.

This workflow automates the collection, analysis, and reporting of AI coding assistant spending across your entire team, feeding real data into a single dashboard and generating actionable optimisation recommendations without anyone lifting a finger.
The Automated Workflow
The goal here is to pull usage and cost data from BurnRate (which aggregates Claude, Copilot, Cursor, Windsurf, and other tools), enrich it with development metrics from MutableAI, then feed everything into a centralised reporting system that surfaces cost per developer, cost per tool, and cost per project. For this workflow, I'd recommend n8n. It's self-hosted, handles complex data transformations well, and integrates cleanly with both APIs without rate limit headaches.
Step 1: Retrieve daily cost data from BurnRate
BurnRate exposes a REST API that returns usage metrics broken down by developer, tool, and time period. Set up a scheduled workflow that runs once daily at 02:00 UTC (when your team is asleep and API load is lowest).
```
GET https://api.burnrate.io/v1/costs/daily

Headers:
  Authorization: Bearer YOUR_BURNRATE_API_KEY
  Content-Type: application/json

Query Parameters:
  start_date: YYYY-MM-DD (7 days ago)
  end_date: YYYY-MM-DD (today)
  group_by: developer,tool
```
BurnRate returns a JSON response structured like this:
```json
{
  "data": [
    {
      "developer_id": "dev_001",
      "developer_email": "alice@company.com",
      "tool": "claude_code",
      "date": "2026-03-15",
      "input_tokens": 2400,
      "output_tokens": 1850,
      "cost_usd": 0.87,
      "requests": 12
    },
    {
      "developer_id": "dev_002",
      "developer_email": "bob@company.com",
      "tool": "copilot",
      "date": "2026-03-15",
      "input_tokens": 5600,
      "output_tokens": 3200,
      "cost_usd": 1.20,
      "requests": 28
    }
  ],
  "summary": {
    "total_cost": 2.07,
    "total_requests": 40
  }
}
```
In n8n, add this as an HTTP Request node. Set the schedule trigger to run daily at 02:00 UTC, and store the response in a temporary variable for the next step.
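If you prefer to compute the `start_date`/`end_date` query parameters in a Code node rather than hard-coding them into the HTTP Request node, the rolling 7-day window looks like this. A minimal sketch; the `dateRange` helper is illustrative, not part of either API:

```javascript
// Build the start_date/end_date query parameters for the BurnRate call.
// Dates are formatted as YYYY-MM-DD, matching the request spec above.
function dateRange(daysBack, today = new Date()) {
  const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD
  const start = new Date(today);
  start.setUTCDate(start.getUTCDate() - daysBack);
  return { start_date: fmt(start), end_date: fmt(today) };
}

// Example: a 7-day window ending today
const range = dateRange(7);
console.log(range); // e.g. { start_date: '2026-03-08', end_date: '2026-03-15' }
```

Working in UTC keeps the window aligned with the 02:00 UTC schedule regardless of where your n8n instance runs.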
Step 2: Fetch development metrics from MutableAI
MutableAI tracks productivity signals: code commits per developer, lines of code changed, pull request velocity, and build success rates. Call their API to correlate AI tool spending with actual output.
```
GET https://api.mutableai.com/v1/metrics/period

Headers:
  Authorization: Bearer YOUR_MUTABLEAI_API_KEY
  Content-Type: application/json

Query Parameters:
  start_date: YYYY-MM-DD
  end_date: YYYY-MM-DD
  metric_types: commits,lines_changed,pr_merged,build_success
```
The response structure looks like this:
```json
{
  "metrics": [
    {
      "developer_id": "dev_001",
      "commits": 8,
      "lines_changed": 340,
      "prs_merged": 2,
      "build_success_rate": 0.95
    },
    {
      "developer_id": "dev_002",
      "commits": 12,
      "lines_changed": 580,
      "prs_merged": 3,
      "build_success_rate": 0.88
    }
  ]
}
```
Add this as a second HTTP Request node in your n8n workflow, running in parallel with the BurnRate request.
Step 3: Merge and enrich the data
Once both APIs return data, you need to join the cost data with the productivity metrics. This is where you calculate cost per commit, cost per line of code changed, and cost per merged PR. Use n8n's Code node (the former Function node) to do this calculation:
```javascript
// Item 0 is the BurnRate response, item 1 the MutableAI response
const costData = $input.all()[0].json.data;
const metricData = $input.all()[1].json.metrics;

const enriched = costData.map(costRecord => {
  const metrics = metricData.find(m => m.developer_id === costRecord.developer_id);
  return {
    developer_id: costRecord.developer_id,
    developer_email: costRecord.developer_email,
    tool: costRecord.tool,
    date: costRecord.date,
    daily_cost: costRecord.cost_usd,
    requests: costRecord.requests,
    commits: metrics?.commits || 0,
    lines_changed: metrics?.lines_changed || 0,
    prs_merged: metrics?.prs_merged || 0,
    build_success: metrics?.build_success_rate || 0,
    // Default the divisor to 1 so a zero-activity day doesn't divide by zero
    cost_per_commit: costRecord.cost_usd / (metrics?.commits || 1),
    cost_per_line: costRecord.cost_usd / (metrics?.lines_changed || 1)
  };
});

// n8n Code nodes must return an array of items, each wrapped in { json: ... }
return enriched.map(item => ({ json: item }));
```
Step 4: Identify cost optimisation opportunities
BurnRate includes 23 built-in optimisation rules (unused tool seats, high token waste, rate limit violations, and more). Query these directly:
```
GET https://api.burnrate.io/v1/optimisation/recommendations

Headers:
  Authorization: Bearer YOUR_BURNRATE_API_KEY

Query Parameters:
  period: last_7_days
  include_rule_ids: all
```
The response includes actionable recommendations:
```json
{
  "recommendations": [
    {
      "developer_id": "dev_003",
      "rule": "unused_copilot_seat",
      "potential_savings_monthly": 20,
      "action": "Remove developer from Copilot plan"
    },
    {
      "developer_id": "dev_005",
      "rule": "high_context_waste",
      "potential_savings_monthly": 8.50,
      "action": "Implement context window optimisation in Cursor config"
    }
  ]
}
```
Store these recommendations in a separate object for the reporting step.
Step 5: Write aggregated data to a database
Push the enriched cost data, metrics, and recommendations to PostgreSQL (or your preferred database). This creates a queryable historical record and powers your dashboard. Use n8n's Postgres node:
```sql
INSERT INTO ai_coding_costs (
  developer_id, developer_email, tool, date,
  daily_cost, requests, commits, lines_changed,
  prs_merged, build_success, cost_per_commit, cost_per_line
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
ON CONFLICT (developer_id, tool, date) DO UPDATE SET
  daily_cost = EXCLUDED.daily_cost,
  requests = EXCLUDED.requests,
  commits = EXCLUDED.commits,
  lines_changed = EXCLUDED.lines_changed,
  prs_merged = EXCLUDED.prs_merged,
  build_success = EXCLUDED.build_success,
  cost_per_commit = EXCLUDED.cost_per_commit,
  cost_per_line = EXCLUDED.cost_per_line;
```
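The Postgres node expects the query parameters in positional order, so each enriched record from Step 3 needs to be flattened into a 12-element array matching `$1`..`$12`. A sketch, assuming the Step 3 record shape (how you wire the array in depends on your Postgres node configuration):

```javascript
// Flatten one enriched record into the $1..$12 positional parameter array,
// in the exact column order the INSERT statement above declares.
function toInsertParams(r) {
  return [
    r.developer_id, r.developer_email, r.tool, r.date,
    r.daily_cost, r.requests, r.commits, r.lines_changed,
    r.prs_merged, r.build_success, r.cost_per_commit, r.cost_per_line,
  ];
}

const params = toInsertParams({
  developer_id: 'dev_001', developer_email: 'alice@company.com',
  tool: 'claude_code', date: '2026-03-15', daily_cost: 0.87, requests: 12,
  commits: 8, lines_changed: 340, prs_merged: 2, build_success: 0.95,
  cost_per_commit: 0.10875, cost_per_line: 0.00256,
});
// params[0] is 'dev_001'; params.length is 12
```

Keeping the column list and the parameter order in one place makes it harder for them to drift apart when you add a metric later.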
Step 6: Generate and email a weekly summary report
Once weekly (Monday morning), query your database to generate aggregate metrics:
```sql
SELECT
  tool,
  COUNT(DISTINCT developer_id) AS active_developers,
  SUM(daily_cost) AS weekly_cost,
  AVG(cost_per_commit) AS avg_cost_per_commit,
  AVG(build_success) AS avg_build_success,
  SUM(lines_changed) AS total_lines_changed
FROM ai_coding_costs
WHERE date >= NOW() - INTERVAL '7 days'
GROUP BY tool
ORDER BY weekly_cost DESC;
```
Use n8n's Email node to send this summary to your engineering manager and finance team. Include the top optimisation recommendations from Step 4, ranked by potential monthly savings.
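Ranking the recommendations for the email body is a small Code node step. A sketch, assuming the recommendation shape returned in Step 4:

```javascript
// Sort Step 4 recommendations by potential monthly savings (descending)
// and keep the top N for the weekly summary email.
function topRecommendations(recommendations, n = 5) {
  return [...recommendations]
    .sort((a, b) => b.potential_savings_monthly - a.potential_savings_monthly)
    .slice(0, n);
}

const recs = [
  { developer_id: 'dev_005', rule: 'high_context_waste', potential_savings_monthly: 8.5 },
  { developer_id: 'dev_003', rule: 'unused_copilot_seat', potential_savings_monthly: 20 },
];
console.log(topRecommendations(recs, 1)[0].rule); // "unused_copilot_seat"
```

Copying the array before sorting avoids mutating the input, which matters if the same data feeds another branch of the workflow.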
The Manual Alternative
If you prefer more control or your team is smaller, you can do this quarterly instead. Log into BurnRate, export the raw cost CSV, download your MutableAI metrics from their web UI, merge them in Excel or Google Sheets, and manually review the optimisation recommendations. It's less timely, but gives you a chance to question assumptions before acting on them. Many teams find this approach acceptable for initial setup; you can always automate once you've figured out your reporting structure.
Pro Tips
Rate limiting and backoff
BurnRate and MutableAI both have rate limits: BurnRate allows 100 requests per minute; MutableAI allows 50. Since you're running this daily, you won't hit these limits, but if you scale to hourly checks, add exponential backoff to your n8n HTTP nodes. Set retry attempts to 3 with a 2-second initial delay.
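If you'd rather implement the same policy in a Code node than rely on the HTTP node's retry settings, a plain-JavaScript sketch of that backoff (the `withBackoff` wrapper and its defaults are illustrative):

```javascript
// Retry a Promise-returning function with exponential backoff:
// 3 attempts, 2s initial delay, doubling on each retry (2s, 4s, 8s, ...).
async function withBackoff(fetchFn, attempts = 3, initialDelayMs = 2000) {
  let delay = initialDelayMs;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fetchFn();
    } catch (err) {
      if (i === attempts) throw err; // out of retries, surface the error
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}

// Usage: wrap any API call, e.g. a fetch to the BurnRate endpoint
// const data = await withBackoff(() => fetch(url, opts).then(r => r.json()));
```

Backoff matters most on 429 responses; without it, immediate retries just consume more of the same rate-limit window.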
Handle missing data gracefully
Not every developer will have activity every day. Your enrichment function should default missing metrics to zero rather than failing. Because the merge step in Step 3 guards the division with `|| 1`, a developer who spent money but committed nothing will show a cost_per_commit equal to their full daily spend rather than a divide-by-zero error. That's actually useful data; it signals a developer who's struggling or context-switching heavily, and it's a conversation starter with their manager.
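To surface those cases explicitly rather than inferring them from an inflated ratio, you can flag them directly. A sketch, assuming the enriched record shape from Step 3 (`flagZeroOutput` and the threshold are hypothetical):

```javascript
// Flag developers who spent money on a given day but produced no commits.
// Records are expected to carry daily_cost and commits, as in Step 3.
function flagZeroOutput(records, minSpend = 0.01) {
  return records.filter((r) => r.daily_cost >= minSpend && r.commits === 0);
}

const flagged = flagZeroOutput([
  { developer_email: 'alice@company.com', daily_cost: 0.87, commits: 8 },
  { developer_email: 'carol@company.com', daily_cost: 2.40, commits: 0 },
]);
console.log(flagged.map((r) => r.developer_email)); // [ 'carol@company.com' ]
```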
Store raw API responses
Before transforming data, save the raw JSON responses from both BurnRate and MutableAI in a separate table. It's very useful for debugging when the calculations look wrong, and it gives you a historical archive for comparing API behaviour over time.
Set cost thresholds for alerts
If a single developer's daily spend exceeds your threshold (say, £10), trigger an immediate alert rather than waiting for the weekly report. Use n8n's IF node to check each developer's daily cost, and send a Slack message to the engineering lead if anyone spikes. This catches runaway usage or misconfigured tools before they become a budget problem.
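The check itself is a filter over the enriched records. A sketch, assuming the Step 3 record shape; the Slack channel name and message text are illustrative, and this assumes `daily_cost` is already in your reporting currency:

```javascript
// Build Slack alert payloads for developers whose daily spend exceeds the
// threshold. One payload per offending record.
function buildAlerts(records, threshold = 10) {
  return records
    .filter((r) => r.daily_cost > threshold)
    .map((r) => ({
      channel: '#eng-leads', // hypothetical channel
      text: `${r.developer_email} spent ${r.daily_cost.toFixed(2)} on ` +
            `${r.tool} today (limit ${threshold})`,
    }));
}

const alerts = buildAlerts([
  { developer_email: 'alice@company.com', tool: 'claude_code', daily_cost: 0.87 },
  { developer_email: 'bob@company.com', tool: 'cursor', daily_cost: 14.20 },
]);
console.log(alerts.length); // 1
```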
Validate tool costs against invoices
Once a month, cross-check your calculated costs against the actual invoices from OpenAI, GitHub, Codeium, and other providers. BurnRate's figures are usually accurate, but vendor invoices sometimes lag by a day or two, or include one-time charges you weren't expecting. This reconciliation takes 20 minutes and saves you from surprises.
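You can semi-automate the reconciliation too: total your tracked costs per tool for the month, then diff against the invoice figures. A sketch under assumed input shapes (plain `{ tool: amount }` maps; the 5% tolerance is illustrative):

```javascript
// Compare tracked monthly totals against vendor invoice totals per tool,
// returning only the tools whose discrepancy exceeds the tolerance.
function reconcile(tracked, invoiced, tolerance = 0.05) {
  const issues = [];
  for (const [tool, invoiceAmount] of Object.entries(invoiced)) {
    const trackedAmount = tracked[tool] ?? 0; // missing tool counts as zero
    const diff = Math.abs(trackedAmount - invoiceAmount);
    if (diff / invoiceAmount > tolerance) {
      issues.push({ tool, tracked: trackedAmount, invoiced: invoiceAmount, diff });
    }
  }
  return issues;
}

console.log(reconcile({ copilot: 190 }, { copilot: 200 })); // within 5%: []
```

Anything this returns is worth a manual look: a lagging invoice, a one-time charge, or a tracking gap.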
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| BurnRate | Pro | £95 | Cost tracking and optimisation rules; API access included |
| MutableAI | Team | £149 | Development metrics and productivity signals |
| Windsurf | Pro | £80 per developer | Optional; provides granular IDE-level usage data if you need it |
| n8n | Cloud (Small) | £20 or self-hosted (free) | Orchestration; scaling to 200+ workflows requires upgrade to £100/month |
| PostgreSQL | Cloud (5GB) | £15–30 | Database storage for historical cost data; optional if you use n8n's built-in data storage |
| Total (estimated) | — | £280–340 | For a team of 10 developers; costs scale linearly with team size and API calls |