
Developer cost tracking dashboard: Monitor AI assistant spending across Claude, Copilot, and Cursor

Your Slack channel is full of messages like "How much are we spending on Cursor this month?" and "Did anyone check if Copilot is actually cheaper than Claude Code?" Development teams across the industry are now using three, four, sometimes five different AI coding assistants simultaneously. Each developer has their preferred tool. Each tool sends bills to different places. Nobody knows the total cost, or whether the expensive option is actually worth it.

Most teams resort to spreadsheets. Someone manually logs into each billing dashboard, copies numbers into columns, and tries to reconcile them monthly. It takes hours. The data is always a week out of date. By the time you see the bill spike, you've already committed to the tool for another cycle.

The better approach is automated cost tracking that pulls data from all your tools into a single dashboard, compares performance metrics alongside spending, and flags anomalies before they become expensive surprises. This workflow uses BurnRate to collect provider data, Deepnote to build a collaborative analytics interface, and either Zapier or n8n to orchestrate the collection and refresh cycle. For more on this, see AI cost monitoring dashboard for development team spending.

The Automated Workflow

How data flows through the system

The foundation is BurnRate, which already monitors Claude Code, Cursor, Codex, Copilot, Windsurf, Cline, and Aider out of the box.

It tracks token usage, cost per provider, and rate limits, and surfaces cost optimisation insights via its 23 built-in rules. Your job is to extract that data on a schedule and feed it into Deepnote, where your team can see charts, trends, and comparisons without leaving the notebook. The orchestration happens in n8n (better than Zapier here because you can self-host and avoid API rate limits on larger datasets).

Here is the workflow structure:

1. Daily trigger at 08:00 UTC
2. Poll BurnRate API for cost and usage metrics
3. Transform the response into a structured format
4. Push data into Deepnote via API
5. Deepnote notebook auto-refreshes and recalculates charts

Setting up the n8n workflow

Create a new workflow in n8n with a Cron trigger.

Cron trigger: 0 8 * * *
Timezone: UTC

Add an HTTP Request node to fetch BurnRate data.

Method: GET
URL: https://api.burnrate.io/v1/costs/summary
Headers:
  Authorization: Bearer YOUR_BURNRATE_API_KEY
  Content-Type: application/json

BurnRate returns a JSON object structured like this:

json
{
  "summary": {
    "total_cost": 1245.60,
    "period": "2026-03-01T00:00:00Z to 2026-03-07T23:59:59Z"
  },
  "by_provider": [
    { "provider": "claude_code", "cost": 450.20, "tokens_used": 1250000, "requests": 320 },
    { "provider": "cursor", "cost": 320.15, "tokens_used": 890000, "requests": 210 },
    { "provider": "copilot", "cost": 475.25, "tokens_used": 1420000, "requests": 290 }
  ]
}

Add a Code node to transform this into a format Deepnote expects.

javascript
const input = $input.all();
const burnrateData = input[0].json;

const transformed = {
  timestamp: new Date().toISOString(),
  totalCost: burnrateData.summary.total_cost,
  period: burnrateData.summary.period,
  providers: burnrateData.by_provider.map(p => ({
    name: p.provider,
    cost: p.cost,
    tokensUsed: p.tokens_used,
    requests: p.requests,
    costPerToken: (p.cost / p.tokens_used * 1000000).toFixed(4),
    costPerRequest: (p.cost / p.requests).toFixed(2)
  }))
};

return { data: transformed };

Now add an HTTP Request node to push data to Deepnote. You'll use Deepnote's integrations API.

Method: POST
URL: https://api.deepnote.com/v1/notebooks/YOUR_NOTEBOOK_ID/integrations/data
Headers:
  Authorization: Bearer YOUR_DEEPNOTE_API_KEY
  Content-Type: application/json
Body:
{
  "integration_key": "burnrate_costs",
  "data": {{ $json.data }}
}

Add one more HTTP node to trigger a Deepnote notebook refresh so charts update immediately.

Method: POST
URL: https://api.deepnote.com/v1/notebooks/YOUR_NOTEBOOK_ID/run
Headers:
  Authorization: Bearer YOUR_DEEPNOTE_API_KEY
  Content-Type: application/json
Body:
{
  "cells": ["all"]
}

Building the Deepnote dashboard

In Deepnote, create cells that pull from the integration data and render charts.

python
import pandas as pd
import plotly.express as px

# Access the data pushed from n8n
cost_data = integration_data['burnrate_costs']

# Create a dataframe; costPerToken arrives as strings (toFixed in the Code node)
df = pd.DataFrame(cost_data['providers'])
df['costPerToken'] = df['costPerToken'].astype(float)

# Line chart for cost trends
fig_trend = px.line(
    df, x='name', y='cost',
    title='Weekly Cost by Provider', markers=True
)
fig_trend.show()

# Bar chart for cost per token (efficiency metric)
fig_efficiency = px.bar(
    df, x='name', y='costPerToken',
    title='Cost Per Million Tokens (Lower is Better)', color='cost'
)
fig_efficiency.show()

# Summary stats
print(f"Total weekly spend: ${cost_data['totalCost']:.2f}")
print(f"Most expensive provider: {df.loc[df['cost'].idxmax(), 'name']}")
print(f"Most efficient provider: {df.loc[df['costPerToken'].idxmin(), 'name']}")

Set this notebook to share read-only access with your team. They see live data updated every morning without needing API credentials.

Error handling in n8n

Add an Error Trigger node at the workflow level. If the BurnRate API call fails, send a Slack notification.

Trigger: Workflow Error
Then add a Slack node:
  Message: "BurnRate sync failed at {{ $now }}. Check API key and rate limits."
  Channel: #engineering-alerts

Add a retry mechanism to the BurnRate HTTP node:

Retry on fail: enabled
Max retries: 3
Wait between retries: 30 seconds
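If you ever script the pull outside n8n, the same retry policy is only a few lines. A Python sketch of the node settings above (the `fetch` callable standing in for the actual BurnRate request is an assumption):

```python
import time

def fetch_with_retry(fetch, max_retries=3, wait_seconds=30, sleep=time.sleep):
    """Call fetch(); on failure, retry up to max_retries times,
    pausing wait_seconds between attempts (mirrors the n8n node settings)."""
    last_error = None
    for attempt in range(1 + max_retries):
        try:
            return fetch()
        except Exception as err:
            last_error = err
            if attempt < max_retries:
                sleep(wait_seconds)
    raise last_error
```

The injectable `sleep` keeps the helper testable without real 30-second pauses.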

The Manual Alternative

If you prefer direct control without orchestration, log into BurnRate's web interface each week, export the CSV, and upload it to a shared Google Sheet. Use Google Sheets formulas to calculate cost-per-token and flag providers that exceeded budget. This takes 15 minutes weekly and suits smaller teams or those with infrequent billing cycles. The trade-off is that you lose real-time visibility and must remember to do it yourself.

Pro Tips

Set cost alerts based on threshold changes.

In n8n, add a Condition node that compares this week's total cost against the previous week.

If the difference exceeds 15%, trigger a Slack warning. This catches runaway costs before they compound.
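The condition itself is simple percentage maths. A Python sketch of the check (the function name and message wording are illustrative):

```python
def cost_alert(current_total, previous_total, threshold_pct=15.0):
    """Return a warning message if week-over-week spend growth exceeds
    the threshold, otherwise None (nothing to post to Slack)."""
    if previous_total <= 0:
        return None
    change_pct = (current_total - previous_total) / previous_total * 100
    if change_pct > threshold_pct:
        return (f"AI tooling spend up {change_pct:.1f}% week-over-week "
                f"(£{previous_total:.2f} → £{current_total:.2f})")
    return None
```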

Track cost per token, not just raw spend.

A provider might be expensive in absolute terms but efficient per token used. Use BurnRate's built-in metrics and calculate the ratio yourself in the Code node. Share both metrics in your dashboard.
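A Python sketch of the efficiency ranking, using the raw `provider`/`cost`/`tokens_used` fields from BurnRate's sample response above. Notably, on that sample data Copilot is the most expensive in absolute terms but the cheapest per million tokens:

```python
def efficiency_ranking(providers):
    """Rank providers by cost per million tokens, cheapest first."""
    per_million = lambda p: p["cost"] / p["tokens_used"] * 1_000_000
    return [(p["provider"], round(per_million(p), 2))
            for p in sorted(providers, key=per_million)]
```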

Use Claude Opus 4.6 to analyse BurnRate findings.

Feed the weekly cost report into Claude Opus 4.6 via LangChain and ask it to identify patterns. "Which provider has the highest cost growth trend?" or "Which tool has the best cost-to-request ratio?" This saves your team from manually reading spreadsheets.
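A minimal sketch of assembling that prompt from the transformed payload; the commented LangChain hand-off (including the model identifier) is an assumption and isn't needed to build the text:

```python
def build_analysis_prompt(report):
    """Turn the weekly cost report (the payload pushed to Deepnote)
    into a prompt for an LLM reviewer."""
    lines = [f"Weekly AI tooling spend: £{report['totalCost']:.2f} ({report['period']})"]
    for p in report["providers"]:
        lines.append(
            f"- {p['name']}: £{p['cost']:.2f}, {p['requests']} requests, "
            f"£{p['costPerRequest']} per request"
        )
    lines.append("Which provider has the highest cost growth trend, "
                 "and which has the best cost-to-request ratio?")
    return "\n".join(lines)

# Hypothetical hand-off via LangChain (model name is an assumption):
# from langchain_anthropic import ChatAnthropic
# reply = ChatAnthropic(model="claude-opus-4-6").invoke(build_analysis_prompt(report))
```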

Cache BurnRate responses to avoid rate limits.

BurnRate's API has a default rate limit of 100 requests per hour. If you have multiple n8n workflows or team members pulling data, use n8n's caching node to store the response for 30 minutes between calls.
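The caching logic amounts to a timestamped lookup. A Python sketch of the 30-minute TTL idea (the key name and in-memory store are illustrative):

```python
import time

_cache = {}

def cached_fetch(key, fetch, ttl_seconds=1800, now=time.time):
    """Return a cached response if it is younger than ttl_seconds
    (30 minutes by default); otherwise call fetch() and store the result."""
    entry = _cache.get(key)
    if entry is not None and now() - entry[0] < ttl_seconds:
        return entry[1]
    value = fetch()
    _cache[key] = (now(), value)
    return value
```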

Exclude test accounts from cost tracking.

BurnRate tracks all activity, including development and testing. Before pushing data to Deepnote, filter out test accounts in the Code node using a list you maintain in n8n's environment variables.
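A sketch of that filter; the `account` field and record shape are assumptions, since BurnRate's per-account schema isn't shown here. The blocklist would come from an n8n environment variable, e.g. a comma-separated string split into a list:

```python
def exclude_test_accounts(records, test_accounts):
    """Drop usage records attributed to test accounts before the Deepnote push.
    Matching is case-insensitive to tolerate inconsistent account naming."""
    blocked = {name.strip().lower() for name in test_accounts}
    return [r for r in records if r.get("account", "").lower() not in blocked]
```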

Cost Breakdown

| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| BurnRate | Pro | £29 | Unlimited providers, 23 optimisation rules, PDF reports |
| Deepnote | Free (Student plan) | £0 | Sufficient for collaborative notebooks with standard hardware; paid plan starts at £49 for more compute |
| n8n | Self-hosted | £0 | Free and open-source; only pay for hosting if you use a cloud service like Railway (£5–10/month) or similar |
| Zapier | Free tier | £0–20 | Free tier supports two-step Zaps only; Professional plan starts at £20 if you need more frequent syncs |