Your engineering team is split across three different AI coding assistants. One developer uses Claude Code because it's strong with refactoring; another swears by Cursor for its speed; a third relies on Copilot as part of Microsoft's enterprise agreement. Meanwhile, your finance person is staring at four separate vendor invoices wondering which tools actually justify their cost. By the time you've pieced together who used what and when, you've already burned through next month's budget without a clear answer.

This is the reality for most development teams in 2026. AI coding assistants have become essential, but they've also become invisible from a cost perspective. Unlike traditional software, where you buy a licence for a seat and know exactly what you're paying, AI coding tools often operate on a mix of subscriptions, pay-as-you-go tokens, and usage-based pricing that spans multiple platforms. Without visibility, you're essentially flying blind.

The solution isn't to pick one tool and lock everyone into it. Different developers genuinely do get more value from different assistants depending on their workflow and the type of code they write. The solution is to instrument your entire toolchain with proper cost tracking, aggregate the data in one place, and let your team make informed decisions about where money is actually flowing.
The Automated Workflow
We're going to build a system that pulls cost data from BurnRate, your single source of truth for AI coding assistant spending, feeds it into a collaborative notebook in Deepnote where your team can analyse trends, and finally pipes alerts and summaries into NextStep so you've actually got scheduled check-ins about what you're spending.

The orchestration backbone here is n8n. It's self-hosted, it handles complex workflows without needing to write scripts, and it can run on a schedule without being locked to a SaaS vendor's uptime. We'll start by having n8n query BurnRate's API every morning to fetch the previous day's cost data, transform it slightly, then push it into Deepnote and NextStep in parallel.
Setting up the BurnRate connection
First, you'll need a BurnRate account with API access enabled. Go to your settings and generate an API key. You'll use this to authenticate requests to BurnRate's cost endpoint.
```
GET https://api.burnrate.dev/v1/costs
Headers:
  Authorization: Bearer YOUR_BURNRATE_API_KEY
  Content-Type: application/json
```
This endpoint returns cost data broken down by tool, developer, and date range. We'll query the last 24 hours.
```json
{
  "period": "2026-03-15",
  "costs": [
    {
      "tool": "claude_code",
      "developer": "alice@company.com",
      "cost_usd": 12.47,
      "tokens_used": 450000,
      "duration_minutes": 87
    },
    {
      "tool": "cursor",
      "developer": "bob@company.com",
      "cost_usd": 8.32,
      "tokens_used": 290000,
      "duration_minutes": 54
    }
  ],
  "total_daily_cost": 20.79
}
```
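Before wiring the response into the workflow, it's worth sanity-checking its shape. Here's a minimal Python sketch, assuming the response schema shown above, that sums the per-item costs and confirms they match `total_daily_cost`:

```python
# Sample payload from the article; the schema is an assumption
# based on the example response above.
sample = {
    "period": "2026-03-15",
    "costs": [
        {"tool": "claude_code", "developer": "alice@company.com",
         "cost_usd": 12.47, "tokens_used": 450000, "duration_minutes": 87},
        {"tool": "cursor", "developer": "bob@company.com",
         "cost_usd": 8.32, "tokens_used": 290000, "duration_minutes": 54},
    ],
    "total_daily_cost": 20.79,
}

def check_totals(payload: dict) -> float:
    """Sum per-item costs and confirm they match the reported total."""
    total = round(sum(item["cost_usd"] for item in payload["costs"]), 2)
    assert abs(total - payload["total_daily_cost"]) < 0.01, "totals disagree"
    return total

print(check_totals(sample))  # 20.79
```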
Wiring it in n8n
Create a new workflow in n8n. Start with a Schedule trigger set to run daily at 06:00 UTC. That leaves time for overnight usage to be fully recorded, and puts the cost report in front of your team first thing in the morning.
```
1. Schedule Trigger (daily, 06:00 UTC)
        ↓
2. HTTP Request node (GET to BurnRate)
        ↓
3. Transform node (flatten and annotate data)
        ↓
4. Parallel execution:
   - Push to Deepnote dataset
   - Create NextStep workflow item
```
The HTTP Request node needs these settings:
```
URL:            https://api.burnrate.dev/v1/costs
Method:         GET
Authentication: Bearer Token
Token:          YOUR_BURNRATE_API_KEY
Query Parameters:
  start_date: {{ new Date(new Date().setDate(new Date().getDate() - 1)).toISOString().split('T')[0] }}
  end_date:   {{ new Date().toISOString().split('T')[0] }}
```
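The n8n expression for the date range is just JavaScript date arithmetic. If you want to verify what the node will send, the same yesterday-to-today window looks like this in Python:

```python
from datetime import date, timedelta

def cost_query_params(today: date) -> dict:
    """Build the start_date/end_date pair the HTTP Request node sends:
    yesterday through today, as ISO dates (YYYY-MM-DD)."""
    return {
        "start_date": (today - timedelta(days=1)).isoformat(),
        "end_date": today.isoformat(),
    }

print(cost_query_params(date(2026, 3, 16)))
# {'start_date': '2026-03-15', 'end_date': '2026-03-16'}
```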
The Transform node takes the raw response and reshapes it for downstream tools. You want to add context that makes the data actionable.
```json
{
  "timestamp": "{{ $now.toISO() }}",
  "date": "{{ $json.period }}",
  "total_cost": "{{ $json.total_daily_cost }}",
  "items": "{{ $json.costs }}",
  "by_tool": {
    "claude_code": "{{ $json.costs.filter(c => c.tool === 'claude_code').reduce((sum, c) => sum + c.cost_usd, 0).toFixed(2) }}",
    "cursor": "{{ $json.costs.filter(c => c.tool === 'cursor').reduce((sum, c) => sum + c.cost_usd, 0).toFixed(2) }}",
    "copilot": "{{ $json.costs.filter(c => c.tool === 'copilot').reduce((sum, c) => sum + c.cost_usd, 0).toFixed(2) }}",
    "windsurf": "{{ $json.costs.filter(c => c.tool === 'windsurf').reduce((sum, c) => sum + c.cost_usd, 0).toFixed(2) }}"
  }
}
```
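The repeated filter/reduce chains can be expressed as a single pass over the items. Here's a Python sketch of the equivalent aggregation (the tool names are taken from this article's examples), defaulting absent tools to `0.00` so every downstream field is always present:

```python
from collections import defaultdict

TOOLS = ("claude_code", "cursor", "copilot", "windsurf")

def by_tool(costs: list) -> dict:
    """Aggregate cost_usd per tool in one pass; tools with no usage
    that day come out as '0.00' rather than missing."""
    sums = defaultdict(float)
    for item in costs:
        sums[item["tool"]] += item["cost_usd"]
    return {tool: f"{sums[tool]:.2f}" for tool in TOOLS}

costs = [
    {"tool": "claude_code", "cost_usd": 12.47},
    {"tool": "cursor", "cost_usd": 8.32},
]
print(by_tool(costs))
# {'claude_code': '12.47', 'cursor': '8.32', 'copilot': '0.00', 'windsurf': '0.00'}
```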
Pushing to Deepnote
Deepnote has a simple API for appending rows to datasets. Set up a node that sends your transformed data directly into a Deepnote dataset where you're building a cumulative cost history.
```
POST https://api.deepnote.com/v1/datasets/YOUR_DATASET_ID/rows
Headers:
  Authorization: Bearer YOUR_DEEPNOTE_API_KEY
  Content-Type: application/json
Body:
{
  "date": "{{ $json.date }}",
  "total_cost": "{{ $json.total_cost }}",
  "claude_code": "{{ $json.by_tool.claude_code }}",
  "cursor": "{{ $json.by_tool.cursor }}",
  "copilot": "{{ $json.by_tool.copilot }}",
  "windsurf": "{{ $json.by_tool.windsurf }}"
}
```
Once you've got a few weeks of data in Deepnote, you can build visualisations that show cost trends, utilisation per developer, and cost-per-token efficiency across tools. Deepnote's collaborative interface means your team can annotate spikes or anomalies directly in the notebook.
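Cost-per-token efficiency is the most useful of those metrics for comparing tools with different pricing models. A small Python helper, normalising to USD per million tokens (the per-million convention is a choice for readability, not a BurnRate field):

```python
def cost_per_million_tokens(cost_usd: float, tokens_used: int) -> float:
    """Normalise spend to USD per million tokens so tools priced
    differently can be compared on the same axis."""
    return round(cost_usd / tokens_used * 1_000_000, 2)

# From the sample day above: Claude Code at $12.47 / 450k tokens,
# Cursor at $8.32 / 290k tokens.
print(cost_per_million_tokens(12.47, 450_000))  # 27.71
print(cost_per_million_tokens(8.32, 290_000))   # 28.69
```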
Creating NextStep workflows for cost reviews
On the same schedule, create a task in NextStep that reminds someone on your team to review yesterday's costs. This isn't about manual data entry; it's about forcing a regular conversation.
```
POST https://api.nextstep.app/v1/workflows
Headers:
  Authorization: Bearer YOUR_NEXTSTEP_API_KEY
  Content-Type: application/json
Body:
{
  "title": "Daily AI Coding Cost Review - {{ $json.date }}",
  "description": "Total spend yesterday: ${{ $json.total_cost }}\n\nBreakdown:\n- Claude Code: ${{ $json.by_tool.claude_code }}\n- Cursor: ${{ $json.by_tool.cursor }}\n- Copilot: ${{ $json.by_tool.copilot }}\n- Windsurf: ${{ $json.by_tool.windsurf }}\n\nReview trends in Deepnote and flag any unusual patterns.",
  "assignee": "finance@company.com",
  "due_date": "{{ $now.plus({ days: 1 }).toISO() }}",
  "priority": "normal"
}
```
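If you'd rather build the description in a Code node than inline expressions, here's a Python sketch of the same rendering (field names assumed from the transform step earlier; missing tools fall back to `0.00`):

```python
def task_description(total: str, by_tool: dict) -> str:
    """Render the NextStep task body from the transformed payload."""
    labels = {"claude_code": "Claude Code", "cursor": "Cursor",
              "copilot": "Copilot", "windsurf": "Windsurf"}
    lines = [f"Total spend yesterday: ${total}", "", "Breakdown:"]
    for key, label in labels.items():
        lines.append(f"- {label}: ${by_tool.get(key, '0.00')}")
    lines += ["", "Review trends in Deepnote and flag any unusual patterns."]
    return "\n".join(lines)

print(task_description("20.79", {"claude_code": "12.47", "cursor": "8.32"}))
```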
This creates an accountability loop. Someone actually looks at the numbers every day instead of discovering in month four that one developer's token consumption is 3x the industry average.
The Manual Alternative
If you want more granular control or prefer not to run n8n in-house, you can implement this more manually. Pull BurnRate's cost reports weekly via their web interface (they generate PDF reports with the 23 optimisation rules built in), paste the data into a Google Sheet, then manually create the NextStep tasks. It's slower, but it keeps everything in tools your organisation already uses. The tradeoff is that you lose real-time visibility and you're relying on someone remembering to run the process every week.
Pro Tips
Rate limit the API calls
BurnRate's free tier allows 100 requests per day. Since you're only querying once daily, you're well under that, but if you scale to multiple scheduled runs, add a rate-limiting node in n8n to queue requests. This prevents you from accidentally hitting the limit during testing.
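To reason about the queueing behaviour outside n8n, here's a Python sketch of a rolling 24-hour limiter that refuses calls past the cap (the 100/day figure is BurnRate's free-tier limit mentioned above):

```python
import time
from collections import deque

class DailyRateLimiter:
    """Track call timestamps in a rolling 24-hour window and refuse
    calls once the window holds max_calls entries."""

    def __init__(self, max_calls: int = 100):
        self.max_calls = max_calls
        self.calls = deque()

    def allow(self, now=None) -> bool:
        now = time.time() if now is None else now
        # Drop timestamps older than 24 hours (86,400 seconds).
        while self.calls and now - self.calls[0] > 86_400:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

Calls that would exceed the cap simply return `False`; in n8n you'd wire that branch into a Wait node rather than dropping the request.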
Handle missing data gracefully
Some days a developer might not use any AI coding assistant at all. Your transform node should default missing tool costs to `0.00` rather than null, so your downstream calculations don't break. Add a fallback in the transform logic: `{{ $json.by_tool.claude_code || 0 }}`.
Cost alerts based on thresholds
Once you've got baseline spending, add a conditional node in n8n. If yesterday's total cost exceeded 110% of your rolling 7-day average, trigger a Slack notification immediately instead of waiting for someone to check NextStep. This catches runaway spending in real time.
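The conditional boils down to a single comparison. Here's a Python sketch of the check the n8n IF node would perform (the 110% threshold and 7-day window are the values suggested above):

```python
from statistics import mean

def spend_alert(yesterday: float, last_seven: list,
                threshold: float = 1.10) -> bool:
    """True when yesterday's spend exceeds threshold x the rolling
    7-day average; False when there's no baseline yet."""
    if not last_seven:
        return False
    return yesterday > threshold * mean(last_seven)

history = [20.79, 19.50, 21.10, 18.90, 22.40, 20.00, 19.80]
print(spend_alert(25.00, history))  # True: 25.00 > 1.1 * ~20.36
print(spend_alert(20.00, history))  # False: within the band
```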
Archive and version your Deepnote notebooks
At the end of each quarter, export your Deepnote dataset as a CSV and store it in S3 or your file system. This gives you historical records for auditing and lets you run year-over-year comparisons without the notebook becoming unwieldy.
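If you script the export rather than clicking through Deepnote, the serialisation is a few lines with the standard `csv` module. A sketch (column names match the dataset rows pushed earlier; write the returned string to S3 or disk):

```python
import csv
import io

FIELDS = ["date", "total_cost", "claude_code", "cursor", "copilot", "windsurf"]

def rows_to_csv(rows: list) -> str:
    """Serialise a quarter's dataset rows to a CSV string for archiving."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(rows_to_csv([{"date": "2026-03-15", "total_cost": "20.79",
                    "claude_code": "12.47", "cursor": "8.32",
                    "copilot": "0.00", "windsurf": "0.00"}]))
```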
Use BurnRate's optimisation rules to guide purchases
BurnRate includes 23 specific optimisation rules built into its reports. When NextStep reminds you to review costs, actually read those recommendations. Some are obvious (switching a heavy user from Copilot to Claude Code if they're only doing TypeScript work), others are subtle (batching queries to reduce per-request overhead).
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| BurnRate | Free tier | $0 | Covers up to 100 API requests/day and PDF reports; paid tier starts at $29/month for higher limits |
| Deepnote | Free tier | $0 | Student plan offers more compute; standard free tier sufficient for cost tracking |
| NextStep | Starter | $19 | Covers up to 5 team members and unlimited workflows |
| n8n | Self-hosted | $0 | Open source; costs are your server infrastructure only |
| Claude Code | Variable | Depends on usage | Typically $20/month for regular users, $2/month for light use |
| Cursor | Variable | $20/month (Pro) | Includes unlimited requests; free tier available with limits |
| Copilot | Variable | $10–20/month | Enterprise plans quoted separately |
| Windsurf | Variable | $15/month (Pro) | Free tier available for lighter usage |