Introduction
Development teams often discover their cloud spending problem too late. By the time finance asks why the bill doubled, your engineers have already spun up hundreds of experimental resources, unused databases are still churning away, and forgotten test environments are quietly draining money. The real problem isn't that costs are high; it's that no one sees them happening in real time.
Most teams patch this with spreadsheets that update weekly, or they manually log into three different dashboards every morning. That's inefficient and gives you a lag between spending and visibility. What if your team received notifications the moment spending patterns changed, with actual recommendations on what to cut, automatically compiled into a report that appears in Slack before your morning standup?
This workflow combines three tools to create a live cost monitoring dashboard that ingests spending data, analyses it for anomalies, and delivers actionable insights without anyone clicking anything after setup.
The Automated Workflow
We will use burnrate to fetch raw cost data from your cloud provider, Deepnote to transform and analyse that data, and Terrakotta-AI to generate recommendations about resource optimisation. We'll orchestrate everything with n8n, which gives you the most straightforward implementation for this particular chain.
Why n8n?
Zapier would work, but you'll hit their task limits quickly if you're monitoring multiple environments. Make is solid but has a steeper learning curve. n8n runs on your infrastructure (or Docker), handles complex JSON transformations natively, and gives you retry logic without begging for enterprise pricing. Claude Code integration is nice for ad-hoc debugging, but n8n's native nodes do the heavy lifting here.
Architecture Overview
Your workflow follows this sequence:
- n8n polls burnrate every 6 hours for cost data.
- burnrate returns spending metrics across services and environments.
- Data gets sent to Deepnote, which runs a Python notebook to detect anomalies and calculate trend lines.
- Deepnote outputs a cleaned JSON file with flagged high-spend items.
- n8n passes that analysis to Terrakotta-AI, which generates specific remediation steps.
- Results land in a Slack channel and update a shared Google Sheet.
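The chain is easier to reason about as plain functions before you wire it into n8n. Here's a sketch with all three API calls stubbed out; the function names and payload shapes are illustrative, not the tools' real SDKs, and delivery to Slack/Sheets is omitted:

```python
# A minimal sketch of the data flow, with the three API calls stubbed
# out. Names and payload shapes are illustrative, not the tools' real SDKs.

def fetch_costs():
    # Stand-in for the burnrate GET /v1/costs call
    return {
        "costs": [{"service": "compute", "cost_usd": 1247.50, "trend": 0.12}],
        "total_cost": 1247.50,
    }

def analyse(costs):
    # Stand-in for the Deepnote notebook: flag upward-trending services
    flagged = [c for c in costs["costs"] if c["trend"] > 0.10]
    return {
        "flagged": flagged,
        "monthly_projection": costs["total_cost"] / 7 * 30,
    }

def recommend(analysis):
    # Stand-in for the Terrakotta-AI optimise call
    return [
        f"Review {c['service']} spend (trend +{c['trend']:.0%})"
        for c in analysis["flagged"]
    ]

def run_pipeline():
    costs = fetch_costs()
    analysis = analyse(costs)
    return {"analysis": analysis, "recommendations": recommend(analysis)}

report = run_pipeline()
print(report["recommendations"])
```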
Step 1: Getting Data from Burnrate
First, you'll need a burnrate API key. Log into your burnrate account and generate one from Settings > API Keys. Store this in n8n as an environment variable called BURNRATE_API_KEY.
GET https://api.burnrate.io/v1/costs
Authorization: Bearer YOUR_API_KEY
X-Organization-ID: your-org-id
In n8n, create an HTTP Request node with these settings:
{
  "method": "GET",
  "url": "https://api.burnrate.io/v1/costs",
  "headers": {
    "Authorization": "Bearer {{ $env.BURNRATE_API_KEY }}",
    "X-Organization-ID": "{{ $env.BURNRATE_ORG_ID }}"
  },
  "params": {
    "period": "last_7_days",
    "groupBy": "service"
  }
}
Burnrate returns something like this:
{
  "period": "2024-01-15T00:00:00Z",
  "costs": [
    {
      "service": "compute",
      "provider": "aws",
      "cost_usd": 1247.50,
      "trend": 0.12,
      "alerts": []
    },
    {
      "service": "storage",
      "provider": "aws",
      "cost_usd": 340.20,
      "trend": -0.03,
      "alerts": []
    },
    {
      "service": "networking",
      "provider": "gcp",
      "cost_usd": 89.75,
      "trend": 0.25,
      "alerts": ["high_egress"]
    }
  ],
  "total_cost": 1677.45
}
The trend field is crucial; positive values mean spending is going up.
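If `trend` is read as the fractional week-over-week change (an assumption based on the sample values above, e.g. 0.12 for +12%), you can project each service's next-week spend directly:

```python
# Assuming `trend` is the fractional week-over-week change reported by
# burnrate (0.12 = +12%), project next week's cost per service.
costs = [
    {"service": "compute", "cost_usd": 1247.50, "trend": 0.12},
    {"service": "storage", "cost_usd": 340.20, "trend": -0.03},
    {"service": "networking", "cost_usd": 89.75, "trend": 0.25},
]

for c in costs:
    c["projected_next_week"] = round(c["cost_usd"] * (1 + c["trend"]), 2)

print(costs[0]["projected_next_week"])  # compute: 1247.50 * 1.12 = 1397.20
```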
Step 2: Transform and Analyse in Deepnote
Deepnote is a collaborative notebook environment. Rather than running Python locally, you're going to trigger a Deepnote notebook from n8n, pass it the cost data, and let it do statistical analysis.
Create a new Deepnote notebook and add this Python block:
import json
import os
from datetime import datetime, timezone

import pandas as pd
from scipy import stats

# Read the payload that n8n POSTed to the webhook. n8n template
# expressions can't be evaluated inside the notebook itself, so the data
# must arrive through the webhook. How Deepnote exposes the request body
# depends on your integration; here we assume it lands in an environment
# variable (adjust to however your webhook passes input).
costs_data = json.loads(os.environ["COSTS_DATA"])

# Convert to DataFrame
df = pd.DataFrame(costs_data["costs"])

# Calculate anomaly score using z-score
df["anomaly_score"] = stats.zscore(df["cost_usd"])

# Flag anything over 2 standard deviations
df["is_anomaly"] = df["anomaly_score"].abs() > 2

# Calculate daily burn rate for each service (7-day window)
df["daily_burn"] = df["cost_usd"] / 7
df["monthly_projection"] = df["daily_burn"] * 30

# Sort by projected monthly cost
df_sorted = df.sort_values("monthly_projection", ascending=False)

# Return actionable insights
insights = {
    "analysis_date": datetime.now(timezone.utc).isoformat(),
    "total_monthly_projection": df["monthly_projection"].sum(),
    "top_spenders": df_sorted.head(3)[
        ["service", "provider", "cost_usd", "monthly_projection", "is_anomaly"]
    ].to_dict("records"),
    "anomalies": df[df["is_anomaly"]][
        ["service", "cost_usd", "anomaly_score"]
    ].to_dict("records"),
    "recommendations": [],
}

print(json.dumps(insights, indent=2))
You can trigger this notebook via Deepnote's API webhook. Create the webhook in Deepnote (notebook settings > integrations > webhooks) and note the endpoint. In n8n, add a second HTTP Request node:
{
  "method": "POST",
  "url": "https://deepnote.com/api/v1/notebooks/YOUR_NOTEBOOK_ID/run",
  "headers": {
    "Authorization": "Bearer YOUR_DEEPNOTE_API_KEY",
    "Content-Type": "application/json"
  },
  "body": {
    "costs_data": "{{ JSON.stringify($node['HTTP Request'].json) }}"
  }
}
Deepnote runs the notebook and returns the results. You now have structured analysis: which services are consuming the most, which costs are anomalous, and what the monthly projection looks like.
Step 3: Generate Recommendations with Terrakotta-AI
Terrakotta-AI is a specialised model for infrastructure cost optimisation. It takes your analysis and generates specific, actionable recommendations: "Scale down the database replica in us-east-1b," "Delete 47 unused security groups," "Switch to reserved instances for compute."
Create an HTTP Request node pointing to Terrakotta-AI:
POST https://api.terrakotta-ai.com/v1/optimise
Authorization: Bearer YOUR_TERRAKOTTA_KEY
Content-Type: application/json
The payload combines insights from both previous steps. (In n8n, set the body content type to JSON; note that an expression wrapped in quotes is passed as a string, so use unquoted expressions if you need the arrays sent through as objects.)
{
  "analysis": {
    "top_spenders": "{{ $node['Deepnote Analysis'].json.top_spenders }}",
    "anomalies": "{{ $node['Deepnote Analysis'].json.anomalies }}",
    "monthly_projection": "{{ $node['Deepnote Analysis'].json.total_monthly_projection }}",
    "cloud_providers": ["aws", "gcp"],
    "environment_filter": "production"
  },
  "optimisation_goals": ["cost_reduction", "resource_cleanup"],
  "risk_tolerance": "low"
}
Terrakotta responds with recommendations like:
{
  "recommendations": [
    {
      "priority": "high",
      "service": "compute",
      "action": "Scale down unused EC2 instances in eu-west-1",
      "estimated_savings_monthly": 340,
      "risk_level": "low",
      "implementation_steps": [
        "Identify instances with <5% CPU utilisation",
        "Verify no active connections",
        "Stop and terminate instances"
      ]
    },
    {
      "priority": "medium",
      "service": "storage",
      "action": "Transition old snapshots to Glacier",
      "estimated_savings_monthly": 120,
      "risk_level": "very_low",
      "implementation_steps": [
        "Identify snapshots older than 90 days",
        "Create lifecycle policy"
      ]
    }
  ],
  "total_potential_savings": 460
}
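It's worth sanity-checking the response before posting it anywhere: confirm the advertised total matches the itemised estimates, and keep only actions within your risk tolerance. A sketch against the sample payload above:

```python
# Sanity-check a Terrakotta-AI response. Field names follow the sample
# payload shown above.
response = {
    "recommendations": [
        {"priority": "high", "action": "Scale down unused EC2 instances in eu-west-1",
         "estimated_savings_monthly": 340, "risk_level": "low"},
        {"priority": "medium", "action": "Transition old snapshots to Glacier",
         "estimated_savings_monthly": 120, "risk_level": "very_low"},
    ],
    "total_potential_savings": 460,
}

# The advertised total should equal the sum of the individual estimates
claimed = response["total_potential_savings"]
actual = sum(r["estimated_savings_monthly"] for r in response["recommendations"])
assert claimed == actual, f"savings mismatch: {claimed} vs {actual}"

# Surface only low-risk actions when risk_tolerance is "low"
safe = [r for r in response["recommendations"]
        if r["risk_level"] in ("low", "very_low")]
print(len(safe))
```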
Step 4: Format and Deliver Results
Now you have data flowing: costs from burnrate, analysis from Deepnote, and recommendations from Terrakotta-AI. Time to put this in front of people.
Add a Slack node to post a formatted message:
{
  "text": "📊 Development Team Cost Report",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Weekly Cost Analysis*\nProjected monthly spend: ${{ $node['Deepnote Analysis'].json.total_monthly_projection.toFixed(2) }}"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Top Spenders*\n{{ $node['Deepnote Analysis'].json.top_spenders.map(s => `• ${s.service}: $${s.monthly_projection.toFixed(2)}/month`).join('\n') }}"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Potential Monthly Savings*\n${{ $node['Terrakotta'].json.total_potential_savings }}"
      }
    },
    {
      "type": "divider"
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Recommended Actions*\n{{ $node['Terrakotta'].json.recommendations.slice(0, 3).map(r => `• [${r.priority.toUpperCase()}] ${r.action} (save $${r.estimated_savings_monthly})`).join('\n') }}"
      }
    }
  ]
}
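If the expression templating gets unwieldy, the same message can be assembled in an n8n Code node instead. A Python sketch (the helper name and sample values are illustrative):

```python
def build_slack_message(insights, terrakotta):
    """Assemble a Slack Block Kit payload from the analysis results.
    Field names follow the shapes used earlier in this workflow."""
    spenders = "\n".join(
        f"• {s['service']}: ${s['monthly_projection']:.2f}/month"
        for s in insights["top_spenders"]
    )
    actions = "\n".join(
        f"• [{r['priority'].upper()}] {r['action']} (save ${r['estimated_savings_monthly']})"
        for r in terrakotta["recommendations"][:3]
    )
    return {
        "text": "📊 Development Team Cost Report",
        "blocks": [
            {"type": "section", "text": {"type": "mrkdwn",
             "text": f"*Weekly Cost Analysis*\nProjected monthly spend: "
                     f"${insights['total_monthly_projection']:.2f}"}},
            {"type": "section", "text": {"type": "mrkdwn",
             "text": f"*Top Spenders*\n{spenders}"}},
            {"type": "divider"},
            {"type": "section", "text": {"type": "mrkdwn",
             "text": f"*Recommended Actions*\n{actions}"}},
        ],
    }

# Example with hypothetical values
msg = build_slack_message(
    {"total_monthly_projection": 7189.29,
     "top_spenders": [{"service": "compute", "monthly_projection": 5346.43}]},
    {"recommendations": [{"priority": "high",
                          "action": "Scale down unused EC2 instances",
                          "estimated_savings_monthly": 340}]},
)
print(msg["blocks"][0]["text"]["text"])
```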
Additionally, update a Google Sheet with the analysis data. Use n8n's Google Sheets node to append a new row:
{
  "spreadsheet_id": "YOUR_SHEET_ID",
  "sheet": "Cost Analysis",
  "values": [
    "{{ now() }}",
    "{{ $node['Deepnote Analysis'].json.total_monthly_projection }}",
    "{{ $node['Deepnote Analysis'].json.anomalies.length }}",
    "{{ $node['Terrakotta'].json.total_potential_savings }}",
    "{{ $node['Terrakotta'].json.recommendations[0].action }}"
  ]
}
Complete n8n Workflow
Your n8n workflow should look like this in the editor:
- Trigger node: Schedule trigger, every 6 hours.
- HTTP node: GET from burnrate.
- HTTP node: POST to Deepnote notebook.
- HTTP node: POST to Terrakotta-AI.
- Slack node: Post formatted message.
- Google Sheets node: Append row.
- Error handler: Email ops team if any step fails.
Set up error handling on each HTTP node. Add a catch block that emails your ops channel:
{
  "type": "error",
  "handler": "email",
  "recipients": ["ops-team@company.com"],
  "subject": "Cost monitoring workflow failed",
  "body": "The automated cost analysis failed at step: {{ $node.name }}\nError: {{ $error.message }}"
}
The Manual Alternative
If you want more control or don't need real-time monitoring, you can run each step independently:
- Export cost data from burnrate's web dashboard; download the CSV.
- Upload it to Deepnote and run the analysis notebook on demand.
- Copy the results into Terrakotta-AI's web interface and wait for recommendations.
- Manually copy-paste recommendations into a Slack message or email.
This takes about 15 minutes per week and gives you full visibility into each step. It's slower, but it lets you tweak parameters and review results before they reach the team. Most teams start here and automate once they understand the workflow.
Pro Tips
Rate limit burnrate carefully. The API allows 60 requests per minute for standard plans. If you have more than 50 cost centres, fetch in batches:
{
  "batch_size": 10,
  "delay_between_batches": 1000
}
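A batching helper along those lines is straightforward; this sketch assumes a `fetch` callable that retrieves one batch (hypothetical, since the exact burnrate batch endpoint isn't shown here):

```python
import time

def fetch_in_batches(cost_centres, batch_size=10, delay_ms=1000, fetch=None):
    """Split cost centres into batches and pause between requests so we
    stay under burnrate's 60 requests/minute limit. `fetch` is a
    hypothetical callable that takes one batch and returns its results."""
    results = []
    batches = [cost_centres[i:i + batch_size]
               for i in range(0, len(cost_centres), batch_size)]
    for i, batch in enumerate(batches):
        results.extend(fetch(batch))
        if i < len(batches) - 1:       # no pause needed after the last batch
            time.sleep(delay_ms / 1000)
    return results

# Usage with a stubbed fetch: 25 centres -> 3 batches of at most 10
centres = [f"cc-{n}" for n in range(25)]
out = fetch_in_batches(centres, fetch=lambda b: b, delay_ms=0)
print(len(out))
```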
Deepnote notebooks time out after 10 minutes. If your dataset is large, pre-filter before sending. Request only the last 7 days, not the last year.
Terrakotta-AI costs scale with API calls; each recommendation request consumes credits. The schedule above triggers every 6 hours, so gate the Terrakotta step separately: run it at most twice daily, and weekly is usually sufficient for cost optimisation.
Alert thresholds matter. Set Deepnote to flag anomalies only if weekly trend is >20% or absolute cost exceeds a threshold you define. This reduces noise:
# Only flag meaningful changes
threshold_percentage = 0.20
threshold_absolute = 100  # USD, to match cost_usd
df['should_alert'] = (df['trend'].abs() > threshold_percentage) & (df['cost_usd'] > threshold_absolute)
Store historical data. The Google Sheet becomes your audit trail. Query it monthly to spot seasonal patterns. Summer costs may always be 10% higher because of increased traffic; that's normal, not an anomaly.
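To spot those seasonal patterns, group the sheet's history by month. A sketch over hypothetical rows in the shape the workflow appends:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows in the shape the workflow appends to the sheet:
# (ISO timestamp, projected monthly spend)
rows = [
    ("2024-05-01T08:00:00Z", 1500.0),
    ("2024-06-01T08:00:00Z", 1550.0),
    ("2024-07-01T08:00:00Z", 1700.0),
    ("2024-07-15T08:00:00Z", 1720.0),
]

by_month = defaultdict(list)
for ts, spend in rows:
    by_month[ts[5:7]].append(spend)   # group by month number ("05", "06", ...)

monthly_avg = {m: mean(v) for m, v in by_month.items()}
print(monthly_avg["07"])
```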
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| burnrate | Pro | £49 | Includes up to 50 cost centres and API access. |
| Deepnote | Team | £99 | Collaborative notebooks, API webhooks, enough compute for transformation. |
| Terrakotta-AI | Standard | £79 | 500 API calls/month; sufficient for twice-weekly analysis. |
| n8n | Self-hosted | £0–60 | Free tier if self-hosted on your infrastructure; £60/month for cloud hosting. |
| Google Sheets | Free | £0 | No additional cost. |
| Slack | Existing | £0 | Assumes you already have Slack. |
| Total | — | £227–287 | One-time setup: 4–6 hours. |
This pays for itself the first month if it catches even one instance of forgotten resources costing £300+. Most teams report savings of £1,500–3,000 monthly within the first quarter.