Software engineering cost tracking and optimisation dashboard
Software engineering teams waste countless hours gathering cost data across different systems, exporting spreadsheets, and manually calculating burn rates. Your infrastructure costs, cloud spending, developer time, and tooling fees scatter across AWS, GCP, GitHub, Slack, and internal billing systems. The moment you compile last month's numbers, they're already out of date. You have no way to spot runaway costs until the quarterly bill arrives.
This is where automation saves real money. By connecting cost tracking data to a live dashboard, you catch spending spikes within hours rather than months. You identify which projects are draining your budget. You surface trends that your finance team can actually act on. The catch is that no single tool handles this end-to-end. You need to wire together three specialist tools: Burnrate for cost aggregation, Deepnote for dashboard building, and Terrakotta AI for anomaly detection. For more on this, see Terrakotta AI vs Deepnote vs DataRobot: AI Tools for Data....
This workflow requires intermediate technical skills because you'll be working with APIs and configuring data pipelines, but the payoff is substantial. A team running this automation can redirect 3-4 hours per week of manual reporting back to actual engineering work.
The Automated Workflow
You'll build a daily automated pipeline that pulls cost data from Burnrate, enriches it with anomaly detection via Terrakotta AI, and publishes results to a Deepnote dashboard. The orchestration layer ties these together. I recommend n8n for this particular job because it has better error handling and retry logic than Zapier, and it's simpler to self-host than Make if you want to keep API keys local.
Step 1: Choose Your Orchestration Tool
For this workflow, use n8n. Here's why: you need conditional logic (alert only if spending exceeds a threshold), scheduled daily runs (not event-triggered), and the ability to retry failed steps gracefully. n8n's built-in node types support all three patterns without custom code. Zapier would work but becomes expensive with 3+ daily executions across multiple operations. Make requires more visual configuration, which becomes fragile when requirements change.
Set up n8n on your infrastructure or use their cloud service. You'll need API keys for Burnrate and Terrakotta AI stored as environment variables in your n8n instance.
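If you go the self-hosted route, here is a minimal sketch of launching n8n with Docker and passing both tokens as environment variables. The image name and port are n8n's defaults; the variable names are placeholders you would reference from your workflow or credentials.
docker run -d --name n8n -p 5678:5678 \
  -e BURNRATE_API_TOKEN=your_burnrate_token \
  -e TERRAKOTTA_API_TOKEN=your_terrakotta_token \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n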
Step 2: Pull Cost Data from Burnrate
Burnrate provides a REST API that returns your aggregated cloud costs. You need to authenticate using an API token, then query the costs endpoint for the previous day.
First, obtain your Burnrate API token from your account settings. Then add an HTTP Request node to your n8n workflow:
GET https://api.burnrate.io/v1/costs
Headers:
Authorization: Bearer YOUR_BURNRATE_API_TOKEN
Content-Type: application/json
Query Parameters:
start_date: 2024-01-14
end_date: 2024-01-15
group_by: service
The response looks like this:
{
  "costs": [
    {
      "service": "compute",
      "cost": 1250.45,
      "currency": "GBP",
      "date": "2024-01-15",
      "provider": "AWS"
    },
    {
      "service": "storage",
      "cost": 320.10,
      "currency": "GBP",
      "date": "2024-01-15",
      "provider": "AWS"
    }
  ],
  "total": 1570.55
}
Configure n8n to run this query every morning at 6 AM UTC. Use a Cron trigger node set to 0 6 * * *. This timing allows your team to see yesterday's complete cost picture before the working day starts.
Store the response in an n8n variable called burnrate_costs for use in the next step.
Step 3: Enrich with Anomaly Detection via Terrakotta AI
Raw cost numbers mean nothing without context. Is today's £1,570 spend normal or a disaster? Terrakotta AI compares current spending against historical patterns and flags anomalies. It returns a severity score (0 to 100) indicating how unusual the spending is.
Call the Terrakotta API with your Burnrate data:
POST https://api.terrakotta.io/v1/anomaly-detect
Headers:
Authorization: Bearer YOUR_TERRAKOTTA_API_TOKEN
Content-Type: application/json
Body:
{
  "data_points": [
    {
      "timestamp": "2024-01-15T00:00:00Z",
      "value": 1570.55,
      "dimension": "daily_spend"
    }
  ],
  "lookback_days": 30,
  "sensitivity": 0.75
}
Terrakotta returns:
{
  "anomalies": [
    {
      "timestamp": "2024-01-15T00:00:00Z",
      "value": 1570.55,
      "expected_value": 1340.20,
      "severity": 62,
      "anomaly_type": "spike"
    }
  ],
  "is_anomaly": true,
  "confidence": 0.89
}
In your n8n workflow, add a Switch node that checks if anomalies[0].severity > 70. If true, trigger an alert (we'll cover this in a moment). If false, continue to the dashboard update.
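One way to configure that check, assuming the Terrakotta response is the Switch node's direct input: set the left value to the expression below, the operation to "larger", and the right value to 70.
{{ $json.anomalies[0].severity }}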
Step 4: Prepare Data for Deepnote
Deepnote is your dashboard layer. Rather than pushing data directly via API, you'll write results to a shared CSV file or database table that Deepnote reads. This decouples your pipeline from dashboard layout changes and makes debugging simpler.
Create a Code node (called a Function node in older n8n versions) that shapes the data for Deepnote:
// Pull the Burnrate response from the earlier HTTP Request node
// (replace 'Fetch Burnrate Costs' with whatever you named that node);
// the Terrakotta response arrives as this node's direct input.
const burnrateData = $('Fetch Burnrate Costs').first().json.costs;
const anomalyData = $input.first().json.anomalies[0];

const output = {
  date: new Date().toISOString().split('T')[0],
  total_spend: burnrateData.reduce((sum, item) => sum + item.cost, 0),
  compute_spend: burnrateData.find(item => item.service === 'compute')?.cost || 0,
  storage_spend: burnrateData.find(item => item.service === 'storage')?.cost || 0,
  anomaly_severity: anomalyData.severity,
  is_anomaly: anomalyData.severity > 70,
  expected_spend: anomalyData.expected_value,
  variance_pct: (((anomalyData.value - anomalyData.expected_value) / anomalyData.expected_value) * 100).toFixed(2)
};

// n8n expects items back, each wrapping its data in a json property
return [{ json: output }];
Next, write this output to a database or file. If you use PostgreSQL (cheap and reliable), add a Postgres node:
INSERT INTO cost_tracking (date, total_spend, compute_spend, storage_spend, anomaly_severity, is_anomaly, expected_spend, variance_pct)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
ON CONFLICT(date) DO UPDATE SET
total_spend = EXCLUDED.total_spend,
compute_spend = EXCLUDED.compute_spend,
storage_spend = EXCLUDED.storage_spend,
anomaly_severity = EXCLUDED.anomaly_severity,
is_anomaly = EXCLUDED.is_anomaly,
expected_spend = EXCLUDED.expected_spend,
variance_pct = EXCLUDED.variance_pct;
Map the Function node output to the query parameters: $1 = date, $2 = total_spend, etc.
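The upsert above assumes the cost_tracking table already exists with a unique constraint on date, which ON CONFLICT (date) needs to target. Here is a minimal schema sketch, with column types inferred from the values the pipeline produces:
-- date is the primary key so the daily upsert has a constraint to target
CREATE TABLE IF NOT EXISTS cost_tracking (
  date DATE PRIMARY KEY,
  total_spend NUMERIC(12, 2) NOT NULL,
  compute_spend NUMERIC(12, 2) NOT NULL DEFAULT 0,
  storage_spend NUMERIC(12, 2) NOT NULL DEFAULT 0,
  anomaly_severity INTEGER,
  is_anomaly BOOLEAN NOT NULL DEFAULT FALSE,
  expected_spend NUMERIC(12, 2),
  variance_pct NUMERIC(6, 2)
);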
Step 5: Build the Deepnote Dashboard
Deepnote is a collaborative notebook environment that connects to databases and renders interactive charts. Create a new Deepnote project and add a SQL cell that reads from your cost_tracking table:
SELECT
date,
total_spend,
compute_spend,
storage_spend,
anomaly_severity,
variance_pct
FROM cost_tracking
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
ORDER BY date DESC;
Add a second cell to calculate rolling averages:
SELECT
date,
total_spend,
AVG(total_spend) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) as spend_7day_avg,
compute_spend,
storage_spend
FROM cost_tracking
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
ORDER BY date DESC;
Then visualise this data using Deepnote's built-in charting. Add a chart cell configured to:
- X-axis: date
- Y-axis (line 1): total_spend
- Y-axis (line 2): spend_7day_avg
- Colour: pick contrasting colours so the daily spend line and the 7-day average are easy to tell apart
This gives your team an at-a-glance view of whether spending is trending up or down. Add a second chart comparing compute and storage spend over time using a stacked bar chart. This helps you see which cost categories are growing.
Step 6: Alert on Anomalies
Back in your n8n workflow, you already set up a Switch node that checks whether anomaly severity exceeds 70. When that condition is true, send an alert. Use the Slack node to notify your finance or engineering team:
POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Body:
{
  "text": "🚨 Cost Anomaly Detected",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Unusual spending detected on <date>*\n• Total spend: £<total_spend>\n• Expected: £<expected_spend>\n• Variance: <variance_pct>%\n• Severity: <anomaly_severity>/100"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Review the <dashboard_url|cost dashboard> for details."
      }
    }
  ]
}
Store your Slack webhook URL as an n8n credential. The Switch node routes the true path to this Slack node, skipping it entirely if the anomaly severity is low.
Complete n8n Workflow Structure
Your final workflow looks like this:
- Cron trigger (daily at 6 AM)
- HTTP Request node (fetch Burnrate data)
- HTTP Request node (call Terrakotta anomaly detection)
- Function node (shape data for Deepnote)
- Postgres node (write to cost_tracking table)
- Switch node (check if anomaly_severity > 70)
- True path: Slack notification node
- False path: Continue (or stop)
Add error handling to each node. Set up an error workflow with an Error Trigger node that sends a failure notification to Slack with the error message and timestamp. This ensures you know when the pipeline breaks rather than discovering it a week later.
Test the entire workflow manually first. Execute each step individually and validate the output matches your expectations. Only then enable the Cron trigger.
The Manual Alternative
If you prefer not to set up automation, you can run these steps monthly:
- Log into Burnrate and export a CSV of costs for the previous month.
- Upload the CSV to Terrakotta's web interface and run anomaly detection manually.
- Copy the results into a Deepnote notebook and create charts by hand.
- Share the notebook with your team via email.
This approach takes 1 to 2 hours per month and gives you zero visibility into daily spending patterns. You'll spot major problems only during retrospectives. Cost spikes that occurred on day 15 won't be noticed until day 35 when you run the report. The automated workflow costs less in tooling than the salary cost of those 1 to 2 hours, especially if it catches even one runaway cloud bill.
Pro Tips
1. Handle Burnrate API Rate Limits
Burnrate allows 100 requests per hour. Your daily pipeline makes only one request, so you're nowhere near the limit. However, if you add real-time cost tracking or query multiple date ranges, you'll need to batch requests. Implement exponential backoff in n8n: if Burnrate returns a 429 status code, wait 30 seconds before retrying, and double the wait on each subsequent attempt. Configure the HTTP Request node to retry automatically with a maximum of 3 attempts.
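If you would rather handle the retry in code than in the HTTP Request node settings, here is a minimal Node.js sketch of the pattern against the costs endpoint shown earlier. The helper name is hypothetical, and it assumes a runtime with the built-in fetch API.
const BASE_URL = 'https://api.burnrate.io/v1/costs';

// Hypothetical helper: retry on 429 with an exponentially growing wait
async function fetchCostsWithBackoff(params, token, maxAttempts = 3) {
  let waitMs = 30_000; // first wait after a rate-limit response
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(`${BASE_URL}?${new URLSearchParams(params)}`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (res.ok) return res.json();
    if (res.status !== 429 || attempt === maxAttempts) {
      throw new Error(`Burnrate request failed with status ${res.status}`);
    }
    await new Promise(resolve => setTimeout(resolve, waitMs));
    waitMs *= 2; // double the wait on each retry: 30s, 60s, 120s...
  }
}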
2. Validate Data Quality Before Alerting
Before you send a Slack alert, verify that the cost data is complete. If Burnrate's data collection failed, you might get incomplete numbers that trigger a false anomaly. Add a validation step in your Function node: check that the sum of service-level costs equals the reported total (allowing for 0.5% rounding error). If validation fails, add a flag to the database and skip the anomaly check that day.
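A minimal sketch of that check, added to the Step 4 Code node after output is built. The node name and the data_quality_ok flag are assumptions; if you use the flag, add a matching column to cost_tracking.
// Check that service-level costs add up to the reported total (within 0.5%)
const reportedTotal = $('Fetch Burnrate Costs').first().json.total;
const summedTotal = burnrateData.reduce((sum, item) => sum + item.cost, 0);
const withinTolerance = Math.abs(summedTotal - reportedTotal) <= reportedTotal * 0.005;

output.data_quality_ok = withinTolerance;
if (!withinTolerance) {
  // Flag the day and skip anomaly alerting when the data looks incomplete
  output.is_anomaly = false;
  output.anomaly_severity = null;
}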
3. Customise Terrakotta Sensitivity Per Service
Terrakotta's sensitivity parameter controls how aggressive anomaly detection is. Storage spending fluctuates naturally (backups, archive operations). Compute spending can spike when you deploy new features. Set sensitivity to 0.75 for overall spending but run separate Terrakotta queries for each service type with sensitivity adjusted to 0.85. This reduces false positives for services with predictable volatility.
4. Cache Historical Data to Speed Up Queries
Your Deepnote dashboard queries the entire cost_tracking table every time someone opens the notebook. For 90 days of data, this is fast. For 2+ years, query performance degrades. Implement a materialized view in your database that pre-aggregates monthly and weekly summaries. Point one set of Deepnote charts at the raw data (last 90 days) and another at the aggregated view (all historical data). This keeps dashboards responsive.
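A minimal sketch of that view, assuming the cost_tracking schema used throughout and a monthly roll-up (a weekly variant works the same way):
-- Monthly roll-up so long-range charts don't scan every daily row
CREATE MATERIALIZED VIEW IF NOT EXISTS cost_tracking_monthly AS
SELECT
  date_trunc('month', date)::date AS month,
  SUM(total_spend) AS total_spend,
  SUM(compute_spend) AS compute_spend,
  SUM(storage_spend) AS storage_spend,
  MAX(anomaly_severity) AS max_anomaly_severity
FROM cost_tracking
GROUP BY 1;

-- Refresh after each daily insert, e.g. as a final step in the n8n workflow
REFRESH MATERIALIZED VIEW cost_tracking_monthly;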
5. Set Burnrate and Terrakotta Credentials with Expiry Alerts
API tokens eventually expire or need rotation for security. Store expiry dates in a separate n8n variable and add a monthly reminder workflow that checks expiry dates and notifies you 30 days before rotation is required. This prevents silent failures when a credential expires mid-month.
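A minimal sketch of the check inside that reminder workflow's Code node. It assumes the expiry dates are stored as ISO date strings in n8n variables ($vars); substitute $env or hard-coded dates if you don't use that feature, and adjust the variable names to your own.
// Collect the stored rotation dates; adjust names to however you store them
const expiries = {
  burnrate: $vars.BURNRATE_TOKEN_EXPIRY,     // e.g. '2024-06-30'
  terrakotta: $vars.TERRAKOTTA_TOKEN_EXPIRY, // e.g. '2024-09-15'
};

// Anything expiring within 30 days goes into the alert list
const cutoff = Date.now() + 30 * 24 * 60 * 60 * 1000;
const expiringSoon = Object.entries(expiries)
  .filter(([, date]) => date && new Date(date).getTime() <= cutoff)
  .map(([name]) => name);

// A downstream IF + Slack node can alert when this list is non-empty
return [{ json: { expiring_soon: expiringSoon } }];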
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Burnrate | Professional | £80 | Aggregates costs from AWS, GCP, Azure. Includes API access. |
| Terrakotta AI | Standard | £45 | 10,000 API calls per month included. Sufficient for daily anomaly detection. |
| Deepnote | Team | £30 | Supports 5 team members, unlimited notebooks, database connections. |
| n8n Cloud | Standard | £20 | 400 monthly workflow runs, 5 GB execution storage. A daily run uses ~30 of those per month; plenty of buffer. |
| PostgreSQL Database | AWS RDS t3.micro | £12 | 750 free hours per month from AWS free tier if you're a new customer, then ~£12/month. |
| Total | | £187 | Cost amortised across your engineering team. One avoided bill error pays this back. |
If you self-host n8n and PostgreSQL on your existing infrastructure, the cost drops to around £155 per month. If your team already uses Burnrate and Terrakotta for other purposes, you're only adding Deepnote (£30) and orchestration costs (£20).