Alchemy Recipe · Intermediate · Workflow

Software engineering cost tracking and optimisation dashboard

24 March 2026

Introduction

Engineering teams waste significant time manually tracking burn rate metrics across projects, then exporting data into spreadsheets to spot cost anomalies. Someone runs a query on Monday, waits for results, copies numbers into a dashboard, and by Wednesday the data is already stale. When a project suddenly spends 20% more than forecast, nobody notices until the invoice arrives.

This happens because the tools that measure costs (billing APIs, usage logs) live in separate silos from the tools that visualise trends (dashboards, notebooks). You end up with a person as the connector; a human ETL pipeline that checks three different systems every morning and manually updates a spreadsheet.

We can eliminate that person entirely. By wiring together Burnrate, Deepnote, and Terrakotta AI with a no-code orchestrator, you get a cost tracking dashboard that updates automatically, flags anomalies in real time, and provides actionable recommendations without any manual work. This post shows you exactly how to build it.

The Automated Workflow

Overview of the Flow

The workflow operates on a daily schedule. It pulls cost and usage data from Burnrate's API, processes it through Terrakotta AI to detect anomalies and generate optimisation recommendations, then publishes the results to a Deepnote notebook that serves as your interactive dashboard. The entire pipeline runs hands-off.

Here is the sequence:

  1. Orchestrator receives a daily trigger at 08:00 UTC.

  2. Query Burnrate API for the previous 24 hours of cost data, resource utilisation, and project-level breakdowns.

  3. Send that data to Terrakotta AI for pattern analysis and cost optimisation recommendations.

  4. Format the results and push them to Deepnote.

  5. Deepnote renders interactive visualisations and alerts.

  6. If anomalies exceed thresholds, send a Slack notification.
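The steps above reduce to a single daily pass. A minimal sketch in Python, where each callable stands in for one of the workflow's nodes (all names here are illustrative, not part of any real API):

```python
def run_daily_pipeline(fetch_costs, analyse, publish, notify, threshold=500.0):
    """One end-to-end pass. The four callables stand in for the
    Burnrate, Terrakotta, Deepnote and Slack nodes respectively."""
    costs = fetch_costs()                 # step 2: last 24h of cost data
    analysis = analyse(costs)             # step 3: anomalies + recommendations
    publish(costs, analysis)              # steps 4-5: refresh the dashboard
    impact = sum(a["cost_impact"] for a in analysis.get("anomalies", []))
    if impact > threshold:                # step 6: conditional alert
        notify(analysis["anomalies"])
    return impact
```

Keeping the orchestration as a pure function of its four steps also makes the pipeline trivial to test with stub callables before wiring up real credentials.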

Why n8n for Orchestration

For this particular workflow, n8n is the best choice. Unlike Zapier's pre-built templates, n8n lets you write custom JavaScript directly in nodes, which is essential when transforming cost data into the format Terrakotta AI expects. Make (Integromat) is more expensive at scale, and Claude Code works well for one-off tasks but not for scheduled recurring jobs. Zapier can technically do this, but you would need to pay for custom webhooks and spend more time in their GUI builder.

n8n can be self-hosted or run on their cloud plan; either way, you stay in control of the execution environment.

Step 1: Connect to Burnrate and Extract Cost Data

First, you need a Burnrate API key. Generate one in your Burnrate dashboard under Settings > API Tokens. Keep it secure; treat it like a database password.

In n8n, create a new workflow. Add an HTTP Request node with these settings:


URL: https://api.burnrate.dev/v1/projects
Method: GET
Authentication: Bearer Token
Token: your_burnrate_api_key_here
Headers:
  Content-Type: application/json

This returns a list of all your projects. The response looks like:

{
  "data": [
    {
      "id": "proj_abc123",
      "name": "platform-api",
      "status": "active",
      "owner": "engineering"
    },
    {
      "id": "proj_def456",
      "name": "mobile-app",
      "status": "active",
      "owner": "product"
    }
  ]
}

Next, add a second HTTP Request node that fetches cost data for the last 24 hours for each project. In n8n, use a Loop Over Items (Split in Batches) node to iterate over the projects.


URL: https://api.burnrate.dev/v1/projects/{{$json.id}}/costs
Method: GET
Query Parameters:
  start_date: {{new Date(Date.now() - 86400000).toISOString().split('T')[0]}}
  end_date: {{new Date().toISOString().split('T')[0]}}
  granularity: hourly

The response from each project looks like:

{
  "project_id": "proj_abc123",
  "period": "2024-01-15",
  "hourly_costs": [
    {
      "timestamp": "2024-01-15T00:00:00Z",
      "compute_cost": 145.32,
      "storage_cost": 23.50,
      "network_cost": 8.12,
      "total": 176.94
    },
    {
      "timestamp": "2024-01-15T01:00:00Z",
      "compute_cost": 142.10,
      "storage_cost": 23.50,
      "network_cost": 7.95,
      "total": 173.55
    }
  ],
  "daily_total": 4219.28,
  "forecast_daily": 4100.00
}

Add a Code node (name it "Set" so later expressions can reference it) to aggregate this data into a single structure:

const projects = $node["HTTP Request"].data.data;
// One cost payload per project, in the order the loop emitted them
const loopResults = $node["Loop"].data;
const costsByProject = {};

projects.forEach((project, i) => {
  const costs = loopResults[i];
  costsByProject[project.name] = {
    project_id: project.id,
    owner: project.owner,
    daily_actual: costs.daily_total,
    daily_forecast: costs.forecast_daily,
    variance_percent: ((costs.daily_total - costs.forecast_daily) / costs.forecast_daily * 100).toFixed(2),
    hourly_data: costs.hourly_costs
  };
});

return {
  timestamp: new Date().toISOString(),
  data: costsByProject
};

This gives you a clean data structure to work with.

Step 2: Send Data to Terrakotta AI for Analysis

Terrakotta AI specialises in cost optimisation recommendations. You send it your cost data and it returns patterns, inefficiencies, and specific actions you can take.

Add a new HTTP Request node:


URL: https://api.terrakotta.ai/v1/analyse
Method: POST
Authentication: Bearer Token
Token: your_terrakotta_api_key_here
Headers:
  Content-Type: application/json
Body:
{
  "analysis_type": "cost_optimisation",
  "cost_data": {{JSON.stringify($node["Set"].data)}},
  "lookback_days": 7,
  "include_recommendations": true,
  "severity_threshold": "medium"
}

Terrakotta returns an analysis object:

{
  "analysis_id": "analysis_xyz789",
  "timestamp": "2024-01-15T08:15:00Z",
  "summary": {
    "total_cost_7d": 29554.32,
    "forecast_7d": 28700.00,
    "variance_percent": 3.0,
    "trend": "increasing"
  },
  "anomalies": [
    {
      "project": "platform-api",
      "type": "sudden_spike",
      "detected_at": "2024-01-14T14:00:00Z",
      "cost_impact": 342.50,
      "severity": "medium",
      "likely_cause": "query_inefficiency"
    }
  ],
  "recommendations": [
    {
      "project": "platform-api",
      "recommendation": "Optimise database queries in authentication service; N+1 queries detected",
      "estimated_savings": 250.00,
      "priority": "high",
      "effort": "2_hours"
    },
    {
      "project": "mobile-app",
      "recommendation": "Consolidate unused compute instances in staging environment",
      "estimated_savings": 180.00,
      "priority": "medium",
      "effort": "30_minutes"
    }
  ]
}

Save this analysis object in another Code node, named "Terrakotta Set", for formatting:

const analysis = $node["Terrakotta Request"].data;

return {
  analysis_id: analysis.analysis_id,
  summary: analysis.summary,
  critical_anomalies: analysis.anomalies.filter(a => a.severity === 'high' || a.severity === 'critical'),
  recommendations: analysis.recommendations,
  generated_at: analysis.timestamp
};

Step 3: Push Results to Deepnote

Deepnote is a collaborative notebook environment for Python and SQL. You will create a notebook that serves as your dashboard; the orchestrator will update it with new data every morning.

First, create a Deepnote notebook and get your API integration token from Settings > API Integrations.

Add an HTTP Request node in n8n:


URL: https://api.deepnote.com/v1/notebooks/{{your_notebook_id}}/execute_block
Method: POST
Authentication: Bearer Token
Token: your_deepnote_api_key
Headers:
  Content-Type: application/json
Body:
{
  "block_id": "cost_data_block",
  "code": "import pandas as pd\nimport json\n\ncost_data = {{JSON.stringify($node["Set"].data.data)}}\nanalysis_data = {{JSON.stringify($node["Terrakotta Set"].data)}}\n\ndf_costs = pd.DataFrame([\n  {\n    'project': proj,\n    'actual': data['daily_actual'],\n    'forecast': data['daily_forecast'],\n    'variance_pct': data['variance_percent']\n  }\n  for proj, data in cost_data.items()\n])\n\nprint('Daily Cost Summary')\nprint(df_costs.to_string(index=False))\nprint(f\"\\nTotal variance: {sum(float(x) for x in df_costs['variance_pct'].tolist())}%\")"
}

In your Deepnote notebook, you need a code block with the ID cost_data_block that will be updated with fresh data.

Then add another HTTP Request for the analysis results:


URL: https://api.deepnote.com/v1/notebooks/{{your_notebook_id}}/execute_block
Method: POST
Authentication: Bearer Token
Token: your_deepnote_api_key
Headers:
  Content-Type: application/json
Body:
{
  "block_id": "anomalies_block",
  "code": "import pandas as pd\nimport json\n\nanomalies = {{JSON.stringify($node["Terrakotta Set"].data.critical_anomalies)}}\nrecommendations = {{JSON.stringify($node["Terrakotta Set"].data.recommendations)}}\n\nif anomalies:\n  print('🚨 Critical Anomalies Detected')\n  for anomaly in anomalies:\n    print(f\"  • {anomaly['project']}: {anomaly['likely_cause']} (${anomaly['cost_impact']})\")\nelse:\n  print('✅ No anomalies detected')\n\nprint('\\n💡 Optimisation Recommendations')\nfor rec in recommendations:\n  print(f\"  • [{rec['priority'].upper()}] {rec['project']}: {rec['recommendation']}\")\n  print(f\"    Estimated savings: ${rec['estimated_savings']} | Effort: {rec['effort']}\")"
}

Step 4: Conditional Alerting via Slack

If any anomalies exceed a cost threshold, send a Slack notification immediately. Add an IF node (n8n's conditional) after the Terrakotta analysis, driven by this check:

const critical_anomalies = $node["Terrakotta Set"].data.critical_anomalies;
const total_impact = critical_anomalies.reduce((sum, a) => sum + a.cost_impact, 0);

return total_impact > 500; // Only alert if anomalies exceed $500

If true, fire a Slack message via a webhook:


URL: your_slack_webhook_url
Method: POST
Headers:
  Content-Type: application/json
Body:
{
  "text": "⚠️ Cost anomaly detected",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Cost Alert: ${{$node["Terrakotta Set"].data.critical_anomalies[0].cost_impact}} anomaly detected*\n\nProject: {{$node["Terrakotta Set"].data.critical_anomalies[0].project}}\nLikely cause: {{$node["Terrakotta Set"].data.critical_anomalies[0].likely_cause}}\n\nReview in dashboard: <https://your-deepnote-notebook-url|Open Dashboard>"
      }
    }
  ]
}

Scheduling the Workflow

In n8n, add a Trigger node set to Cron. Use the expression:


0 8 * * *

This runs at 08:00 UTC every day. Adjust the time to suit your team's timezone. If you want more frequent updates, run it every 4 hours:


0 */4 * * *
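A cron expression has five fields: minute, hour, day of month, month, day of week. As a sanity check on the two schedules above, here is a minimal matcher covering only the `N` and `*/N` field forms used in this post (a hypothetical helper, not part of n8n):

```python
def cron_field_matches(field, value):
    """Match a single cron field of the form '*', 'N', or '*/N'."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr, minute, hour):
    """Check a minute/hour pair against the first two fields of a cron expression."""
    m, h, *_ = expr.split()
    return cron_field_matches(m, minute) and cron_field_matches(h, hour)

print(cron_matches("0 8 * * *", 0, 8))     # → True  (the daily 08:00 run)
print(cron_matches("0 */4 * * *", 0, 12))  # → True  (every 4 hours, on the hour)
```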

The Manual Alternative

If you prefer not to automate the entire pipeline, you can run steps manually:

  1. Export cost data from Burnrate's web dashboard each morning as a CSV.

  2. Upload the CSV to Terrakotta AI's web interface and download the analysis as JSON.

  3. Open your Deepnote notebook and manually paste the JSON into a code cell, then run the notebook.

  4. Skim the results and decide if you need to Slack the team.

This approach takes 20 minutes each morning and introduces the risk of human error. Data often gets stale if you skip a day or two. For teams with 5+ projects or monthly costs exceeding £10,000, automation is worth the initial 90-minute setup cost.
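Those numbers make the break-even easy to compute (a rough sketch; 22 working days per month is my assumption):

```python
def payback_days(manual_minutes_per_day=20, setup_minutes=90):
    """Working days until the one-off setup beats the daily manual routine."""
    return setup_minutes / manual_minutes_per_day

def monthly_minutes_saved(manual_minutes_per_day=20, working_days=22):
    """Manual minutes the automation gives back each month."""
    return manual_minutes_per_day * working_days

print(payback_days())           # → 4.5, i.e. the setup pays for itself in under a week
print(monthly_minutes_saved())  # → 440 minutes, just over 7 hours a month
```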

Pro Tips

Rate Limiting and Backoff

Burnrate's API allows 100 requests per minute. If you have more than 100 projects, use the HTTP Request node's built-in batching options: set the batch size and batch interval so you stay under roughly 90 requests per minute, leaving headroom. Also enable "Retry On Fail" in the settings of any HTTP node that might hit transient errors, with a wait of a couple of seconds between tries so retries back off rather than hammering the API.
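n8n handles retries for you, but if you ever call these APIs from your own scripts, the same policy is a few lines. A sketch of exponential backoff with a 2-second base (the retry count and the choice of `ConnectionError` as the transient error are assumptions):

```python
import time

def call_with_backoff(request, max_retries=5, base_seconds=2.0, sleep=time.sleep):
    """Retry `request` with delays of 2, 4, 8, ... seconds between attempts."""
    for attempt in range(max_retries):
        try:
            return request()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_seconds * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the helper testable without real waits.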

Data Retention and Storage

Deepnote notebooks don't store historical data automatically. Augment the workflow with a second automation that writes each day's results to a PostgreSQL database or a Google Sheet. This lets you build trend analysis over weeks and months. Add an SQL node or Google Sheets append at the end of your n8n workflow to create an audit trail.
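As a sketch of that audit trail, here is the append step using stdlib SQLite as a stand-in for PostgreSQL; the columns mirror the aggregated structure from Step 1, and the table and column names are illustrative:

```python
import sqlite3

def append_daily_costs(conn, day, costs_by_project):
    """Append one day's per-project actual/forecast figures for trend analysis."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS daily_costs (
               day TEXT, project TEXT, actual REAL,
               forecast REAL, variance_pct REAL)"""
    )
    conn.executemany(
        "INSERT INTO daily_costs VALUES (?, ?, ?, ?, ?)",
        [
            (day, name, d["daily_actual"], d["daily_forecast"],
             float(d["variance_percent"]))
            for name, d in costs_by_project.items()
        ],
    )
    conn.commit()
```

With a few weeks of rows in place, week-over-week trend queries become a simple GROUP BY on `day`.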

Cost of the Automation Itself

n8n Cloud charges £12/month for the first 1000 executions, then £0.0012 per execution above that. Running the workflow daily uses roughly 30 executions a month, well inside the included 1000, so the flat £12/month covers it. Self-hosting n8n on a £5/month VPS costs less but requires DevOps knowledge. For most teams, cloud is simpler.
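That pricing is a simple step function, assuming the included 1000 executions reset monthly (my reading of the plan, not a confirmed detail):

```python
def n8n_monthly_cost(executions, base=12.0, included=1000, overage=0.0012):
    """n8n Cloud cost: flat fee up to the included executions, then per-execution."""
    extra = max(0, executions - included)
    return base + extra * overage

print(n8n_monthly_cost(30))    # daily runs: ~30/month, flat fee only → 12.0
print(n8n_monthly_cost(180))   # every-4-hours runs: still inside the allowance
```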

Customising Thresholds

Terrakotta AI's "severity_threshold" parameter filters recommendations. Set it to "low" in development to see everything, then move to "medium" or "high" in production to reduce noise. You can also modify the anomaly alert threshold in the IF node; increase it to $1,000 if your team gets alert fatigue, or lower it to $100 if you are cost-conscious.
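If you replay a saved Terrakotta analysis outside the workflow, the same severity filtering is a one-liner. A sketch assuming the low/medium/high/critical ordering implied by the sample responses above:

```python
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def filter_by_severity(items, threshold="medium"):
    """Keep anomalies or recommendations at or above the given severity."""
    floor = SEVERITY_ORDER[threshold]
    return [i for i in items if SEVERITY_ORDER[i["severity"]] >= floor]
```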

Testing the Workflow

Before scheduling it, run the workflow manually end-to-end. Use test data if you have it. Check that the Deepnote notebook updates correctly and that a test Slack message arrives. n8n provides a "Test" button on each node; use it.

Cost Breakdown

Tool          | Plan Needed      | Monthly Cost | Notes
Burnrate      | Pro (API access) | £49          | Includes 1000 API calls/month; overage £0.01/call
Deepnote      | Professional     | £29          | Collaborators can view and edit; 10GB storage
Terrakotta AI | Growth           | £199         | Includes 5000 analyses/month; overage £0.04/analysis
n8n Cloud     | Standard         | £12          | 1000 executions included; £0.0012/execution thereafter
Total         |                  | £289         | Scales to ~£310 for 2000+ daily executions

At scale, the cost per anomaly detected drops significantly. A team running this dashboard for six months with 180 Deepnote updates and 360 Terrakotta analyses spends roughly £1700 total, or £9.44 per actionable insight. Most teams save that in their first week of implemented recommendations.
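Using the post's rounded figures, the closing arithmetic is easy to verify:

```python
total_spend = 1700   # rounded six-month spend (£289 x 6 ≈ £1,734)
insights = 180       # roughly one actionable insight per daily run
cost_per_insight = round(total_spend / insights, 2)
print(cost_per_insight)  # → 9.44
```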