Alchemy Recipe · Intermediate

AI cost monitoring dashboard for development team spending

Development teams often discover their AI infrastructure costs months after the fact. You've been experimenting with Claude, running data processing jobs, spinning up notebooks for analysis, and training custom models. Then the invoice arrives and nobody quite knows where the £2,000 went or whether you were getting value for it.

The real problem isn't the cost itself; it's the blind spot. Without visibility into who's spending what and on which tools, you can't make informed decisions about whether to optimise, cut back, or invest more. You might have team members running redundant experiments, making oversized model calls, or leaving forgotten scheduled jobs to burn through your budget every night.

This Alchemy workflow closes that gap by connecting Burnrate (your cost tracking foundation), Deepnote (where your team actually works with data and code), and Terrakotta AI (which analyses spending patterns) into a single automated dashboard. No manual CSV exports, no weekly emails asking people what they spent, no guessing.

The Automated Workflow

Architecture Overview

The workflow runs on a schedule, pulling spending data from Burnrate, enriching it with context from your Deepnote activity logs, and feeding both into Terrakotta AI for analysis. The results populate a shared dashboard that updates automatically each morning. You'll choose either n8n (for self-hosted control) or Make (for managed simplicity), depending on your infrastructure preferences. Zapier works too, but Make and n8n handle the conditional logic more elegantly here.

Why This Combination Works

Burnrate tracks every pound spent across your AI tools with API access to detailed transaction logs. Deepnote gives you a collaboration environment where data scientists already work, so adding cost analysis directly to their workspace keeps it top of mind. Terrakotta AI does the harder analytical work: it identifies cost anomalies, calculates cost per output token, and flags spending that deviates from your baselines.

Step 1: Authenticate and Set Up Connections

Start by creating API keys for each service. In Make or n8n, you'll add these as connection modules before building the workflow.

Burnrate:

Visit your Burnrate account settings and generate an API key. You'll need the organisation ID as well.


API Endpoint: https://api.burnrate.io/v1/transactions
Authentication: Bearer {YOUR_API_KEY}

Deepnote:

Create a Deepnote integration token from your workspace settings. This lets you trigger notebook runs and write results directly to your environment.


API Endpoint: https://api.deepnote.com/v1/projects/{PROJECT_ID}/runs
Authentication: Bearer {YOUR_DEEPNOTE_TOKEN}

Terrakotta AI:

Generate credentials from your Terrakotta dashboard. You'll use them to submit spending data for analysis.


API Endpoint: https://api.terrakotta.ai/v1/analyse-spend
Authentication: Bearer {YOUR_TERRAKOTTA_KEY}
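
Before wiring up modules, it's worth smoke-testing each key. Here's a minimal Python sketch, assuming the endpoints listed above and that keys live in environment variables (the variable names are illustrative):

```python
import os
import requests

# Illustrative variable name -- adapt to however you store secrets.
BURNRATE_KEY = os.environ["BURNRATE_API_KEY"]

def check(name: str, url: str, token: str) -> None:
    """Send one lightweight authenticated request and report the status."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    print(f"{name}: HTTP {resp.status_code}")

# A 200 means the key works; a 401/403 means it doesn't. The same pattern
# applies to Deepnote and Terrakotta once you substitute their URLs and IDs.
check("Burnrate", "https://api.burnrate.io/v1/transactions?limit=1", BURNRATE_KEY)
```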

Step 2: Pull Daily Spending from Burnrate

Your workflow triggers every morning at 06:00 UTC. The first module queries Burnrate for all transactions from the previous day.


GET /v1/transactions?date_from=2024-01-15&date_to=2024-01-15&limit=500

Burnrate returns a JSON payload containing every transaction. You're particularly interested in these fields:

{
  "transactions": [
    {
      "id": "txn_abc123",
      "timestamp": "2024-01-15T14:23:00Z",
      "service": "openai",
      "model": "gpt-4",
      "cost_gbp": 12.50,
      "tokens_in": 4000,
      "tokens_out": 800,
      "user_id": "user_sarah_123",
      "project_id": "proj_data_pipeline"
    }
  ]
}

In Make, use the HTTP module; in n8n, use the HTTP Request node. Both allow you to set query parameters and handle JSON responses naturally.
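
For testing outside the orchestrator, the same call is easy to sketch in Python (mirroring the endpoint and parameters above; the environment variable name is illustrative):

```python
import os
from datetime import date, timedelta

import requests

BURNRATE_KEY = os.environ["BURNRATE_API_KEY"]
yesterday = (date.today() - timedelta(days=1)).isoformat()

# Same request as the GET above: one day's transactions, up to 500 rows.
resp = requests.get(
    "https://api.burnrate.io/v1/transactions",
    headers={"Authorization": f"Bearer {BURNRATE_KEY}"},
    params={"date_from": yesterday, "date_to": yesterday, "limit": 500},
    timeout=30,
)
resp.raise_for_status()
transactions = resp.json()["transactions"]
print(f"Pulled {len(transactions)} transactions for {yesterday}")
```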

Step 3: Enrich Data with Deepnote Activity Logs

Once you have the transaction list, you need context: which notebooks or scripts triggered these calls? This is where Deepnote integration comes in. For each transaction, you'll query Deepnote's activity log to find which user and project were active at that timestamp.


GET /v1/projects/{PROJECT_ID}/activity-logs?timestamp=2024-01-15T14:23:00Z&limit=100

Deepnote returns:

{
  "activities": [
    {
      "user_id": "user_sarah_123",
      "cell_id": "cell_456",
      "notebook_title": "Q1 Customer Segmentation",
      "execution_start": "2024-01-15T14:20:00Z",
      "execution_end": "2024-01-15T14:25:00Z"
    }
  ]
}

In your orchestration tool, use a filter or conditional step to match transactions to Deepnote activities based on timestamp and user ID. The goal is to annotate each spending record with the notebook name.
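
The matching rule is simple to state: a transaction belongs to an activity when the user IDs agree and the transaction timestamp falls inside the cell's execution window. A sketch of that rule in Python, using the field names from the payloads above:

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Both APIs above return ISO-8601 timestamps with a trailing Z.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def annotate(transactions: list[dict], activities: list[dict]) -> list[dict]:
    """Attach notebook_title to each transaction whose timestamp falls
    inside a matching user's execution window; unmatched rows get None."""
    enriched = []
    for txn in transactions:
        t = parse_ts(txn["timestamp"])
        match = next(
            (a for a in activities
             if a["user_id"] == txn["user_id"]
             and parse_ts(a["execution_start"]) <= t <= parse_ts(a["execution_end"])),
            None,
        )
        enriched.append({**txn,
                         "notebook_title": match["notebook_title"] if match else None})
    return enriched
```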

Step 4: Submit Enriched Data to Terrakotta AI

Now you have transactions with context. Send this enriched dataset to Terrakotta for anomaly detection and pattern analysis.


POST /v1/analyse-spend
Content-Type: application/json

{
  "period": "daily",
  "organisation_id": "org_12345",
  "transactions": [
    {
      "id": "txn_abc123",
      "service": "openai",
      "model": "gpt-4",
      "cost_gbp": 12.50,
      "tokens_in": 4000,
      "tokens_out": 800,
      "user_id": "user_sarah_123",
      "project_id": "proj_data_pipeline",
      "notebook_title": "Q1 Customer Segmentation"
    }
  ]
}

Terrakotta processes this asynchronously. It returns a job ID immediately.

{
  "job_id": "job_xyz789",
  "status": "processing"
}

Store this job ID. You'll poll it in the next step.
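
In script form, the submission and job-ID capture might look like this (continuing the sketches above; enriched is the output of the matching step, and the key's variable name is illustrative):

```python
import os
import requests

TERRAKOTTA_KEY = os.environ["TERRAKOTTA_API_KEY"]

payload = {
    "period": "daily",
    "organisation_id": "org_12345",  # your own organisation ID
    "transactions": enriched,        # enriched transactions from Step 3
}
resp = requests.post(
    "https://api.terrakotta.ai/v1/analyse-spend",
    headers={"Authorization": f"Bearer {TERRAKOTTA_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
job_id = resp.json()["job_id"]  # keep this for the polling step
```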

Step 5: Poll for Terrakotta Results

Add a delay of 30 to 60 seconds, then poll Terrakotta to fetch the analysis results.


GET /v1/jobs/{job_id}/results

Results include cost per team member, cost per project, anomaly flags, and trend summaries:

{
  "job_id": "job_xyz789",
  "status": "complete",
  "summary": {
    "total_spend": 156.75,
    "average_daily_spend": 145.32,
    "spend_change_percent": 7.9
  },
  "by_user": [
    {
      "user_id": "user_sarah_123",
      "name": "Sarah Chen",
      "daily_spend": 45.50,
      "project_focus": "Q1 Customer Segmentation"
    }
  ],
  "by_project": [
    {
      "project_id": "proj_data_pipeline",
      "name": "Data Pipeline",
      "daily_spend": 89.20,
      "cost_per_token_out": 0.000156
    }
  ],
  "anomalies": [
    {
      "type": "unexpected_spike",
      "project": "proj_data_pipeline",
      "severity": "medium",
      "description": "Spending 23% above baseline for this day of week"
    }
  ]
}
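
A single delayed poll, continuing the Python sketches (job_id carried over from Step 4):

```python
import os
import time

import requests

TERRAKOTTA_KEY = os.environ["TERRAKOTTA_API_KEY"]

time.sleep(45)  # inside the 30-60 second window suggested above
resp = requests.get(
    f"https://api.terrakotta.ai/v1/jobs/{job_id}/results",  # job_id from Step 4
    headers={"Authorization": f"Bearer {TERRAKOTTA_KEY}"},
    timeout=30,
)
resp.raise_for_status()
results = resp.json()
if results["status"] != "complete":
    print("Still processing; see the retry tip under Pro Tips")
```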

Step 6: Publish Results to Deepnote

Finally, write these results into a Deepnote notebook that acts as your dashboard. You can either append to an existing notebook or trigger a specific notebook run that imports the data.


POST /v1/projects/{PROJECT_ID}/notebooks/{NOTEBOOK_ID}/cells
Content-Type: application/json

{
  "cell_type": "code",
  "source": "import json\ndata = json.loads('''%s''')\ndf = pd.DataFrame(data['by_user'])\nprint(df.to_string())"
}

Alternatively, write the data to a CSV file stored in Deepnote's file system via an API call, then have a scheduled Deepnote notebook read and visualise it using Pandas and Plotly.
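
The notebook side of that second approach is a few lines of pandas and Plotly. A sketch, assuming the workflow writes a file named daily_spend.csv containing the by_user records shown in Step 5:

```python
import pandas as pd
import plotly.express as px

# daily_spend.csv is whatever filename your workflow writes; the columns
# here mirror the by_user records from the Terrakotta results.
df = pd.read_csv("daily_spend.csv")

fig = px.bar(df, x="name", y="daily_spend",
             title="Yesterday's AI spend by team member")
fig.show()
```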

Complete Workflow in Make

Here's how the full sequence looks in Make's visual editor:

  1. Schedule trigger (Cron: 0 6 * * *)
  2. HTTP module: GET from Burnrate transactions endpoint
  3. Iterator: loop through each transaction
  4. HTTP module: GET from Deepnote activity logs (nested inside iterator)
  5. Filter: match transactions to activities by timestamp and user
  6. Aggregator: collect enriched transactions
  7. HTTP module: POST enriched data to Terrakotta
  8. Wait 45 seconds
  9. HTTP module: GET job results from Terrakotta
  10. HTTP module: POST/write results to Deepnote
  11. Notification (optional): send Slack alert if anomalies detected

Complete Workflow in n8n

In n8n, the nodes follow the same logic but with slightly different syntax:


Cron → HTTP Request (Burnrate) → Loop Over Items (transactions) → HTTP Request (Deepnote) → Filter → Merge (combine arrays) → HTTP Request (Terrakotta POST) → Wait → HTTP Request (Terrakotta GET) → HTTP Request (Deepnote Write) → Slack (optional)

n8n's expression language lets you reference previous node outputs with {{ $node["HTTP Request"].json.transactions }}, making it straightforward to thread data through steps.

The Manual Alternative

If you prefer not to automate immediately, you can run these steps manually once a week:

  1. Log into Burnrate, export the past week's transactions as CSV
  2. Open a Deepnote notebook and load the CSV
  3. Manually cross-reference user IDs with notebook titles from your Deepnote activity logs
  4. Copy the enriched data into Terrakotta's web interface
  5. Download the analysis PDF
  6. Distribute it to your team

This approach gives you full control and lets you ask ad hoc questions ("Why did the data pipeline project spike last Tuesday?"), but it takes about 30 minutes per week and is easy to skip when you're busy. The automated version removes that friction entirely.

Pro Tips

1. Handle Timeouts Gracefully

Terrakotta analysis sometimes takes longer than expected if you have thousands of transactions. In Make, add a retry mechanism: set the polling HTTP request to retry up to three times with exponential backoff (5 seconds, then 15 seconds, then 45 seconds). In n8n, enable Retry On Fail in the HTTP Request node's settings with similar timings. If it still fails, send yourself a Slack alert with the job ID so you can manually check the status later.


Retry Configuration:
Max retries: 3
Wait between retries: exponential
Initial delay: 5 seconds
Max delay: 60 seconds
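
If any part of the pipeline runs as a script rather than a Make/n8n module, the same policy translates directly. A sketch implementing the configuration above:

```python
import time
import requests

def get_with_retries(url: str, headers: dict,
                     delays: tuple = (5, 15, 45)) -> requests.Response:
    """Up to three retries with exponential backoff, per the config above;
    raises after the final attempt so the failure can trigger an alert."""
    for attempt, delay in enumerate((0, *delays)):
        time.sleep(delay)
        try:
            resp = requests.get(url, headers=headers, timeout=30)
            resp.raise_for_status()
            return resp
        except requests.RequestException as exc:
            if attempt == len(delays):
                raise  # out of retries -- surface the error (and ping Slack)
            print(f"Attempt {attempt + 1} failed ({exc}); retrying...")
```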

2. Rate Limiting

Burnrate allows 100 requests per minute per API key. If your organisation is large and has many transactions, paginate through results. Use the limit and offset query parameters:


GET /v1/transactions?date_from=2024-01-15&limit=500&offset=0
GET /v1/transactions?date_from=2024-01-15&limit=500&offset=500
GET /v1/transactions?date_from=2024-01-15&limit=500&offset=1000

Build logic into your workflow to loop, incrementing the offset, until a page comes back empty, as sketched below. Make's Iterator module and n8n's Loop Over Items node both handle this pattern natively.
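
The same loop in Python, assuming the limit/offset semantics shown above:

```python
import requests

def fetch_all_transactions(day: str, key: str, page_size: int = 500) -> list[dict]:
    """Page through Burnrate results until a page comes back empty."""
    transactions, offset = [], 0
    while True:
        resp = requests.get(
            "https://api.burnrate.io/v1/transactions",
            headers={"Authorization": f"Bearer {key}"},
            params={"date_from": day, "date_to": day,
                    "limit": page_size, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()["transactions"]
        if not page:
            return transactions
        transactions.extend(page)
        offset += page_size
```

If you expect more than a handful of pages, add a short sleep between requests to stay under the 100-requests-per-minute limit.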

3. Cost Baseline Awareness

Have Terrakotta calculate a rolling 14-day average before the analysis runs. When it flags a "spike", compare against the actual baseline, not against an arbitrary number. Ask Terrakotta to return confidence intervals so you know whether a 10% increase is statistically significant or noise. Review these anomalies weekly; some spikes are legitimate (running a major model training job) and shouldn't trigger alerts.
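
You can also sanity-check Terrakotta's flags locally. A sketch, assuming you append each day's total to a history.csv with date and total_spend columns:

```python
import pandas as pd

df = pd.read_csv("history.csv", parse_dates=["date"]).sort_values("date")

# Rolling 14-day baseline and spread; a z-score above ~2 is unlikely to
# be noise, while a "23% spike" with |z| < 1 probably is.
df["baseline"] = df["total_spend"].rolling(14, min_periods=7).mean()
df["spread"] = df["total_spend"].rolling(14, min_periods=7).std()
df["z_score"] = (df["total_spend"] - df["baseline"]) / df["spread"]

print(df.tail(7)[["date", "total_spend", "z_score"]])
```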

4. Segment by Team

If you have multiple teams within your organisation, ask Deepnote to tag activities by team (add a metadata field to notebooks). When you send data to Terrakotta, include the team field. This lets you show each team their own cost dashboard and avoid centralised blame. Sales team spending on Claude API calls is different from the data science team's GPU costs, and they should see their own numbers.

5. Archive Old Results

After 90 days, Deepnote notebooks become cluttered with historical dashboards. Modify step 6 to archive old data: instead of appending to the same notebook forever, create a new notebook each week named cost-dashboard-2024-w03 and keep the previous week's notebook read-only for reference. This keeps things tidy and lets you compare week-on-week trends easily.
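
Generating the weekly name is one line with ISO week numbering:

```python
from datetime import date

# %G/%V give the ISO year and week, matching names like cost-dashboard-2024-w03.
notebook_name = f"cost-dashboard-{date.today().strftime('%G-w%V')}"
print(notebook_name)
```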

Cost Breakdown

| Tool | Plan Needed | Monthly Cost | Notes |
| --- | --- | --- | --- |
| Burnrate | Pro | £45 | Includes API access and 10,000 transaction queries per month. Cost tracking only; no analysis included. |
| Deepnote | Teams | £120 | Collaboration and notebook hosting. You're using it for dashboard publication and team access. |
| Terrakotta AI | Analyst | £80 | Includes 5,000 spend analyses per month. Each daily run counts as one analysis. Overage: £0.02 per analysis. |
| Make | Pro | £99 | Supports complex multi-step workflows with 10,000 operations per month. Sufficient for daily runs. |
| n8n | Self-hosted | £0 (labour) | Zero licence cost if you self-host on your own server, but it requires DevOps effort to maintain. Cloud version is £20/month for the starter tier. |
| Zapier | Professional | £50 | Works, but less suitable here: the multi-step conditional logic requires a premium plan and quickly becomes expensive per action. |

Total Cost Estimate: around £344 per month for a team of 8 to 15 people, assuming Make orchestration.

If you self-host n8n on existing infrastructure (e.g., a spare EC2 instance), you drop to around £245 per month and gain full workflow transparency and control.

Implementation Checklist

  1. Generate API keys for Burnrate, Deepnote, and Terrakotta
  2. Create a Make or n8n account and log in
  3. Build modules/nodes step by step, testing each one with sample data
  4. Set the cron trigger to run at 06:00 UTC every weekday initially (use 0 6 * * 1-5 for Monday to Friday)
  5. Run the workflow manually once to verify end-to-end execution
  6. Check that Deepnote receives the results and renders the dashboard
  7. Share the dashboard notebook URL with your team and leadership
  8. Review results every Friday and adjust baseline thresholds based on what's normal for your business
  9. After two weeks of daily runs, extend to weekends as well if useful

By week three, you'll have enough historical data that Terrakotta's anomaly detection becomes meaningful. By week six, patterns emerge clearly: you'll know which projects are cost-efficient, which teams are exploring aggressively, and whether your spending is trending up or down.

The automation means nobody has to think about it. The dashboard refreshes while you're getting coffee on Monday morning.
