Alchemy Recipe · Advanced stack

Competitive market intelligence dashboard from pricing and product data

24 March 2026

Introduction

Staying ahead of competitors used to mean hiring a team of analysts to manually track pricing changes, monitor product updates, and synthesise market shifts into actionable intelligence. You'd spend hours each week copying data between spreadsheets, reformatting competitor information, and writing summaries that went stale within days. The larger your competitive set, the more time you lost to manual busywork.

What if you could have a live competitive dashboard updated automatically every morning? Imagine waking up to a Slack message that shows which competitors dropped prices yesterday, what new features launched, and what it all means for your positioning. That dashboard would pull raw data from multiple sources, summarise it intelligently, and surface only the insights that matter to your business.

This is where combining Deepnote, Smmry, and Terrakotta AI becomes powerful. You wire them together through an orchestration layer, and suddenly you have a system that runs without any manual handoff. This post walks through building exactly that: a competitive intelligence dashboard that operates entirely on automation.

The Automated Workflow

The core idea is straightforward: fetch pricing and product data from your competitors, summarise the noise out of it, categorise findings by importance, then visualise everything in Deepnote. Your orchestration tool (we'll use n8n as the example) triggers this chain daily and handles all the data moving between steps.

Architecture Overview

Here is the flow you are building:

  1. Data collection (competitor websites, APIs, feeds)
  2. Summarisation via Smmry (removing fluff, keeping substance)
  3. Classification and insight generation via Terrakotta AI (categorising changes as high, medium, or low impact)
  4. Storage and visualisation in Deepnote
  5. Daily scheduling and alert routing
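The chain in miniature, with each stage as a stub function (all names are illustrative; the real implementations come in Steps 1 to 6):

```javascript
// Pipeline stages as stub functions; the real versions come in Steps 1-6
const collect = () => [{ text: 'Competitor X cut its Pro plan from £29 to £24 this week.' }];
const summarise = items => items.map(i => ({ ...i, summary: i.text.slice(0, 40) }));
const classify = items => items.map(i => ({ ...i, impact: 'HIGH' }));
const store = items => items.length;   // pretend insert; returns rows written

console.log(store(classify(summarise(collect())))); // → 1
```

Each stage takes the previous stage's output, which is exactly how the n8n nodes hand data to one another below.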

Why n8n instead of Zapier or Make

Zapier and Make are fine for simple chains, but this workflow needs branching logic, conditional routing, and the ability to process arrays of competitor data in parallel. n8n gives you a visual workflow builder with enough flexibility to handle these requirements without writing much code. If you prefer code or need even more control, Claude Code can generate the entire n8n workflow definition as JSON.

Step 1: Trigger and Data Collection

Start with a scheduled trigger in n8n set to run daily at 06:00 UTC. From there, you need to gather competitor data. This depends on your data sources:

  • If competitors expose pricing via public APIs (rare but possible), fetch directly.

  • If they publish RSS feeds with product updates, poll those.

  • If you have web scraping permissions or use a scraping service, integrate that.

  • If you maintain a manual feed (e.g., a Google Sheet where your team logs competitor moves), read from there.

For this example, assume you are pulling from three sources: a competitor API, an RSS feed, and a manual Google Sheet. n8n has built-in connectors for all three. The raw request to the competitor pricing API looks like this:


GET https://api.competitor.com/pricing
Authorization: Bearer YOUR_API_KEY

Set up three separate HTTP request nodes in n8n, one for each source. Configure the requests like this:

{
  "method": "GET",
  "url": "https://api.competitor.com/pricing",
  "headers": {
    "Authorization": "Bearer YOUR_API_KEY",
    "Accept": "application/json"
  },
  "qs": {
    "since": "{{ $now.minus({ days: 1 }).toISODate() }}"
  }
}

The since parameter ensures you only fetch updates from the past 24 hours, keeping the data payload manageable.
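Outside n8n, the same request can be sketched in plain JavaScript; the endpoint and the since parameter are the hypothetical ones from the example above:

```javascript
// Build the pricing request URL covering the past 24 hours (endpoint is hypothetical)
function buildPricingUrl(baseUrl, now = new Date()) {
  const since = new Date(now.getTime() - 24 * 60 * 60 * 1000)
    .toISOString()
    .split('T')[0];                       // YYYY-MM-DD, one day back
  const url = new URL(baseUrl);
  url.searchParams.set('since', since);
  return url.toString();
}

console.log(buildPricingUrl('https://api.competitor.com/pricing',
  new Date('2026-03-24T06:00:00Z')));
// → https://api.competitor.com/pricing?since=2026-03-23
```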

For the RSS feed, use n8n's RSS connector. For Google Sheets, use the Google Sheets node and query your tracking sheet.

Step 2: Merge and Clean Data

Use n8n's Merge node to combine results from all three sources into a single array. Then add a Function node to standardise the data structure. All records should have the same shape: competitor name, product or feature, change type (price, feature, deprecation), original value, new value, and timestamp.

// n8n Function node: each item's data lives under item.json
return items.map(item => {
  const d = item.json;
  return {
    json: {
      competitor: d.company_name || d.source,
      product: d.product_title || d.name,
      changeType: d.type || 'update',
      originalValue: d.old_price || d.previous_version || null,
      newValue: d.new_price || d.current_version || null,
      timestamp: d.date || new Date().toISOString(),
      rawData: d
    }
  };
});

Step 3: Summarisation with Smmry

Smmry is a text summarisation API. Many of your data sources will include long descriptions, feature explanations, or marketing copy. Smmry strips this down to the essential points.

For each record, if there is a description field longer than 500 characters, send it to Smmry. Use an HTTP request node configured like this:


POST https://api.smmry.com/SM_API
Authorization: Smmry YOUR_API_KEY
sm_api_input: <competitor_description>
sm_api_length: 3

The sm_api_length parameter sets the number of sentences in the summary. Three sentences is usually enough to capture what changed without padding.

Map this in n8n by adding a Function node before the Smmry call that prepares the request payload:

return items.map(item => ({
  json: {
    ...item.json,
    summaryRequest: {
      sm_api_input: item.json.description || (item.json.rawData || {}).description || '',
      sm_api_length: 3
    }
  }
}));

Then chain an HTTP node that calls Smmry for each item. Use n8n's "Split in Batches" node to avoid hammering the API with thousands of parallel requests. Process items in batches of 10, with a 500ms delay between batches.
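The Split in Batches behaviour can be sketched in plain JavaScript: chunk the items, call the API once per item within a batch, and pause between batches (the summarise argument stands in for the Smmry call):

```javascript
// Chunk an array into fixed-size batches
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Process batches sequentially, pausing between them to respect rate limits
async function processInBatches(items, summarise, size = 10, delayMs = 500) {
  const results = [];
  for (const batch of chunk(items, size)) {
    results.push(...await Promise.all(batch.map(summarise)));
    await sleep(delayMs);
  }
  return results;
}
```

With 50 items and batches of 10, this makes five rounds of requests rather than 50 simultaneous ones.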

Step 4: Categorisation with Terrakotta AI

Terrakotta AI is a structured data classification API. Send it the summarised competitor data along with your business context, and it returns a structured classification: is this a price drop that threatens your margins? A feature launch that you need to respond to? A deprecation that barely affects you?

Create a prompt that Terrakotta understands. Here is an example structure:


Classify this competitor intelligence as HIGH, MEDIUM, or LOW impact for a B2B SaaS company selling project management software.

Competitor: {{ competitor }}
Product: {{ product }}
Change: {{ summary }}
Details: Price {{ originalValue }} → {{ newValue }}, Feature: {{ changeType }}

Respond with JSON:
{
  "impact": "HIGH|MEDIUM|LOW",
  "reason": "brief explanation",
  "recommendedAction": "what your company should do"
}

Set up an HTTP node in n8n that calls Terrakotta:


POST https://api.terrakotta.ai/v1/classify
Authorization: Bearer YOUR_TERRAKOTTA_KEY
Content-Type: application/json

Pass the prompt as the request body. Terrakotta returns structured JSON you can immediately use downstream.
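Because AI classifiers occasionally return malformed output, it is worth validating the response before it reaches the database. A minimal guard, assuming the JSON shape requested in the prompt above:

```javascript
// Validate a classification payload; fall back to MEDIUM so it gets human review
const IMPACT_LEVELS = ['HIGH', 'MEDIUM', 'LOW'];

function parseClassification(body) {
  let parsed;
  try {
    parsed = typeof body === 'string' ? JSON.parse(body) : body;
  } catch {
    parsed = {};                         // unparseable: treat as empty
  }
  return {
    impact: IMPACT_LEVELS.includes(parsed.impact) ? parsed.impact : 'MEDIUM',
    reason: parsed.reason || 'unparseable response',
    recommendedAction: parsed.recommendedAction || 'review manually'
  };
}

console.log(parseClassification('{"impact":"HIGH","reason":"price cut"}').impact); // → HIGH
```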

Step 5: Store in Deepnote

Deepnote is a collaborative notebook environment with built-in database and visualisation capabilities. Create a Deepnote project and set up a PostgreSQL connection (Deepnote provides a free tier with limited rows; upgrade if you need more).

Create a table called competitive_intelligence with these columns:


id (serial primary key)
competitor (text)
product (text)
changeType (text)
originalValue (text)
newValue (text)
summary (text)
impactLevel (text)
recommendedAction (text)
fetchedAt (timestamp)

Back in n8n, add a Postgres node to insert the processed records. Double-quote the camelCase column names so Postgres does not fold them to lowercase:

INSERT INTO competitive_intelligence 
(competitor, product, "changeType", "originalValue", "newValue", summary, "impactLevel", "recommendedAction", "fetchedAt")
VALUES 
($1, $2, $3, $4, $5, $6, $7, $8, NOW())

Map your n8n variables to the SQL parameters:

{
  "query": "INSERT INTO competitive_intelligence (competitor, product, \"changeType\", \"originalValue\", \"newValue\", summary, \"impactLevel\", \"recommendedAction\", \"fetchedAt\") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, NOW())",
  "queryParams": [
    "{{ $node.classifyData.json.competitor }}",
    "{{ $node.classifyData.json.product }}",
    "{{ $node.classifyData.json.changeType }}",
    "{{ $node.classifyData.json.originalValue }}",
    "{{ $node.classifyData.json.newValue }}",
    "{{ $node.classifyData.json.summary }}",
    "{{ $node.classifyData.json.impact }}",
    "{{ $node.classifyData.json.recommendedAction }}"
  ]
}

Step 6: Visualise and Alert

In Deepnote, create a notebook that queries this table and builds charts:

import pandas as pd
from deepnote import sql

df = sql("""
  SELECT "impactLevel", COUNT(*) AS count
  FROM competitive_intelligence
  WHERE "fetchedAt" > NOW() - INTERVAL '7 days'
  GROUP BY "impactLevel"
  ORDER BY CASE "impactLevel" WHEN 'HIGH' THEN 1 WHEN 'MEDIUM' THEN 2 ELSE 3 END
""")

df.plot(kind='bar', x='impactLevel', y='count')

You can also build a table view filtered to show only HIGH impact items, pinned to the top of your dashboard.

Finally, add a Slack notification step in n8n. After data is inserted, use n8n's Slack node to send a summary:

{
  "text": "Daily Competitive Intelligence Report",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "HIGH Impact Changes: {{ $node.countHigh.json.count }}\nMEDIUM Impact Changes: {{ $node.countMedium.json.count }}\nLOW Impact Changes: {{ $node.countLow.json.count }}"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "View full dashboard: https://your-deepnote-url.com"
      }
    }
  ]
}
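A small helper can assemble that payload from the day's counts (the count values and dashboard URL are placeholders, as above):

```javascript
// Build the Slack message body from impact counts and a dashboard link
function buildSlackMessage(counts, dashboardUrl) {
  const summary = `HIGH Impact Changes: ${counts.high}\n` +
    `MEDIUM Impact Changes: ${counts.medium}\n` +
    `LOW Impact Changes: ${counts.low}`;
  return {
    text: 'Daily Competitive Intelligence Report',
    blocks: [
      { type: 'section', text: { type: 'mrkdwn', text: summary } },
      { type: 'section',
        text: { type: 'mrkdwn', text: `View full dashboard: ${dashboardUrl}` } }
    ]
  };
}

const msg = buildSlackMessage({ high: 2, medium: 5, low: 9 }, 'https://your-deepnote-url.com');
console.log(msg.blocks[0].text.text.startsWith('HIGH Impact Changes: 2')); // → true
```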

Schedule this entire workflow to run daily at 06:00 UTC. n8n handles the scheduling; you just set it once.

The Manual Alternative

If you want more control over classifications or prefer to spot-check data before it reaches the dashboard, you can run the first four steps automatically but require human review before data enters Deepnote.

Add an n8n Wait node that pauses the workflow and sends a Slack message asking for approval. Use an interactive Slack button that routes back into n8n via webhook; approved items continue to the database, rejected items go to a separate "review queue" table for later analysis.

This adds a human gate but introduces a delay of several hours. For fast-moving markets, this defeats the purpose. Instead, consider running the full automation and reviewing HIGH-impact findings manually after they land in the dashboard. You get speed and oversight.

Pro Tips

1. Handle API Rate Limits Gracefully

Smmry and Terrakotta both have rate limits. Build retry logic into your n8n workflow: enable the HTTP node's built-in Retry On Fail setting, or implement exponential backoff yourself by waiting 2 seconds, then 4, then 8. After three failures, log the record to a separate "failed to classify" table and continue with the next item.

{
  "retries": 3,
  "delayInterval": 2000,
  "backoffMultiplier": 2
}
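If you implement the backoff yourself, the schedule from that config (2 second base, doubling per attempt) looks like this in plain JavaScript:

```javascript
// Compute exponential backoff delays: base, base*2, base*4, ...
function backoffDelays(retries, baseMs, multiplier) {
  return Array.from({ length: retries }, (_, i) => baseMs * multiplier ** i);
}

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Retry an async call, waiting between attempts; rethrows after the last failure
async function withRetry(fn, { retries = 3, baseMs = 2000, multiplier = 2 } = {}) {
  const delays = backoffDelays(retries, baseMs, multiplier);
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries - 1) throw err;   // give up: log and move on
      await sleep(delays[attempt]);
    }
  }
}

console.log(backoffDelays(3, 2000, 2)); // → [ 2000, 4000, 8000 ]
```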

2. Deduplicate Before Processing

Competitors often announce the same feature or price change across multiple channels. Before sending to Smmry, check if you have already processed this exact change in the past week. Add a deduplication step using a Function node:

// One lookup per item; db.query stands in for your Postgres helper
const out = [];
for (const item of items) {
  const existing = await db.query(
    "SELECT id FROM competitive_intelligence WHERE competitor = $1 AND product = $2 AND \"changeType\" = $3 AND \"fetchedAt\" > NOW() - INTERVAL '7 days'",
    [item.json.competitor, item.json.product, item.json.changeType]
  );
  if (existing.length === 0) out.push(item);
}
return out;
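If a database round-trip per item is too slow, one query for the week's existing keys plus an in-memory filter does the same job (a sketch, assuming the normalised field names from Step 2):

```javascript
// Deduplicate by (competitor, product, changeType), keeping the first occurrence
const keyOf = r => `${r.competitor}|${r.product}|${r.changeType}`;

function dedupe(records, alreadySeenKeys = []) {
  const seen = new Set(alreadySeenKeys);
  return records.filter(r => {
    const key = keyOf(r);
    if (seen.has(key)) return false;   // announced on another channel already
    seen.add(key);
    return true;
  });
}
```

Feed alreadySeenKeys from a single SELECT over the past week instead of one query per item.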

3. Cost Optimisation: Batch Summaries

Smmry charges per API call. If you have 50 competitor updates daily, you will hit their API 50 times. Instead, batch summaries: combine the text from all 50 updates into a single prompt, send one Smmry call, and parse the response. This reduces costs by up to 90% but requires slightly more complex parsing logic.
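A sketch of the batching idea: join the texts with a sentinel the summariser is asked to preserve, make one call, then split the response back out (the summarise argument stands in for the Smmry call, and preserving the sentinel is an assumption you must verify against real output):

```javascript
const SENTINEL = '\n=====\n';           // delimiter the API is asked to keep intact

// Combine many update texts into one payload
const combine = texts => texts.join(SENTINEL);

// Split the single summarised response back into per-update summaries
const split = response => response.split(SENTINEL.trim()).map(s => s.trim());

// One round trip instead of texts.length round trips
async function batchSummarise(texts, summarise) {
  const response = await summarise(combine(texts));
  return split(response);
}
```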

4. Monitor Data Quality

Add a data quality check node that flags records with missing critical fields (e.g., no summary, no impact classification). Route these to a Slack channel for manual review. Do not let bad data poison your dashboard.

5. Cache Competitor Info

If you query the same competitor APIs every day, consider caching responses for 12 hours. n8n has no dedicated cache node, so store the last fetch and its timestamp in a separate table (or in workflow static data) and skip the call while the cached copy is still fresh. This avoids redundant API calls and reduces your monthly costs.
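A 12-hour cache is only a few lines if you keep it in memory or serialise the map into a table; the injectable clock makes it testable:

```javascript
// Tiny TTL cache: return the cached value while it is fresher than ttlMs
function makeCache(ttlMs, clock = Date.now) {
  const entries = new Map();
  return {
    get(key) {
      const hit = entries.get(key);
      if (!hit || clock() - hit.at > ttlMs) return undefined;  // stale or missing
      return hit.value;
    },
    set(key, value) {
      entries.set(key, { value, at: clock() });
    }
  };
}
```

Wrap each competitor fetch: check cache.get(url) first and only call the API on a miss.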

Cost Breakdown

Tool           Plan Needed           Monthly Cost   Notes
n8n            Cloud Pro             £40            Includes 2,000 workflow executions, sufficient for daily runs across multiple sources
Smmry          API (pay-as-you-go)   £10–30         Depends on volume; 50 summaries daily ≈ £15/month
Terrakotta AI  Standard              £25–50         Per 1,000 API calls; 50 classifications daily ≈ £38/month
Deepnote       Pro                   £30            Includes PostgreSQL database, 50 GB storage, collaborative editing
Slack          Pro                   £8             If not already using; includes workflow integration
Total                                £113–158       All-in monthly cost for full automation

If your team is already using Slack and Deepnote, you only pay for n8n, Smmry, and Terrakotta, bringing the cost down to roughly £75–120 per month (about £93 at the volumes estimated above).

This workflow requires upfront configuration but pays for itself quickly. You save five to ten hours per week of manual analysis. Your competitive response time drops from days to hours. And your dashboard is always up to date, sourcing from multiple channels and filtering noise automatically. That is what genuine automation looks like.