Competitive market intelligence dashboard from pricing and product data
Your sales team spots a competitor's new pricing tier. Your product manager notices feature parity shifting. Your executives ask "what's everyone else doing?" and you scramble to find the answer. Competitive intelligence takes time, and static spreadsheets age faster than yesterday's API docs.
Most companies handle competitive analysis manually: someone checks competitor websites monthly, pastes pricing into a spreadsheet, summarises findings in Slack. The data gets stale, updates get missed, and nobody sees the patterns emerging across markets or customer segments. You're working from yesterday's intelligence in a market that moves weekly.
This workflow automates that grind. You'll collect pricing and product data from competitors, distil it into summaries, and feed everything into a live intelligence dashboard. No manual copying. No missed updates. Just fresh competitive data flowing into a tool your team actually uses. We're building this with Deepnote for the dashboard, SMMRY for intelligent text summarisation, and Terrakotta AI for structured data extraction, orchestrated through n8n for reliability.
The Automated Workflow
Architecture Overview
Your workflow runs on a schedule (daily or weekly), pulls competitor data from sources you've defined, processes it through summarisation and extraction, then updates a Deepnote dashboard. Think of it as a data pipeline: raw input → cleaned and parsed → analysed and summarised → visualised for your team.
We're using n8n as the orchestrator because it handles scheduled triggers, complex conditional logic, and API calls without requiring code deployment. Zapier and Make both work too, but n8n runs on your own infrastructure, which matters when you're moving data daily and want to avoid per-execution charges.
Step 1: Trigger and Data Collection
Your workflow starts on a schedule. n8n's cron trigger fires daily at 6 AM GMT, or weekly on Monday mornings. You define this in the workflow editor:
Trigger: Cron
Expression: 0 6 * * * (daily at 6 AM)
Timezone: Europe/London
From here, you make HTTP requests to collect competitor data. Most companies use a mix of sources: public APIs (if competitors expose them), web scraping endpoints you've built, or manual uploads to a shared folder that n8n monitors.
For this example, assume you're pulling from three sources:
- A pricing API endpoint you've built that scrapes competitor websites regularly.
- Product change feeds (from RSS or a webhook if you're monitoring them).
- A Google Sheets document where your sales team manually logs competitive wins and product observations.
Your n8n workflow makes these requests in parallel:
// In n8n, use HTTP Request nodes for each source
// Node 1: Fetch pricing data
POST https://your-pricing-api.example.com/v1/competitors/current
Headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Accept": "application/json"
}
// Node 2: Fetch product data
GET https://your-product-feed.example.com/changes?days=7
Headers: {
"Authorization": "Bearer YOUR_API_KEY"
}
// Node 3: Fetch from Google Sheets
GET https://sheets.googleapis.com/v4/spreadsheets/SHEET_ID/values/CompetitiveData!A:Z
Headers: {
"Authorization": "Bearer GOOGLE_API_KEY"
}
All three requests run as parallel branches. A Merge node brings the branches back together once every request has completed, and from there you combine the results into a single object:
{
"pricing_data": [...],
"product_changes": [...],
"manual_observations": [...]
}
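If you'd like to prototype the collection step outside n8n before wiring up the nodes, a minimal Python sketch might look like this. The endpoint URLs and keys are the placeholders from the nodes above, and concurrent.futures stands in for n8n's parallel branches:
# Prototype of the collection step: three placeholder sources fetched concurrently, then merged.
from concurrent.futures import ThreadPoolExecutor
import requests

HEADERS = {"Authorization": "Bearer YOUR_API_KEY", "Accept": "application/json"}

def fetch_pricing():
    r = requests.post("https://your-pricing-api.example.com/v1/competitors/current",
                      headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

def fetch_product_changes():
    r = requests.get("https://your-product-feed.example.com/changes",
                     params={"days": 7}, headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

def fetch_manual_observations():
    url = "https://sheets.googleapis.com/v4/spreadsheets/SHEET_ID/values/CompetitiveData!A:Z"
    r = requests.get(url, headers={"Authorization": "Bearer GOOGLE_API_KEY"}, timeout=30)
    r.raise_for_status()
    return r.json().get("values", [])

# Fire all three requests at once, then merge into the single object n8n would pass downstream.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {
        "pricing_data": pool.submit(fetch_pricing),
        "product_changes": pool.submit(fetch_product_changes),
        "manual_observations": pool.submit(fetch_manual_observations),
    }
    merged = {key: f.result() for key, f in futures.items()}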
Step 2: Text Summarisation with SMMRY
You've now got raw competitive data: product descriptions, feature announcements, pricing pages, market commentary from your sales team. SMMRY condenses verbose text into digestible summaries.
SMMRY is a summarisation API. You send it text, specify a desired sentence count, and it returns a condensed version. It's useful for competitor product pages (which tend to be marketing fluff) and for lengthy changelogs.
In n8n, add an HTTP Request node that calls SMMRY for each piece of text:
// SMMRY API call for product description summarisation
POST https://api.smmry.com/SM_API_KEY
Content-Type: application/x-www-form-urlencoded
Body:
sm_api_input: <html>COMPETITOR_PRODUCT_DESCRIPTION_HTML</html>
sm_api_input_type: html
sm_api_length: 4
SMMRY returns:
{
"sm_api_content": "Summarized text here in 4 sentences",
"sm_api_character_count": 312,
"sm_api_item_keywords": ["feature1", "feature2", "feature3"]
}
For each competitor, you're creating a summary layer. Pricing page? Summarised. Feature announcement? Summarised. This reduces noise and makes patterns clearer when you're reviewing the dashboard.
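If you call SMMRY from a Code node or a standalone script rather than an HTTP Request node, the same request looks like this in Python. The endpoint and parameter names mirror the example above; check SMMRY's current documentation before relying on them:
# Summarise one piece of competitor text via SMMRY, mirroring the request shown above.
# Parameter names (sm_api_input, sm_api_input_type, sm_api_length) follow this article's example.
import requests

SMMRY_API_KEY = "SM_API_KEY"  # placeholder

def summarise(text_or_html, sentences=4):
    resp = requests.post(
        f"https://api.smmry.com/{SMMRY_API_KEY}",
        data={
            "sm_api_input": text_or_html,
            "sm_api_input_type": "html",
            "sm_api_length": sentences,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expect sm_api_content, sm_api_item_keywords, ...

summary = summarise("<html>COMPETITOR_PRODUCT_DESCRIPTION_HTML</html>")
print(summary.get("sm_api_content"))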
Step 3: Data Extraction and Structuring with Terrakotta AI
Summaries help humans scan faster, but dashboards need structured data. Terrakotta AI extracts and classifies information from text, turning unstructured competitive intelligence into a schema you can query.
Terrakotta accepts text and a JSON schema describing what you want to extract. You define a schema for competitor data:
{
"type": "object",
"properties": {
"competitor_name": {
"type": "string",
"description": "Company name"
},
"primary_pricing_model": {
"type": "string",
"enum": ["subscription", "usage-based", "perpetual", "hybrid"],
"description": "How they charge customers"
},
"base_price_usd": {
"type": "number",
"description": "Entry-level monthly or annual price in USD"
},
"key_features": {
"type": "array",
"items": {"type": "string"},
"description": "Top 5 product features mentioned"
},
"target_market": {
"type": "string",
"enum": ["enterprise", "mid-market", "smb", "self-serve"],
"description": "Primary customer segment"
},
"recent_changes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"change": {"type": "string"},
"date": {"type": "string", "format": "date"},
"category": {"type": "string", "enum": ["pricing", "feature", "integration"]}
}
}
}
},
"required": ["competitor_name", "primary_pricing_model", "key_features"]
}
In n8n, you call Terrakotta after summarisation:
POST https://api.terrakotta.ai/v1/extract
Headers: {
"Authorization": "Bearer TERRAKOTTA_API_KEY",
"Content-Type": "application/json"
}
Body:
{
"text": "SUMMARIZED_COMPETITOR_TEXT_HERE",
"schema": {
// Your schema from above
}
}
Terrakotta returns extracted data matching your schema:
{
"competitor_name": "CompetitorCorp",
"primary_pricing_model": "subscription",
"base_price_usd": 99,
"key_features": [
"API access",
"Custom workflows",
"Slack integration",
"Audit logs",
"SSO"
],
"target_market": "mid-market",
"recent_changes": [
{
"change": "Added native Zapier integration",
"date": "2024-01-15",
"category": "integration"
}
]
}
You now have cleaned, structured competitive data.
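Before storing it, it's worth normalising what comes back: extraction models occasionally omit optional fields or return oddly formatted dates. A defensive sketch, using the field names from the schema above:
# Normalise one Terrakotta extraction result before storage.
# Field names match the schema defined above; the defaults are a defensive assumption.
from datetime import date

def normalise_extraction(raw):
    record = {
        "competitor_name": (raw.get("competitor_name") or "").strip(),
        "primary_pricing_model": raw.get("primary_pricing_model", "unknown"),
        "base_price_usd": raw.get("base_price_usd"),        # may be None if not found
        "key_features": raw.get("key_features", [])[:5],     # schema asks for the top 5
        "target_market": raw.get("target_market", "unknown"),
        "recent_changes": [],
    }
    for change in raw.get("recent_changes", []):
        iso_date = change.get("date")
        try:
            date.fromisoformat(iso_date)                      # validate, but keep the string
        except (TypeError, ValueError):
            iso_date = None
        record["recent_changes"].append({**change, "date": iso_date})
    if not record["competitor_name"]:
        raise ValueError("Extraction result is missing competitor_name")
    return record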
Step 4: Store in a Database
Before pushing to the dashboard, store this structured data somewhere queryable. Most teams use PostgreSQL or a managed option like Supabase. In n8n, use a "Postgres" node (or equivalent for your database):
INSERT INTO competitor_intelligence (
competitor_name,
pricing_model,
base_price_usd,
key_features,
target_market,
recent_changes,
updated_at
) VALUES (
$1, $2, $3, $4, $5, $6, NOW()
)
ON CONFLICT (competitor_name)
DO UPDATE SET
pricing_model = $2,
base_price_usd = $3,
key_features = $4,
target_market = $5,
recent_changes = $6,
updated_at = NOW()
This upserts (insert or update) competitor records. Each run adds or refreshes the data.
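If you run the upsert from a script or an n8n Code step instead of the Postgres node, the parameter binding looks roughly like this. The sketch assumes psycopg2, a unique constraint on competitor_name (which ON CONFLICT requires), and JSONB columns for key_features and recent_changes:
# Upsert one structured competitor record; assumes psycopg2 and JSONB columns.
import psycopg2
from psycopg2.extras import Json

UPSERT_SQL = """
    INSERT INTO competitor_intelligence
        (competitor_name, pricing_model, base_price_usd, key_features,
         target_market, recent_changes, updated_at)
    VALUES (%s, %s, %s, %s, %s, %s, NOW())
    ON CONFLICT (competitor_name) DO UPDATE SET
        pricing_model  = EXCLUDED.pricing_model,
        base_price_usd = EXCLUDED.base_price_usd,
        key_features   = EXCLUDED.key_features,
        target_market  = EXCLUDED.target_market,
        recent_changes = EXCLUDED.recent_changes,
        updated_at     = NOW();
"""

def upsert_record(conn, record):
    with conn.cursor() as cur:
        cur.execute(UPSERT_SQL, (
            record["competitor_name"],
            record["primary_pricing_model"],
            record.get("base_price_usd"),
            Json(record.get("key_features", [])),    # serialise lists as JSONB
            record.get("target_market"),
            Json(record.get("recent_changes", [])),
        ))
    conn.commit()

# Usage: conn = psycopg2.connect(DATABASE_URL); upsert_record(conn, normalised_record)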
Step 5: Query and Visualise in Deepnote
Deepnote is a collaborative notebook environment with live SQL queries and charting. Create a new Deepnote project and connect it to your Postgres database.
In Deepnote, write SQL to pull your competitive data:
SELECT
competitor_name,
pricing_model,
base_price_usd,
key_features,
target_market,
updated_at
FROM competitor_intelligence
ORDER BY updated_at DESC
Then create charts. A scatter plot comparing base price against number of features:
import pandas as pd
import plotly.express as px

# query_result holds the output of the SQL block above (Deepnote returns SQL results as a DataFrame)
df = pd.DataFrame(query_result)
df['feature_count'] = df['key_features'].apply(len)
fig = px.scatter(
df,
x='feature_count',
y='base_price_usd',
hover_name='competitor_name',
title='Competitor Positioning: Feature Count vs Price',
labels={'feature_count': 'Number of Key Features', 'base_price_usd': 'Base Price (USD)'}
)
fig.show()
A table showing recent changes:
recent_changes_rows = []
for _, row in df.iterrows():
    for change in row['recent_changes']:
        recent_changes_rows.append({
            'Competitor': row['competitor_name'],
            'Change': change['change'],
            'Date': change['date'],
            'Category': change['category']
        })
changes_df = pd.DataFrame(recent_changes_rows)
changes_df = changes_df.sort_values('Date', ascending=False)
changes_df
A breakdown of pricing models:
pricing_breakdown = df.groupby('pricing_model').size()
fig = px.pie(
values=pricing_breakdown.values,
names=pricing_breakdown.index,
title='Pricing Model Distribution Among Competitors'
)
fig.show()
Deepnote re-runs these queries whenever the notebook executes; schedule the notebook (or refresh it) after each n8n run and the visualisations stay current. Your team sees fresh competitive intelligence without touching a spreadsheet.
Connecting It All: The n8n Workflow Structure
Here's how the nodes flow in n8n:
1. Cron Trigger (daily 6 AM)
↓
2. HTTP Request: Fetch pricing data (parallel)
3. HTTP Request: Fetch product changes (parallel)
4. HTTP Request: Fetch Google Sheets (parallel)
↓ (Wait for all)
5. Merge results into single object
↓
6. Loop over each competitor:
6a. Call SMMRY to summarise text
6b. Call Terrakotta to extract structured data
↓
7. Postgres: Upsert all records
↓
8. Slack notification: Send summary to #competitive-intel
The loop (step 6) is critical. If you have 10 competitors, the workflow processes each one through both SMMRY and Terrakotta sequentially or in batches. n8n's "Loop Over Items" (Split in Batches) node handles this.
Optional: add error handling. If SMMRY or Terrakotta fails for one competitor, catch the error, log it, and continue. n8n supports this through a node's "Continue On Fail" setting or a dedicated error workflow (Error Trigger node):
// If extraction fails, send alert but don't stop workflow
Try SMMRY call
Catch: Send Slack message to #alerts
Then continue
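If the per-competitor processing lives in a Code node or a standalone script, the same "log it and carry on" pattern is an ordinary try/except around each iteration. Here summarise, extract_structured, upsert_record, and notify_slack are hypothetical helpers standing in for the SMMRY, Terrakotta, Postgres, and Slack steps:
# Process each competitor independently so one failure doesn't abort the whole run.
# summarise(), extract_structured(), upsert_record() and notify_slack() are hypothetical helpers.
import logging

def process_all(competitors, conn):
    for comp in competitors:
        try:
            summary = summarise(comp["raw_text"])                    # SMMRY step
            record = extract_structured(summary["sm_api_content"])   # Terrakotta step
            upsert_record(conn, record)                              # Postgres step
        except Exception as exc:
            logging.exception("Failed to process %s", comp.get("name", "unknown"))
            notify_slack(f"⚠️ Competitive-intel run failed for {comp.get('name')}: {exc}")
            continue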
The Manual Alternative
If you prefer not to automate the full pipeline, you can automate parts. For instance, run the n8n workflow to collect and store data, but build your own Deepnote dashboard with custom SQL and Python analysis. This gives you flexibility if your competitive intelligence needs are unusual or frequently changing.
Alternatively, skip Deepnote entirely. Export data from Postgres to CSV weekly, share in Google Drive, and analysts update a shared sheet with observations. You lose the real-time refresh, but you keep full control and avoid tool dependencies.
Some teams skip Terrakotta and instead use Claude for custom extraction logic, either prompted directly from the orchestration tool or via code written with Claude Code. Claude can parse complex product comparisons and infer market positioning from raw HTML. This can be cheaper but slower: API calls to Claude run synchronously and take seconds per document.
Pro Tips
Rate Limits and Throttling
SMMRY allows 200 requests daily on the free tier; Terrakotta bills by usage. If you have 50 competitors and run daily, you'll hit limits fast. In n8n, add delays between API calls: a "Wait" node set to 0.5 seconds between requests spreads the load and keeps you within quotas.
Alternatively, run the full workflow weekly instead of daily. Competitive data doesn't change that rapidly; weekly summaries are sufficient for most teams.
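If you drive the API calls from a script rather than a Wait node, the same throttling is a short helper. The 0.5-second spacing and single retry on HTTP 429 are illustrative, not vendor guidance:
# Space out requests and retry once on HTTP 429, matching the 0.5-second spacing suggested above.
import time
import requests

def throttled_post(url, delay=0.5, **kwargs):
    time.sleep(delay)                        # spread requests to stay inside daily quotas
    resp = requests.post(url, **kwargs)
    if resp.status_code == 429:              # rate-limited: back off and retry once
        time.sleep(5)
        resp = requests.post(url, **kwargs)
    resp.raise_for_status()
    return resp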
Cost Optimisation
Terrakotta charges per extraction. Before paying for extraction on every field, evaluate what you actually need. Price and features? Extract those. Recent changes? Only if you're actively tracking them. Build your schema lean, not comprehensive. You can always add fields later.
SMMRY is cheap (cents per request), so summarise everything. Deepnote has a free tier for up to 5 collaborators; beyond that, you pay per user. If your team is small, the free tier works.
n8n is self-hosted (free) or cloud-based (paid on usage). Self-hosting is best if you run many workflows; cloud is fine for one or two.
Handling Extraction Failures
Not all competitors have clean product pages. Some hide pricing behind a contact form; others list features in unstructured text. Terrakotta will do its best, but flag uncertain extractions so your team knows which numbers to trust. Add a confidence score to your schema:
{
"base_price_usd": {
"type": "number"
},
"price_confidence": {
"type": "string",
"enum": ["high", "medium", "low"]
}
}
When visualising in Deepnote, filter out low-confidence data or flag it differently. This prevents your team from making decisions on bad data.
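In Deepnote, that filter is a couple of lines against the extracted columns (assuming price_confidence made it into the table and the dataframe):
# Keep low-confidence prices out of the charts, but surface them for manual review.
high_confidence = df[df['price_confidence'].isin(['high', 'medium'])]
needs_review = df[df['price_confidence'] == 'low']
print(f"{len(needs_review)} competitor(s) flagged for manual price verification")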
Monitoring Data Quality
Add a Deepnote cell that checks for data anomalies:
missing_data = df[
    (df['key_features'].isna()) |
    (df['base_price_usd'].isna())
]
if not missing_data.empty:
    print(f"⚠️ Missing data for: {', '.join(missing_data['competitor_name'])}")
    print("Manual review needed")
Share this cell with your team so they know which records need human validation.
Staying Compliant
When scraping or monitoring competitor websites, respect robots.txt and terms of service. Most pricing pages are public and fair game, but if a competitor explicitly forbids automated access, honour that. Public APIs and RSS feeds are generally safe.
GDPR and similar regulations don't typically apply to competitive intelligence (you're not collecting customer data), but if you're storing third-party data, document where it comes from and how long you keep it. A simple note in Deepnote: "Data refreshed daily from public sources, kept for 90 days."
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| n8n (self-hosted) | Community (self-hosted) | £0 | On your own server; no transaction fees |
| n8n (cloud) | Starter | £20–50 | Cloud-hosted, billed on workflow executions |
| SMMRY | Free or Premium | £0–10 | 200 requests/day free; premium for higher limits |
| Terrakotta AI | Pay-as-you-go | £20–100 | Per extraction; depends on volume |
| Deepnote | Free or Pro | £0–20 | Free for up to 5 users; Pro for more |
| PostgreSQL | Managed (Supabase, etc.) | £10–50 | Depends on data volume; most setups use free tier |
| Total (typical setup) |  | £50–220 | Self-hosted n8n + free Deepnote saves £20–30/month |
This scales with competitor count. 10 competitors, run weekly: £50. 50 competitors, run daily: £200. Adjust extraction frequency and tool tiers based on your budget.
Start small: pick three competitors, one data source, and a weekly schedule. Once you see the value and your team trusts the data, expand to more competitors and real-time updates. The workflow runs unattended after that; your team gets a living competitive dashboard without someone babysitting spreadsheets.