Competitor pricing analysis and dynamic pricing recommendation engine
Pricing strategy is one of the most important levers you can pull to improve margins and stay competitive. The problem is that most businesses either set prices once and forget about them, or they manually track competitor pricing and adjust quarterly. Both approaches leave money on the table.
What if you could monitor your competitors' prices in real time, analyse the data automatically, and get dynamic pricing recommendations delivered to your team every single morning? Not a static report, but intelligent suggestions that account for demand signals, inventory levels, and market movement. That's what this workflow does.
The challenge is that no single tool handles the entire process. You need a data notebook to run analysis scripts, a tool to scrape or fetch competitor pricing data, another to generate pricing recommendations, and an orchestrator to connect everything together. This Alchemy shows you how to wire Deepnote, Finster AI, and Terrakotta AI together with n8n to build a pricing recommendation engine that works while you sleep.
The Automated Workflow
We'll use n8n as the orchestration layer because it has excellent integration with Python notebooks and webhook capabilities. The workflow runs once daily, triggered at 6:00 AM UTC.
Architecture overview
The workflow follows this sequence:
- HTTP trigger fires at 6:00 AM
- Fetch competitor pricing data from Finster AI
- Send raw data to Deepnote for analysis
- Terrakotta AI generates pricing recommendations
- Webhook sends recommendations to Slack or your CRM
This approach means you're not manually running notebooks or copying data between tools. Everything flows automatically.
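Before wiring up the individual nodes, it can help to see the whole sequence as one function. This is a minimal orchestration sketch, not n8n code: each argument is a callable standing in for one node (Finster, Deepnote, Terrakotta, Slack/CRM), so the data flow between steps is explicit.

```python
def run_daily_pricing_workflow(fetch_prices, analyse, recommend, deliver):
    """Orchestration sketch of the five steps above. Each callable
    stands in for one n8n node in the daily pipeline."""
    raw = fetch_prices()        # Step 2: Finster AI pricing snapshot
    analysis = analyse(raw)     # Step 3: Deepnote notebook analysis
    recs = recommend(analysis)  # Step 4: Terrakotta AI recommendations
    deliver(recs)               # Step 5: Slack / CRM delivery
    return recs
```

Each stage only consumes the previous stage's output, which is exactly the property that lets n8n run the chain unattended.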
Step 1: Set up the n8n workflow trigger
Create a new workflow in n8n and add a Cron trigger node. Configure it to fire daily at 6:00 AM UTC.
Cron expression: 0 6 * * *
Timezone: UTC
This ensures your pricing analysis runs before your team arrives, so fresh recommendations are waiting for review at the start of the day.
Step 2: Fetch competitor pricing with Finster AI
Finster AI specialises in competitive intelligence scraping. You'll authenticate with an API key and query their endpoint for competitor price data.
First, create a credential in n8n:
- Go to Credentials and create a new "Custom API" credential
- Set the header: Authorization: Bearer YOUR_FINSTER_API_KEY
- Save the credential
Then add an HTTP Request node to your workflow:
Method: GET
URL: https://api.finsterai.com/v1/competitors/pricing
Headers:
Authorization: Bearer YOUR_FINSTER_API_KEY
Query Parameters:
competitors: ["competitor_a", "competitor_b", "competitor_c"]
product_ids: ["SKU001", "SKU002", "SKU003"]
lookback_days: 7
Configure the node to output JSON. Finster will return something like:
{
"snapshot_date": "2024-01-15",
"competitors": [
{
"name": "competitor_a",
"product_id": "SKU001",
"price": 149.99,
"currency": "GBP",
"in_stock": true,
"last_checked": "2024-01-15T10:23:00Z"
},
{
"name": "competitor_b",
"product_id": "SKU001",
"price": 145.50,
"currency": "GBP",
"in_stock": true,
"last_checked": "2024-01-15T10:24:00Z"
}
]
}
Save this output as a variable called competitor_data. You'll pass it to Deepnote next.
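If you want to test the Finster call outside n8n first, here is a sketch using only the Python standard library. The endpoint and parameter names come from this step; the exact wire format (for example, how the list parameters are encoded) is an assumption, so check Finster's API documentation before relying on it.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_FINSTER_API_KEY"  # load from a secret store in practice

def build_pricing_query(competitors, product_ids, lookback_days=7):
    """Encode the query parameters shown above as a URL query string.
    Lists are assumed to be comma-separated."""
    return urllib.parse.urlencode({
        "competitors": ",".join(competitors),
        "product_ids": ",".join(product_ids),
        "lookback_days": lookback_days,
    })

def fetch_competitor_pricing(competitors, product_ids, lookback_days=7):
    """GET the pricing snapshot and return the parsed JSON body."""
    url = ("https://api.finsterai.com/v1/competitors/pricing?"
           + build_pricing_query(competitors, product_ids, lookback_days))
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {API_KEY}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Once the standalone call returns the expected JSON shape, transferring the same URL, headers, and parameters into the n8n HTTP Request node is mechanical.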
Step 3: Analyse data in Deepnote
Deepnote is a collaborative data notebook. You'll create a notebook that accepts competitor pricing data, runs statistical analysis, and outputs insights.
In Deepnote, create a notebook called "Pricing Analysis Engine" and add this Python code:
import pandas as pd
import json
from datetime import datetime
competitor_data = json.loads("""{{ $node["HTTP Request"].json.body }}""")
# Convert to DataFrame
competitors = competitor_data['competitors']
df = pd.DataFrame(competitors)
# Calculate key metrics
price_stats = df.groupby('product_id')['price'].agg([
'mean',
'min',
'max',
'std'
]).round(2)
# Identify outliers and trends
analysis_results = []
for product_id in df['product_id'].unique():
product_data = df[df['product_id'] == product_id]
avg_price = product_data['price'].mean()
our_price = 159.99 # Replace with dynamic lookup from your DB
price_gap = round((our_price - avg_price) / avg_price * 100, 2)
analysis_results.append({
'product_id': product_id,
'competitor_avg': round(avg_price, 2),
'our_price': our_price,
'price_gap_percent': price_gap,
'competitor_count': len(product_data),
'lowest_competitor': product_data['price'].min(),
'highest_competitor': product_data['price'].max(),
'timestamp': datetime.now().isoformat()
})
analysis_df = pd.DataFrame(analysis_results)
analysis_json = analysis_df.to_json(orient='records')
print(analysis_json)
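To sanity-check the price-gap formula in the notebook before hooking it up, you can reproduce it in pure Python with the two SKU001 prices from the sample Finster payload in Step 2:

```python
from statistics import mean

# The two SKU001 competitor prices from the Step 2 sample payload.
competitor_prices = [149.99, 145.50]
our_price = 159.99  # same placeholder as in the notebook

avg_price = mean(competitor_prices)  # 147.745
# Percentage gap between our price and the competitor average,
# matching the notebook's calculation.
price_gap = round((our_price - avg_price) / avg_price * 100, 2)
print(avg_price, price_gap)  # SKU001 sits about 8.29% above the average
```

A positive gap means you are priced above the competitor average, which is the signal Terrakotta uses in the next step.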
In n8n, add a Deepnote node that calls this notebook:
Deepnote API Endpoint: https://api.deepnote.com/v1/notebooks/{notebook_id}/run
Method: POST
Headers:
Authorization: Bearer YOUR_DEEPNOTE_API_KEY
Body:
{
"environment": "production",
"parameters": {
"competitor_data": {{ $node["HTTP Request"].json }}
}
}
Wait for the notebook execution to complete (Deepnote will webhook you back), then extract the analysis results:
{{ $node["Deepnote"].json.output }}
Store this as pricing_analysis.
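If you prefer polling over Deepnote's webhook callback, a generic poll-until-done helper looks like this. The status strings and the `check_status` callable are assumptions; the exact run-status endpoint and field names depend on the Deepnote API, so adapt accordingly.

```python
import time

def wait_for_completion(check_status, timeout_s=300, poll_s=5):
    """Poll `check_status` (any callable returning a status string,
    e.g. a wrapper around Deepnote's run-status API) until the run
    finishes or the timeout elapses."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = check_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_s)
    raise TimeoutError("notebook run did not finish in time")
```

In n8n the equivalent is a Wait node plus an IF node checking the status field, but the webhook callback avoids polling entirely.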
Step 4: Generate recommendations with Terrakotta AI
Terrakotta AI specialises in pricing recommendations using machine learning. It takes historical pricing data and market signals to suggest optimal prices.
Add an HTTP Request node:
Method: POST
URL: https://api.terrakottaai.com/v1/pricing/recommend
Headers:
Authorization: Bearer YOUR_TERRAKOTTA_API_KEY
Content-Type: application/json
Body:
{
"analysis_data": {{ $node["Deepnote"].json.output }},
"optimization_goal": "margin_maximization",
"constraints": {
"min_price": 99.99,
"max_price": 199.99,
"min_margin_percent": 35
},
"demand_signals": {
"product_trend": "increasing",
"seasonality": "peak_season",
"inventory_level": "moderate"
}
}
Terrakotta will respond with recommendations:
{
"recommendations": [
{
"product_id": "SKU001",
"current_price": 159.99,
"recommended_price": 164.99,
"confidence_score": 0.92,
"reasoning": "Competitors averaging 155.00. Demand trending up. Recommend premium positioning with high confidence.",
"projected_impact": {
"margin_increase_percent": 3.1,
"volume_risk_percent": -1.2
}
},
{
"product_id": "SKU002",
"current_price": 79.99,
"recommended_price": 72.50,
"confidence_score": 0.78,
"reasoning": "Competitor saturation at 68.00-75.00. Price leader occupying 69.99. Recommend slight reduction to defend volume.",
"projected_impact": {
"margin_increase_percent": -2.1,
"volume_risk_percent": 4.5
}
}
],
"analysis_timestamp": "2024-01-15T06:15:00Z"
}
Store this as pricing_recommendations.
Step 5: Deliver recommendations to your team
Add a Slack node to send a formatted message to your pricing team:
Channel: #pricing-decisions
Message Format: Block Kit
{
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "Daily Pricing Recommendations"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Generated:* {{ $node['Terrakotta'].json.analysis_timestamp }}\n*Confidence:* Based on {{ $node['Deepnote'].json.competitor_count }} competitors"
}
},
{
"type": "divider"
}
]
}
Then build a section block for each recommendation using a Code node, iterating through pricing_recommendations:
const recs = $node["Terrakotta"].json.recommendations;
return recs.map(rec => ({
type: "section",
text: {
type: "mrkdwn",
text: `*${rec.product_id}*\nCurrent: £${rec.current_price} → Recommended: £${rec.recommended_price}\nConfidence: ${(rec.confidence_score * 100).toFixed(0)}%\n_${rec.reasoning}_\n\`Margin: ${rec.projected_impact.margin_increase_percent > 0 ? '+' : ''}${rec.projected_impact.margin_increase_percent}% | Volume risk: ${rec.projected_impact.volume_risk_percent}%\``
}
}));
Alternatively, you can send data to your CRM or pricing software via webhook. Most modern pricing tools (Stripe, Voucherify, etc.) have APIs that accept price updates.
Wiring it all together
Your final n8n workflow should have:
- Cron trigger (6:00 AM)
- HTTP Request to Finster API
- Deepnote notebook execution
- Wait for completion
- HTTP Request to Terrakotta API
- Slack message builder
- Slack node (or alternative delivery)
Set error handling on critical nodes. If Finster fails to return data, the workflow should retry twice before alerting your team. If Terrakotta's confidence score is below 0.70 on any recommendation, flag it for manual review.
Use n8n's "Error Workflow" feature to send alerts to a dedicated channel if anything breaks.
The Manual Alternative
If you want more control over recommendations before they ship, modify the workflow to send recommendations to a Slack modal instead of directly to your team. Team members review, approve, or reject each recommendation before prices update.
Add a "Wait for Webhook" node after Terrakotta generates recommendations. Create an approval form using n8n's form builder. Only after approval does the workflow trigger price updates in your e-commerce platform.
This adds friction but gives you a safety net, which is sensible if price changes directly affect revenue.
Pro Tips
Rate limits and throttling
Finster AI allows 100 requests per minute on free plans, 1000 per minute on paid plans. If you have hundreds of SKUs, split them across multiple API calls using batching. Add a delay node between requests to stay within limits.
Wait time: 1 second
Type: Between requests in loop
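The batching approach can be sketched as follows. This is a minimal illustration, with `fetch_fn` standing in for the HTTP Request node; batch size and delay are assumptions you should tune against your Finster plan's limit.

```python
import time

def chunk(items, size):
    """Split a SKU list into fixed-size batches."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def fetch_in_batches(skus, fetch_fn, batch_size=50, delay_s=1.0):
    """Call fetch_fn once per batch, pausing between requests to stay
    under the per-minute rate limit (mirrors the 1-second Wait node)."""
    results = []
    for batch in chunk(skus, batch_size):
        results.extend(fetch_fn(batch))
        time.sleep(delay_s)
    return results
```

At 50 SKUs per call with a 1-second delay, even a 1000-SKU catalogue completes in well under a minute of wall-clock waiting.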
Cost optimisation
Run competitor price fetches on a 12-hour schedule instead of daily if your market moves slowly. Analyse pricing recommendations only for products that have >10% price differences versus competitors; skip staple items with established pricing.
Error handling
Store analysis results in a Google Sheet or database before sending recommendations. If your Slack workspace is offline, you still have a record. Use n8n's "On Error" feature to log failures to a separate Slack channel.
Handling incomplete data
If Finster can't find pricing for a competitor product, Terrakotta will flag it as low confidence. Your workflow should exclude recommendations below 0.75 confidence by default, sending only high-confidence changes to Slack.
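The default confidence gate can be expressed as a small filter. This is a sketch of the rule described above: recommendations at or above the threshold go to Slack, the rest are held back for manual review.

```python
def split_by_confidence(recommendations, threshold=0.75):
    """Separate high-confidence recommendations (sent to Slack)
    from low-confidence ones (held for manual review)."""
    send, review = [], []
    for rec in recommendations:
        (send if rec["confidence_score"] >= threshold else review).append(rec)
    return send, review
```

In n8n this maps onto an IF node (or a Code node) placed between the Terrakotta response and the Slack message builder.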
Testing and validation
Before deploying to production, run a test workflow on a subset of products (maybe 5 SKUs). Compare Terrakotta's recommendations against your existing pricing strategy to ensure outputs make sense. Adjust Terrakotta's constraints (min/max price, margin targets) based on first run results.
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Finster AI | Pro | £149 | Includes 1000 requests/min and 30-day data history |
| Deepnote | Team | £99 | Collaborative notebook access, API execution |
| Terrakotta AI | Business | £299 | ML-based pricing, up to 5000 product SKUs |
| n8n | Cloud Starter | £30 | 5000 executions/month; each workflow run = ~3 executions |
| Slack | Pro | £7.50 per user | Not strictly necessary but recommended for notifications |
| Total | | £584.50 | Approximate monthly cost with one Slack seat |
The ROI calculation is straightforward: if this workflow prevents you from leaving even 2-3% margin on the table across your product range, it pays for itself many times over. A business with £500k monthly revenue and 40% margins earns £200k in monthly margin, so a 3% improvement adds roughly £6000 per month. That's about 10x the cost of this setup.
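As a back-of-envelope check of those figures (the setup cost here is the approximate monthly total from the cost table):

```python
# Worked ROI arithmetic for the figures above.
monthly_revenue = 500_000                       # £
margin_rate = 0.40
monthly_margin = monthly_revenue * margin_rate  # £200,000
improvement = 0.03                              # 3% margin improvement
extra_margin = monthly_margin * improvement     # £6,000 per month
setup_cost = 580.0                              # ~monthly tooling cost
roi_multiple = extra_margin / setup_cost
print(round(extra_margin), round(roi_multiple, 1))
```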
This workflow is production-ready but requires careful monitoring in your first month. Test recommendations against actual market conditions, refine Terrakotta's constraints based on results, and gradually increase automation as confidence grows. The goal is not to blindly follow AI recommendations, but to augment your team's pricing decisions with real-time data and intelligent analysis.