Introduction
Your product catalogue shifts daily. Your competitors adjust prices hourly. You're still manually checking competitor rates and updating your own prices weekly. By the time you act, you've already lost margin on some SKUs and left money on the table with others.
Competitive pricing analysis used to require a dedicated analyst, a spreadsheet, and an afternoon of work. Now you can automate it entirely. You collect competitor data, analyse market positioning, and push dynamic pricing recommendations to your catalogue with zero human intervention.
This workflow combines three specialist tools through an orchestration layer. Deepnote handles the data analysis; Parspec-AI extracts competitor pricing data from across the web; Terrakotta-AI generates pricing recommendations based on demand elasticity and margin targets. The result is a pricing engine that wakes up, gathers intelligence, thinks strategically, and acts without waiting for you to open a single tab.
The Automated Workflow
Architecture Overview
You'll use n8n as your orchestration backbone. It's more flexible than Zapier for complex multi-step workflows, cheaper than Make for regular execution, and it integrates cleanly with Claude Code when you need custom logic.
The flow works like this: a daily trigger fires, Parspec-AI crawls competitor sites for pricing data, that data lands in Deepnote for analysis, Terrakotta-AI evaluates the landscape and recommends new prices, and Claude Code formats the output to push into your product database or pricing API.
Step 1: Trigger and Data Collection
Start with a scheduled trigger in n8n. Set it to run daily at 02:00 UTC, when server load is typically lowest.
```json
{
  "trigger_type": "schedule",
  "frequency": "daily",
  "time": "02:00",
  "timezone": "UTC"
}
```
Next, add a Parspec-AI node. You'll need an API key from their dashboard. Parspec-AI specialises in extracting structured pricing data from competitor websites without breaching terms of service; it respects robots.txt and rate limits automatically.
```http
POST /api/v1/crawl
Host: api.parspec-ai.io
Authorization: Bearer YOUR_PARSPEC_API_KEY
Content-Type: application/json

{
  "urls": [
    "https://competitor-a.com/products",
    "https://competitor-b.com/products",
    "https://competitor-c.com/products"
  ],
  "extract_fields": [
    "product_name",
    "sku",
    "current_price",
    "list_price",
    "discount_percentage",
    "stock_status"
  ],
  "output_format": "json",
  "webhook_url": "https://your-n8n-instance.com/webhook/parspec-callback"
}
```
Parspec-AI will send results back via webhook. Store the response in a variable for the next step.
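Before passing the webhook response downstream, it's worth flattening it into one record per product. Here's a minimal normalisation sketch in plain Python; the payload shape (`results` containing pages, each with `items`) is an assumption for illustration, so check the actual schema in Parspec-AI's docs.

```python
# Normalise a hypothetical crawl webhook payload into flat records.
# The "results"/"items" structure is an assumed shape, not Parspec-AI's
# documented schema.
def normalise_parspec_payload(payload):
    """Flatten a crawl payload into one record per product row."""
    records = []
    for page in payload.get("results", []):
        source_url = page.get("url", "")
        for item in page.get("items", []):
            records.append({
                "source_url": source_url,
                "product_name": item.get("product_name"),
                "sku": item.get("sku"),
                "current_price": item.get("current_price"),
                "stock_status": item.get("stock_status", "unknown"),
            })
    return records

payload = {
    "results": [
        {"url": "https://competitor-a.com/products",
         "items": [{"product_name": "Widget", "sku": "PROD-001",
                    "current_price": 289.50}]}
    ]
}
records = normalise_parspec_payload(payload)
print(len(records))       # 1
print(records[0]["sku"])  # PROD-001
```

Flattening here means the Deepnote step can build its DataFrame directly from the list, with no nested-JSON handling in the notebook.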
Step 2: Data Analysis in Deepnote
When Parspec-AI completes, trigger a Deepnote notebook via its API. Deepnote is a collaborative data environment that runs Python notebooks in the cloud, perfect for statistical analysis and data validation.
```http
POST /api/v1/run_notebook
Host: api.deepnote.com
Authorization: Bearer YOUR_DEEPNOTE_API_KEY
Content-Type: application/json

{
  "project_id": "your-project-id",
  "notebook_id": "your-notebook-id",
  "parameters": {
    "competitor_data": "<output from Parspec-AI>",
    "your_product_catalogue": "<your current prices>",
    "analysis_date": "2024-01-15"
  },
  "wait_for_completion": true
}
```
Inside your Deepnote notebook, you're doing three things: validating the data (checking for outliers, missing values, and obvious scraping errors); calculating average competitor prices by category and price tier; and computing your current position against the market (premium, mid-market, or value).
Here's a simplified example of the analysis logic:
```python
import pandas as pd
import numpy as np
from scipy import stats

# Assumes each record carries a 'category' field alongside the extract fields
competitor_df = pd.DataFrame(parameters['competitor_data'])
our_catalogue = pd.DataFrame(parameters['your_product_catalogue'])

# Remove obvious outliers: drop rows whose price is more than
# 3 standard deviations from that SKU's mean
def drop_price_outliers(group):
    """Keep rows within 3 standard deviations of this SKU's mean price."""
    if len(group) < 3 or group['current_price'].std() == 0:
        return group  # too few (or identical) points for a meaningful z-score
    z = np.abs(stats.zscore(group['current_price']))
    return group[z < 3]

competitor_df = (
    competitor_df
    .groupby('sku', group_keys=False)
    .apply(drop_price_outliers)
)

# Calculate market statistics by category
market_stats = competitor_df.groupby('category').agg({
    'current_price': ['mean', 'median', 'min', 'max', 'std'],
    'discount_percentage': 'mean'
}).round(2)

# Flatten the MultiIndex columns produced by agg()
# e.g. ('current_price', 'mean') -> 'current_price_mean'
market_stats.columns = ['_'.join(col) for col in market_stats.columns]

# Join with our catalogue
analysis_result = our_catalogue.merge(
    market_stats.reset_index(),
    on='category',
    how='left'
)

# Calculate our positioning as % above/below the market mean
analysis_result['our_vs_market_mean'] = (
    (analysis_result['our_price'] - analysis_result['current_price_mean']) /
    analysis_result['current_price_mean'] * 100
).round(2)

# Output for next step
output_analysis = analysis_result.to_dict(orient='records')
```
The Deepnote API returns the results directly, which n8n captures and passes forward.
Step 3: Pricing Recommendation Engine
Now feed the analysis into Terrakotta-AI. Terrakotta-AI is a specialist tool that applies economic modelling to pricing problems; it understands price elasticity, demand curves, and margin constraints.
```http
POST /api/v2/pricing_recommendation
Host: api.terrakotta-ai.io
Authorization: Bearer YOUR_TERRAKOTTA_API_KEY
Content-Type: application/json

{
  "products": [
    {
      "sku": "PROD-001",
      "category": "electronics",
      "current_price": 299.99,
      "cost": 150.00,
      "competitor_avg": 289.50,
      "competitor_min": 259.99,
      "competitor_max": 329.99,
      "monthly_volume": 1250,
      "last_price_change": "2024-01-10",
      "elasticity_estimate": -1.2
    }
  ],
  "constraints": {
    "minimum_margin": 0.25,
    "maximum_price_increase": 0.15,
    "maximum_price_decrease": 0.20,
    "update_frequency": "daily"
  },
  "objectives": {
    "primary": "profit_maximisation",
    "secondary": "market_share_protection"
  }
}
```
Terrakotta-AI returns recommended prices with confidence scores. A confidence score of 0.92 means the algorithm is very confident; 0.65 means the market data is noisier and you might want human review.
```json
{
  "recommendations": [
    {
      "sku": "PROD-001",
      "current_price": 299.99,
      "recommended_price": 289.99,
      "reason": "Reprice to within 0.2% of competitor average to recapture market share",
      "predicted_volume_change": 0.08,
      "predicted_margin_impact": 0.02,
      "confidence_score": 0.87
    },
    {
      "sku": "PROD-002",
      "current_price": 199.99,
      "recommended_price": 209.99,
      "reason": "No direct competition in this segment; margin improvement opportunity",
      "predicted_volume_change": -0.03,
      "predicted_margin_impact": 0.11,
      "confidence_score": 0.71
    }
  ]
}
```
Step 4: Formatting and Delivery
Add a Claude Code step to format the recommendations and apply business logic. Claude Code lets you write Python scripts that run serverless in n8n without managing infrastructure.
```json
{
  "node_type": "claude_code",
  "model": "claude-3-5-sonnet",
  "code": """
import json
from datetime import datetime

recommendations = {{terrakotta_output}}
confidence_threshold = 0.80

approved_updates = []
flagged_updates = []

for rec in recommendations['recommendations']:
    rec['timestamp'] = datetime.utcnow().isoformat()
    rec['update_type'] = 'price_update'
    if rec['confidence_score'] >= confidence_threshold:
        approved_updates.append(rec)
    else:
        flagged_updates.append(rec)

output = {
    'approved_count': len(approved_updates),
    'flagged_count': len(flagged_updates),
    'approved_updates': approved_updates,
    'flagged_updates': flagged_updates,
    'run_timestamp': datetime.utcnow().isoformat()
}
"""
}
```
The Claude Code step separates high-confidence recommendations (which can auto-apply) from lower-confidence ones (which go to a review queue).
Finally, push approved recommendations to your pricing API or database. If you use Shopify:
```http
PATCH /admin/api/2024-01/products/{product_id}/variants/{variant_id}.json
Host: your-store.myshopify.com
Authorization: Bearer YOUR_SHOPIFY_API_TOKEN
Content-Type: application/json

{
  "variant": {
    "id": 123456789,
    "price": "289.99"
  }
}
```
Or if you manage pricing in a custom database, use n8n's SQL node to batch update:
```sql
UPDATE products
SET
    price = ?,
    price_updated_at = NOW(),
    price_source = 'automated_recommendation'
WHERE sku = ?
  AND price != ?;
```
Wrap this in a transaction so all prices update atomically or not at all.
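To make the all-or-nothing behaviour concrete, here's a minimal sketch using Python's built-in SQLite driver (swap in your own database driver; the table and column names follow the UPDATE statement above, and the timestamp function differs by database):

```python
import sqlite3

def apply_price_updates(conn, updates):
    """Apply all approved price updates in one transaction:
    either every row commits, or none do."""
    # sqlite3's connection context manager commits on success
    # and rolls back the whole batch if any statement fails
    with conn:
        conn.executemany(
            """
            UPDATE products
            SET price = ?,
                price_updated_at = DATETIME('now'),
                price_source = 'automated_recommendation'
            WHERE sku = ? AND price != ?
            """,
            [(u["recommended_price"], u["sku"], u["recommended_price"])
             for u in updates],
        )

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE products (
    sku TEXT PRIMARY KEY, price REAL,
    price_updated_at TEXT, price_source TEXT)""")
conn.execute("INSERT INTO products (sku, price) VALUES ('PROD-001', 299.99)")

apply_price_updates(conn, [{"sku": "PROD-001", "recommended_price": 289.99}])
price = conn.execute(
    "SELECT price FROM products WHERE sku = 'PROD-001'").fetchone()[0]
print(price)  # 289.99
```

If a statement fails midway, the `with conn:` block rolls the batch back, so you never end up with half your catalogue repriced and half stale.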
Complete n8n Workflow Structure
Your full n8n workflow has these nodes in sequence:
- Schedule Trigger (daily at 02:00 UTC)
- Parspec-AI HTTP node (POST /api/v1/crawl)
- Wait for webhook (Parspec-AI callback)
- Deepnote HTTP node (POST /api/v1/run_notebook)
- Terrakotta-AI HTTP node (POST /api/v2/pricing_recommendation)
- Claude Code node (format and filter recommendations)
- SQL node (batch update database)
- Conditional branch: if flagged_updates exist, send Slack notification
- Log completion to audit table
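For orientation, here's a heavily trimmed sketch of what the exported workflow JSON might look like. Node names are illustrative, and a real export from your n8n instance will include full parameters, credential references, positions, and complete connection entries:

```json
{
  "name": "daily-pricing-engine",
  "nodes": [
    { "name": "Schedule Trigger", "type": "n8n-nodes-base.scheduleTrigger" },
    { "name": "Parspec Crawl", "type": "n8n-nodes-base.httpRequest" },
    { "name": "Wait For Callback", "type": "n8n-nodes-base.webhook" },
    { "name": "Deepnote Analysis", "type": "n8n-nodes-base.httpRequest" },
    { "name": "Terrakotta Pricing", "type": "n8n-nodes-base.httpRequest" },
    { "name": "Format Recommendations", "type": "n8n-nodes-base.code" },
    { "name": "Update Prices", "type": "n8n-nodes-base.postgres" }
  ],
  "connections": {
    "Schedule Trigger": { "main": [[{ "node": "Parspec Crawl" }]] }
  }
}
```

Exporting the workflow to JSON like this also gives you something to commit to version control, so pricing-logic changes get the same review trail as code.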
The Manual Alternative
You can run any of these steps manually if you prefer human oversight at certain gates.
Run Parspec-AI crawls on demand rather than scheduled. Review the raw competitor data in a spreadsheet before uploading to Deepnote. Manually specify constraints in Terrakotta-AI if you want to test different objectives (e.g. "maximise volume" vs "maximise profit"). Deepnote also works brilliantly as an interactive tool; you can explore the data, tweak parameters, and re-run analyses without touching the automation.
Flag all recommendations for manual approval before prices update, rather than auto-applying high-confidence ones. Create a dashboard in Deepnote or Metabase that shows recommended vs current vs competitor prices, then have your pricing team review and click "approve" or "reject" each morning.
This hybrid approach takes longer but gives you a human checkpoint. Choose full automation only once you've validated the system for 2-3 weeks.
Pro Tips
Handle rate limits gracefully. Parspec-AI respects standard rate limits (typically 100 requests per minute). If your competitor list grows beyond what one run can handle, split the URLs into batches within your n8n workflow, and use n8n's built-in rate-limiting options before calling Parspec-AI so requests queue properly.
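The batching itself is a one-liner's worth of logic. A minimal sketch, where `batch_urls` is a hypothetical helper (not part of any Parspec-AI SDK) and the batch size of 50 is an illustrative budget:

```python
# Split a URL list into batches small enough for one crawl call each.
# batch_urls() is a hypothetical helper; 50 per batch is an assumption.
def batch_urls(urls, batch_size=50):
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]

urls = [f"https://competitor-{n}.com/products" for n in range(120)]
batches = batch_urls(urls, batch_size=50)
print([len(b) for b in batches])  # [50, 50, 20]
```

Each batch then becomes one Parspec-AI HTTP call in an n8n loop, with the rate limiter pacing the calls.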
Validate data quality before acting. Add a Deepnote step that checks: are we seeing at least two data points per competitor per SKU? Are price changes within expected bounds (not jumping 50% overnight)? Are we missing any major competitors? Flag any issues to Slack before Terrakotta-AI even runs.
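The first two checks can be sketched in plain Python so they drop into any notebook; the thresholds (two points per SKU, 50% maximum overnight move) are illustrative assumptions you should tune:

```python
# Pre-flight checks on the crawl before Terrakotta-AI runs.
# Thresholds here are illustrative, not recommendations.
from collections import Counter

def validate_crawl(records, previous_prices, max_jump=0.50, min_points=2):
    """Return human-readable issues; an empty list means all clear."""
    issues = []

    # 1. At least `min_points` observations per SKU
    counts = Counter(r["sku"] for r in records)
    for sku, n in counts.items():
        if n < min_points:
            issues.append(f"{sku}: only {n} price point(s) collected")

    # 2. No price jumping more than `max_jump` since the last run
    for r in records:
        prev = previous_prices.get(r["sku"])
        if prev and abs(r["current_price"] - prev) / prev > max_jump:
            issues.append(f"{r['sku']}: price moved from {prev} to "
                          f"{r['current_price']} overnight")
    return issues

records = [
    {"sku": "PROD-001", "current_price": 289.50},
    {"sku": "PROD-001", "current_price": 284.99},
    {"sku": "PROD-002", "current_price": 450.00},  # single point, big jump
]
issues = validate_crawl(records, {"PROD-001": 290.00, "PROD-002": 199.99})
print(len(issues))  # 2: PROD-002 fails both checks
```

Anything in `issues` goes straight to the Slack alert branch instead of the pricing engine.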
Monitor margin impact. Terrakotta-AI predicts volume and margin changes, but markets don't always behave as models expect. Query your database weekly for actual results: did the recommended price for PROD-001 actually increase volume by 8% as predicted? Build a feedback loop: if predictions drift from reality, retrain elasticity estimates quarterly using actual transaction data.
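The weekly check is simple arithmetic once you have the before/after volumes. A sketch, using the PROD-001 figures from the Terrakotta-AI example above (the 1310 "actual" volume is invented for illustration):

```python
# Compare a predicted volume change against what actually happened.
# Both changes are expressed as fractions (0.08 = +8%).
def prediction_error(predicted_change, volume_before, volume_after):
    actual_change = (volume_after - volume_before) / volume_before
    return actual_change - predicted_change

# Terrakotta-AI predicted +8% volume for PROD-001 after the price drop;
# suppose actual volume went from 1250 to 1310 units (a +4.8% lift)
err = prediction_error(0.08, volume_before=1250, volume_after=1310)
print(round(err, 3))  # -0.032: the model overestimated the lift
```

Track this error per SKU over time; a persistent bias in one direction is the signal to re-estimate elasticity.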
Use confidence scores to manage risk. Don't treat recommendations below your threshold as trash. Instead, batch them into a "review" spreadsheet that your pricing team inspects manually once a week. Some low-confidence recommendations are genuine opportunities your data simply hasn't captured yet.
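Exporting that review batch is a few lines with the standard library. A sketch using Python's `csv` module; the column choice is an assumption, so adjust it to whatever your pricing team actually needs:

```python
# Render flagged (low-confidence) recommendations as CSV for the
# weekly review sheet. Column selection is illustrative.
import csv
import io

def to_review_csv(flagged):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[
        "sku", "current_price", "recommended_price",
        "confidence_score", "reason"])
    writer.writeheader()
    for rec in flagged:
        writer.writerow({k: rec[k] for k in writer.fieldnames})
    return buf.getvalue()

flagged = [{
    "sku": "PROD-002", "current_price": 199.99,
    "recommended_price": 209.99, "confidence_score": 0.71,
    "reason": "No direct competition in this segment",
}]
csv_text = to_review_csv(flagged)
print(csv_text.splitlines()[0])
# sku,current_price,recommended_price,confidence_score,reason
```

In n8n this slots in after the Claude Code node, feeding a Google Sheets or email node on the flagged branch.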
Cost optimisation: run during off-peak hours. Compute at 02:00 UTC is less contended than at 14:00, and some compute-metered platforms price off-peak execution lower (check the specifics of your Deepnote and n8n plans). Run the full daily analysis at 02:00 and only send Slack summaries at 09:00 when your team wakes up. This can save roughly 20-30% on cloud infrastructure bills compared to daytime execution.
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| n8n Cloud | Professional | £30 | 100k executions/month; hosted n8n available at £20/month with lower limits |
| Parspec-AI | Growth | £150 | Up to 10,000 URLs crawled monthly; £500 plan handles 50,000+ |
| Deepnote | Team | £80 | Collaborative notebooks; includes compute hours; £30 starter plan for lighter use |
| Terrakotta-AI | Standard | £200 | Up to 5,000 SKUs analysed monthly; includes elasticity modelling |
| Claude Code (via n8n) | Pay-as-you-go | £0-20 | Minimal; most formatting tasks cost <£1 per run |
| Total | — | £460-480 | Assumes moderate scale (2-5k SKUs) and daily execution |
If you run this workflow daily, you're looking at roughly 30-35 executions per month across n8n. Each Parspec-AI crawl consumes 50-200 credits depending on URL count. Deepnote charges by compute minutes; most analyses complete in 2-5 minutes. Terrakotta-AI bills per SKU analysed, not per API call.
At scale (10,000+ SKUs), costs do rise, but you save that on hiring a pricing analyst. Scale also lets you negotiate volume discounts on Parspec-AI and Terrakotta-AI.