Competitive pricing analysis and dynamic pricing recommendation engine
You're scrolling through competitor websites on Monday morning, and by Wednesday your pricing strategy is already outdated. Dynamic pricing works only if you can react faster than your market moves, but manual competitive analysis kills your speed. You end up either chasing prices reactively (undercutting yourself) or ignoring competitor moves entirely (losing margin).
The real cost isn't time spent in spreadsheets; it's the revenue lost while you're still opening tabs. A SaaS company tracking 50+ competitors can waste 15-20 hours per week on price monitoring. That's someone's full-time job, and they're still behind because data refreshes happen every few days, not every few hours.
This workflow automates the entire cycle: pull competitor pricing, analyse market position, recommend adjustments, and push changes to your systems, all without anyone touching a keyboard. We'll build a pipeline using Deepnote for analysis, ParSpec AI for competitive intelligence, and Terrakotta AI for pricing recommendations, orchestrated through n8n for maximum flexibility and cost control.
The Automated Workflow
Why n8n for this workflow?
We're using n8n over Zapier or Make because this job involves heavy data transformation and conditional logic. You'll be comparing price matrices, calculating elasticity, and sometimes holding data for manual review before pushing live. n8n handles that complexity without sending thousands of API calls per month, which keeps costs reasonable at scale.
Step 1: Trigger and Data Collection
Your workflow starts on a schedule, typically daily or every 6 hours depending on your market volatility. The first node pulls competitor pricing data via ParSpec AI's REST API.
POST /api/v1/competitors/track
Headers:
Authorization: Bearer YOUR_PARSPEC_TOKEN
Content-Type: application/json
Body:
{
"competitor_ids": ["comp_stripe", "comp_paddle", "comp_lemonsqueezy"],
"metrics": ["list_price", "discount_tiers", "annual_savings", "feature_limits"],
"include_historical": true,
"lookback_days": 30
}
This endpoint returns a JSON object with current and historical pricing for each competitor, organised by product tier. You'll get data like:
{
"competitors": [
{
"id": "comp_stripe",
"name": "Stripe",
"last_checked": "2025-01-09T14:23:00Z",
"products": [
{
"name": "Connect Standard",
"current_price": 2.9,
"currency": "USD",
"price_type": "percentage",
"changes_30d": [
{
"date": "2024-12-15",
"old_price": 2.5,
"new_price": 2.9
}
]
}
]
}
]
}
n8n configuration for this step:
Node Type: HTTP Request
Method: POST
URL: https://api.parspec-ai.com/v1/competitors/track
Authentication: Bearer token (store in n8n credentials)
Body: Use the JSON structure above
Output mapping:
- competitors_data (store full response)
- check_timestamp (for auditing)
Set this node to run on a cron schedule: 0 */6 * * * (every 6 hours).
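If you want to test this call outside n8n first, the same request can be sketched in Python. The endpoint and field names below mirror this article's examples (not official ParSpec documentation), and `build_parspec_request` is a hypothetical helper:

```python
import json

# Hypothetical helper mirroring the HTTP Request node above, for running
# the same call locally. Endpoint and field names follow this article's
# examples, not official ParSpec docs.
def build_parspec_request(token, competitor_ids, lookback_days=30):
    """Assemble the headers and JSON body for the /competitors/track call."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "competitor_ids": competitor_ids,
        "metrics": ["list_price", "discount_tiers",
                    "annual_savings", "feature_limits"],
        "include_historical": True,
        "lookback_days": lookback_days,
    }
    return headers, json.dumps(body)

headers, body = build_parspec_request(
    "YOUR_PARSPEC_TOKEN",
    ["comp_stripe", "comp_paddle", "comp_lemonsqueezy"],
)
# Pass headers and body to your HTTP client of choice.
```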
Step 2: Data Enrichment and Context Building
Before analysis, you need your own product data and market context. This step uses a second API call to fetch your current pricing and customer segments, then combines everything into a single data structure.
Deepnote acts as your transformation layer here. Create a Deepnote notebook that accepts the ParSpec data as input, loads your internal pricing table, and outputs a unified dataset ready for analysis.
import os

import pandas as pd
import requests

# Supplied by the n8n call that triggers this notebook (see below)
competitor_data = input_payload['competitors_data']

# Fetch your own pricing from your product API; the key is read from an
# environment variable rather than a bare, undefined name
headers = {'Authorization': f"Bearer {os.environ['YOUR_PRODUCT_API_KEY']}"}
your_pricing = requests.get(
    'https://api.yourproduct.com/v1/pricing/current',
    headers=headers,
    timeout=30,
).json()

# Build one comparison table: competitor rows first...
comparison_rows = []
for competitor in competitor_data['competitors']:
    for product in competitor['products']:
        comparison_rows.append({
            'competitor': competitor['name'],
            'product': product['name'],
            'current_price': product['current_price'],
            'currency': product['currency'],
            'price_type': product['price_type'],
            'last_30d_changes': len(product.get('changes_30d', []))
        })

# ...then your own products, so everything lands in a single dataframe
for your_product in your_pricing['products']:
    comparison_rows.append({
        'competitor': 'Your Product',
        'product': your_product['name'],
        'current_price': your_product['price'],
        'currency': your_product['currency'],
        'price_type': your_product['billing_model'],
        'last_30d_changes': 0
    })

enriched_df = pd.DataFrame(comparison_rows)
enriched_df.to_json('enriched_comparison.json', orient='records')
In your n8n workflow, add a node that triggers this Deepnote notebook via its API:
Node Type: HTTP Request
Method: POST
URL: https://api.deepnote.com/api/v1/projects/YOUR_PROJECT_ID/notebooks/YOUR_NOTEBOOK_ID/run
Authentication: Bearer token
Body:
{
"input_payload": {
"competitors_data": "{{ $node['ParSpec Request'].json.competitors }}"
}
}
Wait for completion, then extract the enriched data:
Output mapping:
- enriched_comparison (fetch from Deepnote output)
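"Wait for completion" means polling the run until it finishes. A generic poll-until-done helper is sketched below; the status strings (`"succeeded"`, `"failed"`) are assumptions rather than Deepnote's documented values, and `fetch_status` stands in for whatever status call your setup exposes:

```python
import time

# Hedged sketch: poll a run's status until it reaches a terminal state or
# the timeout elapses. `fetch_status` is any callable returning a status
# string; the terminal values here are assumptions, not a documented API.
def wait_for_run(fetch_status, timeout_s=300, interval_s=5):
    """Poll fetch_status() until it reports success/failure or timeout."""
    waited = 0.0
    while waited <= timeout_s:
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval_s)
        waited += interval_s
    raise TimeoutError("Notebook run did not finish in time")

# Example with a stubbed status sequence: "running" twice, then done.
statuses = iter(["running", "running", "succeeded"])
result = wait_for_run(lambda: next(statuses), timeout_s=1, interval_s=0.01)
```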
Step 3: Pricing Recommendation Analysis
Now send the enriched data to Terrakotta AI for recommendation scoring. Terrakotta analyses market position, demand elasticity, and margin risk to suggest specific price changes.
POST /api/v1/pricing/recommend
Headers:
Authorization: Bearer YOUR_TERRAKOTTA_TOKEN
Content-Type: application/json
Body:
{
"your_product_id": "your_saas_tier_professional",
"your_current_price": 99,
"market_comparisons": [
{
"competitor": "Stripe",
"product": "Connect Standard",
"price": 2.9,
"price_type": "percentage"
},
{
"competitor": "Paddle",
"product": "Standard",
"price": 2.49,
"price_type": "percentage"
}
],
"internal_data": {
"monthly_customers": 1240,
"churn_rate_percent": 3.2,
"customer_acquisition_cost": 450,
"gross_margin_target_percent": 75
},
"constraints": {
"max_price_increase_percent": 10,
"min_price": 79,
"max_price": 149,
"blackout_dates": ["2025-01-15", "2025-02-14"]
}
}
Terrakotta responds with scored recommendations:
{
"recommendations": [
{
"action": "increase",
"suggested_price": 109,
"confidence_score": 0.87,
"reasoning": "Competitors averaging $105. Margin improvement: +$2 per customer.",
"expected_churn_impact": -0.4,
"revenue_impact_monthly": 2480,
"implementation_risk": "low"
},
{
"action": "maintain",
"suggested_price": 99,
"confidence_score": 0.62,
"reasoning": "Safest option. Minimises customer friction."
}
],
"analysis_timestamp": "2025-01-09T14:45:00Z"
}
Add this HTTP request node to your n8n workflow:
Node Type: HTTP Request
Method: POST
URL: https://api.terrakotta-ai.com/v1/pricing/recommend
Authentication: Bearer token
Body: Use the structure above, pulling internal_data from your backend API
Output mapping:
- recommendations (store full array)
- top_recommendation (select [0])
Step 4: Filtering and Manual Review Gate
This is critical: you don't automatically push every recommendation live. Instead, filter for high-confidence, low-risk changes, then route others to a human for review.
Add a conditional split node:
Node Type: Switch
Condition 1: Automatic approval path
confidence_score >= 0.85 AND implementation_risk == "low"
Condition 2: Manual review path
All other recommendations
For Condition 1 (auto-approve):
- Proceed directly to Step 5
For Condition 2 (manual review):
- Send Slack notification to #pricing-team
- Store recommendation in database with status: "pending_review"
- Wait for webhook callback (optional, or skip if manual process)
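The Switch node's routing can be expressed in plain Python, using the field names from the sample Terrakotta response in Step 3:

```python
# The Switch node's two conditions, as one routing function. Field names
# match the sample Terrakotta response shown in Step 3.
def route_recommendation(rec):
    """Return 'auto_approve' or 'manual_review' per the conditions above."""
    if (rec.get("confidence_score", 0) >= 0.85
            and rec.get("implementation_risk") == "low"):
        return "auto_approve"
    return "manual_review"

# The two recommendations from the Step 3 sample response
recommendations = [
    {"action": "increase", "suggested_price": 109,
     "confidence_score": 0.87, "implementation_risk": "low"},
    {"action": "maintain", "suggested_price": 99,
     "confidence_score": 0.62},
]
routes = [route_recommendation(r) for r in recommendations]
# routes → ["auto_approve", "manual_review"]
```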
Example Slack notification node (the Terrakotta response doesn't echo the product or your current price, so pull those from the node that built the request body; adjust node names to match your workflow):
Node Type: Slack
Method: Send Message
Channel: #pricing-team
Message:
"⚠️ Pricing recommendation needs review\n
Product: {{ $node['Build Terrakotta Body'].json.your_product_id }}\n
Current: ${{ $node['Build Terrakotta Body'].json.your_current_price }}\n
Suggested: ${{ $node['Terrakotta'].json.recommendations[0].suggested_price }}\n
Confidence: {{ $node['Terrakotta'].json.recommendations[0].confidence_score }}\n
Risk: {{ $node['Terrakotta'].json.recommendations[0].implementation_risk }}"
Step 5: Execute Price Changes
For auto-approved changes, push to your pricing API. This is product-specific, but the pattern is:
POST /api/v1/pricing/update
Headers:
Authorization: Bearer YOUR_INTERNAL_API_KEY
Content-Type: application/json
Body:
{
"product_id": "your_saas_tier_professional",
"new_price": 109,
"effective_date": "2025-01-10T00:00:00Z",
"reason": "automated_competitive_analysis",
"source_recommendation_id": "rec_abc123",
"rollback_enabled": true
}
n8n node:
Node Type: HTTP Request
Method: POST
URL: https://api.yourproduct.com/v1/pricing/update
Authentication: Bearer token
Body: Use structure above
Output mapping:
- update_status (check for 200 response)
- effective_date (confirm timing)
If the API call succeeds, log the change. If it fails, trigger an error notification.
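That succeed-or-alert behaviour can be sketched as a small retry wrapper; `push_update` and `send_alert` are stand-ins for your pricing API and Slack calls, not named library functions:

```python
# Hedged sketch of the success/failure handling described above: try the
# pricing update, retry transient failures, and fall back to an alert.
def execute_price_change(push_update, send_alert, attempts=3):
    """Call push_update(); on repeated failure, fire send_alert() instead."""
    last_error = None
    for _ in range(attempts):
        try:
            return push_update()  # expected to return the API response
        except Exception as exc:  # narrow to your HTTP client's errors
            last_error = exc
    send_alert(f"Pricing update failed after {attempts} attempts: {last_error}")
    return None

# Example with stubs: fails twice, succeeds on the third attempt.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 from pricing API")
    return {"status": "ok"}

result = execute_price_change(flaky_update, print)
# result → {"status": "ok"}
```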
Step 6: Logging and Audit Trail
Every pricing change, recommendation, and rejection gets logged to a database for future analysis. This creates accountability and helps you understand which recommendations worked versus which missed.
Use a simple database write node:
Node Type: PostgreSQL (or your database)
Query:
INSERT INTO pricing_changes_audit_log (
product_id,
old_price,
new_price,
recommendation_source,
confidence_score,
actual_churn_impact,
revenue_impact,
execution_status,
timestamp
) VALUES (
$1, $2, $3, $4, $5, $6, $7, $8, NOW()
)
Parameters:
$1: product_id
$2: old_price
$3: new_price
$4: 'terrakotta_ai'
$5: confidence_score
$6: NULL (update after 30 days)
$7: revenue_impact_monthly
$8: 'executed'
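As a sketch, the parameter tuple can be assembled from a Terrakotta recommendation dict (key names follow this article's examples) and handed to your database client, e.g. `cursor.execute(query, params)`:

```python
# Build the parameter tuple for the audit-log INSERT above from a
# recommendation dict. Key names follow this article's sample response.
def audit_params(product_id, old_price, rec):
    """Return ($1..$8) for pricing_changes_audit_log; NOW() fills timestamp."""
    return (
        product_id,
        old_price,
        rec["suggested_price"],
        "terrakotta_ai",                    # recommendation_source
        rec["confidence_score"],
        None,                               # actual_churn_impact: backfill after 30 days
        rec.get("revenue_impact_monthly"),
        "executed",
    )

params = audit_params(
    "your_saas_tier_professional", 99,
    {"suggested_price": 109, "confidence_score": 0.87,
     "revenue_impact_monthly": 2480},
)
```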
The Manual Alternative
If full automation feels risky, run the workflow but stop at Step 4. Use n8n's built-in "Wait for Webhook" node to pause execution. Your pricing team reviews recommendations in a simple dashboard, clicks "Approve" or "Reject", and the webhook fires to resume the workflow.
This requires one additional node after the Slack notification:
Node Type: Wait for Webhook
Webhook URL: https://your-n8n.com/webhook/pricing-approval
Timeout: 48 hours
Expected payload:
{
"decision": "approve" | "reject",
"recommendation_id": "rec_abc123",
"approved_by": "sarah@company.com",
"notes": "Optional context"
}
Your pricing team gets a simple HTML form with a link that fires the webhook with their decision. This keeps human oversight while eliminating data-gathering work.
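Before resuming the workflow, validate the incoming webhook body. A minimal sketch, assuming the payload shape shown above:

```python
# Validate the approval webhook payload before resuming the workflow.
# Field names match the expected payload shown above.
def validate_approval(payload):
    """Return (ok, error_message) for an incoming approval webhook body."""
    if payload.get("decision") not in ("approve", "reject"):
        return False, "decision must be 'approve' or 'reject'"
    if not payload.get("recommendation_id"):
        return False, "recommendation_id is required"
    if not payload.get("approved_by"):
        return False, "approved_by is required"
    return True, None

ok, err = validate_approval({
    "decision": "approve",
    "recommendation_id": "rec_abc123",
    "approved_by": "sarah@company.com",
})
# ok → True, err → None
```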
Pro Tips
1. Rate limiting and API quotas
ParSpec and Terrakotta both enforce rate limits. ParSpec allows 100 competitor checks per day on standard plans; Terrakotta allows 50 recommendation requests per day. With multiple products and markets, you'll hit these fast.
Solution: stagger your checks. Run a full analysis for tier 1 products (high revenue) every 6 hours, tier 2 products every 24 hours, and tier 3 products weekly. In your n8n schedule, use tags to separate workflows:
Cron for Tier 1: 0 */6 * * *
Cron for Tier 2: 0 2 * * *
Cron for Tier 3: 0 3 * * 0
2. Handling API failures gracefully
If ParSpec returns no data (network issue, API down), you don't want pricing recommendations based on incomplete competitor data. Add error handling:
Node Type: Switch
Condition: ParSpec returned zero competitors
- Send alert to Slack
- Exit workflow (don't proceed to Terrakotta)
Condition: ParSpec returned data
- Continue normally
Store the timestamp of successful data collection, and skip recommendations if the last successful check was older than 48 hours.
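That staleness guard can be sketched as a small helper:

```python
from datetime import datetime, timedelta, timezone

# Skip recommendations when the last successful ParSpec check is older
# than the threshold (48 hours, per the rule above).
def data_is_fresh(last_success_iso, now=None, max_age_hours=48):
    """True if the last successful check is within max_age_hours."""
    now = now or datetime.now(timezone.utc)
    last = datetime.fromisoformat(last_success_iso.replace("Z", "+00:00"))
    return (now - last) <= timedelta(hours=max_age_hours)

now = datetime(2025, 1, 9, 14, 0, tzinfo=timezone.utc)
fresh = data_is_fresh("2025-01-09T02:00:00Z", now=now)   # 12 hours old → True
stale = data_is_fresh("2025-01-06T02:00:00Z", now=now)   # 84 hours old → False
```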
3. Cost savings through batching
Running separate API calls for each product burns quota fast. Instead, batch competitor requests:
# Instead of 10 separate ParSpec calls
for product in products:
call_parspec(product) # 10 API calls
# Do this instead
competitor_ids = ['comp_stripe', 'comp_paddle', 'comp_lemonsqueezy']
call_parspec(competitor_ids=competitor_ids) # 1 API call
ParSpec's batch endpoint supports up to 50 competitors per request. Review their documentation for batch pricing.
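A small chunking helper keeps each request under that 50-competitor limit; with 120 tracked competitors this costs 3 calls instead of 120:

```python
# Split a long competitor list into batches that respect the stated
# 50-competitors-per-request limit on ParSpec's batch endpoint.
def batch(ids, size=50):
    """Return consecutive slices of ids, each at most `size` long."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

competitor_ids = [f"comp_{n}" for n in range(120)]
batches = batch(competitor_ids)
# → 3 batches of sizes 50, 50, 20; one API call per batch
```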
4. Version your pricing changes
Don't overwrite your current pricing record; create versions. This lets you trace which recommendation led to which outcome. In your database:
CREATE TABLE pricing_versions (
id UUID PRIMARY KEY,
product_id VARCHAR(255),
price_amount DECIMAL(10,2),
effective_date TIMESTAMP,
recommendation_source VARCHAR(100),
recommendation_confidence DECIMAL(3,2),
created_by VARCHAR(255),
created_at TIMESTAMP DEFAULT NOW()
);
Then, 30 days after a price change, query actual customer churn and revenue impact, and update your audit log with the real result. This trains Terrakotta's model over time (if you're using their feedback API).
5. Avoid pricing thrash
If you change prices every day based on tiny competitor moves, customers notice and lose trust. Add a minimum change threshold to your conditional split:
Only auto-approve if:
- Suggested price differs from current by >= 5% AND
- Days since last price change >= 14 AND
- Confidence score >= 0.85
This prevents "price whiplash" while still reacting to meaningful market moves.
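The three criteria combine into one guard function, sketched here:

```python
# The three auto-approval criteria above, expressed as a single guard.
def should_auto_approve(current_price, suggested_price,
                        days_since_last_change, confidence_score,
                        min_change_pct=5, min_days=14, min_confidence=0.85):
    """True only if the change is big enough, spaced out, and confident."""
    change_pct = abs(suggested_price - current_price) / current_price * 100
    return (change_pct >= min_change_pct
            and days_since_last_change >= min_days
            and confidence_score >= min_confidence)

# $99 → $109 is ~10%: approved after 30 days, blocked after only 5.
ok = should_auto_approve(99, 109, days_since_last_change=30,
                         confidence_score=0.87)
too_soon = should_auto_approve(99, 109, days_since_last_change=5,
                               confidence_score=0.87)
```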
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| ParSpec AI | Professional | $299 | 100 competitor checks/day, 30-day historical data, 3 seats included |
| Terrakotta AI | Growth | $199 | 50 recommendations/day, elasticity modelling, feedback API access |
| Deepnote | Team | $150 | Unlimited notebooks, API access, 3 team members |
| n8n | Self-hosted (cloud option) | $0–$49/month | Self-hosted on your infrastructure (free) or n8n Cloud starter ($49) |
| Slack (if needed) | Pro | $12.50/user/month | Only if you don't already have it; assume existing |
| Database (PostgreSQL) | Cloud DB service | $15–$100 | Depends on data volume; assume modest tier |
| Your internal API | Existing | $0 | Assume you already have pricing endpoints |
| Total | Combined | ~$663–$797/month | Scales with complexity; self-hosted n8n keeps costs flat |
This assumes one product line with 3-5 tiers. For companies with 20+ products across multiple markets, add 30–50% to ParSpec costs due to increased competitor tracking, but the core workflow scales without major changes.
The ROI calculation is straightforward: if this workflow prevents just one badly-timed price cut or catches a margin opportunity within 6 hours instead of 6 days, the cost is recovered. For SaaS companies with 1000+ customers and $1M+ ARR, this is a no-brainer investment.