Introduction
Feature request management is one of those tasks that feels simple in theory but becomes a logistical nightmare in practice. Your team receives requests via email, Slack, support tickets, and community forums. Someone has to read each one, categorise it, assess feasibility, identify duplicates, and then synthesise all of that into a coherent roadmap document. That person usually spends hours copying and pasting between tools, manually summarising similar requests, and updating spreadsheets.
The real problem is that without automation, this process introduces human error, takes valuable time away from actual product development, and creates bottlenecks. Requests get lost, duplicates aren't caught, and your roadmap never truly reflects what customers actually want. By the time you've processed everything manually, the landscape has shifted and you're working from stale data.
This Alchemy recipe shows you how to build a fully automated feature request pipeline that ingests requests from multiple sources, analyses them for sentiment and priority, clusters similar requests together, and generates a prioritised roadmap. You'll combine three AI tools (bhava-ai for request ingestion, mindpal-1 for analysis, and terrakotta-ai for roadmap synthesis) and wire them together with Zapier, n8n, Make, or Claude Code. No manual handoff required.
The Automated Workflow
Architecture Overview
The workflow operates in four stages: collection, analysis, clustering, and synthesis. Requests arrive via webhook, get processed by AI analysis, are grouped by topic, and feed directly into roadmap generation. The entire process runs on a schedule or triggers when new requests arrive, meaning your roadmap stays current automatically.
Here is how the data flows:
Request Input (Email/Slack/Form)
↓
bhava-ai: Parse and Extract
↓
mindpal-1: Analyse Sentiment & Priority
↓
terrakotta-ai: Cluster & Group Similar Requests
↓
Generate Roadmap Output
↓
Slack/Email Notification
Setting Up the Webhook Ingestion
Start by creating a webhook that accepts feature requests from any source. This can be triggered manually or integrated with your support ticketing system.
If you're using n8n (which is my recommendation for this particular workflow because of its robust scheduling and sub-workflow capabilities), set up an incoming webhook node:
POST https://your-n8n-instance.com/webhook/feature-requests
Content-Type: application/json
{
"source": "email",
"customer_name": "Jane Smith",
"email": "jane@example.com",
"request_title": "Dark mode for dashboard",
"request_body": "Users are asking for a dark theme option. Several competitors offer this and our support team receives weekly requests for it.",
"date_received": "2024-01-15T10:30:00Z"
}
The webhook node in n8n captures this data and passes it to the next step without any transformation. This is critical: keep your ingestion layer simple.
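That said, it's worth rejecting malformed payloads at the edge so junk never enters the pipeline. Here's a minimal validation sketch for an n8n Code node; the field names match the example payload above, and how you route failures (error branch, dead-letter list) is up to you:

```javascript
// Validate the incoming feature-request payload before it enters the pipeline.
// Field names match the example webhook body shown above.
const REQUIRED_FIELDS = ['source', 'customer_name', 'email', 'request_title', 'request_body'];

function validateRequest(payload) {
  const missing = REQUIRED_FIELDS.filter(
    (field) => !payload[field] || String(payload[field]).trim() === ''
  );
  return { valid: missing.length === 0, missing };
}

// Example: a payload with an empty body gets flagged rather than silently processed.
const check = validateRequest({
  source: 'email',
  customer_name: 'Jane Smith',
  email: 'jane@example.com',
  request_title: 'Dark mode for dashboard',
  request_body: ''
});
// check.valid === false, check.missing === ['request_body']
```

Wire the `valid: false` branch to a notification or review queue so nothing is dropped silently.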
Step 1: Request Parsing with bhava-ai
bhava-ai excels at understanding unstructured text and extracting key information. You'll use it to standardise all incoming requests, regardless of source.
Configure a bhava-ai API call in your orchestration tool:
POST https://api.bhava-ai.com/v1/analyse
Authorization: Bearer YOUR_BHAVA_API_KEY
Content-Type: application/json
{
"text": "{{ $node['Webhook'].json.request_body }}",
"model": "feature-extractor",
"instructions": "Extract the following from this feature request: 1) Core feature being requested, 2) Use case or problem being solved, 3) User persona, 4) Estimated impact (if mentioned). Return as structured JSON.",
"output_format": "json"
}
The response from bhava-ai will look like this:
{
"core_feature": "Dark mode interface",
"use_case": "Reduce eye strain during night-time use",
"user_persona": "Power users, developers, night-shift support staff",
"estimated_impact": "high demand, weekly requests",
"confidence_score": 0.94
}
Store this parsed data in a temporary variable or pass it directly to the next node. Don't save it to a database yet; keep it in memory through the workflow.
Step 2: Sentiment Analysis and Priority Scoring with mindpal-1
mindpal-1 is designed for understanding tone and urgency. Use it to automatically score each request's priority and emotional context.
POST https://api.mindpal-1.com/v1/analyse-sentiment
Authorization: Bearer YOUR_MINDPAL_API_KEY
Content-Type: application/json
{
"text": "{{ $node['Webhook'].json.request_body }}",
"model": "sentiment-priority",
"analysis_type": "urgency_score",
"context": "feature_request_prioritisation",
"return_fields": ["sentiment_score", "urgency_level", "customer_pain_points", "business_value_indicators"]
}
This returns:
{
"sentiment_score": 0.72,
"urgency_level": "high",
"customer_pain_points": ["eye strain", "competitive disadvantage"],
"business_value_indicators": ["mentioned by multiple customers", "competitive feature"],
"priority_score": 8.5
}
Combine this with the bhava-ai output. You now have a standardised, priority-scored request ready for clustering.
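One way to do that merge, sketched as a plain function (field names are taken from the example bhava-ai and mindpal-1 responses above; adjust to whatever your actual responses contain):

```javascript
// Merge the parsed request (bhava-ai) with the sentiment/priority analysis
// (mindpal-1) into a single record ready for clustering.
function buildScoredRequest(webhookData, parsed, sentiment) {
  return {
    customer_name: webhookData.customer_name,
    source: webhookData.source,
    core_feature: parsed.core_feature,
    use_case: parsed.use_case,
    user_persona: parsed.user_persona,
    priority_score: sentiment.priority_score,
    urgency_level: sentiment.urgency_level,
    pain_points: sentiment.customer_pain_points
  };
}

const record = buildScoredRequest(
  { customer_name: 'Jane Smith', source: 'email' },
  { core_feature: 'Dark mode interface', use_case: 'Reduce eye strain during night-time use', user_persona: 'Power users' },
  { priority_score: 8.5, urgency_level: 'high', customer_pain_points: ['eye strain'] }
);
// record.priority_score === 8.5
```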
Step 3: Clustering and Deduplication with terrakotta-ai
This is where the magic happens. terrakotta-ai takes all your requests (either from a batch or accumulated over time) and groups them intelligently.
Before calling terrakotta-ai, you'll need to collect multiple requests. Set up a scheduled trigger that runs every 24 hours or accumulates requests until you have at least 10 pending ones. Store these processed requests in a temporary array:
// n8n Code node, run after each request is processed. Workflow static data
// persists between executions, so requests accumulate across runs.
const staticData = $getWorkflowStaticData('global');
const pendingRequests = staticData.pendingRequests || [];

pendingRequests.push({
  id: `req_${Date.now().toString(36)}_${Math.floor(Math.random() * 1e6)}`,
  core_feature: $('bhava-ai').item.json.core_feature,
  use_case: $('bhava-ai').item.json.use_case,
  priority_score: $('mindpal-1').item.json.priority_score,
  source: $('Webhook').item.json.source,
  customer_name: $('Webhook').item.json.customer_name
});

staticData.pendingRequests = pendingRequests;
return [{ json: { pending: pendingRequests.length } }];
Then, when your scheduled trigger fires, call terrakotta-ai:
POST https://api.terrakotta-ai.com/v1/cluster
Authorization: Bearer YOUR_TERRAKOTTA_API_KEY
Content-Type: application/json
{
"documents": {{ JSON.stringify(pendingRequests) }},
"clustering_type": "semantic",
"num_clusters": "auto",
"output_format": "hierarchical",
"include_metadata": true
}
The response groups similar requests together:
{
"clusters": [
{
"cluster_id": "dark-mode-group",
"theme": "Dark mode and accessibility features",
"member_count": 7,
"combined_priority_score": 8.4,
"members": [
{ "request_id": "req_001", "customer": "Jane Smith", "priority": 8.5 },
{ "request_id": "req_023", "customer": "Alex Johnson", "priority": 8.3 }
]
},
{
"cluster_id": "bulk-export-group",
"theme": "Data export and integration features",
"member_count": 4,
"combined_priority_score": 7.2,
"members": [...]
}
]
}
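With clusters in hand, you'll want the highest-impact themes at the top of the roadmap. A small sketch (field names taken from the terrakotta-ai response above) that orders clusters before handing them to roadmap generation:

```javascript
// Order clusters by combined priority so the roadmap lists the
// highest-impact themes first.
function rankClusters(clusters) {
  return [...clusters].sort(
    (a, b) => b.combined_priority_score - a.combined_priority_score
  );
}

const ranked = rankClusters([
  { cluster_id: 'bulk-export-group', combined_priority_score: 7.2 },
  { cluster_id: 'dark-mode-group', combined_priority_score: 8.4 }
]);
// ranked[0].cluster_id === 'dark-mode-group'
```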
Step 4: Roadmap Generation and Output
Now that you have clustered, prioritised requests, generate the actual roadmap document. You can do this with a simple template or call an LLM to generate descriptive roadmap items.
You can generate the roadmap with Claude Code or any LLM API; the example below uses OpenAI's chat completions endpoint:
POST https://api.openai.com/v1/chat/completions
Authorization: Bearer YOUR_OPENAI_API_KEY
Content-Type: application/json
{
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "You are a product manager writing a feature roadmap based on customer feedback analysis. Be specific about customer impact and business value."
},
{
"role": "user",
"content": "Generate a prioritised roadmap entry for this cluster: {{ JSON.stringify(cluster) }}"
}
]
}
This generates narrative roadmap entries like:
## Q1 2024 Roadmap
### Priority 1: Dark Mode and Accessibility (7 customer requests, avg priority 8.4/10)
**Why**: Multiple power users and night-shift support staff report eye strain. Competitors already offer this feature. Weekly request volume from support team.
**Impact**: Improved user retention, competitive positioning, support ticket reduction.
**Estimated Effort**: Medium (2 sprints)
**Customers Requesting**: Jane Smith, Alex Johnson, and 5 others
### Priority 2: Bulk Data Export (4 customer requests, avg priority 7.2/10)
**Why**: Enterprise customers need to integrate our data with their systems. Currently a manual, error-prone process.
**Impact**: Unlock new customer segments, reduce churn in enterprise tier.
**Estimated Effort**: High (4 sprints)
**Customers Requesting**: Acme Corp, TechStart Inc, and 2 others
Save this to a document (Google Docs, Notion, Markdown file) that your team reviews. Push updates to Slack or email at the end of each cycle.
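For the Slack push, an incoming webhook accepts a simple JSON payload with a `text` field. A minimal sketch (the webhook URL is a placeholder you create in your Slack workspace settings):

```javascript
// Build the notification payload separately so it's easy to test and reuse.
function buildSlackMessage(roadmapTitle, clusterCount) {
  return {
    text: `Roadmap updated: ${roadmapTitle} (${clusterCount} request clusters processed)`
  };
}

// Post the message to a Slack incoming webhook.
async function notifySlack(webhookUrl, message) {
  const res = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(message)
  });
  if (!res.ok) throw new Error(`Slack notification failed: ${res.status}`);
}

// Usage (placeholder URL):
// await notifySlack('https://hooks.slack.com/services/XXX/YYY/ZZZ',
//   buildSlackMessage('Q1 2024 Roadmap', 2));
```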
Wiring It All Together in n8n
Here is a condensed version of how the complete workflow looks in n8n:
- Webhook node (receives requests)
- bhava-ai node (parsing)
- mindpal-1 node (priority scoring)
- Conditional check: accumulate or process?
- If accumulating, save to temporary storage
- Scheduled trigger (every 24 hours)
- terrakotta-ai node (clustering)
- LLM node or template (roadmap generation)
- Write node (save to Google Docs or Notion)
- Slack notification node
The workflow handles its own retry logic and error catching. If bhava-ai fails to parse a request, n8n can be configured to mark it for manual review. If terrakotta-ai returns unexpected output, you catch it and notify a human.
Using Make (Integromat) Instead
Make is simpler but less flexible. Use it if you have fewer than 50 requests per week:
Set up a Scenario with:
- Webhook (Feature Request Input)
- HTTP module calling bhava-ai API
- HTTP module calling mindpal-1 API
- Schedule trigger (daily)
- HTTP module calling terrakotta-ai
- Create Document (Google Docs)
- Send Slack Message
Make excels at this because its visual builder is intuitive and it handles API rate limiting automatically. The trade-off is that customisation is more limited.
Using Zapier
Zapier is the quickest to set up but costs more at scale. Create a Zap that:
- Triggers on webhook (custom request)
- Formats data (create a JSON object)
- Calls bhava-ai via Webhooks by Zapier
- Calls mindpal-1 via Webhooks by Zapier
- Creates a row in Airtable (temporary storage)
- Uses a scheduled trigger to call terrakotta-ai
- Updates a Google Sheet with final roadmap
Zapier's advantage is simplicity; its disadvantage is cost (expect £200+ per month at moderate volume).
The Manual Alternative
If you prefer not to automate everything, keep the AI analysis but do the clustering and roadmap writing yourself.
Set up a simpler workflow:
- Requests come in via a Google Form or support ticketing system
- Each request gets parsed by bhava-ai and priority-scored by mindpal-1
- Results export to a spreadsheet
- Your product team reviews the spreadsheet weekly and writes the roadmap manually
This takes 60-90 minutes per week instead of hours, and you retain full control over prioritisation. Many teams find this the right balance between automation and human judgment.
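The spreadsheet export in step three can be as simple as flattening the scored records to CSV. A sketch, assuming the record fields produced earlier in the pipeline:

```javascript
// Flatten scored requests into CSV rows for the weekly product-team review.
function toCsv(requests) {
  const header = 'customer_name,core_feature,priority_score,urgency_level';
  const escape = (v) => `"${String(v).replace(/"/g, '""')}"`;
  const rows = requests.map((r) =>
    [r.customer_name, r.core_feature, r.priority_score, r.urgency_level]
      .map(escape)
      .join(',')
  );
  return [header, ...rows].join('\n');
}

const csv = toCsv([
  { customer_name: 'Jane Smith', core_feature: 'Dark mode interface', priority_score: 8.5, urgency_level: 'high' }
]);
// csv starts with the header row, followed by one quoted data row
```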
Pro Tips
Rate Limiting and Cost Control
bhava-ai and mindpal-1 have rate limits (usually 100 requests per minute on standard plans). If you receive more requests than that, add a queue node in n8n that batches requests and spreads them across a 60-second window. This prevents API errors and keeps your costs predictable.
// In n8n, insert a Wait node between API calls, or pace them in a Code node.
// A 100-requests-per-minute limit allows one call roughly every 600ms.
await new Promise((resolve) => setTimeout(resolve, 600));
Error Handling for Missing Data
Some requests will be incomplete or vague. Build in a fallback: if bhava-ai returns a confidence score below 0.7, flag the request and send it to a human for manual tagging. Don't let bad data corrupt your roadmap.
// Route low-confidence parses to a human instead of the clustering step.
const staticData = $getWorkflowStaticData('global');
staticData.manualReviewQueue = staticData.manualReviewQueue || [];

if (confidence_score < 0.7) {
  staticData.manualReviewQueue.push(request);
} else {
  // Continue to clustering
}
Duplicate Detection Before Clustering
Before terrakotta-ai clustering, run a quick exact-match check on the extracted core features. This speeds up clustering and reduces API costs. Plain JavaScript in an n8n Code node is enough:
let uniqueRequests = [];
let seenTitles = new Set();
for (let req of allRequests) {
if (!seenTitles.has(req.core_feature)) {
uniqueRequests.push(req);
seenTitles.add(req.core_feature);
}
}
return uniqueRequests;
Incremental Roadmap Updates
Don't regenerate the entire roadmap every time. Instead, store previous clusters and only process new requests. terrakotta-ai supports incremental clustering; ask for it in your API call. This cuts processing time in half.
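Filtering out already-clustered requests is straightforward if you persist the timestamp of the last run. A sketch, assuming `lastRunISO` comes from your workflow's stored state and using the `date_received` field from the webhook payload shown earlier:

```javascript
// Only send requests received since the last clustering run to terrakotta-ai.
function newRequestsSince(requests, lastRunISO) {
  const cutoff = Date.parse(lastRunISO);
  return requests.filter((r) => Date.parse(r.date_received) > cutoff);
}

const fresh = newRequestsSince(
  [
    { id: 'req_001', date_received: '2024-01-14T09:00:00Z' },
    { id: 'req_002', date_received: '2024-01-15T10:30:00Z' }
  ],
  '2024-01-15T00:00:00Z'
);
// fresh contains only req_002
```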
Customer Attribution
Always keep customer names tied to requests throughout the workflow. When generating roadmap entries, include the customer list. This creates accountability and helps the sales team upsell based on "we're building what you asked for."
{
"cluster_id": "dark-mode-group",
"customers_requesting": [
"Jane Smith (jane@example.com)",
"Alex Johnson (alex@example.com)"
],
"total_requests": 7
}
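One way to build that attribution block from a cluster's member list, sketched below. The member fields come from the terrakotta-ai response shown earlier; `emailLookup` is a hypothetical name-to-email map you'd maintain from the original webhook payloads:

```javascript
// Build the customer-attribution block for a cluster from its member list.
// emailLookup is a hypothetical map from customer name to email address.
function buildAttribution(cluster, emailLookup) {
  return {
    cluster_id: cluster.cluster_id,
    customers_requesting: cluster.members.map((m) => {
      const email = emailLookup[m.customer];
      return email ? `${m.customer} (${email})` : m.customer;
    }),
    total_requests: cluster.member_count
  };
}

const attribution = buildAttribution(
  {
    cluster_id: 'dark-mode-group',
    member_count: 7,
    members: [{ request_id: 'req_001', customer: 'Jane Smith' }]
  },
  { 'Jane Smith': 'jane@example.com' }
);
// attribution.customers_requesting[0] === 'Jane Smith (jane@example.com)'
```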
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| bhava-ai | Pro (5k API calls/month) | £40 | ~150 requests per day fits comfortably; overage is £0.01 per call |
| mindpal-1 | Standard (10k API calls/month) | £35 | You'll call this once per request; 300+ requests per day requires upgrade to £70 |
| terrakotta-ai | Growth (unlimited clustering) | £60 | Running daily clustering is fine; monthly fee regardless of volume |
| n8n | Cloud Pro (7,500 executions/month) | £30 | Each request triggers roughly 5 executions; 300/day fits here; upgrade to Cloud Team (£100) for 50k executions if you scale |
| Make | Standard (10k operations/month) | £11 | Cheaper upfront but hits limits faster; Team plan (£25) recommended |
| Zapier | Starter (100 tasks/month) | £20 | Each request = ~5 tasks; upgrade to Professional (£50) for 750 tasks/month |
| Claude API / OpenAI | Pay-as-you-go | £5-15 | For roadmap generation; highly variable based on roadmap length |
Total Estimated Cost: £170-£230 per month using n8n + bhava-ai + mindpal-1 + terrakotta-ai at light to moderate volume (under 300 requests per day). The base plans above sum to roughly £170-£180; the upper end of the range allows for API overage charges.
For teams receiving 1000+ requests monthly, the automation saves easily £3000+ in labour cost and prevents the strategic damage of missing important feature requests.