Alchemy Recipe · Intermediate · Workflow

Customer feedback analysis to product roadmap alignment workflow

24 March 2026

Introduction

Product teams spend countless hours wrestling with customer feedback. You collect it from support tickets, surveys, reviews, and social media, then manually read through it all to identify patterns. Someone summarises the themes. Someone else cross-references them with your product strategy. A roadmap meeting happens. Nothing gets automated, and the whole cycle repeats next month.

That manual cycle is the embodiment of wasted effort. The data exists. The insights are there. The connection to your roadmap is obvious. Yet you're paying humans to do work that machines can do faster and more consistently.

This Alchemy workflow connects three specialist AI tools to build a fully automated pipeline from raw feedback to roadmap-aligned priorities. You collect feedback at one end, and actionable insights flow out the other, ready for your next planning cycle. No copy-pasting. No manual summaries. No rewriting the same analysis twice.

The Automated Workflow

We'll build this workflow using four key stages: feedback collection, content summarisation, sentiment and theme extraction, and roadmap alignment scoring. The orchestration backbone uses n8n, which gives you good visibility into data flow and reliable scheduling, though Zapier or Make work just as well with slightly different syntax.

Stage 1: Trigger and Data Gathering

Your workflow kicks off on a schedule (daily, weekly, or whenever works for your team). The trigger pulls feedback from multiple sources simultaneously using webhooks or API calls. For a typical setup, you'd pull from:

  • Your support ticketing system (Zendesk, Intercom, Freshdesk)
  • Customer survey responses (Typeform, SurveyMonkey)
  • Social media mentions (through a tool like Mention or manual export)
  • In-app feedback widgets

In n8n, this looks like multiple HTTP Request nodes firing in parallel, each authenticated to your data source. Here's a generic example of pulling from a support API:


GET /api/v2/tickets?status=solved&created_after=2024-01-15&limit=100
Authorization: Bearer YOUR_API_KEY

Each source returns structured data (ticket ID, customer email, feedback text, timestamp). You'll want to filter for a time window; weekly is common to avoid reprocessing old feedback.
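Outside n8n, the windowing step is a one-liner worth getting right. Here's a minimal Python sketch; the `created_at` field name and ISO-8601 timestamps are assumptions about your source's schema, so adjust to match what your APIs actually return:

```python
from datetime import datetime, timedelta, timezone

def filter_window(items, days=7, now=None):
    """Keep only feedback items created within the last `days` days.

    `items` are dicts carrying an ISO-8601 `created_at` field (assumed schema).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [
        item for item in items
        if datetime.fromisoformat(item["created_at"]) >= cutoff
    ]
```

Running this once, just after the Merge of your source nodes, means every downstream stage only ever sees the current window.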

Stage 2: Summarisation with Resoomer AI

Now you have potentially hundreds of feedback snippets, each between 50 and 2,000 words. Resoomer AI compresses these into digestible summaries, which become your working dataset for analysis. Rather than feed it individual comments, batch your feedback into logical chunks: one request per product area, or one per week.

Resoomer AI's API is straightforward. You send plain text and request a summary with a specific reduction ratio:


POST https://api.resoomer.com/summarize
Content-Type: application/json
Authorization: Bearer YOUR_RESOOMER_API_KEY

{
  "doc": "Customer feedback text here. Customer says the mobile app crashes when...",
  "type": "text",
  "language": "en",
  "ratio": 0.3
}

The response gives you a compressed version at roughly 30% of the original length. This is critical because downstream API costs scale with token usage. A 50-word summary costs a fraction of processing the original 500-word ticket.

In your n8n workflow, add a Loop node that batches your feedback (say, 10 items per request to Resoomer), then passes the summarised output to the next stage. Store these summaries in a temporary variable or database table.
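The batching itself is trivial but easy to get off by one. A sketch of the chunking logic (the batch size of 10 mirrors the suggestion above; tune it to whatever the API tolerates):

```python
def batch(items, size=10):
    """Split a list of feedback items into chunks of `size`,
    one chunk per summarisation request."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

In n8n the Loop Over Items node does the equivalent with its "batch size" setting; the sketch is just the contract to check against.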

Stage 3: Extraction with Bhava AI

Bhava AI specialises in sentiment analysis and emotion detection. It's more nuanced than simple positive/negative/neutral scoring; it identifies underlying emotional drivers. You send summarised feedback and get back structured data on sentiment, detected emotions, intensity, and key phrases.


POST https://api.bhava.ai/v1/analyze
Content-Type: application/json
Authorization: Bearer YOUR_BHAVA_API_KEY

{
  "text": "Summary text from previous step",
  "analyse_sentiment": true,
  "analyse_emotions": true,
  "extract_entities": true,
  "language": "en"
}

The response looks roughly like this:


{
  "sentiment": {
    "score": 0.65,
    "label": "positive"
  },
  "emotions": [
    {
      "emotion": "frustration",
      "intensity": 0.8
    },
    {
      "emotion": "hopefulness",
      "intensity": 0.4
    }
  ],
  "entities": [
    "mobile app",
    "crash",
    "login"
  ],
  "keyphrases": [
    "crashes on startup",
    "loses data"
  ]
}

Add another Loop node in n8n to send each summarised feedback item to Bhava AI in parallel (your orchestrator handles rate limiting). Store the response alongside the original feedback.
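Storing the raw JSON response works, but flattening it first makes the aggregation stage much simpler. A sketch of that flattening, assuming the response shape shown above (field names like `feedback_id` are illustrative, not part of any API):

```python
def flatten_analysis(feedback_id, response):
    """Flatten a sentiment-analysis response into one flat record
    that can be stored alongside the original feedback item."""
    emotions = response.get("emotions", [])
    dominant = max(emotions, key=lambda e: e["intensity"]) if emotions else None
    return {
        "feedback_id": feedback_id,
        "sentiment_score": response["sentiment"]["score"],
        "sentiment_label": response["sentiment"]["label"],
        "dominant_emotion": dominant["emotion"] if dominant else None,
        "keyphrases": response.get("keyphrases", []),
    }
```

One flat record per feedback item is also the shape the roadmap-scoring request in the next stage wants.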

Stage 4: Roadmap Alignment with Terrakotta AI

This is where the workflow gets clever. Terrakotta AI is built for product intelligence; it understands roadmap language and can score feedback against your strategic priorities. You feed it the extracted themes and emotions from Bhava AI, plus a reference document describing your roadmap, product strategy, or quarterly goals.


POST https://api.terrakotta.ai/v1/score-alignment
Content-Type: application/json
Authorization: Bearer YOUR_TERRAKOTTA_API_KEY

{
  "feedback": {
    "text": "Users find the mobile app crashes on startup",
    "sentiment": 0.65,
    "emotion": "frustration",
    "keyphrases": [
      "crashes on startup",
      "loses data"
    ]
  },
  "roadmap_context": "Q1 2024: Mobile app stability and reliability. Q2 2024: New collaboration features.",
  "score_against": [
    "Mobile app stability",
    "New collaboration features",
    "Performance optimisation"
  ]
}

The response assigns alignment scores and priority recommendations:


{
  "alignment_scores": {
    "Mobile app stability": 0.95,
    "New collaboration features": 0.1,
    "Performance optimisation": 0.85
  },
  "recommended_priority": "high",
  "reasoning": "Directly aligns with Q1 stability goal; high sentiment intensity suggests user frustration"
}

In n8n, use a Loop node to send each piece of feedback through Terrakotta AI, then aggregate the results by roadmap priority.
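The aggregation step is just "group each item under its best-scoring priority". A sketch, assuming each scored item carries the `alignment_scores` map shown above (the 0.5 threshold is an arbitrary cut-off you'd tune):

```python
def top_priority(alignment_scores, threshold=0.5):
    """Return the roadmap priority with the highest alignment score,
    or None if nothing clears the threshold."""
    name, score = max(alignment_scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

def group_by_priority(results):
    """Group scored feedback items by their winning roadmap priority."""
    groups = {}
    for item in results:
        key = top_priority(item["alignment_scores"]) or "unaligned"
        groups.setdefault(key, []).append(item)
    return groups
```

Keeping an explicit "unaligned" bucket is deliberate: feedback that matches no priority is itself a signal worth surfacing in the report.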

Stage 5: Aggregation and Output

Your final step consolidates everything into a structured output document. This is where you create the actual deliverable: a weekly or monthly feedback report aligned to your roadmap. Use a Merge node to combine data from all upstream steps, then group by roadmap priority or product area.


{
  "report_period": "2024-01-15 to 2024-01-22",
  "generated_at": "2024-01-22T09:00:00Z",
  "summary_by_roadmap_priority": {
    "Mobile app stability": {
      "total_feedback_items": 34,
      "average_sentiment": 0.58,
      "dominant_emotions": ["frustration", "disappointment"],
      "key_themes": [
        "crashes on startup",
        "data loss",
        "slow performance"
      ],
      "sample_quotes": ["...", "..."]
    },
    "New collaboration features": {
      "total_feedback_items": 8,
      "average_sentiment": 0.72,
      ...
    }
  }
}

Push this output to wherever your team consumes product insights: a Google Doc, a shared Slack channel, a database table, or a custom dashboard. Use n8n's Send Email or Slack nodes to notify stakeholders when the report is ready.
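The per-priority rollup in that report is simple arithmetic. A sketch of how one group's fields could be computed, assuming the flat records described earlier (`sentiment_score`, `dominant_emotion` are illustrative field names):

```python
from collections import Counter

def summarise_group(items):
    """Aggregate one roadmap-priority group into report fields:
    item count, mean sentiment, and the two most common emotions."""
    emotions = Counter(
        i["dominant_emotion"] for i in items if i.get("dominant_emotion")
    )
    return {
        "total_feedback_items": len(items),
        "average_sentiment": round(
            sum(i["sentiment_score"] for i in items) / len(items), 2
        ),
        "dominant_emotions": [e for e, _ in emotions.most_common(2)],
    }
```

Run this once per group from the previous stage and you have the `summary_by_roadmap_priority` object shown above.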

Complete n8n Workflow Structure

Here's a conceptual flow:


[Schedule Trigger] 
  ↓
[HTTP Request: Support API]   [HTTP Request: Survey API]   [HTTP Request: Social Media]
             ↓                             ↓                            ↓
                                       [Merge]
  ↓
[Loop: Batch to Resoomer AI]
  ↓
[Loop: Each Summary to Bhava AI]
  ↓
[Loop: Each Result to Terrakotta AI]
  ↓
[Merge + Aggregate Results]
  ↓
[Send to Google Drive / Slack / Database]

Each API call should have error handling (retry logic, timeout management). In n8n, use Try/Catch blocks and set reasonable timeouts (30 seconds per API request is sensible).
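If you ever move part of this pipeline into a script, the retry logic is worth writing once and reusing. A minimal sketch with exponential backoff (the injectable `sleep` is just there to keep it testable; the 30-second timeout lives on the HTTP request itself):

```python
import time

def call_with_retry(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Run an API call, retrying with exponential backoff on failure.

    `fn` is any zero-argument callable wrapping the HTTP request.
    Re-raises the last error once retries are exhausted.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

n8n's own retry-on-fail node setting does the same job declaratively; this is the behaviour to configure it towards.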

Configuration Details

Set up your n8n credentials securely in the Credentials panel. For each tool:

  • Create separate credential entries for each API key
  • Use environment variables in production (n8n expressions can read them via $env)
  • Test each connection before deploying the full workflow

For scheduling, use the Schedule node set to run weekly on Monday morning at 9am UTC. Adjust the cron expression to your preference:


0 9 * * 1

This fires at 9am every Monday in whatever timezone your n8n instance or Schedule node is configured to use; set it to UTC to match the above.

The Manual Alternative

If you prefer a human in the loop, you can modify this workflow to generate draft reports that pause for manual review before distribution. Add an approval step using n8n's Wait node, which can be triggered by an email approval link.

Replace the final "Send to Slack" node with a Slack message that includes approval buttons. Clicking "Approve" resumes the workflow and sends the report. Clicking "Reject" pauses it and sends you a form to note what needs changing.

This hybrid approach sacrifices full automation for quality control. It's worth it if your team regularly disagrees with AI-generated categorisations or if you need to manually tweak recommendations before sharing them with leadership.

You can also use Claude Code (Anthropic's agentic coding tool) to write custom logic for grouping or filtering feedback before aggregation. This is helpful if your feedback has unusual structure or if you need domain-specific rules (e.g., "treat any mention of data loss as critical, regardless of sentiment score").

Pro Tips

1. Batch Your API Calls Wisely

Sending 500 individual feedback items to Bhava AI one at a time wastes time and money. Instead, batch 10-20 items per request where the API supports it. This reduces API calls by an order of magnitude and speeds up your workflow. Check each tool's API docs for batch endpoints; Bhava AI supports up to 25 items per request.

2. Cache Duplicate Feedback

If the same feedback appears multiple times (common with survey tools or support tickets from the same customer), process it once and reuse the result. Add a de-duplication step early in your workflow using a hash of the feedback text. This is especially valuable if you're running daily workflows; weekly feedback might include some carryover from previous weeks.
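A sketch of that de-duplication step, hashing normalised feedback text so trivial variants (case, surrounding whitespace) collapse to the same key; persisting the `seen` set across runs would also catch last week's carryover:

```python
import hashlib

def dedupe(items):
    """Drop feedback items whose normalised text has already been seen,
    keeping the first occurrence of each."""
    seen = set()
    unique = []
    for item in items:
        key = hashlib.sha256(
            item["text"].strip().lower().encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique
```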

3. Monitor Rate Limits Carefully

Each tool has rate limits. Resoomer AI allows 100 requests per hour on most plans. Bhava AI allows 1000 requests per day. Terrakotta AI allows 500 per day on standard plans. In n8n, add delay nodes between batches:


[Set Delay to 3 seconds] → [API Request] → [Repeat]

This ensures you stay well below rate limits without throttling the workflow unnecessarily. For large feedback volumes, spread your workflow over multiple days or adjust your data collection window.
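The delay-node pattern above amounts to "never dispatch two batches closer together than N seconds". A Python sketch of that pacing, with injectable `sleep` and `clock` purely so the timing is testable:

```python
import time

def paced(batches, min_interval=3.0, sleep=time.sleep, clock=time.monotonic):
    """Yield batches no faster than one every `min_interval` seconds,
    mirroring the 3-second delay node above."""
    last = None
    for b in batches:
        now = clock()
        if last is not None and now - last < min_interval:
            sleep(min_interval - (now - last))
        last = clock()
        yield b
```

Because it measures elapsed time rather than sleeping unconditionally, slow API calls already "pay for" the interval and the workflow doesn't stack unnecessary waits.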

4. Handle Errors Gracefully

Use Try/Catch blocks to handle timeout or API errors. If Bhava AI fails on one item, don't let it kill the entire workflow. Log the error and move on:


Try
  [Loop: Send to Bhava AI]
Catch
  [Log Error: Item ID, Error Message]
  [Add to Error Queue]

Retry failed items once at the end of the workflow. If they fail twice, flag them for manual review.

5. Cost Optimisation

The biggest cost driver is API calls. Optimise by:

  • Processing feedback less frequently (monthly instead of weekly) if your volume is small
  • Filtering out low-value feedback before processing (filter out one-word responses, spam, duplicates)
  • Using cheaper summarisation tools for initial pass; only run expensive sentiment analysis on promising items
  • Negotiating volume discounts with tool providers if you're processing thousands of items monthly

Start with weekly runs and move to daily only if your team can act on daily insights.

Cost Breakdown

Tool | Plan Needed | Monthly Cost | Notes
Resoomer AI | Professional | $30-50 | Pay per request or subscription; bulk cheaper
Bhava AI | Standard | $40-60 | 1,000 requests/day included; extra $0.02 per request
Terrakotta AI | Growth | $60-80 | 500 requests/day included; extra $0.05 per request
n8n (Orchestration) | Cloud Standard | $20 | 1,000 executions/month; most weeks use 8-12
Zapier (Alternative) | Professional | $99 | Better for simple workflows; pricier at scale
Make (Alternative) | Standard | $9.99 | Cheapest orchestration; slightly steeper learning curve
Total Estimated Monthly Cost | | $150-210 | Varies by feedback volume and execution frequency

If you're processing more than 500 feedback items weekly, consider upgrading Bhava AI and Terrakotta AI to higher tiers. If you're processing fewer than 50 items weekly, use Make instead of n8n to save $10-15.

This automation replaces roughly 8-12 hours of manual work per week. At a loaded cost of £50-70 per hour, you're looking at £400-840 in labour savings weekly, or roughly £1,600-3,360 monthly. The payback period is weeks, not months.

You now have a feedback-to-roadmap pipeline that runs on schedule, doesn't require babysitting, and produces consistent, comparable output every cycle. Your product team gets weekly or monthly insights instead of ad-hoc summaries based on whoever had time to read the feedback. That's not just efficiency; it's better product decisions.