Most product teams drown in feedback. Your Slack fills with feature requests. Your support inbox bulges with complaints. Your survey platform holds hundreds of responses. But here's the problem: you have no systematic way to spot what customers actually care about, what's broken, or what should ship next quarter. You read the loudest voices, guess at trends, and hope your roadmap lands right. It rarely does.

What if you could process all of it at once? Pull every piece of customer feedback from every source, automatically identify patterns across thousands of data points, and generate prioritised roadmap recommendations without touching a single row of a spreadsheet. That's not fantasy. It's a workflow you can build in an afternoon.

The workflow below combines PDF analysis, paper-style explanation tools, AI agent orchestration, and content summarisation to turn raw feedback into actionable intelligence. No manual copying and pasting. No spreadsheet hell. Just data in, insights out.
The Automated Workflow
You'll use Make (formerly Integromat) as the orchestration backbone because it handles file uploads, API calls, and conditional logic in a straightforward way. The workflow moves like this: feedback enters from multiple sources and gets consolidated into a single document, which flows through Chat With PDF by Copilot.us for initial analysis. It then branches into two parallel processes: one uses Explainpaper to clarify complex feedback, the other uses MindPal to coordinate a multi-agent team that identifies themes and priorities. Finally, Smmry generates a clean summary, and the output lands in your email or a shared Slack channel.
Step 1: Collect feedback from multiple sources
Start by gathering feedback from your existing systems. Export survey responses as CSV from Typeform or SurveySparrow, download support tickets from your helpdesk (Zendesk, Freshdesk, etc.), and pull user review data from your app store or review aggregator. Create a simple Python script or use Make's built-in HTTP modules to fetch data from each source:
GET https://YOUR_SUBDOMAIN.zendesk.com/api/v2/search.json?query=type:ticket%20status:open
Authorization: Bearer YOUR_ZENDESK_API_TOKEN
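If you'd rather script this step than configure an HTTP module, here is a minimal Python sketch using only the standard library. Note that the Zendesk Search API is a GET endpoint; `YOUR_SUBDOMAIN` and the token are placeholders you'd swap for your own account details.

```python
import json
import urllib.parse
import urllib.request

def build_search_url(subdomain, query):
    """Build a Zendesk Search API URL for the given query string."""
    params = urllib.parse.urlencode({"query": query})
    return f"https://{subdomain}.zendesk.com/api/v2/search.json?{params}"

def fetch_open_tickets(subdomain, api_token):
    """Fetch open tickets via the Zendesk Search API (a GET request)."""
    url = build_search_url(subdomain, "type:ticket status:open")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]

# Example (requires real credentials):
# tickets = fetch_open_tickets("yourcompany", "YOUR_ZENDESK_API_TOKEN")
```

The same pattern works for Typeform or SurveySparrow exports: build the URL, attach the auth header, parse the JSON response.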
Merge all three data sources into a single plain-text document, organised by source and date. Make's text aggregation module can handle this, or you can write a quick script that concatenates everything:
```python
import json
from datetime import datetime

survey_data = json.load(open('surveys.json'))
tickets_data = json.load(open('tickets.json'))
reviews_data = json.load(open('reviews.json'))

output = []
output.append("=== CUSTOMER FEEDBACK ANALYSIS ===\n")
output.append(f"Generated: {datetime.now().isoformat()}\n\n")

output.append("## SURVEY RESPONSES\n")
for survey in survey_data:
    output.append(f"Date: {survey['date']}\n")
    output.append(f"Response: {survey['text']}\n\n")

output.append("## SUPPORT TICKETS\n")
for ticket in tickets_data:
    output.append(f"ID: {ticket['id']}\n")
    output.append(f"Issue: {ticket['description']}\n\n")

output.append("## APP STORE REVIEWS\n")
for review in reviews_data:
    output.append(f"Rating: {review['rating']}/5\n")
    output.append(f"Review: {review['text']}\n\n")

with open('feedback_dump.txt', 'w') as f:
    f.write(''.join(output))
```
Save this as feedback_dump.txt and upload it to your Make workflow. Make will pass the file to the next step.
Step 2: Upload to Chat With PDF by Copilot.us
Chat With PDF lets you upload documents and ask questions of them using language models. In Make, use the HTTP module to POST your feedback document to Copilot.us:
POST https://api.copilot.us/v1/upload
Authorization: Bearer YOUR_COPILOT_API_KEY
Content-Type: multipart/form-data

file: feedback_dump.txt
This returns a document ID. Store it in a Make variable so you can reference it in the next steps.
Step 3: Run initial analysis through Chat With PDF
Now query the document to extract themes and problem areas:
POST https://api.copilot.us/v1/chat
Authorization: Bearer YOUR_COPILOT_API_KEY
Content-Type: application/json { "document_id": "doc_xyz123", "query": "What are the top 10 issues or feature requests mentioned across all feedback? List them with frequency count.", "model": "gpt-4o"
}
Make will receive a structured response listing the main themes. Save this output.
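If you script this step outside Make, a sketch like the following assembles and sends the same request. The endpoint and field names mirror the snippet above; treat them as illustrative and check them against the provider's actual API reference.

```python
import json
import urllib.request

# Endpoint as shown in the snippet above; verify against the real API docs.
API_URL = "https://api.copilot.us/v1/chat"

def build_chat_payload(document_id, query, model="gpt-4o"):
    """Assemble the JSON body for a document query."""
    return {"document_id": document_id, "query": query, "model": model}

def ask_document(document_id, query, api_key):
    """POST a query about an uploaded document and return the parsed reply."""
    payload = json.dumps(build_chat_payload(document_id, query)).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# answer = ask_document("doc_xyz123",
#                       "What are the top 10 issues mentioned across all feedback?",
#                       "YOUR_COPILOT_API_KEY")
```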
Step 4: Clarify ambiguous feedback using Explainpaper
Some feedback will be vague or technical. Explainpaper helps you highlight confusing passages and get plain-English explanations. Create a Make HTTP module that calls Explainpaper's API:
POST https://api.explainpaper.com/api/papers
Authorization: Bearer YOUR_EXPLAINPAPER_API_KEY
Content-Type: application/json

{
  "file": "feedback_dump.txt",
  "highlight_text": "segments that are unclear"
}
This returns clarified versions of tricky feedback. It's optional but valuable when customers use jargon or describe problems obliquely.
Step 5: Deploy MindPal multi-agent orchestration
This is where the real work happens. MindPal lets you build a team of AI agents, each with a specific role. Create three agents: an Analyst Agent that categorises feedback, a Prioritisation Agent that ranks by business impact, and a Strategist Agent that generates roadmap recommendations. In Make, call MindPal's workflow API:
POST https://api.mindpal.space/v1/workflows/run
Authorization: Bearer YOUR_MINDPAL_API_KEY
Content-Type: application/json { "workflow_id": "feedback_analysis_workflow", "input": { "feedback_text": "<content from Chat With PDF output>", "clarifications": "<content from Explainpaper>", "analysis_date": "2026-03-15" }, "agents": [ { "role": "Analyst", "instruction": "Group feedback into feature requests, bug reports, and complaints. Count frequency for each group." }, { "role": "Prioritisation", "instruction": "Rank each group by business impact (revenue, retention, churn risk). Use 1-5 scale." }, { "role": "Strategist", "instruction": "Generate quarterly roadmap recommendations. Include rationale for top 5 priorities." } ]
}
MindPal coordinates these agents, passes outputs from one to the next, and returns a structured recommendation document.
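If you build this call in code rather than in Make, it helps to keep the agent roles in one place. A sketch of a request-body builder, using the same three roles and instructions as the snippet above (the workflow ID is a placeholder):

```python
def build_mindpal_request(feedback_text, clarifications, analysis_date,
                          workflow_id="feedback_analysis_workflow"):
    """Assemble the workflow request body with the three agent roles."""
    agents = [
        {"role": "Analyst",
         "instruction": ("Group feedback into feature requests, bug reports, "
                         "and complaints. Count frequency for each group.")},
        {"role": "Prioritisation",
         "instruction": ("Rank each group by business impact (revenue, "
                         "retention, churn risk). Use 1-5 scale.")},
        {"role": "Strategist",
         "instruction": ("Generate quarterly roadmap recommendations. "
                         "Include rationale for top 5 priorities.")},
    ]
    return {
        "workflow_id": workflow_id,
        "input": {
            "feedback_text": feedback_text,
            "clarifications": clarifications,
            "analysis_date": analysis_date,
        },
        "agents": agents,
    }
```

Centralising the instructions like this makes it easy to tweak one agent's brief without touching the rest of the pipeline.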
Step 6: Summarise and format with Smmry
Your roadmap recommendation might be long and wordy. Use Smmry to distil it to essentials:
POST https://api.smmry.com/summarise
Authorization: Bearer YOUR_SMMRY_API_KEY
Content-Type: application/json { "text": "<MindPal roadmap output>", "length": 5
}
This returns a concise version, ideal for sending to stakeholders.
Step 7: Send results to Slack or email
In Make, add a final HTTP module to post results to Slack or send an email:
POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Content-Type: application/json { "channel": "#product-insights", "username": "Product Analyst Bot", "text": "Weekly feedback analysis complete. See thread for roadmap recommendations.", "blocks": [ { "type": "section", "text": { "type": "mrkdwn", "text": "<Smmry summary output>" } } ]
}
Schedule this entire Make workflow to run weekly or monthly. Set a cron trigger, and you'll have fresh roadmap recommendations every cycle without lifting a finger.
The Manual Alternative
If you want more control over how each step works, skip Make and run the workflow manually. Export feedback monthly, upload it to Chat With PDF yourself, query it with custom questions tailored to your current strategy, and review Explainpaper clarifications line-by-line before passing them to MindPal. This takes 2-3 hours but gives you space to apply your own judgment at each stage. It's slower, but sometimes necessary if your product strategy shifts mid-cycle or you need to focus analysis on a specific customer segment.
Pro Tips
Watch your API rate limits.
Chat With PDF and MindPal bill per request.
If you have 10,000+ feedback items, break them into batches. Process 2,000 items per run rather than all at once. Make can loop through batches without overloading any single API call.
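Batching is a one-liner in Python if you run the collection step yourself. A sketch of a simple fixed-size batcher:

```python
def batches(items, batch_size=2000):
    """Yield successive fixed-size batches from a list of feedback items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# 10,500 items -> five full batches of 2,000 plus one partial batch of 500
# for chunk in batches(feedback_items, 2000):
#     process(chunk)  # one API call per chunk
```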
Store all outputs in a versioned knowledge base.
Don't discard the roadmap recommendations after you read them. Archive them monthly in a shared folder (Google Drive, Notion, or a simple database). This gives you a historical record of how customer priorities have shifted and proves whether your roadmap actually responded to feedback.
Use GPT-4o for Chat With PDF queries, not GPT-4o mini.
The larger model catches subtle patterns across thousands of lines of feedback. GPT-4o mini will miss context. The cost difference is small; the insight difference is large.
De-duplicate feedback before uploading.
If the same customer submits the same request twice, remove duplicates. Chat With PDF will treat each instance as equally important, which skews your frequency counts. A quick Python script using fuzzy matching (the difflib library) takes seconds and improves accuracy.
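A sketch of that de-duplication pass with `difflib.SequenceMatcher`; the 0.9 similarity threshold is a starting point you should tune. This is an O(n²) comparison, fine for a few thousand items, but consider hashing or embeddings at larger scale:

```python
from difflib import SequenceMatcher

def deduplicate(feedback, threshold=0.9):
    """Drop feedback items that are near-duplicates of an item already kept."""
    kept = []
    for item in feedback:
        text = item.lower().strip()
        # Skip this item if it closely matches anything we've already kept.
        if any(SequenceMatcher(None, text, k).ratio() >= threshold
               for k in kept):
            continue
        kept.append(text)
    return kept
```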
Set thresholds for roadmap inclusion.
Don't recommend a feature unless at least 5% of feedback mentions it, or unless the impact score is above 4 out of 5. This filters out noise. Adjust the threshold based on your feedback volume; smaller teams may need lower thresholds.
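The threshold rule is easy to encode. A sketch, assuming each theme carries a mention count and a 1-5 impact score from the prioritisation step:

```python
def roadmap_candidates(themes, total_items, min_share=0.05, min_impact=4):
    """Keep themes mentioned by at least min_share of all feedback,
    or whose impact score is at least min_impact (1-5 scale).

    themes: list of dicts like {"name": ..., "count": ..., "impact": ...}
    """
    selected = []
    for theme in themes:
        share = theme["count"] / total_items
        if share >= min_share or theme["impact"] >= min_impact:
            selected.append(theme["name"])
    return selected
```

Lowering `min_share` for a small feedback volume, as the tip suggests, is just a matter of passing a different argument.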
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Chat With PDF by Copilot.us | Pay-as-you-go | £10-50 | £0.01 per API call; scales with feedback volume. |
| Explainpaper | Free or Pro (£8) | £0-8 | Free tier covers up to 10 paper uploads/month. Pro unlimited. |
| MindPal | Professional (£49) | £49 | Includes 50k tokens/month. Multi-agent workflows included. |
| Smmry | Pay-as-you-go | £5-20 | £0.05 per summarisation request. 100 requests = £5. |
| Make (Integromat) | Standard (£9.99) | £9.99 | Includes 10k operations/month. Sufficient for weekly automation. |
| Total | — | £74-136 | Varies based on feedback volume and frequency. |