Introduction
Building a SaaS application traditionally takes weeks of planning, design sprints, and development cycles. You need backend infrastructure, database schemas, authentication systems, and frontend interfaces all built in concert. Most solo founders or small teams would need at least a month of focused work, assuming you already know how to code.
The landscape has shifted. Modern AI coding assistants paired with no-code orchestration platforms now allow you to assemble functional SaaS applications in days rather than months. The catch: you need to know which tools to combine and how to wire them together without manual handoffs. This Alchemy recipe shows you exactly that.
We'll build a real example: a simple customer feedback analysis tool that collects submissions via a web form, runs them through AI for sentiment analysis and categorisation, stores the results in a database, and sends weekly summary emails to administrators. The entire workflow runs automatically once deployed.
The Automated Workflow
Architecture Overview
Your application will consist of five main components:
- A form endpoint (hosted via Vercel or similar)
- An orchestration engine (Zapier, n8n, or Make) that triggers when submissions arrive
- AI processing layers (Claude API for analysis, GPT-4 for categorisation)
- Data persistence (Supabase PostgreSQL, Airtable, or similar)
- Notification dispatch (SendGrid or Resend for emails)
The entire chain runs without human intervention. Submissions flow in, get processed, stored, and summarised automatically.
Choosing Your Orchestration Tool
For this weekend-build scenario, n8n offers the best balance. It's self-hostable (free tier available on Railway or Render), has superior API flexibility compared to Zapier, and costs nothing if you host it yourself. Make works if you prefer a fully managed solution, though pricing escalates quickly. Zapier is the most intuitive but becomes expensive beyond basic workflows. Claude Code is excellent for backend logic but doesn't handle scheduling or webhook management as elegantly.
We'll use n8n as our primary orchestrator.
Step 1: Create Your Form Endpoint
First, you need a place for users to submit feedback. Deploy a simple Next.js form to Vercel:
// pages/api/feedback.js
export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  const { email, message, company } = req.body;

  // Send webhook to n8n
  const response = await fetch(process.env.N8N_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      timestamp: new Date().toISOString(),
      email,
      message,
      company,
      source: 'web_form'
    })
  });

  if (!response.ok) {
    return res.status(500).json({ error: 'Failed to process feedback' });
  }

  return res.status(200).json({ success: true });
}
Deploy this to Vercel (free tier works fine). The N8N_WEBHOOK_URL environment variable points at the webhook you'll create in n8n in the next step; it will look something like https://your-instance.n8n.cloud/webhook/feedback-submission.
Step 2: Configure n8n Workflow
Create a new n8n workflow with these nodes in sequence:
- Webhook trigger node (receives POST from your form)
- Claude API node (analyses sentiment)
- GPT-4 node (categorises feedback type)
- Supabase insert node (stores in database)
- Email digest node (queues for weekly send)
Set up your webhook node first. In n8n, create a new workflow and add a Webhook trigger node. Configure it to listen for POST requests.
{
"path": "feedback-submission",
"httpMethod": "POST",
"responseMode": "onReceived"
}
This generates your webhook URL that you'll use in the form endpoint above.
Step 3: Add Claude for Sentiment Analysis
Next, insert an HTTP Request node to call Claude's API. Claude excels at nuanced sentiment analysis without needing fine-tuning:
curl https://api.anthropic.com/v1/messages \
-X POST \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": "Analyse the sentiment of this customer feedback as either positive, neutral, or negative. Also provide a confidence score from 0-100. Return only valid JSON.\n\nFeedback: {{ $json.body.message }}"
}
]
}'
In n8n, configure the HTTP Request node with these settings:
URL: https://api.anthropic.com/v1/messages
Method: POST
Headers:
x-api-key: {{ $env.ANTHROPIC_API_KEY }}
anthropic-version: 2023-06-01
content-type: application/json
Body:
(use the JSON above, referencing previous node output)
Map the Claude response to extract sentiment and confidence score.
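Claude's reply arrives as free text inside `content[0].text`, and even with a "return only valid JSON" instruction, models occasionally wrap the object in code fences or surrounding prose. A defensive parser is worth the few lines. This sketch assumes the reply uses `sentiment` and `confidence` keys; the prompt above doesn't pin down exact key names, so adjust to whatever schema you enforce:

```python
import json
import re

def parse_sentiment_response(text):
    """Extract the sentiment JSON from a model reply, tolerating code
    fences or surrounding prose. Returns (sentiment, confidence), or
    (None, None) when nothing usable can be recovered."""
    # Pull out the first {...} span in the reply
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None, None
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None, None
    sentiment = data.get("sentiment")
    confidence = data.get("confidence")
    # Reject anything outside the three labels the prompt asked for
    if sentiment not in ("positive", "neutral", "negative"):
        return None, None
    return sentiment, confidence
```

In n8n you'd run this logic in a Code node between the Claude request and the Supabase insert, so downstream nodes see clean fields rather than raw model text.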
Step 4: Add GPT-4 for Categorisation
Add another HTTP Request node for OpenAI's API. This categorises what type of feedback each submission represents (feature request, bug report, pricing complaint, etc.):
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"role": "system",
"content": "Categorise the following feedback into one of these categories: FEATURE_REQUEST, BUG_REPORT, PRICING, SUPPORT, OTHER. Respond with only the category name."
},
{
"role": "user",
"content": "{{ $json.body.message }}"
}
]
}'
Configure this similarly in n8n. The response will be the category name.
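Even with a strict system prompt, models sometimes return the category with stray whitespace, punctuation, or different casing. A small normalisation step keeps your database values consistent; this is a hypothetical helper (e.g. for a Code node after the GPT request), not part of the n8n node itself:

```python
# The five categories from the system prompt above
ALLOWED_CATEGORIES = {"FEATURE_REQUEST", "BUG_REPORT", "PRICING", "SUPPORT", "OTHER"}

def normalise_category(raw):
    """Map the model's reply onto one of the allowed categories,
    falling back to OTHER for anything unexpected."""
    cleaned = raw.strip().strip(".").upper().replace(" ", "_")
    return cleaned if cleaned in ALLOWED_CATEGORIES else "OTHER"
```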
Step 5: Store Everything in Supabase
Create a PostgreSQL table in Supabase to persist all feedback with analysis results:
CREATE TABLE feedback (
id BIGSERIAL PRIMARY KEY,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
email VARCHAR(255),
company VARCHAR(255),
message TEXT,
sentiment VARCHAR(20),
sentiment_confidence INTEGER,
category VARCHAR(50),
source VARCHAR(50)
);
CREATE INDEX idx_created_at ON feedback(created_at DESC);
In your n8n workflow, add a Supabase Insert node. Configure it with your Supabase project URL and API key:
Service: Supabase
Operation: Insert Row
Table: feedback
Data to Insert:
email: {{ $('Webhook Trigger').item.json.body.email }}
company: {{ $('Webhook Trigger').item.json.body.company }}
message: {{ $('Webhook Trigger').item.json.body.message }}
sentiment: {{ $('Claude Sentiment').item.json.sentiment }}
sentiment_confidence: {{ $('Claude Sentiment').item.json.confidence }}
category: {{ $('GPT Categorise').item.json.category }}
source: {{ $('Webhook Trigger').item.json.body.source }}
Note that by the time execution reaches this node, {{ $json }} refers to the previous node's output, so fields from earlier nodes are referenced by node name (the names here assume you've named your nodes Webhook Trigger, Claude Sentiment, and GPT Categorise).
Step 6: Queue Weekly Summaries
Add a final Email Send node in n8n (or use SendGrid). Rather than sending immediately, store digest jobs using a separate table:
CREATE TABLE digest_queue (
id BIGSERIAL PRIMARY KEY,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
admin_email VARCHAR(255),
digest_date DATE,
sent BOOLEAN DEFAULT FALSE
);
Insert into this table at the end of your feedback workflow. Then create a separate n8n workflow triggered every Monday morning (using a Cron trigger node) that:
- Queries all feedback from the past week
- Generates a summary using Claude
- Sends the summary email to all admins
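The aggregation step above is simple enough to sketch in plain Python (for example inside an n8n Code node). Assuming each row carries the `sentiment` column from the table created earlier, the distribution reduces to a counting pass:

```python
from collections import Counter

def sentiment_distribution(rows):
    """Given feedback rows (dicts with a 'sentiment' key), return each
    sentiment's percentage share, rounded to one decimal place."""
    counts = Counter(row["sentiment"] for row in rows)
    total = sum(counts.values()) or 1  # avoid division by zero on empty weeks
    return {s: round(100 * counts.get(s, 0) / total, 1)
            for s in ("positive", "neutral", "negative")}
```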
Here's the summary generation prompt:
{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 2048,
"messages": [
{
"role": "user",
"content": "Generate a brief executive summary of this week's customer feedback. Include: total submissions, sentiment distribution (positive/neutral/negative percentages), top 3 feature requests, any critical bugs reported, and notable trends.\n\nFeedback data:\n{{ $json.feedbackData }}"
}
]
}
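The `{{ $json.feedbackData }}` placeholder needs the week's rows flattened into text before the prompt is sent. One hedged way to build that string, assuming rows shaped like the `feedback` table:

```python
def format_feedback_data(rows):
    """Flatten feedback rows into a compact text block for the summary
    prompt. Assumes each row has sentiment, category, and message keys."""
    return "\n".join(
        f"[{row['sentiment']}/{row['category']}] {row['message']}"
        for row in rows
    )
```

Keeping each row to one line keeps token usage predictable, which matters when a busy week pushes hundreds of submissions into a single prompt.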
Full n8n Workflow JSON
Here's a simplified sketch of the complete workflow structure. A real n8n export includes node type versions, credential references, and richer connection metadata, so treat this as a map of the wiring rather than a file you can import verbatim:
{
  "name": "Feedback Analysis & Storage",
  "nodes": [
    {
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "position": [100, 300],
      "webhookId": "feedback-submission"
    },
    {
      "name": "Claude Sentiment",
      "type": "n8n-nodes-base.httpRequest",
      "position": [300, 200],
      "parameters": {
        "url": "https://api.anthropic.com/v1/messages",
        "method": "POST",
        "headers": {
          "x-api-key": "={{ $env.ANTHROPIC_API_KEY }}",
          "anthropic-version": "2023-06-01"
        },
        "body": "={{ JSON.stringify({ model: 'claude-3-5-sonnet-20241022', max_tokens: 1024, messages: [{ role: 'user', content: 'Analyse sentiment as positive/neutral/negative with confidence 0-100. JSON only.\\n\\n' + $json.body.message }] }) }}"
      }
    },
    {
      "name": "GPT Categorise",
      "type": "n8n-nodes-base.httpRequest",
      "position": [300, 400],
      "parameters": {
        "url": "https://api.openai.com/v1/chat/completions",
        "method": "POST",
        "headers": {
          "Authorization": "=Bearer {{ $env.OPENAI_API_KEY }}"
        },
        "body": "={{ JSON.stringify({ model: 'gpt-4o-mini', messages: [{ role: 'system', content: 'Categorise as: FEATURE_REQUEST, BUG_REPORT, PRICING, SUPPORT, OTHER' }, { role: 'user', content: $json.body.message }] }) }}"
      }
    },
    {
      "name": "Store in Supabase",
      "type": "n8n-nodes-base.postgres",
      "position": [500, 300],
      "parameters": {
        "operation": "insert",
        "table": "feedback",
        "columns": "email, company, message, sentiment, sentiment_confidence, category, source",
        "values": "={{ $('Webhook Trigger').item.json.body.email }}, {{ $('Webhook Trigger').item.json.body.company }}, {{ $('Webhook Trigger').item.json.body.message }}, {{ $('Claude Sentiment').item.json.sentiment }}, {{ $('Claude Sentiment').item.json.confidence }}, {{ $('GPT Categorise').item.json.category }}, {{ $('Webhook Trigger').item.json.body.source }}"
      }
    }
  ],
  "connections": {
    "Webhook Trigger": { "main": [["Claude Sentiment", "GPT Categorise"]] },
    "Claude Sentiment": { "main": [["Store in Supabase"]] },
    "GPT Categorise": { "main": [["Store in Supabase"]] }
  }
}
Deploying n8n
Host n8n on Railway (easiest) or Render (cheapest). Both offer free tiers that support simple workflows. Connect your environment variables:
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
SUPABASE_URL=https://xxxx.supabase.co
SUPABASE_KEY=eyJxxx
The Manual Alternative
If you prefer more control or need custom business logic, use Claude Code (or GitHub Copilot) to generate Python scripts that run on a schedule. This trades automation for flexibility.
Create a Python script using the same API calls:
import anthropic
import openai
import supabase
import os
from datetime import datetime


def analyse_feedback(message):
    client = anthropic.Anthropic(api_key=os.getenv('ANTHROPIC_API_KEY'))
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Analyse sentiment as positive/neutral/negative with confidence 0-100. JSON only.\n\n{message}"
        }]
    )
    return response.content[0].text


def categorise_feedback(message):
    client = openai.OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Categorise as: FEATURE_REQUEST, BUG_REPORT, PRICING, SUPPORT, OTHER"},
            {"role": "user", "content": message}
        ]
    )
    return response.choices[0].message.content


def store_feedback(email, company, message, sentiment, confidence, category):
    db = supabase.create_client(
        os.getenv('SUPABASE_URL'),
        os.getenv('SUPABASE_KEY')
    )
    db.table('feedback').insert({
        'email': email,
        'company': company,
        'message': message,
        'sentiment': sentiment,
        'sentiment_confidence': confidence,
        'category': category,
        'source': 'manual_script',
        'created_at': datetime.now().isoformat()
    }).execute()


if __name__ == "__main__":
    # Fetch unprocessed feedback from your form
    # Process each one
    # Store results
    pass
Deploy this to AWS Lambda (free tier) or Cloud Run and trigger it every 15 minutes. You'll lose the real-time trigger but gain complete control over error handling and custom logic.
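One way to fill in that placeholder `__main__` block is to keep the loop itself free of API details, injecting the three operations as callables so the control flow stays testable without network access. This is a sketch of one possible design, not the only one; it assumes `analyse` returns an already-parsed dict rather than raw model text:

```python
def process_batch(items, analyse, categorise, store):
    """Run each pending feedback item through analysis, categorisation,
    and storage. Returns the number of items processed successfully.
    The three callables are injected so the loop can be unit-tested."""
    processed = 0
    for item in items:
        try:
            sentiment = analyse(item["message"])
            category = categorise(item["message"])
            store(item["email"], item.get("company"), item["message"],
                  sentiment["sentiment"], sentiment["confidence"], category)
            processed += 1
        except Exception:
            # Leave failed items unprocessed so the next scheduled run
            # retries them instead of losing them
            continue
    return processed
```

In production you'd pass wrappers around `analyse_feedback`, `categorise_feedback`, and `store_feedback` from the script above; in tests you pass stubs.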
Pro Tips
Error Handling and Retries
n8n's built-in error handling will save you hours. Configure each HTTP node with automatic retries on failure:
Error Handling: Retry
Max Retries: 3
Retry Delay: Exponential (5s base)
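With a 5-second base and three retries, exponential backoff produces delays of 5, 10, and 20 seconds. If you port the workflow to the Python script later, the same schedule is easy to reproduce; the optional jitter term (not part of the n8n settings above) helps avoid synchronised retry storms:

```python
import random

def retry_delays(max_retries=3, base=5.0, jitter=0.0):
    """Exponential backoff schedule matching the settings above:
    base * 2^attempt seconds per attempt, plus optional random jitter."""
    return [base * (2 ** attempt) + random.uniform(0, jitter)
            for attempt in range(max_retries)]
```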
For rate limits, add a delay node between the Claude and GPT requests (1-2 seconds). Rate limits on both providers depend on your account tier, so check your dashboards and order the requests around whichever limit you hit first.
Cost Optimisation
Use Claude 3.5 Sonnet for the nuanced sentiment analysis, and a cheap small model such as GPT-4o-mini for the simpler categorisation task; only move categorisation to a larger model if accuracy proves insufficient. Test with smaller models first. You'll spend roughly £40-80 monthly if processing 1000+ feedback items. Supabase's free tier covers 500 MB of storage; Vercel's free tier covers your form.
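To sanity-check your own budget, the arithmetic is just total tokens times a per-token rate. The helper below is purely illustrative; substitute the currently published per-million-token prices for whichever models you pick, since rates change frequently:

```python
def monthly_api_cost(items_per_month, tokens_per_item, price_per_million_tokens):
    """Rough monthly spend for one model: total tokens scaled by a
    per-million-token rate (input and output priced together here
    for simplicity; real pricing separates them)."""
    total_tokens = items_per_month * tokens_per_item
    return total_tokens / 1_000_000 * price_per_million_tokens
```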
Database Indexing
After a few weeks, your feedback table will grow. Add indices on frequently queried columns:
CREATE INDEX idx_sentiment ON feedback(sentiment);
CREATE INDEX idx_category ON feedback(category);
CREATE INDEX idx_company ON feedback(company);
Webhook Security
Protect your n8n webhook from random POST requests. Add basic authentication by generating a secret token and validating it in your form endpoint:
const token = req.headers['x-webhook-token'];
if (token !== process.env.WEBHOOK_SECRET) {
  return res.status(401).json({ error: 'Unauthorised' });
}
Send this token as a header with your form submissions, and forward it on the webhook call to n8n as well, where the Webhook node's header authentication can verify it, so that both the form endpoint and the n8n webhook reject unauthenticated requests.
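If you later move validation into the Python script, prefer a constant-time comparison so token checks don't leak timing information; `hmac.compare_digest` is the standard-library tool for this:

```python
import hmac

def valid_token(received, expected):
    """Compare a submitted webhook token against the configured secret
    in constant time, rejecting missing values outright."""
    if received is None or expected is None:
        return False
    return hmac.compare_digest(received, expected)
```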
Monitoring and Logging
Enable n8n's built-in execution logs. Check them daily for the first week to catch bugs. Set up a simple Slack integration to receive notifications when any workflow fails:
Slack Webhook -> n8n Slack Node -> Message: "Feedback workflow failed on {{ $json.body.email }}"
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Claude API | Pay-as-you-go | £15-40 | Based on token usage; sentiment analysis is cheap |
| OpenAI (GPT-4o-mini) | Pay-as-you-go | £5-15 | Minimal cost for simple categorisation |
| n8n (self-hosted) | Free | £0 | Deploy on Railway/Render free tier |
| Supabase | Free tier | £0 | Includes 500MB storage; upgrade at £25/month if needed |
| Vercel | Hobby | £0 | Perfect for form endpoint |
| SendGrid | Free tier | £0 | 100 emails/day; upgrade at £20/month for more |
| Total (small scale) | All free/minimal | £20-55 | Process 500-1000 feedback items monthly |
Optional upgrades after launch:
- n8n Cloud (managed): £25-100/month if you don't want to self-host
- Supabase Pro: £25/month for higher limits and custom support
- Claude API volume: Negotiate pricing if processing >100k items monthly
Building a functional SaaS in a weekend is now entirely realistic. The work isn't in coding anymore; it's in connecting existing services without introducing manual steps. Start with this feedback tool, then adapt the pattern to invoice processing, lead qualification, content moderation, or any workflow that benefits from AI analysis plus database storage plus notifications. The orchestration layer remains the same.