## Introduction

You've just finished recording a two-hour podcast episode. Now comes the tedious part: transcribing it, extracting the best moments for TikTok and Instagram, writing show notes with timestamps, and maybe even generating a quick intro in a different voice for your next episode. If you're doing this manually, you're looking at four to six hours of work that could be compressed into minutes.

The problem gets worse at scale. Once you're publishing weekly, that's hundreds of hours per year spent on production tasks that follow the same pattern every single time. You're not adding creative value during these hours; you're performing mechanical labour that machines were designed to handle.

The good news is that modern AI tools have matured enough to handle the entire pipeline without you touching it once. Your podcast can go from raw MP3 file to a folder full of social clips, polished show notes, and alternative audio variations, all while you're working on your next recording.
## The Automated Workflow

This workflow uses n8n as the orchestration backbone because it offers reliable webhook handling, excellent error recovery, and granular control over timing and retries. The flow moves like this: an audio file arrives, it gets transcribed and summarised, clips are identified and cut, show notes are formatted, and everything lands in a Google Drive folder.

### Setting up the webhook trigger in n8n

Create a new workflow and add a Webhook node as your trigger. This gives you a unique URL where you'll send your audio files.
```http
POST https://your-n8n-instance.com/webhook/podcast-production

{
  "audio_url": "https://storage.example.com/episode-42.mp3",
  "episode_number": 42,
  "episode_title": "Understanding AI Workflows",
  "episode_date": "2026-03-15"
}
```
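It's worth rejecting malformed payloads before the workflow fans out. A minimal validation sketch, matching the field names in the example payload above (in n8n this logic would live in a Code node just after the Webhook node):

```python
# Required metadata fields from the webhook payload above.
REQUIRED_FIELDS = ("audio_url", "episode_number", "episode_title", "episode_date")

def validate_payload(payload: dict) -> list[str]:
    """Return a list of problems with an incoming payload (empty list = valid)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in payload]
    if "audio_url" in payload and not str(payload["audio_url"]).startswith("https://"):
        problems.append("audio_url must be an https URL")
    return problems
```

If the list comes back non-empty, respond with a 400 from the webhook rather than letting a bad episode limp halfway through the pipeline.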
The webhook should expect a JSON payload with the audio file URL and episode metadata. If you're recording directly into a tool like Riverside FM or Descript, you can configure their native webhooks to post to n8n automatically when recording completes.

### Transcription and summarisation with Shownotes AI

Add an HTTP Request node to call Shownotes AI. This tool specialises in podcast processing: it handles audio transcription, generates chapter breakdowns, and produces summaries.
```http
POST https://api.shownotes.ai/v1/process

{
  "audio_url": "{{ $json.audio_url }}",
  "language": "en",
  "output_format": "json",
  "include_chapters": true,
  "include_summary": true
}
```
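Downstream nodes are easier to wire up if you flatten the response into the handful of fields they actually need. A sketch of that step (the response shape here is an assumption about Shownotes AI's output, so adjust the keys to the real payload):

```python
def flatten_shownotes(response: dict) -> dict:
    """Extract the fields later nodes need from a transcription response.

    The keys "transcript", "summary" and "chapters" are assumed names;
    map them to whatever the API actually returns.
    """
    return {
        "transcript": response.get("transcript", ""),
        "summary": response.get("summary", ""),
        # Keep chapters as "MM:SS Title" lines, ready for the show notes.
        "chapters": "\n".join(
            f"{c.get('timestamp', '')} {c.get('title', '')}"
            for c in response.get("chapters", [])
        ),
    }
```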
Shownotes AI returns structured output with the full transcript, timestamped chapters, and a concise summary. Carry this data forward for the next steps using a Set node.

### Clip identification and generation

This is where many producers manually decide which moments deserve social amplification. Instead, use Claude Opus 4.6 to analyse the transcript and identify compelling 15-60 second segments. Add an HTTP Request node that calls the Anthropic Messages API:
```http
POST https://api.anthropic.com/v1/messages

{
  "model": "claude-opus-4.6",
  "max_tokens": 2048,
  "messages": [
    {
      "role": "user",
      "content": "Analyse this podcast transcript and identify 5 to 8 moments that would work well as 30-45 second social media clips. For each moment, provide: start timestamp, end timestamp, a hook sentence for the caption, and which platform it suits best (TikTok, Instagram Reels, YouTube Shorts). Return as JSON array.\n\n{{ $json.transcript }}"
    }
  ]
}
```
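Models sometimes wrap the JSON array in prose, and a suggestion with a bad or inverted timestamp would produce an unusable clip downstream, so parse the reply defensively. A sketch of that parsing step (the "start" and "end" keys are assumptions about the schema your prompt asked for):

```python
import json
import re

def parse_timestamp(ts: str) -> int:
    """Convert "MM:SS" (or "H:MM:SS") to seconds; raises ValueError on bad input."""
    if not re.fullmatch(r"(\d+:)?[0-5]?\d:[0-5]\d", ts):
        raise ValueError(f"bad timestamp: {ts!r}")
    seconds = 0
    for part in ts.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

def valid_clips(raw: str) -> list[dict]:
    """Parse the model's reply and keep only clips whose end time exceeds the start.

    A malformed timestamp raises, which the workflow's error branch can catch.
    """
    match = re.search(r"\[.*\]", raw, re.DOTALL)  # tolerate prose around the array
    clips = json.loads(match.group(0)) if match else []
    return [c for c in clips if parse_timestamp(c["end"]) > parse_timestamp(c["start"])]
```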
Claude will return structured JSON with clip suggestions and their timestamps. Parse this output and feed it into Clipwing, which handles the actual cutting.

### Cutting clips with Clipwing

Clipwing takes a video or audio file and timestamps, then generates short clips automatically. Set up an HTTP Request node for each suggested clip:
```http
POST https://api.clipwing.com/v1/clips

{
  "source_url": "{{ $json.audio_url }}",
  "clips": [
    {
      "start_time": "12:34",
      "end_time": "12:58",
      "title": "The AI Insight That Changed Everything",
      "output_format": "mp4"
    },
    {
      "start_time": "28:15",
      "end_time": "28:47",
      "title": "Why Everyone Gets This Wrong",
      "output_format": "mp4"
    }
  ]
}
```
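Rather than hand-writing a request like the one above per episode, you can assemble the body from the validated clip suggestions. A sketch, assuming each suggestion carries "start", "end" and "hook" keys (mirroring what the clip-identification prompt asked Claude to return):

```python
def clipwing_body(audio_url: str, suggestions: list[dict]) -> dict:
    """Assemble a Clipwing request body from validated clip suggestions.

    The suggestion keys ("start", "end", "hook") are assumed names from the
    earlier prompt; rename them if your schema differs.
    """
    return {
        "source_url": audio_url,
        "clips": [
            {
                "start_time": s["start"],
                "end_time": s["end"],
                "title": s["hook"],
                "output_format": "mp4",
            }
            for s in suggestions
        ],
    }
```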
Clipwing's API returns download URLs for each generated clip. Store these in a variable for the next step.

### Formatting show notes

Create a formatted document with the transcript, chapters, summary, and clip links. Use a Set node to build a Markdown template:
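A minimal template might look like this; the expressions follow the payload and field names used earlier in this workflow, so adjust them to however your nodes actually named their outputs:

```markdown
# Episode {{ $json.episode_number }}: {{ $json.episode_title }}

**Published:** {{ $json.episode_date }}

## Summary

{{ $json.summary }}

## Chapters

{{ $json.chapters }}

## Clips

{{ $json.clip_urls }}

## Full Transcript

{{ $json.transcript }}
```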
### Uploading to Google Drive

Add a Google Drive node to save the show notes and clips. Configure it to create a timestamped folder structure:

```
/Podcasts/Episode {{ $json.episode_number }} - {{ $json.episode_title }}/
  - show-notes.md
  - transcript.txt
  - clip-1-the-ai-insight.mp4
  - clip-2-why-everyone-gets-this-wrong.mp4
```
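Clip titles coming back from Claude contain spaces and punctuation, so derive filesystem-safe filenames before uploading. One possible naming scheme (a sketch; the truncation length is arbitrary):

```python
import re

def clip_filename(index: int, title: str, max_words: int = 3) -> str:
    """Build a filesystem-safe clip filename from a clip title.

    Lowercases the title, strips everything but letters, digits and spaces,
    then joins the first few words with hyphens.
    """
    cleaned = re.sub(r"[^a-z0-9 ]", "", title.lower())
    words = cleaned.split()[:max_words]
    return f"clip-{index}-{'-'.join(words)}.mp4"
```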
### Optional: Voice variant generation

If you want to experiment with different intros or outros, add an ElevenLabs node to generate audio variations using their Turbo v2.5 model:
```http
POST https://api.elevenlabs.io/v1/text-to-speech/{{ voice_id }}

{
  "text": "You're listening to Episode {{ $json.episode_number }}: {{ $json.episode_title }}. Today we explore {{ $json.summary }}.",
  "model_id": "eleven_turbo_v2_5",
  "voice_settings": {
    "stability": 0.7,
    "similarity_boost": 0.75
  }
}
```
This gives you a fresh audio intro that matches your host's voice without manual recording.

### Error handling and retries

Add error handling nodes after each API call. Most podcast workflows fail silently when a single step breaks. In n8n, use the node's error output (or an Error Trigger workflow) to implement a Try/Catch pattern for transcription timeouts or Clipwing failures:
```
Try {
  Call Shownotes AI
  Wait for response (max 5 minutes)
} Catch {
  Log error
  Send Slack notification
  Retry after 30 minutes
}
```
Configure Clipwing requests to retry up to 3 times with exponential backoff.
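Exponential backoff can be sketched as a small helper (a generic pattern, not n8n-specific; n8n's HTTP Request node also has built-in retry settings you can lean on instead):

```python
import random

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` (1-based), with full jitter.

    The ceiling doubles each attempt (2s, 4s, 8s, ...) up to `cap`, and a
    random point below it is chosen to avoid retries arriving in lockstep.
    """
    ceiling = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0, ceiling)
```

With three retries the worst-case added wait is 2 + 4 + 8 = 14 seconds, which keeps a transient Clipwing hiccup from stalling the whole episode.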
## The Manual Alternative

If automation feels like overkill for your current volume, you can handle this semi-automatically. Shownotes AI alone saves substantial time; it delivers a transcript and summary in minutes. Cut clips manually using Clipwing's web interface. This hybrid approach works well if you're publishing less than weekly, or if you want to maintain editorial control over which moments become clips.
## Pro Tips

**Rate limiting and quota management.** Shownotes AI caps requests based on your plan. If you publish multiple episodes weekly, batch your API calls using n8n's "Split in Batches" node to avoid hitting limits, and leave 10-second gaps between concurrent requests.

**Timestamps matter more than you'd expect.** If Clipwing receives incorrect start/end times, it generates unusable clips. Always validate Claude's timestamp suggestions against the actual transcript before passing them downstream. Build a validation step that checks the format (MM:SS) and ensures the end time exceeds the start time.

**Cache your transcripts.** Once Shownotes AI produces a transcript, store it in n8n's workflow static data or a database like Airtable. This prevents re-processing if you need to re-run the workflow to regenerate show notes.

**Monitor API costs.** Claude Opus 4.6 is more expensive than Sonnet 4.6 but stronger at complex analysis. If you're processing 50+ episodes monthly, switching to Claude Sonnet 4.6 for clip identification could save £40-60 per month with minimal quality loss.

**Test with a sample episode first.** Run your workflow on a recent episode before setting it to automatic. This catches configuration errors before you've committed hours to the system.
## Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| n8n | Pro (self-hosted) or Cloud Pro | £30 (self-hosted) or £20/workflow (cloud) | Cloud option simpler for beginners; self-hosted saves money at scale |
| Shownotes AI | Starter | £25 | Covers 20 episodes/month; Pro tier available for higher volume |
| Claude API (Opus 4.6) | Pay-as-you-go | £5-15 | Clip identification per episode; exact cost depends on transcript length |
| Clipwing | Pro | £40 | Unlimited clips per month; essential for batch processing |
| ElevenLabs Turbo v2.5 | Pro | £11 | Optional; only if generating voice variants |
| Google Drive | Free or Workspace | £0-7 | Free plan works unless you need team collaboration |
| Total | — | £111-138 | Pays for itself after 3-4 episodes if you'd otherwise spend 4+ hours per episode |