Introduction
Creating patient education videos from clinical guidelines is a task most healthcare organisations handle manually, and it shows. A clinical team writes guidelines, someone transcribes them into a script, an editor reviews it, then you commission a video production company or struggle through screen recordings. The whole process takes weeks and costs hundreds or thousands of pounds per video.
The friction happens at handoff points. Guidelines sit in PDFs. Scripts get emailed back and forth. Video files need uploading to your learning management system. Each step requires someone to review, approve, or manually move files around. For healthcare organisations managing multiple conditions, dozens of guidelines, and constant updates, this approach doesn't scale.
What if you could instead feed clinical guidelines into a system that automatically generates a polished patient education video, complete with professional voiceover, and delivers it ready to use in your content repository? No emails, no manual file transfers, no waiting for production crews. This workflow does exactly that by combining content generation, voice synthesis, and video assembly into a single automated pipeline.
The Automated Workflow
This workflow takes a clinical guideline (as text or a PDF extract) and produces a finished patient education video with minimal human intervention. Here's what happens at each stage:
- AI-Boost generates an educational script tailored for patients.
- ElevenLabs converts that script into professional audio with natural pacing.
- Hour One creates the video with a presenter and visual elements.
- An orchestration tool connects everything and handles the data flow.
Choosing Your Orchestration Tool
For this workflow, I'd recommend n8n if you want to self-host and own your data completely, or Make if you want minimal infrastructure overhead. Zapier works but has limitations on complex JSON handling. Claude Code works brilliantly for building a custom backend if you're comfortable with a bit of Python.
Let's walk through the implementation using n8n, since it's well-suited to healthcare workflows where data privacy matters.
Setting Up the n8n Workflow
Start by creating a new workflow in n8n. You'll need to:
- Create credentials for each API: AI-Boost, ElevenLabs, and Hour One.
- Add a trigger (webhook or manual trigger for testing).
- Chain the nodes together with proper error handling.
Here's the basic workflow structure:
[Webhook Trigger]
→ [AI-Boost Script Generation]
→ [ElevenLabs Audio Generation]
→ [Hour One Video Creation]
→ [Save to Storage/LMS]
Step 1: Webhook Trigger and Input Validation
Create a webhook that accepts the clinical guideline. You'll want to validate that the input contains what you need.
POST /webhook/patient-education-video
Content-Type: application/json
{
"guideline_text": "Type 2 diabetes management in primary care...",
"condition_name": "Type 2 Diabetes",
"target_audience": "Newly diagnosed patients",
"language": "en-GB",
"video_duration_target": 3
}
In n8n, add a Webhook node with the POST method, then validate the required fields before any paid API is called. A Set node only maps data, so use an IF node (or a small Code node) that checks for the fields and returns an error when one is missing:
{
"required_fields": ["guideline_text", "condition_name"],
"if_missing_required_field": "error"
}
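If you're building the backend yourself in Python rather than n8n, the same validation check is a few lines (a hypothetical helper; the field names match the webhook payload above):

```python
def validate_payload(payload):
    """Check that the webhook payload carries the fields the pipeline needs.

    Returns a list of missing field names; an empty list means the payload
    is safe to pass to the script-generation step.
    """
    required_fields = ["guideline_text", "condition_name"]
    return [field for field in required_fields if not payload.get(field)]

missing = validate_payload({"condition_name": "Type 2 Diabetes"})
# missing == ["guideline_text"], so the webhook should answer 400
```

Failing fast here matters: every field you catch before the AI-Boost call is a paid API request you don't waste.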
Step 2: Generate Patient-Friendly Script with AI-Boost
AI-Boost's API takes raw clinical text and rewrites it for patient comprehension. You'll call their endpoint with the guideline and request a script.
POST https://api.ai-boost.io/v1/generate
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
{
"input_text": "{{ $json.guideline_text }}",
"output_type": "patient_education_script",
"tone": "friendly_but_authoritative",
"reading_level": "GCSE",
"max_length": 800,
"include_key_messages": true,
"format": "markdown"
}
In n8n, create an HTTP Request node configured as:
- Method: POST
- URL: https://api.ai-boost.io/v1/generate
- Headers: Add Authorization: Bearer YOUR_API_KEY
- Body: Use the above JSON, replacing {{ $json.guideline_text }} with the dynamic reference to your webhook input.
The response will include a script field containing the patient-friendly script. Store this output; you'll need it next.
{
"script": "Welcome to your guide on managing Type 2 diabetes...",
"word_count": 650,
"estimated_read_time_seconds": 180,
"key_messages": ["Take medication as prescribed", "Monitor blood sugar", "Attend appointments"]
}
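Outside n8n, the same request body can be assembled in Python before posting it. This is a sketch that mirrors the JSON example above; the endpoint and field names come from that example, not from verified AI-Boost documentation:

```python
def build_script_request(guideline_text, reading_level="GCSE", max_length=800):
    """Assemble the AI-Boost request body for a patient education script.

    Field names mirror the request example in this guide; confirm them
    against AI-Boost's current API reference before relying on them.
    """
    return {
        "input_text": guideline_text,
        "output_type": "patient_education_script",
        "tone": "friendly_but_authoritative",
        "reading_level": reading_level,
        "max_length": max_length,
        "include_key_messages": True,
        "format": "markdown",
    }

body = build_script_request("Type 2 diabetes management in primary care...")
# POST body to https://api.ai-boost.io/v1/generate with your bearer token
```

Keeping the body in one function makes it easy to tweak tone or reading level per audience without touching the rest of the pipeline.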
Step 3: Convert Script to Audio with ElevenLabs
ElevenLabs generates the voiceover. Their API is straightforward: you send text and get audio back.
POST https://api.elevenlabs.io/v1/text-to-speech/21m00Tcm4TlvDq8ikWAM
xi-api-key: YOUR_ELEVEN_LABS_API_KEY
Content-Type: application/json
{
"text": "{{ $json.script }}",
"model_id": "eleven_monolingual_v1",
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.75
}
}
The voice ID 21m00Tcm4TlvDq8ikWAM is Rachel, a clear, professional female voice. If you want a different voice or accent (for instance, a British one to match an en-GB audience), check ElevenLabs' voice library and swap the ID.
In n8n, create another HTTP Request node:
- Method: POST
- URL: https://api.elevenlabs.io/v1/text-to-speech/21m00Tcm4TlvDq8ikWAM
- Headers: Add xi-api-key: YOUR_ELEVEN_LABS_API_KEY
- Body: JSON as above
- Set "Return Binary Data" to true, since the response is audio.
The API returns MP3 audio as binary data. n8n will handle this automatically if you flag it as binary.
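As a Python sketch, the same call looks like this. The request builder below matches the example above (xi-api-key header, text plus voice_settings in the body); the actual HTTP call is shown commented out since it needs a live key:

```python
def build_tts_request(script, voice_id="21m00Tcm4TlvDq8ikWAM", api_key="YOUR_KEY"):
    """Build the ElevenLabs text-to-speech request: URL, headers, JSON body.

    ElevenLabs authenticates via the xi-api-key header; voice_settings
    controls how stable vs. expressive the delivery sounds.
    """
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    body = {
        "text": script,
        "model_id": "eleven_monolingual_v1",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    return url, headers, body

# The response body is raw MP3 bytes, so write it straight to disk, e.g.
# (using the third-party requests library):
#   audio = requests.post(url, headers=headers, json=body).content
#   open("voiceover.mp3", "wb").write(audio)
```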
Step 4: Create Video with Hour One
Hour One's API lets you create videos with a virtual presenter reading your script. You'll provide the audio (from ElevenLabs) and optionally background visuals.
POST https://api.hourone.com/v1/videos
Authorization: Bearer YOUR_HOUR_ONE_API_KEY
Content-Type: application/json
{
"title": "Patient Education: {{ $json.condition_name }}",
"script": "{{ $json.script }}",
"presenter_id": "amy",
"audio_url": "{{ $json.elevenlabs_audio_url }}",
"background_template": "medical_clinic",
"subtitles": true,
"subtitle_language": "en-GB",
"duration_target_seconds": "{{ $json.video_duration_target * 60 }}",
"output_resolution": "1080p"
}
Hour One can either generate audio itself or use the audio you provide. Since you're using ElevenLabs audio, pass the audio URL. The presenter will lip-sync to it.
In n8n, you need to first upload the audio file to a temporary location (AWS S3, Google Cloud Storage, or Hour One's own storage) and get a URL. Here's where it gets slightly complex:
- Use an AWS S3 node (or equivalent) to upload the binary audio from ElevenLabs.
- Get the resulting URL.
- Pass that URL to Hour One.
Alternatively, Hour One accepts base64-encoded audio in the request body, avoiding the S3 step:
{
"title": "Patient Education: {{ $json.condition_name }}",
"script": "{{ $json.script }}",
"presenter_id": "amy",
"audio_base64": "{{ $json.elevenlabs_audio_base64 }}",
"background_template": "medical_clinic",
"subtitles": true,
"subtitle_language": "en-GB",
"output_resolution": "1080p"
}
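The base64 step itself is trivial in Python, but there is one common trip-up worth showing: base64.b64encode returns bytes, which won't serialise into a JSON body until you decode them to text. (The audio_base64 field name follows the example above; confirm the exact contract in Hour One's docs.)

```python
import base64

def audio_to_base64(mp3_bytes):
    """Encode raw MP3 bytes for an audio_base64-style JSON field.

    b64encode returns bytes; decode to ASCII so the value is a JSON string.
    """
    return base64.b64encode(mp3_bytes).decode("ascii")

# Fragment of the Hour One request body, using stand-in bytes:
payload_fragment = {"audio_base64": audio_to_base64(b"\xff\xfbID3 fake mp3 bytes")}
```

Note that base64 inflates the payload by about a third, so for long videos the S3-and-URL route keeps requests smaller.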
In n8n, add an HTTP Request node for Hour One:
- Method: POST
- URL: https://api.hourone.com/v1/videos
- Headers: Add Authorization: Bearer YOUR_HOUR_ONE_API_KEY
- Body: Use the JSON above
- Wait for response: Check the "Wait for Response" option with a timeout of 300 seconds (videos take time to render).
Hour One's response includes a video_id and status. Initially, status will be processing. You'll need to poll for completion.
Step 5: Poll for Video Completion
Hour One doesn't complete instantly. Add a Loop node that checks the video status every 30 seconds until it's ready.
GET https://api.hourone.com/v1/videos/{{ $json.video_id }}
Authorization: Bearer YOUR_HOUR_ONE_API_KEY
In n8n, use a Loop node set to:
- Max iterations: 20 (10 minutes of polling)
- Wait between iterations: 30 seconds
Inside the loop, call the GET endpoint above. When the response shows status: "completed", break out of the loop and continue.
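The polling logic maps onto a small Python loop if you're scripting the pipeline directly. The status fetcher is injected as a callable (in n8n it's the GET request inside the loop), which also makes the loop testable without a live render:

```python
import time

def poll_until_complete(get_status, max_iterations=20, wait_seconds=30):
    """Poll a status callable until it reports "completed".

    get_status() should return the render status string from the
    GET /videos/{id} endpoint. Raises TimeoutError once the polling
    window (max_iterations * wait_seconds) is exhausted, and
    RuntimeError if the render fails outright.
    """
    for attempt in range(max_iterations):
        status = get_status()
        if status == "completed":
            return attempt + 1  # number of checks it took
        if status == "failed":
            raise RuntimeError("Hour One reported a failed render")
        time.sleep(wait_seconds)
    raise TimeoutError("Video still processing after polling window")
```

Handling the "failed" status explicitly matters: without it, a dead render quietly burns the full ten-minute window before anyone notices.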
Step 6: Download and Store the Video
Once complete, Hour One provides a download URL or you can fetch the video file directly.
GET https://api.hourone.com/v1/videos/{{ $json.video_id }}/download
Authorization: Bearer YOUR_HOUR_ONE_API_KEY
Save this video to your storage backend. For healthcare, that's typically:
- Your own secure file server (SFTP/SSH)
- AWS S3 with encryption
- Your learning management system's API (most LMS platforms have video upload endpoints)
Here's a simplified example of uploading through Moodle's web services API (note that core_files_upload expects the file content base64-encoded, not raw binary):
POST https://your-moodle-instance.ac.uk/webservice/rest/server.php
Content-Type: multipart/form-data
wstoken=YOUR_MOODLE_TOKEN
wsfunction=core_files_upload
userid=123
itemid=0
filecontent={{ video_base64_data }}
filename={{ condition_name }}_patient_education.mp4
In n8n, use an HTTP Request node configured for multipart file upload, pointing to your storage destination.
Step 7: Notification and Logging
Finally, send a notification (email, Slack, Teams) confirming the video is ready, and log the entire workflow execution for audit purposes.
POST https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
Content-Type: application/json
{
"text": "Patient education video created",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "Video: *{{ $json.condition_name }}*\nStatus: Completed\nDuration: {{ $json.video_duration }} minutes\nURL: {{ $json.video_url }}"
}
}
]
}
In n8n, add a Slack node at the end to notify your team.
Complete n8n Workflow JSON
Here's a condensed version of the full workflow you can import:
{
"nodes": [
{
"parameters": {
"path": "patient-education-video",
"method": "POST",
"options": {}
},
"name": "Webhook",
"type": "n8n-nodes-base.webhook",
"typeVersion": 1,
"position": [100, 200]
},
{
"parameters": {
"method": "POST",
"url": "https://api.ai-boost.io/v1/generate",
"authentication": "genericCredentialType",
"genericCredentials": "=YOUR_AI_BOOST_KEY",
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "input_text",
"value": "={{ $json.guideline_text }}"
},
{
"name": "output_type",
"value": "patient_education_script"
},
{
"name": "tone",
"value": "friendly_but_authoritative"
}
]
}
},
"name": "AI-Boost Script Generation",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [300, 200]
}
],
"connections": {
"Webhook": {
"main": [[{"node": "AI-Boost Script Generation", "index": 0}]]
}
}
}
(The full workflow is large; this shows the pattern. You'd continue adding nodes for ElevenLabs, Hour One, polling, and storage in the same manner.)
The Manual Alternative
If you prefer more control at each step, you can run the workflow manually using curl commands or Postman. This approach is slower but gives you a chance to review outputs before proceeding.
- Call AI-Boost and save the script to a file.
- Review the script for accuracy and patient suitability.
- Call ElevenLabs to generate audio.
- Listen to the audio and verify the pacing and accent.
- Call Hour One to generate the video.
- Download the video, review it, and manually upload to your LMS.
This approach works if you're creating a handful of videos and want assurance at each stage. For routine, high-volume creation, automation eliminates delays and human error.
Pro Tips
1. Handle Rate Limits Gracefully
ElevenLabs and Hour One have rate limits. ElevenLabs' free tier covers roughly 10,000 characters per month, with higher limits on paid plans. Hour One limits concurrent video renders.
In n8n, add exponential backoff retry logic to the HTTP Request nodes (illustrative settings; the exact option names vary by n8n version):
{
"retryType": "exponentialBackoff",
"maxTries": 5,
"backoffBase": 2,
"backoffMultiplier": 1000
}
This will retry failed requests with increasing delays: 1s, 2s, 4s, 8s, 16s.
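The delay schedule implied by those settings is easy to compute, which is handy if you implement the retries yourself instead of relying on the orchestrator:

```python
def backoff_delays(max_tries=5, base=2, multiplier_ms=1000):
    """Seconds to wait before each retry, mirroring the config above.

    Retry n waits multiplier_ms * base**n milliseconds, producing the
    1s, 2s, 4s, 8s, 16s progression for five retries.
    """
    return [multiplier_ms * base ** n / 1000 for n in range(max_tries)]

# backoff_delays() -> [1.0, 2.0, 4.0, 8.0, 16.0]
```

Adding a little random jitter to each delay is a common refinement; it stops several queued workflows from retrying in lockstep against the same rate limit.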
2. Test with Short Inputs First
Before running the full workflow on a 5,000-word guideline, test with a 200-word sample. This saves costs and reveals API errors quickly.
3. Store Intermediate Outputs
Save the script, audio URL, and video ID at each step. If something fails, you can resume without regenerating earlier outputs. Use n8n's built-in database or an external database.
4. Monitor Audio Quality
ElevenLabs' voice stability setting affects how natural the voiceover sounds. For medical content, a stability of 0.5-0.65 works well. Too high (0.9+) and the voice sounds robotic. Test different settings on sample scripts.
5. Batch Multiple Guidelines
If you're creating videos for several conditions, trigger the workflow for each guideline concurrently. n8n handles parallel execution, so you can generate multiple videos simultaneously and reduce total runtime.
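In a Python backend, the same batching is a few lines with a thread pool. Here create_video stands in for one full pipeline run (a webhook call or the direct API chain); capping the worker count keeps you under Hour One's concurrent-render limit:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(guidelines, create_video, max_workers=3):
    """Run the pipeline for several guidelines concurrently.

    create_video(guideline) is any callable that executes one workflow
    run; max_workers caps concurrency to respect render limits.
    Results come back in the same order as the input guidelines.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(create_video, guidelines))
```

Because the work is dominated by waiting on external APIs rather than CPU, threads are a good fit here.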
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| AI-Boost | Starter | £20-50 | Per-API-call pricing; typically £0.02-0.05 per 1,000 words |
| ElevenLabs | Starter | £9-99 | Free tier covers 10,000 characters; Pro tier £99 for 100k characters |
| Hour One | Creator | £100-300 | 5-10 video renders per month; custom presentations available |
| n8n (self-hosted) | Open Source | £0-50 (infrastructure) | Free software; you pay for server hosting (AWS t2.small ~£7/month) |
| Make (alternative) | Free-Pro | £0-299 | Cloud-hosted; free tier allows limited operations, Pro tier needed for complex workflows |
| Storage (AWS S3) | Standard | £0.023 per GB | Negligible unless storing hundreds of videos; ~£1-5/month for typical use |
Total Monthly Cost: £130-500 depending on volume and tool choices. A healthcare organisation producing 10-15 patient videos monthly would spend around £250-350, compared to £2,000-5,000 using traditional video production.
This workflow eliminates weeks of manual handoff. Clinical teams push guidelines into the system, and within minutes (not weeks), polished patient education videos appear ready for deployment. The automation scales instantly if you need to create 50 videos instead of five; costs remain proportional to output rather than locked into production contracts.
The key is starting with one guideline, refining the process, then scaling to your full content library.