Children's Educational App Content from Learning Objectives
Creating educational content for children is time-consuming. Teachers and curriculum developers spend weeks writing learning objectives, then weeks more scripting narratives, recording audio, and illustrating scenes. Each step requires different skills and tools, which means context-switching, version management, and inevitable bottlenecks when one person is waiting on another.
What if you could feed a learning objective into a system and have it automatically produce a complete educational app scene? Input something like "Learn about photosynthesis" and receive an illustrated storybook with narrated audio, all formatted and ready to integrate into your app. No manual handoffs, no coordinating between specialists, no waiting.
This Alchemy demonstrates exactly that. We'll connect four AI tools into an automated production line that transforms educational learning objectives into fully formed content assets. You'll use aicut-ai to generate age-appropriate narratives, elevenlabs for natural-sounding narration, fairytailai for illustration, and icoloring to create interactive colouring activities. An orchestration tool ties it all together, handling the logic, data transformation, and failure recovery so your team doesn't have to. For more on this pattern, see Automated Podcast Production Workflow.
The Automated Workflow
We'll build this using n8n because it offers good visual debugging, native HTTP request support, and straightforward error handling. The same approach works in Zapier or Make, though the syntax differs slightly.
The workflow structure:
- Receive a learning objective (via webhook or form submission)
- Generate an educational narrative using aicut-ai
- Convert that narrative to speech via elevenlabs
- Create illustrations via fairytailai
- Generate a colouring page via icoloring
- Bundle everything and save to your app's backend
Setting up the trigger:
Your workflow begins with an HTTP POST webhook. This receives the learning objective and metadata about your target audience.
POST /webhook/content-generation
Content-Type: application/json
{
"learning_objective": "Understand how plants absorb water through roots",
"target_age": 8,
"story_length": "medium",
"language": "en-GB",
"app_id": "learning_app_001"
}
In n8n, add a Webhook node and set its HTTP Method to POST. n8n generates the webhook URL that external systems call. Configure it to accept JSON; the payload fields then arrive on the node's output, ready to reference in downstream nodes.
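Before spending money on downstream API calls, it's worth validating the incoming payload. As a sketch (field names follow the example payload above; `validateContentRequest` and the age bounds are our own assumptions), a Code node placed right after the trigger could do this:

```javascript
// Hypothetical validation helper for the webhook payload shown above.
// Returns { valid, errors } so a downstream IF node can branch on it.
function validateContentRequest(body) {
  const errors = [];
  if (!body.learning_objective || typeof body.learning_objective !== 'string') {
    errors.push('learning_objective is required and must be a string');
  }
  // Assumed bounds: adjust to whatever age range your app actually serves.
  if (!Number.isInteger(body.target_age) || body.target_age < 3 || body.target_age > 16) {
    errors.push('target_age must be an integer between 3 and 16');
  }
  if (!['short', 'medium', 'long'].includes(body.story_length)) {
    errors.push('story_length must be short, medium or long');
  }
  return { valid: errors.length === 0, errors };
}
```

Rejecting bad requests here keeps malformed input from burning credits on the paid APIs in steps 1 to 4.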
Step 1: Generate the narrative with aicut-ai:
aicut-ai specialises in creating structured, age-appropriate educational content. Its API accepts learning objectives and returns a narrative formatted for children.
POST https://api.aicut-ai.com/v1/generate-narrative
Authorization: Bearer YOUR_AICUT_API_KEY
Content-Type: application/json
{
"learning_objective": "Understand how plants absorb water through roots",
"target_age": 8,
"format": "children_story",
"word_count": 300,
"style": "engaging_educational",
"include_comprehension_questions": true
}
The response includes a narrative, comprehension questions, and metadata:
{
"narrative": "Rosie the Root loved her job underground. Every day, she absorbed water from the soil and sent it up through the stem to feed her plant. 'Without water,' Rosie said, 'plants cannot grow tall or make food from sunshine.' The water travelled through tiny tubes, like roads inside the plant...",
"comprehension_questions": [
"What does Rosie the Root do?",
"Why do plants need water?"
],
"reading_level": "7-9 years",
"narrative_id": "narr_789xyz",
"estimated_narration_time_seconds": 45
}
In n8n, add an HTTP Request node. Set Method to POST, and paste the endpoint above. In the Headers tab, add an Authorization header with your Bearer token. In the Body tab (set to JSON), structure your request using variables from the webhook trigger: {{ $json.learning_objective }} and {{ $json.target_age }}.
Step 2: Convert narrative to speech with elevenlabs:
The narrative becomes audio using elevenlabs, which supports multiple voices and accents, including British English voices suitable for children's content.
POST https://api.elevenlabs.io/v1/text-to-speech/21m00Tcm4TlvDq8ikWAM
xi-api-key: YOUR_ELEVENLABS_API_KEY
Content-Type: application/json
{
"text": "{{ $json.narrative }}",
"model_id": "eleven_monolingual_v1",
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.75
}
}
The voice ID in the URL path selects the voice; browse the elevenlabs voice library and choose a British English voice suitable for younger audiences. Elevenlabs returns binary audio data (MP3 format).
In n8n, add another HTTP Request node. Set Method to POST. Reference the narrative output from step 1 with an expression such as {{ $json.narrative }} (or {{ $('aicut-ai narrative generation').item.json.narrative }} if other nodes sit between them). Set the response format to File so the node outputs the audio as binary data that you'll store later.
Step 3: Create illustrations with fairytailai:
fairytailai generates illustrations based on text prompts. Pass key scenes from the narrative.
POST https://api.fairytailai.com/v1/generate-illustration
Authorization: Bearer YOUR_FAIRYTAIL_API_KEY
Content-Type: application/json
{
"prompt": "A detailed illustration of Rosie the Root absorbing water from soil, with bright colours and a friendly character design suitable for children aged 8",
"style": "children_book_illustration",
"dimensions": "512x512",
"num_variants": 2
}
The response provides URLs to generated images.
{
"status": "success",
"illustrations": [
{
"image_url": "https://cdn.fairytailai.com/img/abc123.png",
"variant": 1,
"prompt_used": "A detailed illustration..."
},
{
"image_url": "https://cdn.fairytailai.com/img/abc124.png",
"variant": 2,
"prompt_used": "A detailed illustration..."
}
],
"generation_id": "gen_456def"
}
In n8n, add an HTTP Request node for fairytailai. Create a prompt by combining elements from the narrative. You might extract key scenes automatically or use a fixed scene description. Set this node to return JSON.
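"Combining elements from the narrative" can be as simple as templating a scene description with the style constraints from the webhook payload. A minimal sketch (the helper name and phrasing are ours, modelled on the example prompt above):

```javascript
// Hypothetical prompt builder: wraps a scene description in the
// style constraints used by the fairytailai request above.
function buildIllustrationPrompt(scene, targetAge) {
  return [
    `A detailed illustration of ${scene}`,
    'with bright colours and a friendly character design',
    `suitable for children aged ${targetAge}`
  ].join(', ');
}
```

For example, `buildIllustrationPrompt('Rosie the Root absorbing water from soil', 8)` reproduces a prompt like the one in the request body above.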
Step 4: Generate interactive colouring page with icoloring:
icoloring takes the narrative and creates line-art colouring pages for interactive learning.
POST https://api.icoloring.com/v1/create-coloring-page
Authorization: Bearer YOUR_ICOLORING_API_KEY
Content-Type: application/json
{
"prompt": "Create a colouring page illustration of a plant with roots absorbing water, showing the water pathways. Style: simple line-art suitable for children aged 8. Include labels for roots, stem, and water.",
"format": "svg",
"complexity": "medium",
"include_labels": true
}
The response includes SVG code or a URL to the colouring page, plus metadata for interactivity.
{
"status": "success",
"coloring_page_url": "https://cdn.icoloring.com/pages/xyz789.svg",
"coloring_page_id": "page_xyz789",
"format": "svg",
"interactive_elements": [
{
"region": "roots",
"label": "Roots",
"editable": true
},
{
"region": "water_pathway",
"label": "Water pathway",
"editable": true
}
]
}
In n8n, add an HTTP Request node for icoloring. Pass a prompt derived from the original learning objective and the narrative.
Step 5: Bundle and save everything:
You now have narrative, audio, images, and a colouring page. Create a final node that packages these into a structured JSON object and saves it to your backend.
{
"content_package": {
"app_id": "learning_app_001",
"learning_objective": "Understand how plants absorb water through roots",
"target_age": 8,
"created_at": "2025-01-15T10:30:00Z",
"assets": {
"narrative": {
"text": "Rosie the Root loved...",
"reading_level": "7-9 years",
"comprehension_questions": [...]
},
"audio": {
"file_path": "s3://your-bucket/audio/narr_789xyz.mp3",
"duration_seconds": 45,
"voice": "en-GB-female-child-friendly"
},
"illustrations": [
{
"image_url": "https://cdn.fairytailai.com/img/abc123.png",
"variant": 1
}
],
"coloring_page": {
"url": "https://cdn.icoloring.com/pages/xyz789.svg",
"interactive_regions": [...]
}
}
}
}
In n8n, add an HTTP Request node that POSTs this bundle to your backend API. Alternatively, use the Code node to format the data and then use a database node to save directly.
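A sketch of that Code node's formatting logic, assuming the response shapes shown in steps 1 to 4 (the function name and parameter layout are our own; adjust the node references to match your workflow):

```javascript
// Hypothetical bundle formatter: assembles outputs from the previous
// nodes into the content_package structure shown above.
function buildContentPackage(trigger, narrative, audioUrl, illustrations, coloringPage) {
  return {
    content_package: {
      app_id: trigger.app_id,
      learning_objective: trigger.learning_objective,
      target_age: trigger.target_age,
      created_at: new Date().toISOString(),
      assets: {
        narrative: {
          text: narrative.narrative,
          reading_level: narrative.reading_level,
          comprehension_questions: narrative.comprehension_questions
        },
        audio: { file_path: audioUrl },
        illustrations: illustrations.illustrations,
        coloring_page: {
          url: coloringPage.coloring_page_url,
          interactive_regions: coloringPage.interactive_elements
        }
      }
    }
  };
}
```

Returning a single structured object also makes the final POST to your backend a one-line expression.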
Complete n8n workflow structure:
Webhook trigger (HTTP POST)
↓
aicut-ai narrative generation
↓
elevenlabs text-to-speech
↓
fairytailai illustration
↓
icoloring colouring page
↓
Code node (format bundle)
↓
Save to backend API
↓
Send confirmation webhook to client
Create each step as a separate node in n8n. Connect them left to right. For the confirmation step, add a final HTTP Request node that alerts your app when the content is ready, or save the bundle ID to your database and let your app poll for completion.
Error handling and retries:
Add error handling between steps. If aicut-ai fails, log the error and retry up to 3 times with exponential backoff. If fairytailai times out, fall back to a simple placeholder illustration. In n8n, open each node's Settings tab and enable Retry On Fail, and configure a workflow-level Error Workflow to catch anything that still fails.
{
"retry_policy": {
"max_attempts": 3,
"backoff_ms": [1000, 3000, 9000]
},
"fallbacks": {
"illustration": "use_placeholder_svg_if_fairytailai_fails",
"audio": "use_text_to_speech_fallback_if_elevenlabs_slow"
}
}
The Manual Alternative
If your workflow requires human review before publishing, insert a manual approval step. After bundling, send the content package to an editor via email or a web interface. The workflow pauses until they click "Approve" or "Reject". In n8n, use the Wait node (set to webhook trigger) to pause until external approval arrives. Rejected content triggers a notification to regenerate specific components.
This approach trades speed for control. Useful when your audience is younger (under 6) and you need stricter content review, or when educational accuracy is critical (medical or scientific content).
Pro Tips
Rate limiting and cost control: elevenlabs and fairytailai both enforce rate limits. Queue requests using n8n's scheduling features rather than processing all requests simultaneously. Process one learning objective every 5 minutes rather than 10 at once. Spacing requests out keeps you within plan limits and avoids wasting paid calls on runs that fail halfway through because of rate limiting.
Caching narrative generation: aicut-ai occasionally returns similar narratives for similar learning objectives. Store previous narratives in a simple database. Before calling aicut-ai, check if a narrative for that objective already exists. If yes, reuse it and skip straight to step 2. This saves API calls and speeds up the workflow.
Audio file storage: elevenlabs returns binary MP3 data, which is large. Don't store it directly in your n8n workflow. Upload the binary to cloud storage (S3, Google Cloud Storage) and save only the URL in your JSON bundle. Add a code node that converts elevenlabs binary output to a file and uploads it.
// In an n8n Code node, using aws-sdk (v2)
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY
});
// n8n stores binary payloads base64-encoded under binary.<property>.data
const audioBuffer = Buffer.from(items[0].binary.data.data, 'base64');
const params = {
  Bucket: 'your-bucket',
  Key: `audio/${items[0].json.narrative_id}.mp3`,
  Body: audioBuffer,
  ContentType: 'audio/mpeg'
};
// Await the upload so the workflow doesn't move on before it finishes
const result = await s3.upload(params).promise();
return [{ json: { audio_url: result.Location } }];
Fallback images: fairytailai sometimes generates low-quality illustrations. Set a manual approval rule. If fairytailai returns an image but it's below your quality threshold, save it as a variant and also generate a backup using a simpler illustration service. Present both to your editor for selection.
Language and locale variations: The workflow above uses British English. If you need to support multiple languages or locales, add a conditional node early on. Route requests with language: "en-US" to a different elevenlabs voice, or add a language-specific prompt prefix to aicut-ai (e.g., "Generate narrative in simple Australian English for...").
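The conditional routing can be a small lookup table in a Code node. A sketch, where the voice IDs are placeholders for voices you have configured in your elevenlabs account (not real IDs):

```javascript
// Hypothetical locale-to-voice routing table. Replace the values
// with real voice IDs from your elevenlabs voice library.
const VOICE_BY_LOCALE = {
  'en-GB': 'VOICE_ID_BRITISH_FEMALE',
  'en-US': 'VOICE_ID_AMERICAN_FEMALE',
  'en-AU': 'VOICE_ID_AUSTRALIAN_FEMALE'
};

// Fall back to the British voice for any unmapped locale.
function voiceForLocale(language) {
  return VOICE_BY_LOCALE[language] || VOICE_BY_LOCALE['en-GB'];
}
```

The returned voice ID then slots into the elevenlabs URL path in step 2 via an expression.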
Cost Breakdown
| Tool | Plan Needed | Monthly Cost (approx.) | Notes |
|---|---|---|---|
| aicut-ai | Standard | £25–50 | ~1,000 narrative generations per month; overage ~£0.03 per request |
| elevenlabs | Starter or Pro | £11–99 | Starter: 10,000 characters/month; Pro: 100,000 characters/month; ~£0.08 per 10,000 characters overage |
| fairytailai | Flex or Creator | £30–100 | Flex: 100 images/month; Creator: 500 images/month; ~£0.20 per image overage |
| icoloring | Basic | £20–40 | ~500 pages/month; overage ~£0.04 per page |
| n8n | Cloud Free or Cloud Pro | Free–£25 | Free tier: 400 executions/month; Pro: 40,000 executions/month |
| Storage (S3 or GCS) | Standard | £5–15 | Depends on total asset volume; ~1 GB per 50 content packages |
| Total (small volume) | — | ~£120–250/month | Supports ~200–300 content packages/month with some tools on starter plans |
| Total (large volume) | — | ~£300–600/month | Supports ~1,000+ packages/month; scale up tool plans as needed |
Costs assume British pricing and typical usage. Your actual costs depend on:
- How many learning objectives you process per month.
- Average narrative length (longer text costs more for audio).
- Number of image variants you generate per objective.
- Whether you use multiple languages (elevenlabs charges per-character regardless of language).
Estimate your needs, then start on the smallest plans and upgrade as volume grows.
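To make that estimate concrete, here is an illustrative per-package calculator using the approximate overage rates from the table above. All rates and defaults (including the ~1,800 characters for a 300-word narrative) are rough assumptions, not quoted prices:

```javascript
// Illustrative monthly overage estimate in GBP, using the approximate
// per-unit rates from the cost table. All figures are assumptions.
function estimateMonthlyCost(packagesPerMonth, opts = {}) {
  const {
    narrativeRate = 0.03,         // aicut-ai, per narrative
    audioRatePer10kChars = 0.08,  // elevenlabs, per 10,000 characters
    avgNarrativeChars = 1800,     // roughly a 300-word story
    imageRate = 0.20,             // fairytailai, per image
    imagesPerPackage = 2,
    coloringRate = 0.04           // icoloring, per page
  } = opts;
  const perPackage =
    narrativeRate +
    (avgNarrativeChars / 10000) * audioRatePer10kChars +
    imagesPerPackage * imageRate +
    coloringRate;
  return Number((perPackage * packagesPerMonth).toFixed(2));
}
```

Running it for different volumes gives a quick sense of when upgrading from overage pricing to a larger plan pays off.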
Getting Started
Begin with a single, simple learning objective. Test the workflow end-to-end using a staging environment. Confirm that data flows correctly between each tool. Check that the final bundle is valid JSON and that your backend can parse it.
Once you're confident, add error handling and retry logic. Set up monitoring to alert you if any step fails. Finally, expose your webhook and begin feeding real learning objectives through.
The workflow requires minimal maintenance. Most failures are temporary (API rate limits, transient network errors), which your retry logic handles automatically. Occasionally, fairytailai may generate an image you dislike, but you can always regenerate it by re-running just that step without processing the entire workflow again.
More Recipes
- Automated Podcast Production Workflow: From Raw Audio to Published Episode
- Build an Automated YouTube Channel with AI
- Medical Device Regulatory Documentation from Technical Specifications