Alchemy Recipe · Intermediate · Automation

Website copy optimisation and A/B testing workflow

24 March 2026

Introduction

Website copy that converts is hard to write, harder still to perfect. Most teams either settle for good enough or spend weeks running A/B tests manually: writing variations, setting them up in their testing platform, monitoring results, and then manually implementing the winner. If you're testing multiple pages or multiple hypotheses simultaneously, the work multiplies quickly.

The real friction isn't in writing the copy or even running the test. It's in the handoffs. Someone writes variations in a doc. Someone else uploads them to your testing tool. A third person exports results, analyses them, and decides what actually won. This breaks down at scale, especially when you're testing continuously rather than in isolated campaigns.

This workflow cuts those handoffs entirely. You'll chain together Copy.AI for rapid variation generation, Hyperwrite for smart copywriting assistance, and TruConversion for A/B testing and analysis. An orchestration layer connects them all, moving copy automatically from generation through testing to winning variant implementation. No manual uploads. No manual analysis. No waiting.

The Automated Workflow

Which Orchestration Tool to Use

For this particular workflow, n8n is the best choice. Here's why: you need conditional logic (testing has to complete before you analyse results), you need to wait for external events (test completion), and you might run this daily or weekly on a schedule. n8n handles all three elegantly without expensive credits. Zapier would work but would cost significantly more at the workflow complexity level needed. Make (formerly Integromat) is capable but n8n's interface is cleaner for this kind of multi-step process.

If you're already deep in Claude's ecosystem and comfortable with prompting, Claude Code can replace n8n for one-off runs, though it won't scale to automated recurring workflows.

The Complete Flow

Here's how data moves through your system:

  1. n8n triggers on a schedule (e.g., every Monday at 9 AM) or a manual webhook call.

  2. Copy.AI generates 3-5 variations of your target landing page headline or hero copy.

  3. Hyperwrite reviews each variation and scores them for persuasiveness and brand alignment.

  4. All variations get pushed to TruConversion as a new A/B test.

  5. The workflow waits. After 7 days (or 500 visitors, whichever comes first), it checks whether the test is complete.

  6. Once complete, TruConversion returns results. n8n parses them, identifies the winner, and logs the outcome.

  7. Optionally, n8n can push the winning copy back to your CMS or send a Slack notification with results.

Let's build this step by step.

Step 1: Trigger and Input

Your n8n workflow starts with either a schedule node or a webhook. For ongoing optimisation, a schedule makes sense. Set it to run weekly. Define your input: the current page headline, some brand guidelines, and the URL you're testing.


{
  "page_url": "https://yoursite.com/landing",
  "current_headline": "The fastest project management tool for remote teams",
  "brand_tone": "professional but friendly",
  "character_limit": 60
}

Store these as n8n workflow variables or pass them in via webhook body when you trigger manually.

Step 2: Generate Copy Variations with Copy.AI

Copy.AI's API lets you request variations of a given piece of copy. You'll need an API key from their dashboard. The endpoint is straightforward:


POST https://api.copy.ai/api/v1/generate

Your n8n node will make this request:

{
  "prompt": "Generate 4 alternative headlines for a SaaS landing page. Current headline: 'The fastest project management tool for remote teams'. Brand tone: professional but friendly. Character limit: 60 characters. Return only the headlines, one per line, without numbering.",
  "model": "gpt-4",
  "temperature": 0.7
}

Copy.AI returns generated variations in plain text:


Fast teams ship faster
Remote work, simplified
Manage projects, not chaos
Your team's competitive edge

Save this output as a variable in n8n, then split it into an array. This matters for the next step.
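The split itself is small enough to live in an n8n Code node. A sketch in plain JavaScript (the `splitVariations` name is ours, not part of any API):

```javascript
// Split Copy.AI's plain-text response into one headline per array entry.
function splitVariations(rawText) {
  return rawText
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0); // drop blank and whitespace-only lines
}

const raw = 'Fast teams ship faster\nRemote work, simplified\nManage projects, not chaos\nYour team\'s competitive edge\n';
const variations = splitVariations(raw); // four headlines, ready to loop over
```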

Step 3: Evaluate with Hyperwrite

Hyperwrite's API scores copy for clarity, persuasiveness, and brand fit. This step is optional but valuable; it filters out weak variations before they enter testing, saving testing time and budget.


POST https://api.hyperwrite.com/v1/analyse-copy

For each variation from Step 2, send it through Hyperwrite:

{
  "copy": "Fast teams ship faster",
  "context": "SaaS landing page headline",
  "criteria": ["persuasiveness", "clarity", "brand_fit"],
  "brand_guidelines": "professional but friendly, no jargon, action-oriented"
}

Hyperwrite returns scores (0–100) for each criterion:

{
  "copy": "Fast teams ship faster",
  "scores": {
    "persuasiveness": 82,
    "clarity": 95,
    "brand_fit": 88
  },
  "overall_score": 88.3,
  "feedback": "Strong, action-oriented. Resonates with target audience."
}

In n8n, filter out any variations scoring below 75. This step takes seconds but prevents wasting test budget on weak copy.
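The filter is a one-line predicate in a Code node. A sketch using the `overall_score` field from the example response (the helper name is illustrative):

```javascript
// Keep only variations that clear the quality bar before they enter testing.
const MIN_SCORE = 75;

function filterByScore(scoredVariations, minScore = MIN_SCORE) {
  return scoredVariations.filter((v) => v.overall_score >= minScore);
}

const scored = [
  { copy: 'Fast teams ship faster', overall_score: 88.3 },
  { copy: 'Your team\'s competitive edge', overall_score: 62.1 },
];
const survivors = filterByScore(scored); // only the 88.3-scoring headline remains
```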

Step 4: Push Test Variants to TruConversion

TruConversion is your A/B testing platform. Its API creates new tests and loads variants:


POST https://api.truconversion.com/v1/tests

The request body includes your control (current copy), the variants (generated alternatives), and test configuration:

{
  "test_name": "Homepage Headline Test - Week of Nov 18",
  "page_url": "https://yoursite.com/landing",
  "test_type": "ab",
  "variations": [
    {
      "name": "Control",
      "headline": "The fastest project management tool for remote teams"
    },
    {
      "name": "Variant A",
      "headline": "Fast teams ship faster"
    },
    {
      "name": "Variant B",
      "headline": "Remote work, simplified"
    },
    {
      "name": "Variant C",
      "headline": "Manage projects, not chaos"
    }
  ],
  "traffic_allocation": "equal",
  "primary_metric": "conversion_rate",
  "sample_size_target": 500,
  "duration_days": 7
}

TruConversion returns a test ID:

{
  "test_id": "test_1234567890",
  "status": "live",
  "created_at": "2024-11-18T09:00:00Z",
  "estimated_completion": "2024-11-25T09:00:00Z"
}

Store this test ID. You'll need it in the next step.
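Building that request body from the filtered variants is mechanical. A sketch (the `buildTestBody` helper and the variant-lettering scheme are ours, not TruConversion's):

```javascript
// Assemble a TruConversion test body: control first, then variants
// labelled A, B, C... in order.
function buildTestBody(pageUrl, controlHeadline, variantHeadlines) {
  return {
    test_name: `Headline Test - ${new Date().toISOString().slice(0, 10)}`,
    page_url: pageUrl,
    test_type: 'ab',
    variations: [
      { name: 'Control', headline: controlHeadline },
      ...variantHeadlines.map((headline, i) => ({
        name: `Variant ${String.fromCharCode(65 + i)}`, // 65 is 'A'
        headline,
      })),
    ],
    traffic_allocation: 'equal',
    primary_metric: 'conversion_rate',
    sample_size_target: 500,
    duration_days: 7,
  };
}
```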

Step 5: Wait for Test Completion

This is where scheduling matters. Your n8n workflow needs to pause and wait for the test to finish. You have two options:

Option A: Use n8n's "Wait" node set for 7 days, then continue. This is simple but rigid; if your test finishes early due to statistical significance, you'll still wait the full 7 days.

Option B: Schedule a separate n8n workflow to run daily and check test status until complete. This is more sophisticated but faster.

For Option B, create a new workflow that runs daily:


GET https://api.truconversion.com/v1/tests/{test_id}

TruConversion returns:

{
  "test_id": "test_1234567890",
  "status": "live",
  "progress": 78,
  "visitors": 389,
  "estimated_completion": "2024-11-24T14:30:00Z"
}

Check if status equals "complete". If not, the workflow stops and tries again tomorrow. If yes, move to Step 6.
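The branch at the end of the daily check reduces to one function. A sketch (field names follow the status response above; the action labels are ours):

```javascript
// Decide what the daily check workflow does with a status response:
// complete tests go to the results step, live tests wait until tomorrow.
function nextAction(status) {
  return status.status === 'complete' ? 'fetch_results' : 'stop_until_tomorrow';
}
```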

Step 6: Retrieve and Analyse Results

Once complete, fetch detailed results:


GET https://api.truconversion.com/v1/tests/{test_id}/results

TruConversion returns:

{
  "test_id": "test_1234567890",
  "status": "complete",
  "results": [
    {
      "variation_name": "Control",
      "visitors": 125,
      "conversions": 18,
      "conversion_rate": 0.144,
      "confidence": 0.68
    },
    {
      "variation_name": "Variant A",
      "visitors": 128,
      "conversions": 22,
      "conversion_rate": 0.172,
      "confidence": 0.82
    },
    {
      "variation_name": "Variant B",
      "visitors": 126,
      "conversions": 17,
      "conversion_rate": 0.135,
      "confidence": 0.55
    },
    {
      "variation_name": "Variant C",
      "visitors": 121,
      "conversions": 19,
      "conversion_rate": 0.157,
      "confidence": 0.71
    }
  ],
  "winner": "Variant A",
  "winner_confidence": 0.82,
  "statistical_significance": true
}

In n8n, parse this JSON. The winner is identified, and you have confidence scores. If statistical_significance is true and confidence is above your threshold (say, 75%), the test is conclusive.
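In code form, the conclusiveness check looks like this. A sketch against the example payload; the 0.75 threshold matches the prose, and `conclusiveWinner` is our name:

```javascript
// Return the winning variation only when the result is trustworthy:
// statistically significant AND confidence above the threshold.
const CONFIDENCE_THRESHOLD = 0.75;

function conclusiveWinner(results, threshold = CONFIDENCE_THRESHOLD) {
  if (!results.statistical_significance) return null;
  if (results.winner_confidence < threshold) return null;
  return results.results.find((r) => r.variation_name === results.winner) || null;
}
```

A `null` return means the test was inconclusive; see "Handling Test Incompleteness" below for what to do then.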

Step 7: Implement the Winner (Optional)

This is where you automate the final step. If you manage your CMS via API (most modern platforms do), you can update the page copy automatically:


PATCH https://yoursite.com/api/pages/{page_id}
Authorization: Bearer YOUR_CMS_API_KEY

{
  "headline": "Fast teams ship faster"
}

Alternatively, send a Slack message to your team with the results and a link to implement manually. Automation is powerful, but some teams prefer human approval before pushing changes live.


POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL

{
  "text": "Headline test complete. Winner: 'Fast teams ship faster' (17.2% conversion, 82% confidence)",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Test Results\n*Winner:* Variant A\n*Conversion Rate:* 17.2%\n*Confidence:* 82%\n*Lift:* +19% vs control"
      }
    }
  ]
}

Tying It Together in n8n

Here's a rough outline of your workflow structure:


Trigger (Weekly Schedule)
  ↓
Copy.AI HTTP Node (Generate variations)
  ↓
Loop through variations → Hyperwrite HTTP Node (Score each)
  ↓
Filter Node (Remove low scorers)
  ↓
TruConversion HTTP Node (Create test)
  ↓
Wait 7 days OR use daily check workflow
  ↓
TruConversion HTTP Node (Fetch results)
  ↓
Conditional Node (Check statistical significance)
  ↓
Slack notification (or CMS update)

Each HTTP node stores its response. Pass data between nodes using n8n's variable syntax: {{ $node["Node Name"].json.field_name }}.

The Manual Alternative

If orchestration feels like overkill, you can do this in stages without full automation. Generate variations in Copy.AI manually, paste them into a spreadsheet, run them through Hyperwrite one by one if you want scoring, then manually upload them to TruConversion. This takes an hour or two per test but gives you full control and no integration setup cost.

Alternatively, use Claude's interface directly with prompting. Ask Claude to generate headlines, evaluate them, and produce a test plan you then execute manually in TruConversion. This works well if you're testing infrequently (once or twice per month) or running one-off experiments.

The manual route makes sense only if you're testing rarely or have unique requirements that the integrated approach doesn't fit. For any ongoing, regular testing programme, automation pays for itself immediately in time saved.

Pro Tips

Error Handling and Retries

API calls fail. Networks flake. Configure n8n to retry failed HTTP requests with exponential backoff. Set 3 retries with 2-second delays between them for Copy.AI and Hyperwrite, which are usually fast. TruConversion calls can time out during heavy test loads; give them 30 seconds and allow 5 retries.

In n8n, go to HTTP node settings and enable "Retry on Error". Set maxRetries to 3 and backoff multiplier to 2.
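If you ever move a call into a Code node, or script the workflow outside n8n, the same policy is a few lines. A sketch with an injected request function so it works with any HTTP client (the wrapper name is ours):

```javascript
// Retry a failing async request with exponential backoff: delays of
// baseDelayMs, then baseDelayMs * multiplier, and so on, up to `retries` retries.
async function withRetries(requestFn, { retries = 3, baseDelayMs = 2000, multiplier = 2 } = {}) {
  let delay = baseDelayMs;
  for (let attempt = 0; ; attempt += 1) {
    try {
      return await requestFn();
    } catch (err) {
      if (attempt >= retries) throw err; // retries exhausted: surface the error
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= multiplier; // 2 s, 4 s, 8 s with the defaults
    }
  }
}
```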

Rate Limits and Throttling

Copy.AI allows 100 API calls per minute on most plans. Hyperwrite allows 50 per minute. TruConversion has generous limits but throttles large result fetches. If you're testing multiple pages simultaneously, stagger your requests with n8n's "Wait" node between iterations; a 1-second delay between Hyperwrite calls keeps you clear of the rate limit.

Monitor your API usage in each platform's dashboard weekly. Budget calls conservatively; assume you'll hit limits if you're at 80% capacity.

Cost Optimisation

Generate 4 variations per test, not 10. Every variation increases cost and complexity. Four is statistically robust and manageable. If Hyperwrite scoring feels redundant after a few tests (you'll develop intuition about what works), skip it; it's expensive per call and saves testing time only marginally.

TruConversion charges per test, not per page view. Keep test duration fixed (7 days) so you can predict monthly costs. Within your plan's included test allowance, testing more frequently doesn't cost more; it just redistributes your budget across more experiments.

Handling Test Incompleteness

Not every test reaches statistical significance. If your test runs for 7 days but confidence is only 60%, what do you do? Build a conditional into your workflow: if confidence is below 75%, extend the test by 3 more days and re-check. Set a maximum duration (say, 14 days) to prevent tests running indefinitely. Log incomplete tests separately so you can analyse patterns.
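That policy fits in one small function. A sketch (the thresholds mirror the prose; `testDecision` and the action labels are our names):

```javascript
// Conclude, extend, or give up on a test that has hit its scheduled end:
// extend by 3 days below 75% confidence, but never past the 14-day cap.
function testDecision({ confidence, daysRun }, { threshold = 0.75, extensionDays = 3, maxDays = 14 } = {}) {
  if (confidence >= threshold) return 'conclude';
  if (daysRun + extensionDays <= maxDays) return 'extend'; // re-check after the extension
  return 'log_inconclusive'; // log separately for pattern analysis
}
```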

Scaling to Multiple Pages

This workflow targets one page at a time. To scale, modify your trigger input to accept a list of pages:

{
  "pages": [
    {
      "url": "https://yoursite.com/landing",
      "current_headline": "The fastest project management tool for remote teams"
    },
    {
      "url": "https://yoursite.com/pricing",
      "current_headline": "Simple, transparent pricing"
    }
  ]
}

Wrap the entire flow in a loop over this array. Each page gets its own test, running in parallel. TruConversion handles this natively. n8n's "Loop Over Items" node manages the concurrency.
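Fanning the payload out is a single map in a Code node. A sketch producing the `{ json: ... }` item shape n8n passes between nodes (the helper name is ours):

```javascript
// Turn the multi-page trigger payload into one n8n item per page,
// so each downstream node runs once per page.
function pagesToItems(payload) {
  return payload.pages.map((page) => ({
    json: {
      page_url: page.url,
      current_headline: page.current_headline,
    },
  }));
}
```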

Cost Breakdown

Tool | Plan Needed | Monthly Cost | Notes
Copy.AI | Professional | £49 | Unlimited API calls. Essential for regular variation generation.
Hyperwrite | Pro API | £99 | ~1,000 calls per month at this rate. Optional; skip if cost-conscious.
TruConversion | Starter | £79 | ~15 tests per month included; ~£5 per additional test. Works for most SaaS.
n8n | Self-Hosted (Free) | £0 | Free tier sufficient. Host on Render, Railway, or your own server.
n8n | Cloud Pro | £20 | If you prefer managed hosting instead of self-hosting.
Total | All-in | £227–247 | £227 with self-hosted n8n and ~15 tests per month; £247 if you use n8n Cloud instead.

If you use Zapier instead of n8n, add £29–80 monthly depending on workflow complexity. Make (Integromat) sits at £15–50 depending on operations. For teams running 20+ tests monthly, the cost difference between platforms matters; for most, n8n self-hosted is cheapest.