Alchemy Recipe · Intermediate · Automation

Real estate listing production from photos to virtual tour


Real estate agents spend hours each week converting property photographs into listing assets. You shoot photos on-site, upload them manually to multiple platforms, edit them individually, generate floor plans, and finally create a virtual tour. Each step requires switching between applications and waiting for processing to finish. A single property listing can consume four to six hours of labour, and if you're managing ten properties weekly, that's a serious productivity drain.

The problem isn't a lack of AI tools. It's the lack of connection between them. You have brilliant solutions like AI-Boost for image enhancement, Hour One for automated video creation, and PixelMotion AI for spatial analysis, but they sit in isolation. Building an automated pipeline that moves your photos through enhancement, analysis, and tour generation without human intervention is entirely possible, and this guide shows you exactly how.

We're going to wire together a workflow that takes a folder of property photos, enhances them intelligently, generates metadata about the space, and produces a clickable virtual tour, all triggered by a single upload. You'll need an intermediate grasp of APIs and webhooks, but no coding expertise beyond copying configuration blocks.

The Automated Workflow

Choosing Your Orchestration Platform

You have three solid choices here: Zapier, n8n, and Make. Zapier is the most user-friendly but has per-task costs that stack quickly when you're processing multiple images. n8n and Make offer better value at scale and give you more granular control over the workflow logic. For this guide, I'll show the core logic in a platform-agnostic format, then provide specific n8n configuration because it handles image pipelines particularly well and won't surprise you with costs at high volume.

Claude Code is a fourth option if you want to write Python orchestration yourself, but that requires hosting and maintenance overhead that most agents want to avoid.

The Overall Flow

Here's what happens when you upload property photos:

  1. Photos arrive in a cloud folder (Google Drive, Dropbox, or AWS S3).
  2. AI-Boost processes them for colour correction, shadow recovery, and staging optimisation.
  3. PixelMotion AI analyses the enhanced images to extract room dimensions, layout, and spatial relationships.
  4. Hour One ingests the metadata and creates a narrated video walkthrough of the property.
  5. The completed tour gets saved to your listing management system or website.

No manual steps. No downloading files and re-uploading them. No waiting between stages wondering whether something finished processing.
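
The five stages above can be sketched as one composable function, with each stage injected as a callable so the orchestration logic stays testable independently of any platform. All names here are illustrative, not part of any tool's API:

```python
from typing import Callable

def run_listing_pipeline(
    photo_bytes: bytes,
    enhance: Callable[[bytes], bytes],          # Step 2: AI-Boost
    analyse: Callable[[bytes], dict],           # Step 3: PixelMotion AI
    generate_tour: Callable[[dict, bytes], str] # Step 4: Hour One
) -> str:
    """Run one photo through enhance -> analyse -> tour; return the tour URL."""
    enhanced = enhance(photo_bytes)
    metadata = analyse(enhanced)
    return generate_tour(metadata, enhanced)
```

In an n8n workflow each callable corresponds to an HTTP Request node; the point of the sketch is the strict ordering and the single input at the top.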

Setting Up the Trigger

Most workflows start with a file upload trigger. In n8n, you'd use a Google Drive node, Make uses a similar Google Drive trigger, and Zapier has native support. For this example, assume you're uploading photos to a folder called "Property Photos to Process" in Google Drive.

The trigger fires when a new file arrives, passes the file metadata downstream, and immediately fetches the actual image binary. This matters because you'll need the file content, not just the filename, to send to AI-Boost.

Step 1: Image Enhancement via AI-Boost

AI-Boost's API accepts image URLs or base64-encoded image data. If your source file is in Google Drive, you'll either create a shareable link or fetch the binary and encode it. Here's the endpoint:


POST https://api.ai-boost.com/v1/enhance
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY

{
  "image": "base64_encoded_image_string_here",
  "enhancement_profile": "real_estate",
  "options": {
    "auto_colour_correction": true,
    "shadow_recovery": 0.8,
    "saturation_boost": 0.15,
    "staging_mode": true
  }
}

The real_estate profile is specifically tuned for property photography. It avoids over-processing (which looks fake) whilst bringing out detail in dim corners and fixing colour casts from artificial lighting.

In n8n, you'd use an HTTP Request node to call this endpoint. If you're in Zapier, use the Webhooks by Zapier action to make the POST request. Here's roughly how the data flows in n8n:


Google Drive Trigger
  ↓
Read File from Drive (get binary content)
  ↓
Base64 Encode the Image
  ↓
HTTP Request to AI-Boost API
  ↓
Parse Enhancement Response

The response from AI-Boost is a JSON object containing the enhanced image as a base64 string:

{
  "status": "success",
  "enhanced_image": "iVBORw0KGgoAAAANSUhEUgAAAAUA...",
  "processing_time_ms": 2340,
  "enhancements_applied": [
    "colour_correction",
    "shadow_recovery",
    "staging"
  ]
}

Save this enhanced image to a temporary location (or store the base64 directly in your workflow state if your orchestration tool supports it). You'll need it for the next step.
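
In code, building the request body and unpacking the response is a few lines. This is a minimal sketch with hypothetical helper names; the payload fields mirror the example request and response above:

```python
import base64

def build_enhance_payload(image_bytes: bytes) -> dict:
    """Build the AI-Boost request body shown above from raw image bytes."""
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "enhancement_profile": "real_estate",
        "options": {
            "auto_colour_correction": True,
            "shadow_recovery": 0.8,
            "saturation_boost": 0.15,
            "staging_mode": True,
        },
    }

def decode_enhanced(response: dict) -> bytes:
    """Pull the enhanced image out of a successful response as raw bytes."""
    if response.get("status") != "success":
        raise ValueError(f"enhancement failed: {response}")
    return base64.b64decode(response["enhanced_image"])
```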

Step 2: Spatial Analysis via PixelMotion AI

PixelMotion AI's primary function is understanding room geometry, furniture placement, and spatial relationships from a single photograph. For real estate, this is valuable because it extracts metadata that feeds into your listing description and tour narration.

The API endpoint looks like this:


POST https://api.pixelmotion.ai/v1/analyse-space
Content-Type: application/json
Authorization: Bearer PIXELMOTION_API_KEY

{
  "image": "base64_or_url",
  "analysis_type": "real_estate",
  "extract_dimensions": true,
  "identify_furniture": true,
  "estimate_square_footage": true
}

PixelMotion returns structured data about what it detected:

{
  "status": "success",
  "room_type": "living_room",
  "estimated_dimensions": {
    "length_feet": 16.2,
    "width_feet": 14.8,
    "confidence": 0.87
  },
  "furniture_detected": [
    {
      "item": "sofa",
      "colour": "grey",
      "condition": "good"
    },
    {
      "item": "coffee_table",
      "material": "wood"
    }
  ],
  "estimated_square_feet": 240,
  "lighting_quality": "bright_natural",
  "overall_condition": "excellent"
}

You'll want to store this analysis result. Create a variable or database record that maps filename to analysis data. You're building a property profile as you process images.
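
One lightweight way to build that profile is a small accumulator keyed by filename. This is an illustrative sketch, not a PixelMotion SDK feature; in production you'd persist it to a database or sheet:

```python
class PropertyProfile:
    """Accumulate per-image PixelMotion analysis results into one property record."""
    def __init__(self, listing_id: str):
        self.listing_id = listing_id
        self.rooms: dict[str, dict] = {}  # filename -> analysis JSON

    def add_analysis(self, filename: str, analysis: dict) -> None:
        self.rooms[filename] = analysis

    def total_square_feet(self) -> int:
        """Sum the per-room estimates across every analysed image."""
        return sum(r.get("estimated_square_feet", 0) for r in self.rooms.values())
```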

Step 3: Video Generation via Hour One

Hour One turns structured data plus optional narration into video presentations. You give it a script (or let it generate one from your property metadata), and it produces a polished walkthrough.

First, construct the script from your property data. If you're processing multiple rooms, the script might look like this:


Living Room: This spacious 16 by 15 foot living room features bright natural lighting and premium hardwood flooring. The room comfortably accommodates a full seating area. 

Kitchen: Modern kitchen with stainless steel appliances and ample counter space...
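
A minimal script builder over the analysis objects might look like the sketch below; the phrasing template is an assumption you'd tune to your own listing voice:

```python
def room_blurb(analysis: dict) -> str:
    """Turn one PixelMotion analysis object into a sentence of narration."""
    dims = analysis["estimated_dimensions"]
    room = analysis["room_type"].replace("_", " ")
    return (
        f"{room.title()}: This {round(dims['length_feet'])} by "
        f"{round(dims['width_feet'])} foot {room} features "
        f"{analysis['lighting_quality'].replace('_', ' ')} lighting."
    )

def build_script(analyses: list[dict]) -> str:
    """Join per-room blurbs into the full narration script for Hour One."""
    return "\n\n".join(room_blurb(a) for a in analyses)
```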

Hour One's API endpoint:


POST https://api.hourone.com/v1/videos
Content-Type: application/json
Authorization: Bearer HOUR_ONE_API_KEY

{
  "script": "Your narration text here",
  "avatar": "professional_female_1",
  "voice": "en-US-neural",
  "background_images": ["base64_image_1", "base64_image_2"],
  "video_style": "real_estate_tour",
  "duration_seconds": 120,
  "background_music": "subtle_ambient"
}

Hour One's processing is asynchronous, so you'll get back a job ID:

{
  "status": "processing",
  "job_id": "job_abc123def456",
  "estimated_completion_seconds": 45
}

Your workflow needs to poll this endpoint until the video is ready:


GET https://api.hourone.com/v1/videos/job_abc123def456/status
Authorization: Bearer HOUR_ONE_API_KEY

Once complete, the response includes a download URL:

{
  "status": "completed",
  "video_url": "https://videos.hourone.com/job_abc123def456.mp4",
  "duration_seconds": 118,
  "file_size_mb": 42
}

In your orchestration tool, you'll use a polling loop or delay node. n8n has a built-in Wait node that lets you retry after N seconds, which is perfect here. Make and Zapier require more manual configuration, but it's straightforward.
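
If you orchestrate in code instead, the polling loop is a few lines. The status function and sleep are injected so the loop stays independent of any particular HTTP client; the attempt budget is an assumption you'd tune:

```python
import time

def poll_until_complete(check_status, max_attempts: int = 20,
                        delay_seconds: float = 30.0, sleep=time.sleep) -> dict:
    """Call check_status() until the job reports completed, waiting between tries.

    check_status is any zero-argument callable returning the status JSON
    shown above (e.g. a wrapper around the Hour One status endpoint).
    """
    for _ in range(max_attempts):
        status = check_status()
        if status.get("status") == "completed":
            return status
        if status.get("status") == "failed":
            raise RuntimeError(f"video job failed: {status}")
        sleep(delay_seconds)
    raise TimeoutError("video job did not complete within the polling budget")
```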

Step 4: Save and Distribute

Download the completed video and save it somewhere accessible. Google Drive, AWS S3, or your own web server all work. If you're integrating with a listing management system, you might POST it directly to their API.

Here's a rough n8n configuration for the complete workflow:


{
  "nodes": [
    {
      "name": "Google Drive Trigger",
      "type": "n8n-nodes-base.googleDrive",
      "operation": "watch",
      "folder": "Property Photos to Process"
    },
    {
      "name": "Read File Content",
      "type": "n8n-nodes-base.googleDrive",
      "operation": "download",
      "input": "{{ $json.id }}"
    },
    {
      "name": "Encode to Base64",
      "type": "n8n-nodes-base.code",
      "language": "python",
      "script": "import base64\nfile_content = _input.all()[0].binary.data\nencoded = base64.b64encode(file_content).decode()\nreturn {'image': encoded}"
    },
    {
      "name": "AI-Boost Enhancement",
      "type": "n8n-nodes-base.httpRequest",
      "method": "POST",
      "url": "https://api.ai-boost.com/v1/enhance",
      "headers": {
        "Authorization": "Bearer {{ env.AI_BOOST_KEY }}"
      },
      "body": "{{ $json }}"
    },
    {
      "name": "PixelMotion Analysis",
      "type": "n8n-nodes-base.httpRequest",
      "method": "POST",
      "url": "https://api.pixelmotion.ai/v1/analyse-space",
      "headers": {
        "Authorization": "Bearer {{ env.PIXELMOTION_KEY }}"
      }
    },
    {
      "name": "Build Video Script",
      "type": "n8n-nodes-base.code",
      "language": "python",
      "script": "analysis = _json\nscript = f'Room: {analysis[\"room_type\"]}. Dimensions: {analysis[\"estimated_dimensions\"][\"length_feet\"]} by {analysis[\"estimated_dimensions\"][\"width_feet\"]} feet.'\nreturn {'script': script}"
    },
    {
      "name": "Hour One Video Creation",
      "type": "n8n-nodes-base.httpRequest",
      "method": "POST",
      "url": "https://api.hourone.com/v1/videos",
      "headers": {
        "Authorization": "Bearer {{ env.HOUR_ONE_KEY }}"
      }
    },
    {
      "name": "Wait for Video Processing",
      "type": "n8n-nodes-base.wait",
      "waitTime": 30
    },
    {
      "name": "Check Video Status",
      "type": "n8n-nodes-base.httpRequest",
      "method": "GET",
      "url": "https://api.hourone.com/v1/videos/{{ $json.job_id }}/status"
    },
    {
      "name": "Save to Google Drive",
      "type": "n8n-nodes-base.googleDrive",
      "operation": "upload",
      "folderId": "Processed Tours",
      "fileName": "{{ originalFileName }}_tour.mp4"
    }
  ]
}

This is simplified, but it shows the node sequence. In production, you'd add error handling nodes and conditional logic to retry failed steps.

Data Transformation Between Steps

Each tool returns different formats. You'll need transformation nodes to map outputs to inputs:

AI-Boost returns a base64 image string. PixelMotion expects base64 or a URL. Hour One wants a script string plus image URLs. Build explicit transformation nodes between each step that extract, format, and route data correctly. This prevents frustrating failures where data types don't match expectations.
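
As a concrete sketch, an adapter from the AI-Boost response shape to the PixelMotion request shape (both as shown earlier) might look like this; the function name is hypothetical:

```python
def enhance_to_analysis_request(aiboost_response: dict) -> dict:
    """Adapt an AI-Boost success response into a PixelMotion request body.

    Field names mirror the example payloads earlier in this guide.
    """
    if aiboost_response.get("status") != "success":
        raise ValueError("cannot analyse a failed enhancement")
    return {
        "image": aiboost_response["enhanced_image"],  # base64 passes straight through
        "analysis_type": "real_estate",
        "extract_dimensions": True,
        "identify_furniture": True,
        "estimate_square_footage": True,
    }
```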

The Manual Alternative

If you prefer hands-on control or need custom editing at intermediate steps, you can break the workflow into smaller pieces and trigger them manually.

Upload your photos to a folder, manually run the AI-Boost batch processor through their web interface, download the enhanced images, upload those to PixelMotion's analyser, copy the extracted metadata into a spreadsheet, manually craft a property description, then feed that into Hour One. This gives you quality checkpoints where you can review results before proceeding.

The trade-off is obvious: you get control but lose the time savings. For most agents handling 10+ properties per week, full automation pays for itself in a month. For occasional listings, manual processing might be acceptable.

You can also use a hybrid approach: automate enhancement and analysis, but pause the workflow to let you review the generated script before video creation. This is a sensible middle ground. In n8n or Make, add an approval node that sends you an email with the generated script and waits for your manual approval before proceeding to Hour One.

Pro Tips

Rate Limiting and Throttling

AI-Boost allows 10 requests per second on their standard tier. If you're processing 20 photos simultaneously, add a throttle node that delays requests to stay under the limit. This prevents rejections and keeps your workflow reliable.


Delay: 100ms between AI-Boost requests
Delay: 500ms between PixelMotion requests (they're slower)
Delay: 1000ms between Hour One polling attempts
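
If you're scripting the pipeline yourself, a minimal throttle that enforces those gaps could look like this sketch; the clock and sleep functions are injectable so the behaviour is testable without real waiting:

```python
import time

class Throttle:
    """Enforce a minimum gap between successive API calls."""
    def __init__(self, min_interval_s: float, clock=time.monotonic, sleep=time.sleep):
        self.min_interval_s = min_interval_s
        self.clock = clock
        self.sleep = sleep
        self._last = None  # timestamp of the previous call, if any

    def wait(self):
        """Block just long enough to honour the interval, then stamp this call."""
        if self._last is not None:
            remaining = self.min_interval_s - (self.clock() - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

You'd create one `Throttle(0.1)` for AI-Boost and one `Throttle(0.5)` for PixelMotion, and call `.wait()` before each request.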

Error Handling and Retries

Network requests fail. APIs timeout. Build retry logic into every external call. In n8n, use the Try/Catch pattern. In Make, add error branches that retry after a short delay. Don't let one failed image tank your entire workflow.


Try: Send request to AI-Boost
Catch: Wait 5 seconds
Retry: Send request again
If Retry Fails: Log error and mark image for manual review
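
A code-level equivalent of that pattern is a small retry wrapper. The callable and sleep are injected; the attempt count and delay are assumptions you'd tune:

```python
import time

def with_retries(call, attempts: int = 3, delay_s: float = 5.0, sleep=time.sleep):
    """Run call(); on failure wait delay_s and retry, re-raising the last error."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == attempts:
                raise  # exhausted: surface the error so the item gets manual review
            sleep(delay_s)
```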

Cost Optimisation

Hour One charges per video minute. If you're creating 120-second walkthroughs for every property, costs add up. Consider batch processing: combine 3-5 property videos into a single Hour One request and split the result into chapters. Depending on your volume and pricing tier, this could save 30-40% on video generation costs.

Also, be selective about which images get analysed by PixelMotion. You probably don't need spatial analysis for every photo, only key rooms (kitchen, master bedroom, living areas). Skip the smaller closets and hallway shots.

Storing Metadata for Future Use

As you process properties, save the PixelMotion analysis results to a database or Google Sheet. Over time, you'll build a dataset of property characteristics. This becomes invaluable for comparables analysis and pricing guidance.

Handling Failed Jobs Gracefully

Don't assume everything succeeds. Some images might be too dark or blurry for AI-Boost to enhance properly. PixelMotion might fail on unusual room angles. Hour One's video generation occasionally times out. Build a retry queue that stores failed items and attempts them during off-peak hours (e.g., 2 AM). Send yourself a daily summary of what failed so you can investigate.
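
A minimal in-memory version of that retry queue might look like this sketch; in production you'd back it with a database or Google Sheet, and wire the summary into your daily email:

```python
from dataclasses import dataclass, field

@dataclass
class FailedJobQueue:
    """Hold failed pipeline items for an off-peak retry pass and a daily summary."""
    items: list = field(default_factory=list)

    def record(self, filename: str, stage: str, error: str) -> None:
        self.items.append({"filename": filename, "stage": stage, "error": error})

    def pending(self) -> list:
        """Items to replay during the off-peak window (e.g. 2 AM)."""
        return list(self.items)

    def daily_summary(self) -> str:
        lines = [f"{i['filename']} failed at {i['stage']}: {i['error']}"
                 for i in self.items]
        return "\n".join(lines) or "No failures today."
```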

Cost Breakdown

| Tool | Plan Needed | Monthly Cost | Notes |
| --- | --- | --- | --- |
| AI-Boost | Professional | £120 | Includes 5,000 image enhancements. Most agents need 2-3 plans at scale. |
| PixelMotion AI | Standard | £80 | 2,000 analyses per month. Rarely hits this ceiling. |
| Hour One | Creator | £200 | Includes 120 minutes of video generation. Scale to Business (£500) if processing 20+ videos monthly. |
| n8n | Cloud Pro | £40 | Workflow automation with 400,000 executions monthly. Sufficient for 100+ properties. Self-hosted instance is free but requires DevOps. |
| Google Drive | Business Standard | £12 | Per user. Needed for cloud storage. |
| Total | All tools combined | £452 | Supports 100-150 property workflows monthly with room to scale. |

These are UK prices as of early 2025. Costs vary by region and tier. If you self-host n8n, drop £40 and add roughly £20-50 monthly for server infrastructure.

For a solo agent processing 5 properties weekly (roughly 20 per month), £452 monthly covers the entire pipeline. At £500 per property in time savings (a conservative 4 hours at £125 per hour), that's £10,000 gross, or net savings of £9,548 per month. The ROI case is straightforward.
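
The arithmetic behind those figures can be checked directly; the only added assumption is a 4-week month:

```python
# Inputs taken from the figures quoted above.
properties_per_month = 5 * 4           # 5 properties per week, 4-week month assumed
hours_saved_per_property = 4           # conservative estimate
hourly_rate_gbp = 125
monthly_tooling_cost_gbp = 452         # total from the cost table

gross_savings_gbp = properties_per_month * hours_saved_per_property * hourly_rate_gbp
net_savings_gbp = gross_savings_gbp - monthly_tooling_cost_gbp
print(net_savings_gbp)  # 9548
```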
