Introduction
Candidate screening and interview preparation typically involve at least three distinct manual steps: reviewing CVs, extracting key information, and preparing structured interview notes. Most hiring teams handle this as a tedious handoff process, copying information between tabs and documents: someone reads a PDF CV, another person summarises it, and a third prepares interview questions. Each handoff introduces errors and delays.
The good news is that this entire workflow can be automated without writing custom code or paying for an expensive enterprise recruitment platform. By connecting Chat with PDF by Copilotus, Cogram, and HyperBound AI through an orchestration layer, you can process candidate documents from initial upload to final interview preparation in minutes. Once triggered, the workflow runs end-to-end with no manual intervention.
This guide shows you how to build exactly that. We'll wire together three specialised tools into a production-ready system that extracts structured data from CVs, identifies key competencies, and generates customised interview frameworks.
The Automated Workflow
High-Level Architecture
The workflow follows this sequence: a candidate CV arrives (via email, form, or direct upload); Chat with PDF by Copilotus extracts and summarises the document; the structured data feeds into HyperBound AI to identify interview topics; Cogram generates detailed interview notes and follow-up action items; results land in a shared document or database for your team.
Choosing Your Orchestration Tool
For this workflow, we recommend n8n or Make. Zapier works but has stricter limitations on request payloads and conditional logic. Claude Code is less suitable since this requires scheduled or webhook-triggered execution rather than interactive sessions.
Use n8n if you want full control over error handling and prefer self-hosted infrastructure. Use Make if you need cloud-only deployment and want a visual builder with reasonable pricing at this scale.
We'll show both approaches below.
Step 1: Trigger and File Upload
The workflow starts when a CV PDF arrives. You can trigger this three ways:
- Email integration (new email with attachment)
- Form submission (job application portal)
- Direct webhook (your ATS system posts the candidate data)
For this example, we'll use Make's email watch trigger:
Trigger: Watch Emails (Gmail)
Configuration:
- Gmail account: your.recruiter@company.com
- Search query: "from:candidates@company.com has:attachment filename:pdf"
- Limit: 1 email per trigger
Output fields: Email subject, From address, Attachment file (base64 encoded)
In n8n, the equivalent node is "Gmail trigger":
{
  "nodeType": "n8n-nodes-base.gmail",
  "operation": "checkForNewEmails",
  "parameters": {
    "includeAttachments": true,
    "markAsRead": true,
    "limit": 1
  }
}
Step 2: Extract Data from PDF with Chat with PDF by Copilotus
Once you have the PDF, send it to the Chat with PDF API. This tool uses OCR and AI to read PDFs and answer structured questions about their contents.
First, obtain your Copilotus API key from your account dashboard. Then call the document processing endpoint:
POST https://api.copilotus.com/v1/documents/process
Headers:
Authorization: Bearer YOUR_COPILOTUS_API_KEY
Content-Type: application/json
Body:
{
  "file_url": "https://your-file-storage.com/cv-12345.pdf",
  "extraction_template": {
    "name": "string",
    "email": "string",
    "phone": "string",
    "experience_years": "number",
    "key_skills": "array of strings",
    "previous_roles": "array of objects with title, company, duration",
    "education": "array of objects with degree, institution, year",
    "salary_expectation": "string or null",
    "notice_period": "string or null"
  }
}
The API returns structured JSON within 10-15 seconds. In your orchestration tool, store this response in a variable called candidate_data.
Important API note: Copilotus accepts either file URLs or base64-encoded file content. If you're receiving the PDF as a base64 string from your email trigger, include it directly:
Body:
{
  "file_content": "JVBERi0xLjQKJeLjz9MNCjEgMCBvYmo...",
  "file_name": "cv-john-smith.pdf",
  "extraction_template": { ... }
}
Allow 15 seconds for processing before moving to the next step. Most orchestration tools have a "wait" or "delay" node for this.
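If your orchestration tool supports a code step, the call above can also be made directly. Here is a minimal Python sketch using only the standard library, assuming the endpoint and field names shown in this guide; check the Copilotus API documentation for the exact contract before relying on it:

```python
import json
import urllib.request

# Endpoint and template as shown in this guide; verify against the official docs.
COPILOTUS_URL = "https://api.copilotus.com/v1/documents/process"

EXTRACTION_TEMPLATE = {
    "name": "string",
    "email": "string",
    "phone": "string",
    "experience_years": "number",
    "key_skills": "array of strings",
    "previous_roles": "array of objects with title, company, duration",
    "education": "array of objects with degree, institution, year",
    "salary_expectation": "string or null",
    "notice_period": "string or null",
}

def build_extraction_request(file_url: str) -> dict:
    # For base64 input, swap file_url for file_content/file_name as shown above.
    return {"file_url": file_url, "extraction_template": EXTRACTION_TEMPLATE}

def extract_cv(file_url: str, api_key: str) -> dict:
    req = urllib.request.Request(
        COPILOTUS_URL,
        data=json.dumps(build_extraction_request(file_url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # Network call; processing typically takes 10-15 seconds.
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Store the returned JSON as candidate_data, exactly as with the HTTP Request node.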
Step 3: Identify Interview Topics with HyperBound AI
HyperBound AI specialises in mapping competencies to interview questions. Feed it the structured candidate data from Step 2.
POST https://api.hyperbound.ai/v1/interview/analyse
Headers:
Authorization: Bearer YOUR_HYPERBOUND_API_KEY
Content-Type: application/json
Body:
{
  "candidate_name": "candidate_data.name",
  "job_title": "Software Engineer",
  "candidate_background": {
    "years_experience": "candidate_data.experience_years",
    "key_skills": "candidate_data.key_skills",
    "previous_roles": "candidate_data.previous_roles"
  },
  "focus_areas": [
    "technical_competency",
    "culture_fit",
    "problem_solving",
    "team_collaboration"
  ]
}
HyperBound returns a JSON object with recommended interview topics and risk areas:
{
  "candidate_id": "12345",
  "interview_topics": [
    {
      "topic": "Distributed Systems Experience",
      "duration_minutes": 15,
      "priority": "high",
      "rationale": "CV shows 3 years with microservices architecture"
    },
    {
      "topic": "Leadership in Previous Role",
      "duration_minutes": 10,
      "priority": "medium",
      "rationale": "Managed team of 2-3 engineers; verify depth"
    }
  ],
  "risk_factors": [
    "Employment gap 2019-2020 (needs clarification)",
    "Frequent job changes in first 5 years (pattern concern)"
  ],
  "estimated_interview_duration": 45
}
Store this as interview_topics.
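Note that quoted values like "candidate_data.name" in the request body above are placeholders for your orchestration tool's variable syntax (mapping pills in Make, expressions in n8n). If you assemble the request in a code step instead, the substitution is explicit. A sketch, assuming the field names shown in this guide:

```python
def build_hyperbound_request(candidate_data: dict, job_title: str) -> dict:
    """Substitute real values where the template above shows placeholder strings."""
    return {
        "candidate_name": candidate_data["name"],
        "job_title": job_title,
        "candidate_background": {
            "years_experience": candidate_data["experience_years"],
            "key_skills": candidate_data["key_skills"],
            "previous_roles": candidate_data["previous_roles"],
        },
        "focus_areas": [
            "technical_competency",
            "culture_fit",
            "problem_solving",
            "team_collaboration",
        ],
    }
```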
Step 4: Generate Interview Notes with Cogram
Cogram is a meeting transcription and note-taking tool, but it also has a preparatory API for generating structured meeting agendas. Use it to turn the interview topics into a detailed script and preparation guide.
POST https://api.cogram.com/v1/meetings/prepare
Headers:
Authorization: Bearer YOUR_COGRAM_API_KEY
Content-Type: application/json
Body:
{
  "meeting_type": "interview",
  "candidate_name": "candidate_data.name",
  "candidate_email": "candidate_data.email",
  "job_title": "Software Engineer",
  "interview_focus_areas": "interview_topics.interview_topics",
  "candidate_background_summary": "candidate_data",
  "risk_factors": "interview_topics.risk_factors",
  "duration_minutes": "interview_topics.estimated_interview_duration"
}
Cogram returns a detailed agenda with suggested questions, conversation flow, and notes for the interviewer:
{
  "agenda": [
    {
      "section": "Introduction",
      "duration": 5,
      "talking_points": [
        "Welcome and brief company overview",
        "Outline the interview structure"
      ]
    },
    {
      "section": "Distributed Systems Deep Dive",
      "duration": 15,
      "questions": [
        "Walk us through your experience with microservices. How did you handle service communication?",
        "Tell us about a time you had to debug a distributed system issue.",
        "How do you approach consistency and availability trade-offs?"
      ],
      "follow_ups": [
        "What would you do differently now?",
        "Did you measure latency improvements?"
      ]
    },
    {
      "section": "Employment Gap Clarification",
      "duration": 5,
      "notes": "Neutral, factual tone. Avoid assumptions.",
      "suggested_approach": "I notice you took some time between roles in 2019-2020. What were you working on during that period?"
    }
  ],
  "key_reminders": [
    "Listen for comments about team dynamics; check for collaboration skills",
    "Probe on the two short stints in 2015-2016; understand why they left"
  ],
  "follow_up_items": [
    "Send technical assessment (if progressing)",
    "Check references from last two employers",
    "Confirm availability for next round"
  ]
}
Step 5: Store Results and Notify Team
Create a document in Google Docs or save JSON to your database. Most orchestration tools have built-in connectors for this.
In Make or n8n, use the "Google Docs" module to create a new document:
Module: Google Docs > Create Document
Parameters:
- Title: "Interview Prep: [candidate_name] - [date]"
- Content: Format the Cogram output as readable text
- Share with: hiring.team@company.com
Example content format:
CANDIDATE: John Smith
EMAIL: john.smith@email.com
PHONE: 07700 123456
EXPERIENCE: 6 years
KEY SKILLS: Python, Go, Kubernetes, PostgreSQL
INTERVIEW AGENDA (45 minutes estimated)
1. INTRODUCTION (5 min)
- Welcome and company overview
- Outline the interview structure
2. DISTRIBUTED SYSTEMS DEEP DIVE (15 min)
Questions:
- Walk us through your experience with microservices...
- Tell us about a time you had to debug...
RISK FACTORS:
- Employment gap 2019-2020 (needs clarification)
- Frequent job changes in first 5 years
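The "format as readable text" step is a good candidate for a small code node. A sketch that renders the layout above from the stored responses; the field names follow the sample payloads in this guide, so adjust them if your actual responses differ:

```python
def format_prep_document(candidate: dict, analysis: dict, agenda: dict) -> str:
    """Render the plain-text prep document from the three stored API responses."""
    lines = [
        f"CANDIDATE: {candidate['name']}",
        f"EMAIL: {candidate['email']}",
        f"PHONE: {candidate['phone']}",
        f"EXPERIENCE: {candidate['experience_years']} years",
        "KEY SKILLS: " + ", ".join(candidate["key_skills"]),
        "",
        f"INTERVIEW AGENDA ({analysis['estimated_interview_duration']} minutes estimated)",
    ]
    # Number each agenda section and list its talking points and questions.
    for number, section in enumerate(agenda["agenda"], start=1):
        lines.append(f"{number}. {section['section'].upper()} ({section['duration']} min)")
        for point in section.get("talking_points", []):
            lines.append(f"- {point}")
        if section.get("questions"):
            lines.append("Questions:")
            lines.extend(f"- {q}" for q in section["questions"])
    lines += ["", "RISK FACTORS:"]
    lines.extend(f"- {risk}" for risk in analysis.get("risk_factors", []))
    return "\n".join(lines)
```

Pass the result straight into the Google Docs module's content field.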
Send a Slack message to notify the hiring manager:
POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Body:
{
  "text": "Interview prep ready for John Smith",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*New Interview Prep Generated*\nCandidate: John Smith\nRole: Software Engineer\n<https://docs.google.com/document/...|View prep document>"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Key Points*\n• 6 years experience in distributed systems\n• Ask about 2019-2020 employment gap\n• Estimated interview: 45 minutes"
      }
    }
  ]
}
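If you prefer to assemble the notification in code, a sketch that builds the Block Kit payload and posts it. Note that Slack's mrkdwn link syntax is `<url|text>`; the webhook URL is the placeholder from your Slack app configuration:

```python
import json
import urllib.request

def build_slack_message(candidate_name: str, job_title: str,
                        doc_url: str, key_points: list) -> dict:
    """Block Kit payload matching the notification above."""
    bullet_lines = "\n".join(f"• {p}" for p in key_points)
    return {
        "text": f"Interview prep ready for {candidate_name}",
        "blocks": [
            {"type": "section", "text": {"type": "mrkdwn",
             "text": f"*New Interview Prep Generated*\nCandidate: {candidate_name}\n"
                     f"Role: {job_title}\n<{doc_url}|View prep document>"}},
            {"type": "section", "text": {"type": "mrkdwn",
             "text": f"*Key Points*\n{bullet_lines}"}},
        ],
    }

def notify(webhook_url: str, payload: dict) -> None:
    # POST to the incoming-webhook URL from your Slack app settings.
    req = urllib.request.Request(webhook_url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)
```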
Complete n8n Workflow Definition
If you're using n8n, here's the approximate node structure (pseudocode):
Node 1: Gmail Trigger
└─> Node 2: Extract attachment (base64 decode)
└─> Node 3: HTTP Request - Copilotus PDF extraction
└─> Node 4: Wait 15 seconds
└─> Node 5: HTTP Request - HyperBound interview analysis
└─> Node 6: HTTP Request - Cogram meeting prep
└─> Node 7: Google Docs - Create interview prep doc
└─> Node 8: Slack - Send notification
Each HTTP Request node should include error handling: if any API call fails, send an alert email to your recruiter rather than failing silently.
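One way to implement that error handling in a code step is a small retry wrapper that escalates instead of failing silently. The alert hook is whatever email or Slack action your orchestration tool provides:

```python
import time

def call_with_retries(call, retries=2, backoff_seconds=5, on_failure=None):
    """Run an API call; retry on error, then invoke the alert hook before raising."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception as err:  # in production, catch your HTTP client's error type
            last_error = err
            if attempt < retries:
                time.sleep(backoff_seconds * (attempt + 1))  # simple linear backoff
    if on_failure:
        on_failure(last_error)  # e.g. send the recruiter an alert email
    raise last_error
```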
The Manual Alternative
If you prefer more control or want to run this partially automated, you can:
- Use Chat with PDF by Copilotus manually via their web interface, copy the extracted data, then trigger the workflow from Step 3 onwards.
- Generate interview topics with HyperBound AI manually, then let Cogram handle agenda creation and notification.
- Run the entire workflow weekly on a batch of candidates instead of in real time, reducing API costs and allowing time for human review between steps.
This approach trades speed for control; useful if your candidate volume is low or if you want hiring managers to validate extracted data before Cogram generates the full agenda.
Pro Tips
1. Handle OCR Failures Gracefully
Some scanned CVs have poor OCR quality. Copilotus will return lower confidence scores. Add logic to flag these for manual review:
If Copilotus response confidence < 0.75:
- Mark as "Requires Manual Review"
- Send email to recruiter with the original PDF
- Stop automated workflow, don't proceed to Step 3
This prevents bad data cascading through your pipeline.
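The gate itself is a one-branch router node or code step. A sketch, assuming the Copilotus response exposes a top-level confidence score between 0 and 1 (the exact field name may differ in the real API):

```python
REVIEW_THRESHOLD = 0.75  # threshold from the rule above

def route_extraction(response: dict) -> str:
    """Return the next workflow step based on the reported OCR confidence."""
    confidence = response.get("confidence", 0.0)  # treat a missing score as untrusted
    if confidence < REVIEW_THRESHOLD:
        # Flag the record, email the recruiter the original PDF, halt the pipeline.
        return "manual_review"
    return "proceed_to_step_3"
```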
2. Rate Limit and Batch Processing
Copilotus, HyperBound, and Cogram all have rate limits (typically 100-500 requests per hour per plan). If you receive more than 10 CVs per day, batch them and stagger API calls:
Process maximum 2 candidates per minute
Queue surplus candidates for next batch window
Send notification: "Queued for processing at [time]"
This keeps you within free-tier limits and avoids 429 errors.
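The stagger logic is just slot arithmetic: assign each queued candidate the next available minute window. A sketch:

```python
from datetime import datetime, timedelta

def schedule_batch(candidate_ids, start, per_minute=2):
    """Assign each candidate a processing slot, at most per_minute per minute."""
    slots = []
    for i, cid in enumerate(candidate_ids):
        # Integer division groups candidates into one-minute windows.
        slot = start + timedelta(minutes=i // per_minute)
        slots.append((cid, slot))
    return slots
```

The slot timestamp doubles as the "[time]" value in the queue notification.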
3. Cache Interview Topics for Similar Roles
If you're hiring multiple people for the same role, HyperBound generates very similar topic suggestions. Cache the response and reuse it for the same job title within 30 days:
Before calling HyperBound:
Check database: "Do we have cached topics for [job_title] from [today - 30 days]?"
If yes: Use cached response
If no: Call HyperBound API, store result with timestamp
This cuts API costs by roughly 40% for high-volume hiring rounds.
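The cache check above can be a few lines in a code step. A sketch using an in-memory dict; in the real workflow the cache would live in your database, and the fetch callback would be your HyperBound API call:

```python
from datetime import datetime, timedelta

CACHE_TTL = timedelta(days=30)  # 30-day window from the rule above

def get_topics(job_title, cache, fetch, now=None):
    """Return cached topics for job_title if fresh, else fetch and store them."""
    now = now or datetime.utcnow()
    entry = cache.get(job_title)
    if entry and now - entry["stored_at"] < CACHE_TTL:
        return entry["topics"]
    topics = fetch(job_title)  # the HyperBound API call from Step 3
    cache[job_title] = {"topics": topics, "stored_at": now}
    return topics
```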
4. Store Structured Data in a Database
Don't rely solely on Google Docs. Store the extracted candidate data and interview notes in a structured database (PostgreSQL, MongoDB, or even Airtable):
candidates table:
- id (primary key)
- name, email, phone
- cv_file_url
- extracted_skills (array)
- years_experience
- created_at
interview_preps table:
- id (primary key)
- candidate_id (foreign key)
- interview_topics (JSON)
- risk_factors (array)
- cogram_agenda (JSON)
- document_url
- created_at
This allows filtering, sorting, and reporting by skill, experience level, or hiring stage. Your recruiting team can search for candidates by specific skills instead of searching through 200 Google Docs.
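The schema above maps directly onto SQL. A sketch using SQLite, where arrays and JSON are stored as TEXT; Postgres would use native array and jsonb columns, and the skill filter below would become a jsonb containment query rather than LIKE:

```python
import json
import sqlite3

SCHEMA = """
CREATE TABLE candidates (
    id INTEGER PRIMARY KEY,
    name TEXT, email TEXT, phone TEXT,
    cv_file_url TEXT,
    extracted_skills TEXT,      -- JSON array (SQLite has no native array type)
    years_experience INTEGER,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE interview_preps (
    id INTEGER PRIMARY KEY,
    candidate_id INTEGER REFERENCES candidates(id),
    interview_topics TEXT,      -- JSON from HyperBound
    risk_factors TEXT,          -- JSON array
    cogram_agenda TEXT,         -- JSON from Cogram
    document_url TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def init_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn

def add_candidate(conn, name, email, phone, cv_file_url, skills, years):
    cur = conn.execute(
        "INSERT INTO candidates (name, email, phone, cv_file_url,"
        " extracted_skills, years_experience) VALUES (?, ?, ?, ?, ?, ?)",
        (name, email, phone, cv_file_url, json.dumps(skills), years))
    return cur.lastrowid

def find_by_skill(conn, skill):
    # Crude substring match on the serialised array; fine for small teams.
    rows = conn.execute(
        "SELECT name FROM candidates WHERE extracted_skills LIKE ?",
        (f'%"{skill}"%',))
    return [r[0] for r in rows]
```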
5. Implement Feedback Loops
After interviews, capture whether the prep document was useful. Did the interviewer ask questions from Cogram's suggestions? Did the risk factors matter? Use this feedback to improve future workflows:
Post-interview survey (Slack):
"Was this interview prep useful? [👍 Yes / 👎 No]"
"Did the risk factors identified match your interview experience? [Yes / No]"
Store responses linked to candidate_id.
Analyse every 3 months: Which prep points correlate with successful hires?
Adjust HyperBound focus areas based on results.
Over time, your workflow becomes more tailored to your actual hiring patterns.
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Chat with PDF by Copilotus | Pro (500 documents/month) | £29 | Scales to £79 at 2,000 docs/month. Most hiring teams stay in Pro tier. |
| HyperBound AI | Starter (100 analyses/month) | £39 | Includes historical competency database. Mid-market plan is £99. |
| Cogram | Teams (unlimited meetings) | £99 | Includes meeting transcription; you'll use 2-3% of capacity for interview prep. |
| n8n (self-hosted) | Open source | £0 | Or £20/month cloud execution if you prefer not to self-host. |
| Make | Standard (1,000 operations/month) | £9.99 | 1 CV triggers roughly 4-5 operations (email, 3 API calls, storage). Scales to £29 at higher volume. |
| Google Workspace / Slack | Existing subscription | £0 assumed | If not already in use, add ~£10/month (Slack) + £6/user/month (Google). |
| Total (low-volume hiring) | — | £177-207/month | Assuming 50-100 candidates/month, self-hosted n8n. |
| Total (high-volume hiring) | — | £250-300/month | Assuming 300+ candidates/month, cloud Make, higher API tiers. |
For comparison, a single recruiter costs £25,000-40,000 per year. This workflow replaces roughly 10-15 hours of manual CV screening and interview prep per week, making it cost-effective from month one.
Next Steps
1. Create API keys for Copilotus, HyperBound, and Cogram.
2. Set up your orchestration tool (n8n or Make) and build the workflow step-by-step, testing each API call independently first.
3. Start with a pilot: run the workflow on 5-10 CVs manually and gather feedback from your hiring team about the quality of generated interview prep.
4. Once validated, enable the email trigger and let it run live.
5. Track API usage and refine caching logic after your first high-volume hiring week.
The workflow is modular; you can add steps later (background checks, skills assessments, offer generation) without rebuilding the core pipeline.