Introduction
Product requirement documents (PRDs) are notoriously verbose. A well-written PRD might run 20 pages, but the actual technical specifications often amount to a few paragraphs buried in the middle. Engineers then spend hours extracting the signal from the noise: parsing acceptance criteria, identifying API constraints, and pulling out data model requirements.
This repetitive extraction work is perfect for automation. Instead of manually copying specifications from a PRD into a requirements tracker, you can chain three AI tools together to identify technical requirements, parse them into structured data, and commit them directly to your source control. The entire process runs without human intervention between the initial upload and the final output.
This Alchemy guide shows you how to wire Bhava AI, ParSpec AI, and SourceAI together using n8n to extract technical specifications from your product requirements in under two minutes per document.
The Automated Workflow
The workflow in four steps:
- A PRD arrives (email, webhook, or manual trigger)
- Bhava AI identifies the technical requirement sections
- ParSpec AI structures them into machine-readable specs
- SourceAI generates a formatted specification file and pushes it to Git
We will use n8n as our orchestration platform because its visual interface handles multi-step API calls without requiring you to write glue code, and its built-in error handling prevents partial failures from corrupting your data.
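Conceptually, the pipeline is just a chain of three calls. Here is a minimal Python sketch of that chain with the service calls stubbed out (in the real workflow, each stub corresponds to an n8n HTTP Request node; the function names and return shapes are illustrative, not part of any tool's API):

```python
from typing import Any

# Stubbed service calls -- in the real workflow these are n8n HTTP Request
# nodes hitting the Bhava AI, ParSpec AI, and SourceAI endpoints.
def bhava_extract(document: bytes) -> list[dict[str, Any]]:
    """Identify technical sections in the raw PRD."""
    return [{"type": "api_specifications", "confidence": 0.94,
             "content": "The API must support REST endpoints..."}]

def parspec_structure(sections: list[dict[str, Any]]) -> dict[str, Any]:
    """Convert unstructured sections into an OpenAPI-style spec."""
    return {"openapi": "3.0.0", "source_sections": len(sections)}

def sourceai_commit(spec: dict[str, Any], project: str) -> str:
    """Commit the spec and open a pull request; returns the branch name."""
    return f"specs/auto-generated-{project}"

def run_pipeline(document: bytes, project: str) -> str:
    sections = bhava_extract(document)
    spec = parspec_structure(sections)
    return sourceai_commit(spec, project)
```

The value of the orchestrator is everything around this chain: retries, credentials, and routing failures to the error handler.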
Setting Up the n8n Workflow
Preparing your credentials:
First, obtain API keys from each service. Store these in n8n's Credential Manager:
- Bhava AI: Create a credential of type "Custom API" with header "Authorization: Bearer YOUR_BHAVA_KEY"
- ParSpec AI: Same approach with "Authorization: Bearer YOUR_PARSPEC_KEY"
- SourceAI: Create a GitHub credential with your personal access token (scope: repo, workflow)
Step 1: Trigger and Input
Your workflow starts when a new PRD lands in a monitored location. Create an HTTP webhook node that accepts multipart form data:
POST /webhook/prd-upload
Content-Type: multipart/form-data
file: <binary>
project_name: string
Configure the webhook to return a 200 status immediately, so the client does not wait for the full extraction pipeline to complete.
Alternatively, use a Google Drive or Slack trigger if your team already stores PRDs in those systems. The trigger itself is not the bottleneck; the important thing is that you capture the document and a project identifier.
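To exercise the webhook before wiring up a real client, you can build the multipart payload by hand. This stdlib-only sketch matches the field names in the webhook contract above; everything else (the helper itself, the boundary format) is illustrative:

```python
import uuid

def build_multipart(file_bytes: bytes, filename: str, project_name: str):
    """Build a multipart/form-data body with a `file` and `project_name` field."""
    boundary = f"----prd-{uuid.uuid4().hex}"
    parts = [
        # Plain text field: project_name
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="project_name"\r\n\r\n'
         f'{project_name}\r\n').encode(),
        # Binary field: the PRD itself
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
         f'Content-Type: application/pdf\r\n\r\n').encode() + file_bytes + b"\r\n",
        # Closing boundary
        f'--{boundary}--\r\n'.encode(),
    ]
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, b"".join(parts)
```

Send the returned headers and body to your webhook URL with any HTTP client to simulate an upload.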
Step 2: Extract Requirements with Bhava AI
After receiving the document, send it to Bhava AI's document analysis endpoint. Bhava AI is trained to identify sections of technical importance within longer documents.
Create a new HTTP Request node in n8n with the following configuration:
Node Configuration:
Method: POST
URL: https://api.bhava-ai.io/v1/documents/analyse
Headers:
Authorization: Bearer {{ $credentials.bhavaAPI.key }}
Content-Type: application/json
Body:
{
"document_id": "{{ $json.file.filename }}",
"content_type": "application/pdf",
"analysis_type": "technical_requirements",
"extract_sections": [
"api_specifications",
"data_models",
"performance_constraints",
"security_requirements",
"acceptance_criteria"
]
}
Bhava AI returns a JSON object with identified sections and their confidence scores. You will receive something like:
{
"document_id": "PRD-2024-Q1.pdf",
"analysis_timestamp": "2024-01-15T09:42:00Z",
"sections": [
{
"type": "api_specifications",
"confidence": 0.94,
"content": "The API must support REST endpoints for user authentication...",
"location": {
"page": 8,
"paragraph": 3
}
},
{
"type": "data_models",
"confidence": 0.87,
"content": "User objects contain id, email, created_at, role field..."
}
],
"summary": "Document contains 5 major technical sections"
}
n8n keeps each node's output attached to the node, so you can reference it in later steps through the $nodes variable. You will do exactly that in the next step.
Step 3: Structure Specifications with ParSpec AI
ParSpec AI takes the unstructured text Bhava AI identified and converts it into a formal specification format. This is where the data becomes machine-readable.
Create a second HTTP Request node:
Node Configuration:
Method: POST
URL: https://api.parspec-ai.io/v1/specs/parse
Headers:
Authorization: Bearer {{ $credentials.parspecAPI.key }}
Content-Type: application/json
Body:
{
"source_sections": {{ JSON.stringify($nodes["Bhava Analysis"].json.sections) }},
"specification_format": "openapi_3.0",
"include_constraints": true,
"generate_examples": true,
"project_context": {
"name": "{{ $json.project_name }}",
"domain": "backend"
}
}
ParSpec AI processes each identified section and outputs OpenAPI 3.0 compatible YAML. A typical response looks like:
{
"status": "success",
"specification": {
"openapi": "3.0.0",
"info": {
"title": "PRD-2024-Q1 API Specification",
"version": "1.0.0"
},
"paths": {
"/auth/login": {
"post": {
"summary": "User login endpoint",
"requestBody": {
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"email": { "type": "string" },
"password": { "type": "string" }
},
"required": ["email", "password"]
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"properties": {
"id": { "type": "string", "format": "uuid" },
"email": { "type": "string" },
"created_at": { "type": "string", "format": "date-time" },
"role": { "type": "string", "enum": ["admin", "user", "guest"] }
},
"required": ["id", "email", "created_at"]
}
}
}
},
"extraction_confidence": 0.91,
"warnings": [
"Section 'performance_constraints' was incomplete; verify response time requirements manually"
]
}
This structured output is ready for code generation or integration into your development workflow.
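Before handing the output to the commit step, a quick structural sanity check catches obviously broken responses. A minimal sketch, where the expected keys mirror the sample response above (the 0.7 cut-off is an illustrative choice):

```python
def validate_spec_response(resp: dict) -> list[str]:
    """Return a list of problems found in a ParSpec-style response."""
    problems = []
    if resp.get("status") != "success":
        problems.append(f"unexpected status: {resp.get('status')!r}")
    spec = resp.get("specification", {})
    for key in ("openapi", "info", "paths"):
        if key not in spec:
            problems.append(f"specification missing key: {key}")
    if resp.get("extraction_confidence", 0) < 0.7:
        problems.append("low extraction confidence; flag for manual review")
    # Surface the tool's own warnings alongside structural problems
    problems.extend(f"warning: {w}" for w in resp.get("warnings", []))
    return problems
```

An empty list (or warnings only) means the response is safe to commit; anything else should route to the error handler.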
Step 4: Commit to Git with SourceAI
The final step commits the generated specification to your repository. SourceAI handles the Git interaction and formats the file appropriately.
Create a third HTTP Request node:
Node Configuration:
Method: POST
URL: https://api.sourceai.dev/v1/git/commit
Headers:
Authorization: Bearer {{ $credentials.sourceAIAPI.token }}
Content-Type: application/json
Body:
{
"repository": {
"owner": "{{ $json.github_owner }}",
"name": "{{ $json.github_repo }}",
"branch": "specs/auto-generated"
},
"commit": {
"message": "Auto-generated specs from PRD: {{ $json.project_name }}",
"author": {
"name": "Specification Bot",
"email": "specs@yourcompany.internal"
}
},
"files": [
{
"path": "specs/{{ $json.project_name }}-api.yaml",
"content": {{ JSON.stringify($nodes["ParSpec Parsing"].json.specification) }},
"action": "create"
},
{
"path": "specs/{{ $json.project_name }}-metadata.json",
"content": {
"source_document": "{{ $json.file.filename }}",
"generated_at": "{{ now() }}",
"extraction_confidence": "{{ $nodes['ParSpec Parsing'].json.extraction_confidence }}",
"parspec_warnings": {{ JSON.stringify($nodes['ParSpec Parsing'].json.warnings) }}
},
"action": "create"
}
],
"create_pull_request": true,
"pull_request": {
"title": "Specification: {{ $json.project_name }}",
"description": "Automatically extracted and structured technical specifications from product requirements."
}
}
SourceAI creates a new branch, commits both the OpenAPI specification and metadata, and opens a pull request for human review. This prevents automatically pushing unverified specs to your main branch.
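If you prefer to assemble this payload in code rather than a templated JSON body, the same structure can be built with a small helper. A Python sketch mirroring the request body above (the helper and its arguments are illustrative; the field names come from the request):

```python
import json
from datetime import datetime, timezone

def build_commit_payload(owner: str, repo: str, project: str,
                         spec: dict, source_file: str, confidence: float) -> dict:
    """Assemble a SourceAI-style commit request from pipeline outputs."""
    metadata = {
        "source_document": source_file,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "extraction_confidence": confidence,
    }
    return {
        "repository": {"owner": owner, "name": repo, "branch": "specs/auto-generated"},
        "commit": {"message": f"Auto-generated specs from PRD: {project}"},
        "files": [
            {"path": f"specs/{project}-api.yaml",
             "content": json.dumps(spec, indent=2), "action": "create"},
            {"path": f"specs/{project}-metadata.json",
             "content": json.dumps(metadata, indent=2), "action": "create"},
        ],
        "create_pull_request": True,
    }
```

Serialising the spec as JSON still yields a valid YAML file, since JSON is a subset of YAML.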
Connecting the Nodes
In n8n's visual editor, arrange the nodes in sequence:
[HTTP Webhook Trigger]
↓
[Bhava AI Analysis]
↓
[ParSpec Parsing]
↓
[SourceAI Git Commit]
↓
[Error Handler]
In each node's settings, enable "Continue On Fail" so that an error in one step is routed onward to the error handler instead of silently halting the workflow mid-run. n8n passes each node's JSON output to the next connected node automatically, so no extra wiring is needed between steps.
For error handling, add a final node that sends Slack notifications if any step fails:
Method: POST
URL: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Body:
{
"text": "Specification extraction failed for {{ $json.project_name }}",
"attachments": [
{
"color": "danger",
"fields": [
{
"title": "Error",
"value": "{{ $json.error }}"
},
{
"title": "Document",
"value": "{{ $json.file.filename }}"
}
]
}
]
}
The Manual Alternative
If your team prefers human-in-the-loop processing, modify the workflow to pause before the Git commit step. Instead of automatically opening a pull request, generate the specifications and send them to a Slack channel for review.
Create an IF node whose condition checks for high-confidence extractions:
if ($nodes["ParSpec Parsing"].json.extraction_confidence > 0.85) {
  // Auto-commit via SourceAI
} else {
  // Send to Slack for review
}
This approach trades speed for control, allowing technical leads to verify specifications before they become part of your codebase. The entire process still eliminates the initial extraction labour; humans only review the final structured output.
Pro Tips
Rate limiting and throttling:
Bhava AI allows 100 requests per minute on the free tier. If you process multiple PRDs simultaneously, implement a delay between requests. In n8n, add a "Wait" node between the trigger and Bhava analysis:
Wait node: {{ 60000 / 100 }} milliseconds (600ms between requests)
ParSpec AI is more generous at 500 requests per minute, so it is not a bottleneck.
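The Wait node's delay generalises to any per-minute limit. A small sketch with an optional safety margin (the rate limits are the ones cited above; the 10% headroom default is a suggestion, not a vendor requirement):

```python
def wait_ms(requests_per_minute: int, headroom: float = 0.1) -> int:
    """Delay between requests (in milliseconds) to stay under a per-minute
    rate limit, inflated by a fractional safety margin."""
    return round(60_000 / requests_per_minute * (1 + headroom))

# Bhava AI free tier (100 req/min): 660 ms with 10% headroom
# ParSpec AI Pro (500 req/min):     132 ms with 10% headroom
```

Use the computed value as the Wait node's interval; the headroom absorbs clock skew and bursts from parallel executions.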
Cost optimisation:
Request only the extraction types you need. If your team ignores performance constraints, remove that from the extract_sections array. Fewer sections mean faster processing and lower API costs.
Handling incomplete sections:
ParSpec AI returns confidence scores and warnings for each section. Store these warnings in your metadata file (as shown in Step 4) so engineers reviewing the pull request know which sections need manual verification. Do not discard low-confidence extractions; instead, flag them clearly.
Testing the workflow:
Start with a small batch of five to ten PRDs before enabling automation for your entire backlog. Monitor the generated specifications for accuracy, then adjust the parameters in each tool as needed.
Storing API responses:
Keep the full Bhava and ParSpec responses in your repository alongside the final specification. This creates an audit trail and helps you improve the extraction rules over time.
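One possible layout for that audit trail: write each raw response next to the generated spec, keyed by project. The directory convention below is a suggestion, not something the tools require:

```python
import json
import tempfile
from pathlib import Path

def write_audit_trail(root: str, project: str, bhava: dict, parspec: dict) -> list[Path]:
    """Persist the raw tool responses alongside the generated spec."""
    audit_dir = Path(root) / "specs" / "audit" / project
    audit_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for name, payload in (("bhava-response", bhava), ("parspec-response", parspec)):
        path = audit_dir / f"{name}.json"
        path.write_text(json.dumps(payload, indent=2))
        written.append(path)
    return written

# Example run against a throwaway directory
with tempfile.TemporaryDirectory() as tmp:
    paths = write_audit_trail(tmp, "checkout", {"sections": []}, {"status": "success"})
    names = [p.name for p in paths]
```

Committing these files in the same pull request as the spec keeps the evidence and the output reviewable together.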
Cost Breakdown
| Tool | Plan Needed | Monthly Cost | Notes |
|---|---|---|---|
| Bhava AI | Starter (100 req/min) | £29 | Sufficient for teams processing 1,000+ documents monthly |
| ParSpec AI | Pro (500 req/min) | £59 | Handles structured parsing for all Bhava outputs with headroom |
| SourceAI | Free (GitHub integration) | £0 | Uses your existing GitHub token; no separate billing |
| n8n (self-hosted) | Community edition | £0 | Free and open-source; runs on your infrastructure |
| n8n (cloud) | Starter | £20 | If you prefer managed hosting instead |
Total: £88 per month self-hosted, or £108 per month with n8n cloud, for teams running this workflow continuously. Breaking even on the manual labour typically takes two to three weeks.
Summary
This workflow removes the specification extraction step from your PRD review process. Instead of engineers spending two to three hours per document hunting for technical requirements, the system identifies them automatically, structures them into machine-readable formats, and presents them as a pull request for verification.
The key is choosing the right tool for each step. Bhava AI excels at identifying relevant sections in long documents. ParSpec AI specialises in converting natural language into formal specifications. SourceAI handles the Git integration without requiring you to write custom commit logic.
Start by processing a single PRD manually using this workflow to verify each tool's output matches your standards. Then enable automation for your full document backlog.