A weekly competitive intel doc in Notion, built from Perplexity research
Pull fresh competitor pricing, product launches, and positioning changes from Perplexity every week, drop it into a structured Notion page, and have Notion AI summarise the key movements for your team. Under an hour of setup.
- Time saved: 2-3 hrs per week
- Monthly cost: ~£28 / $35 per month (Perplexity Pro + Notion)
Most product teams agree they should be tracking their competitors' pricing changes, product launches, and positioning updates. Very few actually keep up with it past week three. The problem is not that the information is hard to find. It is that the research is tedious, the output needs to be structured so the team can skim it on Monday, and the whole thing has to happen without anyone volunteering to own it.
This workflow automates the research and writes the week's intel straight into a Notion page. It uses Perplexity's API for the research (because Perplexity searches the live web and cites sources, unlike most LLMs) and Notion AI to produce a short executive summary of the week's movements. The whole pipeline runs on a schedule every Monday morning and your team walks into an already-written intel doc.
What you'll build
A Python script that:
- Takes a list of named competitors from a config file
- Asks Perplexity's API three structured questions per competitor (pricing changes, product launches, public announcements in the past 7 days)
- Writes the results into a new Notion page under a "Competitive Intel" database, with one section per competitor and source citations
- Triggers Notion AI to summarise the week's key movements at the top of the page
- Runs on a cron schedule every Monday at 07:30
Prerequisites
- A Perplexity account with API access. Pro is £16/mo, but note that API usage is metered separately from the subscription: the `sonar-pro` model used here returns citations and bills per request plus token usage, which at a weekly five-competitor cadence typically comes to a pound or two a month. Check Perplexity's pricing page for current rates, as they change.
- A Notion workspace with a page where you can create a database. Notion AI is a £8/mo add-on (or included on Business plans) and is required for the summarisation step at the end.
- A Notion integration token. Create this at notion.so/my-integrations, give it "Read content", "Update content", and "Insert content" capabilities, then share your target page with the integration.
- A place to run the script on a schedule. GitHub Actions, a cron job on a VPS, or a Railway scheduled function all work. This post uses GitHub Actions.
- Python 3.10 or higher with `requests` and `python-dotenv`.
- About 45 minutes the first time, mostly spent getting the Notion integration permissions right.
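For local test runs, it helps to keep the three credentials in a `.env` file that `python-dotenv` can load (add `from dotenv import load_dotenv; load_dotenv()` at the top of the script). The variable names below match the code later in this post; the values are placeholders:

```bash
# .env — keep out of version control
PERPLEXITY_API_KEY=your-perplexity-api-key
NOTION_API_KEY=your-notion-integration-token
NOTION_DATABASE_ID=your-32-char-database-id
```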
How to build it
Step 1: Set up the Notion database
In Notion, create a new page called "Competitive Intel" in whatever workspace your team uses. Inside that page, create a full-page database. Add the following properties:
- Week of (title): the Monday date for this week's report
- Summary (text): the AI-generated summary, written by Notion AI
- Competitors (multi-select): tagged for filtering
- Created (created time): auto
Note the database ID from the URL. Notion URLs look like notion.so/your-workspace/<database_id>?v=<view_id>. The database ID is the 32-character hex string before the question mark.
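If you'd rather not eyeball the URL, a small helper can pull the ID out of a copied link. This is an illustration, not part of the main script; `extract_database_id` is a name I'm introducing here:

```python
import re

def extract_database_id(url: str) -> str:
    """Return the 32-character hex database ID from a copied Notion URL."""
    # The ID is the 32-char hex run immediately before "?v=..." (or the URL end).
    match = re.search(r"([0-9a-f]{32})(?=\?|$)", url)
    if not match:
        raise ValueError(f"No database ID found in {url!r}")
    return match.group(1)
```

The Notion API accepts the ID with or without hyphens, so the raw hex string is fine to use directly.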
Go to your database, click the ... menu, and choose Add connections → Your integration name. This is the step everyone forgets. Without it, the API will return a 404 no matter what the token is.
Step 2: Define the competitors you care about
Create a config file called competitors.json:
```json
{
  "competitors": [
    { "name": "Linear", "tags": ["issue tracking", "project management"] },
    { "name": "Height", "tags": ["project management"] },
    { "name": "Motion", "tags": ["scheduling"] }
  ]
}
```
Keep the list short. Three to five competitors is a sustainable weekly read. Ten competitors produces a wall of text that nobody will skim.
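Since every competitor triggers three paid API calls, it's worth failing fast on a malformed config before the run starts. A small guard like this (my addition, not part of the post's script) catches the common mistakes:

```python
def validate_config(config: dict) -> list[str]:
    """Return a list of problems with a competitors config; empty means OK."""
    problems = []
    competitors = config.get("competitors")
    if not isinstance(competitors, list) or not competitors:
        return ["'competitors' must be a non-empty list"]
    for i, c in enumerate(competitors):
        # Each entry needs a non-blank name to substitute into the questions
        if not isinstance(c.get("name"), str) or not c["name"].strip():
            problems.append(f"competitor {i} is missing a 'name'")
    if len(competitors) > 5:
        problems.append("more than 5 competitors makes the weekly doc hard to skim")
    return problems
```

Call it right after loading `competitors.json` and abort the run if it returns anything.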
Step 3: Query Perplexity for each competitor
Here is the core of the research script. For each competitor, it asks three structured questions and captures the response plus citations.
```python
import os
import json
import requests
from datetime import date, timedelta

PERPLEXITY_API_KEY = os.environ["PERPLEXITY_API_KEY"]
PERPLEXITY_URL = "https://api.perplexity.ai/chat/completions"

QUESTIONS = [
    (
        "pricing",
        "Has {name} changed its pricing, plans, or billing model in the last 7 days? "
        "If yes, describe what changed and link to the source. If no, say 'No pricing "
        "changes this week'."
    ),
    (
        "launches",
        "Has {name} announced or launched any new products, features, or significant "
        "updates in the last 7 days? If yes, summarise each briefly and link to the "
        "source. If no, say 'No product launches this week'."
    ),
    (
        "announcements",
        "What public announcements, press releases, blog posts, or notable social media "
        "from {name} are from the last 7 days? Summarise the most important 1-3 and "
        "link to sources. If nothing notable, say 'Nothing notable this week'."
    ),
]


def ask_perplexity(question: str) -> dict:
    response = requests.post(
        PERPLEXITY_URL,
        headers={
            "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "sonar-pro",
            "messages": [
                {
                    "role": "system",
                    "content": (
                        "You are a competitive intelligence researcher. Be precise "
                        "about dates. Only include information from the last 7 days. "
                        "Cite every claim with a URL."
                    ),
                },
                {"role": "user", "content": question},
            ],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()
    return {
        "content": data["choices"][0]["message"]["content"],
        "citations": data.get("citations", []),
    }


def research_competitor(competitor: dict) -> dict:
    name = competitor["name"]
    print(f"Researching {name}...")
    results = {}
    for key, template in QUESTIONS:
        question = template.format(name=name)
        results[key] = ask_perplexity(question)
    return {"competitor": competitor, "results": results}
```
A couple of notes on the Perplexity call. The `sonar-pro` model is the tier that searches the live web and returns citations as a separate field on the response. Setting `temperature` to 0.2 keeps the output stable across runs so you get consistent phrasing. The system prompt explicitly bounds the time window; without it, Perplexity will sometimes surface older news because it is more topically relevant.
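One more hardening step worth considering: both Perplexity and Notion rate-limit, and a scheduled run has nobody watching it. A retry wrapper like the hypothetical `with_retries` below (not in the script above) smooths over transient failures with exponential backoff:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 2.0, sleep=time.sleep):
    """Call fn(), retrying on any exception with exponential backoff (2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            sleep(base_delay * (2 ** attempt))

# Usage: results[key] = with_retries(lambda: ask_perplexity(question))
```

The `sleep` parameter is injected only so the delay can be stubbed out in tests; in production the default `time.sleep` is fine.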
Step 4: Write the results into Notion
Notion's API accepts rich content as a tree of blocks. We build one block group per competitor with a heading, the three research results, and a citations list.
```python
NOTION_API_KEY = os.environ["NOTION_API_KEY"]
NOTION_DATABASE_ID = os.environ["NOTION_DATABASE_ID"]

NOTION_HEADERS = {
    "Authorization": f"Bearer {NOTION_API_KEY}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}


def build_competitor_blocks(research: dict) -> list[dict]:
    name = research["competitor"]["name"]
    results = research["results"]
    blocks = [
        {
            "object": "block",
            "type": "heading_2",
            "heading_2": {"rich_text": [{"type": "text", "text": {"content": name}}]},
        }
    ]
    section_titles = {
        "pricing": "Pricing changes",
        "launches": "Product launches",
        "announcements": "Public announcements",
    }
    for key, title in section_titles.items():
        blocks.append({
            "object": "block",
            "type": "heading_3",
            "heading_3": {"rich_text": [{"type": "text", "text": {"content": title}}]},
        })
        blocks.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {
                "rich_text": [{
                    "type": "text",
                    "text": {"content": results[key]["content"][:2000]},
                }]
            },
        })
        # Citations as a bulleted list
        for url in results[key]["citations"][:5]:
            blocks.append({
                "object": "block",
                "type": "bulleted_list_item",
                "bulleted_list_item": {
                    "rich_text": [{
                        "type": "text",
                        "text": {"content": url, "link": {"url": url}},
                    }]
                },
            })
    return blocks


def create_weekly_page(all_research: list[dict]) -> str:
    week_of = date.today() - timedelta(days=date.today().weekday())
    title = f"Week of {week_of.isoformat()}"

    # Create the page with the title and competitor tags
    page_response = requests.post(
        "https://api.notion.com/v1/pages",
        headers=NOTION_HEADERS,
        json={
            "parent": {"database_id": NOTION_DATABASE_ID},
            "properties": {
                "Week of": {
                    "title": [{"text": {"content": title}}],
                },
                "Competitors": {
                    "multi_select": [
                        {"name": r["competitor"]["name"]} for r in all_research
                    ],
                },
            },
        },
    )
    page_response.raise_for_status()
    page_id = page_response.json()["id"]

    # Add the competitor blocks as children of the new page
    all_blocks = []
    for research in all_research:
        all_blocks.extend(build_competitor_blocks(research))

    # Notion's block append endpoint has a 100-block limit per request
    for i in range(0, len(all_blocks), 100):
        chunk = all_blocks[i:i + 100]
        requests.patch(
            f"https://api.notion.com/v1/blocks/{page_id}/children",
            headers=NOTION_HEADERS,
            json={"children": chunk},
        ).raise_for_status()

    return page_id


if __name__ == "__main__":
    with open("competitors.json") as f:
        config = json.load(f)
    research = [research_competitor(c) for c in config["competitors"]]
    page_id = create_weekly_page(research)
    print(f"Created Notion page {page_id}")
```
The 100-block limit on the children append endpoint is the thing that will trip you up if you don't pre-chunk: Notion rejects any single request containing more than 100 blocks with a validation error, so a long report has to be split across multiple calls as the loop above does.
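A related limit: the `[:2000]` slice in `build_competitor_blocks` exists because Notion caps each `rich_text` text object at 2,000 characters, but truncating discards content. If your research answers run long, a helper like the hypothetical `split_into_paragraph_blocks` below keeps everything by emitting one paragraph block per 2,000-character slice:

```python
def split_into_paragraph_blocks(text: str, limit: int = 2000) -> list[dict]:
    """Split long text into paragraph blocks that each respect Notion's
    2,000-character limit per rich_text text object."""
    return [
        {
            "object": "block",
            "type": "paragraph",
            "paragraph": {
                "rich_text": [{"type": "text", "text": {"content": text[i:i + limit]}}]
            },
        }
        # max(len(text), 1) ensures empty input still yields one (empty) paragraph
        for i in range(0, max(len(text), 1), limit)
    ]
```

Swap it in for the single truncated paragraph with `blocks.extend(split_into_paragraph_blocks(results[key]["content"]))`.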
Step 5: Let Notion AI summarise the page
This step is manual-adjacent: Notion AI's "Summarize" action is not yet exposed via a stable API in the way the rest of Notion is. What you can do is open the weekly page once it has been generated, press space on an empty line to invoke Notion AI, type "Summarize this page", and paste the result into the page's Summary property. This takes about 10 seconds of human effort per week, and the result is good because Notion AI works from the page content directly.
If you want to fully automate this step too, the alternative is to call Claude or Perplexity again with the page content as input and write the summary back via the Notion properties update endpoint. That code would look roughly like:
```python
def summarise_and_update(page_id: str, all_research: list[dict]):
    full_text = "\n\n".join(
        f"{r['competitor']['name']}\n"
        f"Pricing: {r['results']['pricing']['content']}\n"
        f"Launches: {r['results']['launches']['content']}\n"
        f"Announcements: {r['results']['announcements']['content']}"
        for r in all_research
    )
    summary = ask_perplexity(
        "Summarise the most important competitive moves from the past week "
        "in under 150 words, grouped into 'pricing', 'product', and 'positioning'. "
        f"Here is the research:\n\n{full_text}"
    )
    requests.patch(
        f"https://api.notion.com/v1/pages/{page_id}",
        headers=NOTION_HEADERS,
        json={
            "properties": {
                "Summary": {
                    "rich_text": [{
                        "type": "text",
                        "text": {"content": summary["content"][:2000]},
                    }]
                }
            }
        },
    ).raise_for_status()
```
This costs about 2 pence per run and saves you the manual Notion AI step.
Step 6: Schedule it
Create a GitHub Actions workflow at .github/workflows/competitive-intel.yml:
```yaml
name: Weekly Competitive Intel

on:
  schedule:
    # GitHub Actions cron schedules run in UTC
    - cron: "30 7 * * MON"
  workflow_dispatch:

jobs:
  research:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install requests python-dotenv
      - env:
          PERPLEXITY_API_KEY: ${{ secrets.PERPLEXITY_API_KEY }}
          NOTION_API_KEY: ${{ secrets.NOTION_API_KEY }}
          NOTION_DATABASE_ID: ${{ secrets.NOTION_DATABASE_ID }}
        run: python research.py
```
Add the three secrets to your GitHub repo's Settings → Secrets and variables → Actions. Run the workflow once manually with the workflow_dispatch trigger to verify everything works, then let the Monday schedule take over.
Cost breakdown
- Perplexity Pro: £16/mo (includes Pro access; the API calls on top are metered and typically under £2/mo for a 5-competitor weekly run)
- Notion + Notion AI: £4/mo + £8/mo AI add-on (if you do not already have a team workspace)
- Total: about £28/mo all-in
- Per week: about 50 pence in API calls
What this won't catch
Perplexity is good at finding public information that has been indexed by search engines. It is much less good at finding four specific things that often matter more in competitive intel: undisclosed beta features mentioned only in private communities, pricing changes on country-localised pages that do not appear in the main indexed pricing URL, hiring and job listing shifts that hint at product direction, and social posts from founders that do not make it to the main blog. This workflow gives you a solid baseline on the indexed web, not a complete picture.
Pair it with a human reviewer who spends 15 minutes each week on the non-indexed stuff. The AI pipeline catches the obvious weekly movements so the human can spend their time on the subtle signals that actually move strategy conversations in your team.