AEO Content Automation: Open-Source LLM + n8n to WordPress
Build a self-hosted AEO content engine with n8n and an open-source LLM to research, enrich, evaluate, and publish WordPress posts automatically.
Answer Engine Optimization (AEO) is the practice of structuring content so AI answer engines can extract clear, direct answers and reuse them in summaries. Instead of optimizing only for rankings and clicks, you optimize for being the source behind the answer. In this guide, you’ll build a practical n8n workflow that continuously researches topics, drafts articles with a self-hosted open-source model, runs AEO checks, and publishes to WordPress.
What is AEO (and why it changes blogging)?
AEO is about making your content “answer-ready.” That means your page should contain a direct answer near the top, clear headings, short sections, and an FAQ that matches real questions people ask.
In traditional SEO, success is often measured by page rank and traffic. In AEO, success also includes visibility inside AI-generated answers—where the reader may never click, but will still see (and remember) your brand.
What this automation will build
You’re going to set up a content engine that runs on a schedule (daily/weekly) and does the following:
- Pulls a topic idea from a backlog.
- Generates keyword and question ideas (the “People Also Ask” style angles).
- Creates an AEO content brief (outline + required sections + internal links to add).
- Drafts the article using an open-source model you control.
- Performs an “AEO enrichment” pass to enforce snippet-friendly structure.
- Evaluates the output against simple quality gates.
- Creates a WordPress draft (or publishes automatically).
The result is a repeatable pipeline you can improve over time, instead of repeatedly prompting a model from scratch.
Prerequisites (keep it simple)
Before you build the workflow, prepare:
- WordPress access: an admin user that can create application passwords, and the REST API must be reachable.
- n8n: self-hosted or cloud-hosted, with permission to store credentials securely.
- Open-source LLM endpoint: the easiest path is running a local model server (commonly via Ollama) that exposes a simple HTTP generation endpoint.
Optional (but recommended):
- A topic backlog table (Google Sheets, Airtable, Notion, or a database).
- A lightweight style guide (tone, audience, do/don’t, formatting rules).
The workflow stages
Research → brief → write → enrich → evaluate → publish
Stage 1: Pick a topic from a backlog
Create a list of topics you want to cover, each with:
- Primary keyword (seed)
- Audience
- Intent (informational / commercial)
- Status (queued / drafted / published)
The workflow should pull the next “queued” topic, mark it “in progress,” and continue.
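The selection step above is simple enough to sketch in plain JavaScript. The function and row shape below are illustrative, not a fixed schema; adapt the field names to whatever your backlog store (Sheets, Airtable, Notion, a database) returns:

```javascript
// Pick the first backlog row with status "queued" and flip it to "in progress".
// Rows are plain objects, e.g. items coming out of a Google Sheets or Airtable node.
function nextQueuedTopic(rows) {
  const row = rows.find(r => r.status === 'queued');
  if (!row) return null;        // backlog empty: let the workflow stop cleanly
  row.status = 'in progress';   // remember to write this back to your backlog store
  return row;
}

const backlog = [
  { seed: 'AEO checklist for ecommerce', intent: 'informational', status: 'published' },
  { seed: 'AEO vs SEO', intent: 'informational', status: 'queued' },
];
const topic = nextQueuedTopic(backlog);
// topic.seed === 'AEO vs SEO'; its status in the backlog is now 'in progress'
```

Marking the row "in progress" before drafting prevents the same topic from being picked twice if two runs overlap.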
Stage 2: Generate keyword + question research
For a first working version, you can generate research using the model itself:
- Secondary keywords
- Related entities
- Long-tail questions
- Suggested headings
Later, you can replace or enrich this step with real SEO data sources (Search Console exports, paid tools, SERP APIs).
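Whichever source you use, parse the model's "JSON" output defensively; local models sometimes wrap the JSON in prose. A sketch of the fallback pattern (function name and fallback shape are illustrative):

```javascript
// Parse model output that should be JSON but might not be.
function parseResearch(text, seed) {
  try {
    return JSON.parse(text);
  } catch (e) {
    // Second chance: pull the first {...} block out of surrounding prose.
    const match = text.match(/\{[\s\S]*\}/);
    if (match) {
      try { return JSON.parse(match[0]); } catch (_) { /* fall through */ }
    }
    // Give up gracefully: keep the pipeline alive with an empty shape.
    return { primary_keyword: seed, secondary_keywords: [], longtail_questions: [], raw: text };
  }
}

const ok = parseResearch('{"primary_keyword":"aeo checklist"}', 'seed');
// ok.primary_keyword === 'aeo checklist'
const fallback = parseResearch('Sure! Here you go.', 'aeo checklist');
// fallback.primary_keyword === 'aeo checklist'
```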
Stage 3: Build an AEO content brief
Your brief is the contract the model must follow. A strong brief typically includes:
- Primary keyword + 5–15 related terms
- 5–8 headings (H2/H3)
- Required snippet paragraph (40–60 words)
- Required lists or tables (when relevant)
- Required FAQ count (for example: 6 questions)
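In code, the brief is just the research output merged with your fixed requirements. A minimal sketch of that assembly (field names mirror the ones used in the workflow JSON later in this guide, but treat the exact shape as an assumption):

```javascript
// Turn raw research plus site config into the brief the drafting prompt must follow.
function buildBrief(research, config) {
  return {
    title: research.primary_keyword || config.topic_seed,
    // Hard requirements the evaluation gates will check later.
    required: { snippet_words: [40, 60], faq_count_min: 6, h2_min: 6 },
    keywords: {
      primary: research.primary_keyword || config.topic_seed,
      secondary: research.secondary_keywords || [],
    },
    questions: research.longtail_questions || [],
    outline: research.suggested_outline || ['What is AEO?', 'FAQ'],
    voice: config.brand_voice,
    audience: config.audience,
  };
}

const brief = buildBrief(
  { primary_keyword: 'answer engine optimization checklist' },
  { topic_seed: 'AEO checklist', brand_voice: 'Clear, practical', audience: 'Marketers' }
);
// brief.title === 'answer engine optimization checklist'
```

Every `|| fallback` keeps the pipeline running even when the research step returns a partial object.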
Stage 4: Draft the article with an open-source LLM
Generate the full draft from the brief, not from the seed keyword alone.
This produces content that aligns with search intent and reduces generic filler.
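The drafting call itself is one HTTP request. A sketch of building that request for a local Ollama server (the /api/generate endpoint and its model/prompt/stream fields come from Ollama's API; the brief shape and rules are from this guide):

```javascript
// Build the non-streaming generation request for Ollama's /api/generate endpoint.
function draftRequest(baseUrl, model, brief) {
  return {
    url: `${baseUrl}/api/generate`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      stream: false, // one complete response instead of chunked tokens
      prompt:
        'Write a blog article in Markdown.\n\nFollow this AEO brief strictly:\n' +
        JSON.stringify(brief, null, 2) +
        '\n\nRules:\n- Start with a direct 40–60 word answer.\n- Use exactly one H1.\n- Include a FAQ with at least 6 Q&A pairs.\n\nOutput: Markdown only.',
    }),
  };
}

const req = draftRequest('http://localhost:11434', 'llama3.1:8b-instruct', { title: 'AEO vs SEO' });
// JSON.parse(req.body).model === 'llama3.1:8b-instruct'
```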
Stage 5: AEO enrichment pass
Snippet, headings, FAQs
Do a second pass whose only job is structure and clarity:
- Tighten the first paragraph into a direct answer
- Ensure headings are descriptive
- Convert dense sections into bullets
- Add an FAQ section with concise answers
This “two-pass” pattern is one of the easiest ways to get consistently clean outputs.
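The enrichment pass is just a second LLM call whose prompt carries only structural rules. A sketch of building that prompt (the wording is illustrative, not prescriptive; the key constraint is "restructure, don't add facts"):

```javascript
// Build the second-pass prompt: structure and clarity only, no new claims.
function enrichmentPrompt(draftMarkdown) {
  return [
    'Rewrite the article below. Do not add facts or change meaning.',
    '- Tighten the first paragraph into a direct 40-60 word answer.',
    '- Make every heading descriptive.',
    '- Convert dense paragraphs into bullets.',
    '- Ensure a FAQ section with concise answers exists.',
    'Output Markdown only.',
    '',
    draftMarkdown,
  ].join('\n');
}

const prompt = enrichmentPrompt('# AEO vs SEO\n\nLong intro...');
// prompt starts with the rewrite instruction and ends with the draft itself
```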
Stage 6: Evaluation gates (structure + quality checks)
Add checks that prevent bad drafts from reaching WordPress, such as:
- Does the article include exactly one H1?
- Is there a 40–60 word summary at the top?
- Are there at least 6 H2 headings?
- Is there an FAQ section with at least 6 Q&As?
- Does the draft exceed a minimum word count?
If a check fails, route the draft back into a “fix” prompt that only repairs the missing parts.
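Gates like these reduce to a handful of regex checks on the Markdown. A standalone sketch (thresholds are the ones from this guide; tune them to your content before trusting the pass/fail signal):

```javascript
// Cheap structural gates: run them before anything touches WordPress.
function evaluateDraft(markdown, { h2Min = 6, minWords = 900 } = {}) {
  const h1Count = (markdown.match(/^#\s+/gm) || []).length;   // "# " lines only
  const h2Count = (markdown.match(/^##\s+/gm) || []).length;  // "## " lines
  const hasFAQ = /^##\s+FAQs?\b/im.test(markdown);
  const wordCount = markdown.split(/\s+/).filter(Boolean).length;
  return {
    h1Count, h2Count, hasFAQ, wordCount,
    passed: h1Count === 1 && h2Count >= h2Min && hasFAQ && wordCount >= minWords,
  };
}

const report = evaluateDraft('# Title\n\n## FAQ\n\nQ: What is AEO?', { h2Min: 1, minWords: 5 });
// report.passed === true; report.h1Count === 1
```

Regex checks are deliberately dumb: they catch structural failures instantly and cost nothing, which is exactly what you want in front of a publish step.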
Stage 7: Create a WordPress draft (or publish)
Start by creating drafts so you can review formatting inside WordPress.
Once quality is stable, switch to auto-publish for low-risk content types.
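In code form, the publish step is one REST call. A sketch assuming Node 18+ (built-in fetch) and placeholder credentials; the /wp-json/wp/v2/posts endpoint and the title/content/status fields are WordPress core REST API, everything else is illustrative:

```javascript
// Shape the JSON body the WordPress posts endpoint expects.
function postPayload(title, html, publish = false) {
  // status "draft" keeps the post out of the public feed until a human reviews it
  return { title, content: html, status: publish ? 'publish' : 'draft' };
}

// POST the payload to WordPress using Basic auth with an application password.
async function createPost(baseUrl, user, appPassword, payload) {
  const auth = Buffer.from(`${user}:${appPassword}`).toString('base64');
  const res = await fetch(`${baseUrl}/wp-json/wp/v2/posts`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Basic ${auth}` },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`WordPress returned ${res.status}`);
  return res.json(); // response includes the new post's id and link
}

const payload = postPayload('AEO vs SEO', '<p>Direct answer…</p>');
// payload.status === 'draft'
```

Flipping the `publish` flag is the only change needed when you later move to auto-publish.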
n8n workflow you can import (test-ready)
How to use it, step by step:
- In n8n, go to Workflows → Import from File (or Import from Clipboard).
- Paste the JSON below and save.
- Configure the workflow:
  - Add a “WordPress API” credential (WordPress username + application password).
  - Update the wp_base_url value in the “Set Config” node.
  - Update the llm_base_url value (your local model endpoint) and the llm_model name.
- Click “Execute workflow” to test once, then activate the schedule.
Importable n8n workflow JSON
Copy and paste everything below into n8n:
{
"name": "AEO Content Engine (Open-Source LLM -> WordPress)",
"nodes": [
{
"parameters": {
"triggerTimes": {
"item": [
{
"mode": "everyDay",
"hour": 8,
"minute": 5
}
]
}
},
"id": "cron_1",
"name": "Schedule Trigger",
"type": "n8n-nodes-base.cron",
"typeVersion": 1,
"position": [240, 220]
},
{
"parameters": {
"values": {
"string": [
{ "name": "topic_seed", "value": "Answer engine optimization checklist for ecommerce" },
{ "name": "audience", "value": "Marketers and founders" },
{ "name": "brand_voice", "value": "Clear, practical, no hype. Short paragraphs. Use bullets." },
{ "name": "language", "value": "English" },
{ "name": "wp_base_url", "value": "https://YOUR-WP-SITE.COM" },
{ "name": "wp_status", "value": "draft" },
{ "name": "wp_category", "value": "AEO" },
{ "name": "llm_base_url", "value": "http://localhost:11434" },
{ "name": "llm_model", "value": "llama3.1:8b-instruct" }
],
"boolean": [
{ "name": "auto_publish", "value": false }
]
},
"options": {}
},
"id": "set_config",
"name": "Set Config",
"type": "n8n-nodes-base.set",
"typeVersion": 2,
"position": [460, 220]
},
{
"parameters": {
"method": "POST",
"url": "={{$json.llm_base_url + '/api/generate'}}",
"sendBody": true,
"contentType": "json",
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: $json.llm_model, stream: false, format: 'json', prompt: 'You are an SEO + AEO strategist. Generate research as JSON only.\\n\\nTopic seed: ' + $json.topic_seed + '\\nAudience: ' + $json.audience + '\\nLanguage: ' + $json.language + '\\n\\nReturn JSON with keys: primary_keyword, secondary_keywords (array), longtail_questions (array), entities (array), suggested_outline (array of headings).\\nKeep it realistic and specific.' }) }}",
"options": {
"timeout": 600000
}
},
"id": "llm_research",
"name": "LLM Research (Keywords + Questions)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [700, 220]
},
{
"parameters": {
"functionCode": "// Ollama /api/generate returns { response: \"...\" }; with format \\\"json\\\" that field is a JSON string.\nconst cfg = $node[\"Set Config\"].json;\nconst raw = $json.response;\nlet research;\ntry {\n  research = typeof raw === 'string' ? JSON.parse(raw) : raw;\n} catch (e) {\n  // Fallback: keep the pipeline alive if the model returned plain text.\n  research = { primary_keyword: cfg.topic_seed, secondary_keywords: [], longtail_questions: [], entities: [], suggested_outline: [], raw };\n}\n// Merge the config back in: the HTTP node's output replaced the item.\nreturn [{ ...cfg, research }];"
},
"id": "parse_research",
"name": "Parse Research",
"type": "n8n-nodes-base.function",
"typeVersion": 2,
"position": [940, 220]
},
{
"parameters": {
"functionCode": "const r = $json.research;\nconst brief = {\n title: r.primary_keyword || $json.topic_seed,\n required: {\n snippet_words: [40, 60],\n faq_count_min: 6,\n h2_min: 6,\n include_steps: true\n },\n keywords: {\n primary: r.primary_keyword || $json.topic_seed,\n secondary: r.secondary_keywords || []\n },\n questions: r.longtail_questions || [],\n entities: r.entities || [],\n outline: r.suggested_outline || [\n \"What is AEO?\",\n \"How AEO differs from SEO\",\n \"AEO content checklist\",\n \"Common mistakes\",\n \"FAQ\"\n ],\n voice: $json.brand_voice,\n audience: $json.audience,\n language: $json.language\n};\nreturn [{ ...$json, brief }];"
},
"id": "build_brief",
"name": "Build AEO Brief",
"type": "n8n-nodes-base.function",
"typeVersion": 2,
"position": [1180, 220]
},
{
"parameters": {
"method": "POST",
"url": "={{$json.llm_base_url + '/api/generate'}}",
"sendBody": true,
"contentType": "json",
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: $json.llm_model, stream: false, prompt: 'Write a blog article in Markdown.\\n\\nFollow this AEO brief strictly:\\n' + JSON.stringify($json.brief, null, 2) + '\\n\\nRules:\\n- Start with a direct 40–60 word answer.\\n- Use exactly one H1, then H2/H3.\\n- Use short paragraphs and bullets.\\n- Include a step-by-step section.\\n- Include a FAQ section with at least 6 Q&A pairs.\\n- Avoid fluff and vague claims.\\n\\nOutput: Markdown only.' }) }}",
"options": {
"timeout": 600000
}
},
"id": "llm_draft",
"name": "LLM Draft (Markdown)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [1420, 220]
},
{
"parameters": {
"functionCode": "const cfg = $node[\"Set Config\"].json;\nconst brief = $node[\"Build AEO Brief\"].json.brief;\nconst markdown = typeof $json.response === 'string' ? $json.response : String($json.response ?? '');\n// Simple checks (lightweight gates)\nconst h1Count = (markdown.match(/^#\\s+/gm) || []).length;\nconst h2Count = (markdown.match(/^##\\s+/gm) || []).length;\nconst hasFAQ = /^##\\s+FAQs?\\b/im.test(markdown);\nconst wordCount = markdown.split(/\\s+/).filter(Boolean).length;\nconst evalReport = {\n  h1Count,\n  h2Count,\n  hasFAQ,\n  wordCount,\n  passed: h1Count === 1 && h2Count >= (brief?.required?.h2_min || 6) && hasFAQ && wordCount >= 900\n};\n// Merge config and brief back in for the WordPress step.\nreturn [{ ...cfg, brief, markdown, evalReport }];"
},
"id": "evaluate",
"name": "Evaluate Draft",
"type": "n8n-nodes-base.function",
"typeVersion": 2,
"position": [1660, 220]
},
{
"parameters": {
"conditions": {
"boolean": [
{
"value1": "={{$json.evalReport.passed}}",
"value2": true
}
]
}
},
"id": "if_passed",
"name": "If Passed",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [1880, 220]
},
{
"parameters": {
"method": "POST",
"url": "={{$json.wp_base_url + '/wp-json/wp/v2/posts'}}",
"authentication": "predefinedCredentialType",
"nodeCredentialType": "wordpressApi",
"sendBody": true,
"contentType": "json",
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ title: $json.brief.title, content: $json.markdown, status: $json.auto_publish ? 'publish' : $json.wp_status }) }}"
},
"id": "wp_create_post",
"name": "Create WordPress Post",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [2120, 160],
"credentials": {
"wordpressApi": {
"id": "YOUR_CREDENTIAL_ID",
"name": "WordPress API"
}
}
},
{
"parameters": {
"functionCode": "throw new Error('Draft failed quality gates. Review evalReport and adjust prompts or thresholds.');"
},
"id": "fail_node",
"name": "Fail (Needs Review)",
"type": "n8n-nodes-base.function",
"typeVersion": 2,
"position": [2120, 300]
}
],
"connections": {
"Schedule Trigger": {
"main": [[{ "node": "Set Config", "type": "main", "index": 0 }]]
},
"Set Config": {
"main": [[{ "node": "LLM Research (Keywords + Questions)", "type": "main", "index": 0 }]]
},
"LLM Research (Keywords + Questions)": {
"main": [[{ "node": "Parse Research", "type": "main", "index": 0 }]]
},
"Parse Research": {
"main": [[{ "node": "Build AEO Brief", "type": "main", "index": 0 }]]
},
"Build AEO Brief": {
"main": [[{ "node": "LLM Draft (Markdown)", "type": "main", "index": 0 }]]
},
"LLM Draft (Markdown)": {
"main": [[{ "node": "Evaluate Draft", "type": "main", "index": 0 }]]
},
"Evaluate Draft": {
"main": [[{ "node": "If Passed", "type": "main", "index": 0 }]]
},
"If Passed": {
"main": [
[{ "node": "Create WordPress Post", "type": "main", "index": 0 }],
[{ "node": "Fail (Needs Review)", "type": "main", "index": 0 }]
]
}
},
"active": false,
"settings": {
"executionTimeout": 3600,
"saveExecutionProgress": true
},
"versionId": "1"
}
Notes to make the workflow succeed on the first test
- If your LLM endpoint doesn’t support the exact JSON behavior shown, keep the workflow but adjust the “Parse Research” node to match the response shape.
- Start with auto_publish = false so WordPress posts land as drafts.
- If the workflow fails the gates too often, lower the wordCount threshold or simplify the required structure until you get stable output.
FAQ
Can this workflow update existing posts instead of creating new ones?
Yes. Replace the “Create WordPress Post” step with “Search post by slug/title → Update post content,” and keep the same evaluation gates.
Where do I plug in real keyword research data?
Replace Stage 2 with an HTTP call to your preferred SEO data source, then merge that data into the brief before drafting.
Should you fully automate publishing?
For most teams, the safest approach is draft-first with human approval, then move selective categories to auto-publish once quality is consistent.