Running a personal brand blog while working full-time as a cloud architect in Malaysia is a time problem I never fully solved — until I stopped trying to do everything myself.

I have ideas, expertise, and opinions accumulated from years of Azure consulting, migration projects, and solution architecture. But I don't have the 4-6 hours per article it takes to research, write, edit, optimize images, create social posts, and distribute content consistently. The ideas pile up, the drafts languish, and the publishing cadence suffers.

So I built a multi-agent content pipeline that does most of the repeatable operational work autonomously. A set of specialized scheduled jobs handles topic research, drafting, SEO review, fact-checking, social media creation, backups, and posting — all orchestrated by Hermes Agent. The blog runs on Ghost (self-hosted on an Azure VM), and I keep the ability to intervene for strategic review while the pipeline enforces automated pre-publish checks.

This article walks through the full architecture, the agent responsibilities, the infrastructure setup, the hard-won lessons from production, and the actual cost breakdown.


The Problem: Content Consistency at Scale

The content creation workflow has more steps than most people realise:

  1. Research — find trending topics worth writing about
  2. Decide — pick the right topic for your audience and schedule
  3. Write — produce a well-structured, technically accurate article
  4. Optimize — SEO titles, meta descriptions, headings, tags, internal links
  5. Create visuals — diagrams, screenshots, code blocks
  6. Social posts — platform-specific copy for LinkedIn, X/Twitter, and the newsletter
  7. Publish — format, preview, schedule, publish
  8. Distribute — post to social channels, email subscribers
  9. Track — monitor analytics, update strategy

Doing all nine steps manually takes 4-6 hours per article for me. The bottleneck is not the writing itself — it is everything around the writing. Research eats 45 minutes, SEO tweaks eat 30 minutes, and social media creation and scheduling eat another hour.

The solution: automate the repeatable work, then gate publishing with review and validation controls.

My pipeline automates topic discovery, SEO checks, social content preparation, backups, and distribution. Topic selection and drafting are assisted by agents, while publishing is gated by scheduled validation jobs so an article can be corrected before it goes live. I retain control over creative direction and can override the schedule when needed.


Architecture Overview

The pipeline consists of specialized scheduled jobs, each with a single, well-defined responsibility. No job's duties overlap with another's, which is critical for debugging and maintaining the system.

Agent Roster

Each agent runs on its own schedule (all times MYT):

  • Topic Researcher (daily, 12:00 AM): Adds new content ideas to the topic bank so the draft library stays stocked.
  • Research Scout (daily, 12:15 AM): Checks current signals across AI, Azure, cloud migration, Fabric, LLMs, DevOps, and backup/DR. Outputs scored topic suggestions with source URLs.
  • Content Strategist (daily, 1:15 AM): Assigns drafted topics from the bank to editorial calendar dates. Rotates between content pillars such as AI Agents, Cloud Architecture, Data Platform, and DevOps.
  • Blog Writer (daily, 2:15 AM): The heavy lifter. Writes full 1,500-2,500 word articles in markdown and updates Ghost drafts through the Admin API.
  • SEO Reviewer (daily, 3:15 AM): Audits drafts for title optimization, meta description quality, heading structure, internal linking opportunities, and tag relevance.
  • Fact Checker (daily, pre-publish): Validates high-risk technical claims against authoritative sources and patches drafts before publication.
  • Social Writer / Sync (daily, 4:15-5:00 AM): Creates and reconciles platform-specific social posts for each editorial entry across LinkedIn, X/Twitter, and Facebook.
  • Ghost Backup (daily, 4:00 AM): Exports a MySQL 8 dump to a GitHub repository for disaster recovery.
  • Buffer Poster (daily, 5:30 AM): Posts scheduled social items via the Buffer GraphQL API. Retries safely on rate limits.
  • Analytics Reporter (weekly, Sunday 3:00 AM): Generates a traffic and SEO performance report for the past week.

Most agents run during off-peak hours (12:00 AM - 5:30 AM MYT) using cost-efficient models, with pre-publish validation later in the morning. The staggered schedule prevents API rate-limit collisions — the Blog Writer, SEO Reviewer, Fact Checker, and Publisher do not all mutate the same draft at the same time.

Data Flow

Topic Researcher / Research Scout -> Topic Bank (file) -> Content Strategist -> Editorial Calendar
  -> Blog Writer -> Ghost Draft (via Admin API) -> SEO Reviewer -> Fact Checker -> Publisher
  -> Social Writer / Sync -> Social Calendar -> Buffer Poster -> Buffer API -> Social Platforms

Each step depends on the previous one completing. If the Research Scout fails, the Strategist works from the existing topic bank. If the Blog Writer fails, the SEO Reviewer skips that article. This "fail-soft" design was essential for building a system I can trust while I sleep.
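The pattern is simple to implement. Here is a minimal sketch of the fallback each downstream job applies, with illustrative file paths rather than the pipeline's real ones:

import json
from pathlib import Path

TOPIC_BANK = Path("content/topic_bank.json")       # illustrative path
SCOUT_OUTPUT = Path("content/scout_latest.json")   # illustrative path

def load_topics():
    """Prefer fresh Research Scout output; fall back to the existing bank."""
    if SCOUT_OUTPUT.exists():
        try:
            return json.loads(SCOUT_OUTPUT.read_text())
        except json.JSONDecodeError:
            pass  # a failed or partial run; ignore it and fall through
    # Fail-soft: the Content Strategist still has something to work with
    return json.loads(TOPIC_BANK.read_text()) if TOPIC_BANK.exists() else []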


The Ghost Blog Setup

Ghost runs as a Docker container on an Azure B-series VM with MySQL 8 as the database container on the same host. Nginx handles reverse proxy and SSL termination via Let's Encrypt, using Cloudflare DNS challenge for certificate renewal. I avoid hard-coding the VM size as a universal recommendation because sizing depends on traffic, backup load, and background agent activity.

Why Ghost?

Ghost provides the best combination of:

  • Visual editor — clean, distraction-free writing experience when I do manual edits
  • Built-in newsletter — no extra Mailchimp or ConvertKit subscription needed
  • Full Admin API — this is the killer feature for agent-driven content creation. Every agent interaction goes through the REST API.
  • Self-hosting — full control over the data, theme, and deployment pipeline
  • Membership system — free and paid subscription tiers built in

Alternatives I evaluated included WordPress, Hashnode, and Substack. For my workflow, Ghost was the cleanest fit because its Admin API and self-hosted model make draft creation, metadata updates, image handling, and publishing automation practical without giving up editorial control.

Agent-to-Ghost API Flow

The Blog Writer agent generates markdown, converts it to HTML, and sends it to Ghost using the Admin API's HTML source import path. A simplified request looks like this:

POST /ghost/api/admin/posts/?source=html
Authorization: Ghost {JWT}
Content-Type: application/json

{
  "posts": [{
    "title": "...",
    "html": "<p>Article body...</p>",
    "tags": [{"name": "AI Agents"}, {"name": "Hermes Agent"}],
    "status": "draft",
    "feature_image": "...",
    "meta_title": "...",
    "meta_description": "..."
  }]
}

For updates, Ghost requires optimistic locking via the post's latest updated_at value, so the SEO Reviewer and Fact Checker read the draft first and then issue a PUT with the current timestamp plus the corrected fields. Publishing is a status transition from draft to published after the pre-publish checks pass.
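To make that concrete, here is a minimal sketch of the read-then-update call, assuming the requests library; the base URL is a placeholder and token is the short-lived Admin API JWT:

import requests

API = "https://blog.example.com/ghost/api/admin"  # placeholder base URL

def update_draft(post_id, fields, token):
    """Fetch the draft, then PUT the changes with the current updated_at."""
    headers = {"Authorization": f"Ghost {token}"}
    post = requests.get(f"{API}/posts/{post_id}/", headers=headers).json()["posts"][0]
    payload = {"posts": [{**fields, "updated_at": post["updated_at"]}]}
    # Ghost rejects the update if updated_at is stale (optimistic locking)
    resp = requests.put(f"{API}/posts/{post_id}/", json=payload, headers=headers)
    resp.raise_for_status()
    return resp.json()["posts"][0]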

I built a small Python helper library that handles:

  • JWT token generation with the correct hex-decoded Admin API secret
  • Markdown-to-HTML conversion for Ghost's ?source=html import path
  • Retry logic with exponential backoff for API rate limits
  • Image upload via the Ghost Admin API image endpoint
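Of these, the retry wrapper earns its keep most often. A minimal version, assuming the requests library and with delays that are obviously tunable, looks roughly like this:

import time
import requests

def request_with_retry(method, url, max_attempts=5, **kwargs):
    """Retry Ghost Admin API calls on rate limits and transient 5xx errors."""
    for attempt in range(max_attempts):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code not in (429, 502, 503):
            resp.raise_for_status()
            return resp
        time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, 8s, ...
    raise RuntimeError(f"gave up on {method} {url} after {max_attempts} attempts")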


Content Library Strategy

The pipeline maintains a content library of 20-30+ pre-written articles in draft status. This solves the "what do I publish this week?" problem permanently.

Research accumulates topics in a topic bank file. The Content Strategist assigns priorities based on:

  • Which content pillar hasn't had a post recently
  • Seasonal relevance (e.g., Microsoft Ignite recap in November)
  • Topic freshness (trending vs evergreen)
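A rough sketch of how that prioritisation can be expressed in code; the pillar names come from the roster above, while the weights are purely illustrative:

from datetime import date

def score_topic(topic, last_published_by_pillar):
    """Higher score = schedule sooner. Weights are illustrative, not my exact values."""
    days_starved = (date.today() - last_published_by_pillar[topic["pillar"]]).days
    score = days_starved                         # pillars without a recent post rise to the top
    if topic.get("seasonal_month") == date.today().month:
        score += 30                              # e.g. a Microsoft Ignite recap in November
    if topic.get("trending"):
        score += 10                              # trending beats evergreen, slightly
    return score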

The Blog Writer drafts articles from the bank, the library grows, and by the time a publish date arrives, the article has already gone through SEO review and pre-publish fact-checking. On publish day, the publisher can move the validated draft live and the social posts fire automatically.

Why 20-30 articles ahead? Headroom. If I am sick, travelling for a client engagement, or buried in a migration project, the pipeline keeps publishing from the library. I don't miss a week, and the site keeps generating traffic.

Topic Bank Lifecycle

Researched -> In Bank -> Assigned -> Draft Created -> SEO Reviewed -> Published

Every topic carries full metadata: source URL, suggested title, target pillar, estimated word count, and creation date. This makes the system auditable — I can see exactly where each article came from and how it progressed.
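In practice each entry is a small structured record; a simplified version of the shape (field names approximate) looks like this:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Topic:
    suggested_title: str
    source_url: str
    pillar: str                      # e.g. "Cloud Architecture"
    estimated_words: int
    created: date = field(default_factory=date.today)
    status: str = "in_bank"          # researched -> in_bank -> assigned -> draft_created -> seo_reviewed -> published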


The Hard Parts

Nothing worked on the first try. Here are the biggest gotchas I hit in production and how I resolved them.

1. Ghost Admin API Authentication (The Hex vs Base64 Trap)

Ghost Admin API uses hex-encoded secrets for JWT generation — not base64. This is the most common gotcha, and the Ghost documentation could be clearer about it.

# WRONG: decodes to the wrong bytes, and every signed request fails with 401
import base64
secret = base64.b64decode(ghost_admin_key.split(':')[1])

# RIGHT: the Admin API key is "<id>:<hex secret>", so hex-decode the secret part
secret = bytes.fromhex(ghost_admin_key.split(':')[1])

Use bytes.fromhex() in Python or xxd -r -p in bash. The JWT itself is standard HS256, but the input secret must be raw bytes from hex, not base64.
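Putting the pieces together, token generation follows the pattern documented for the Ghost Admin API; the sketch below uses the PyJWT package and a five-minute expiry:

import time
import jwt  # PyJWT

def make_ghost_token(admin_api_key: str) -> str:
    """Build a short-lived HS256 token for the Ghost Admin API."""
    key_id, secret_hex = admin_api_key.split(':')
    iat = int(time.time())
    return jwt.encode(
        {"iat": iat, "exp": iat + 5 * 60, "aud": "/admin/"},
        bytes.fromhex(secret_hex),   # raw bytes from hex, not base64
        algorithm="HS256",
        headers={"kid": key_id},
    )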

2. Ghost Editor Content Format

Ghost's current Admin API stores post content in a Lexical editor field, and it also supports an HTML import workflow via ?source=html. That means I do not need to hand-build Lexical JSON for normal article drafts.

The safer pattern for agents is:

POST /ghost/api/admin/posts/?source=html

with an html field in the JSON payload. Ghost converts the HTML into its internal editor representation. This keeps the agent code simpler and avoids brittle conversions whenever Ghost changes editor internals.
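In Python the create path reduces to a few lines: convert the markdown, then POST it with source=html. The sketch below assumes the markdown package for conversion and is simplified from my helper:

import markdown  # assumed conversion library
import requests

def create_draft(title, md_body, token, api="https://blog.example.com/ghost/api/admin"):
    """Convert markdown to HTML and create a Ghost draft via ?source=html."""
    html = markdown.markdown(md_body, extensions=["fenced_code", "tables"])
    payload = {"posts": [{"title": title, "html": html, "status": "draft"}]}
    resp = requests.post(
        f"{api}/posts/?source=html",
        json=payload,
        headers={"Authorization": f"Ghost {token}"},
    )
    resp.raise_for_status()
    return resp.json()["posts"][0]["id"]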

3. Cloudflare Bot Fight Mode

When Cloudflare Bot Fight Mode is enabled, all programmatic API calls receive 403 challenges — even valid Ghost Admin API requests from the agent. WAF skip rules do not override it.

The fix: disable Bot Fight Mode for the Ghost subdomain. Alternatively, route API calls through an SSH tunnel to bypass Cloudflare entirely, but that adds latency and complexity.

4. Content Quality at Scale

The biggest risk with automated content pipelines is quality degradation. If the system publishes mediocre articles, it damages your brand faster than not publishing at all.

My mitigations:

  • Require real code examples — every article must include at least one CLI command, config file, or code sample
  • Emphasize practical content — no fluff, no filler paragraphs
  • Human review gate — no article publishes without my review
  • SEO Reviewer agent — catches structural issues before they reach me
  • Random spot-checks — I read 1 in 5 articles end-to-end to audit quality drift
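Several of these checks are cheap to automate before the human gate. A structural pre-publish check, shown here as a sketch of the idea rather than my exact validation job:

def passes_quality_gate(post):
    """Cheap structural checks a draft must pass before it can be scheduled."""
    html = post.get("html") or ""
    problems = []
    if "<pre" not in html and "<code" not in html:
        problems.append("no CLI command, config file, or code sample")
    meta = post.get("meta_description") or ""
    if not 40 <= len(meta) <= 160:               # assumed target range, not a Ghost limit
        problems.append("meta description missing or outside 40-160 characters")
    if len(html.split()) < 800:                  # rough floor for a 1,500-word target
        problems.append("article body looks too short")
    return (not problems), problems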


Cost Breakdown

Here is the actual monthly cost of running this pipeline:

  • Hermes Agent (cron scheduler): $0 (open source, self-hosted)
  • Ghost hosting (Azure VM, storage, backup overhead): roughly low tens of USD per month; verify with the Azure Pricing Calculator for the selected region and VM size
  • Domain (wenfeng.my): ~$12/year (about $1/mo)
  • Cloudflare (free tier): $0
  • Buffer (free tier): $0
  • LLM inference (Hermes-configured cost-efficient model, currently mimo-v2.5 in my environment): ~$5-8/mo
  • Total: usually low tens of USD per month for this small self-hosted setup, excluding labour and optional paid social tools

The important point is not the exact monthly number; it is the cost profile. The core platform is inexpensive to run compared with manual production time, but the final figure depends on VM sizing, region, model usage, backup retention, and whether paid social scheduling features are required.


What's Next

I am currently working on:

  • Image generation agent — generates featured images and diagrams, then uploads approved assets to Ghost
  • Translation agent — translates select articles to Malay and Chinese for regional audiences
  • Performance scraper — tracks Google Search Console data weekly to refine the SEO Reviewer agent's rules

Key Takeaways

  1. Single-responsibility agents — each agent has one job, no overlap, no shared context. Debugging is trivial because the failure is always in one agent.
  2. The topic bank is the most important component — invest in research automation first. A pipeline with great research and mediocre writing outperforms one with great writing and no research.
  3. Ghost Admin API is the integration point — programmatic draft creation and updates via the API make agent-driven blogging practical. Use the documented HTML import path or Lexical field rather than relying on outdated editor assumptions.
  4. Automate the repeatable, validate the critical — research, drafting, SEO, and distribution can be automated. Strategic direction and high-risk technical claims still need review or automated fact-checking gates.
  5. Off-peak scheduling saves money — agents run while you sleep. Staggered schedules prevent API collisions and keep costs minimal.
  6. Fix Bot Fight Mode before you deploy — this single issue cost me two nights of debugging. Disable it on your Ghost subdomain from day one.
  7. Build fail-soft — each agent should work with whatever state the previous agent left, even if that state is "nothing new today."

The goal is not to replace human creativity — it is to remove the operational friction that prevents consistent publishing. Agents handle research, drafts, SEO, fact-checking, and distribution. I handle strategic direction and intervene where judgement matters. The blog has gone from 1 post per month to 4-6 posts per month, and the quality has improved because attention goes to the posts that need actual thought rather than the mechanics of publishing.


I build multi-agent content pipelines and AI automation systems for Malaysian enterprises. If your organisation is exploring agent-based automation or wants to streamline content operations, let's talk on LinkedIn.