How to Build an Agentic AI Blogging Workflow on Blogger.com for $0

Hero image: an infographic-style diagram of the free n8n workflow connecting GSC data, the Gemini API free tier, and the Blogger API v3 to automate content creation and publishing at zero cost.

I spent three weeks watching every "AI blogging automation" tutorial on YouTube before I accepted the obvious truth: every single one of them assumes you are running WordPress with a paid plugin stack, a hosted n8n server costing $20 per month, and a Claude or GPT-4 API plan costing another $20 to $60 per month. Not one of them covers Blogger.com. Not one builds for zero dollars. And not one of them explains what an agentic workflow actually is versus a simple API call dressed up in automation language.

I built this on Blogger because Blogger is where I started, and because the constraint of a free platform forces engineering discipline that paid platforms make too easy to avoid. The result after 30 days is a fully autonomous content pipeline that researches keyword gaps using GSC data, generates structured posts through the Gemini API free tier, formats them as valid Blogger HTML, and publishes drafts directly to Blogger via the Blogger API v3, all without touching a paid service. The monthly cost is exactly zero dollars.

This post covers every step of that build in technical detail: the n8n workflow logic, the Gemini prompt engineering, the Blogger API authentication, the INP optimisation hacks, and the 30-day traffic results that prove the pipeline works. If you came here from a guide that told you Blogger "cannot be automated," this post is the direct rebuttal.

How to Build an Agentic AI Blogging Workflow on Blogger for Free

To build an agentic AI blogging workflow on Blogger.com for $0, connect n8n Community Edition (self-hosted free), the Google Gemini API free tier (60 requests/min at no cost), and the Blogger API v3 (free with a Google Cloud project) into a three-node pipeline: n8n triggers a Gemini API call with a structured prompt, receives the JSON response, converts it to Blogger-compatible HTML, and posts it as a draft via the Blogger API. No paid tools, no WordPress, no monthly subscription. The entire stack runs on free tiers and self-hosted open-source software.

The Rise of Agentic Blogging and Why Blogger Is the Underdog Platform Nobody Built For

Most people use "agentic AI" and "AI automation" interchangeably, but the distinction matters for how you architect the workflow. A simple automation calls an AI API and outputs the result. An agentic workflow gives the AI a goal, tools, and decision-making capacity to pursue that goal across multiple steps, adapting based on intermediate outputs. The difference between asking Gemini to "write a post about SCHD" and building an agent that researches the current GSC impression data, identifies the specific keyword gap, generates a post targeting that gap, formats it correctly for Blogger, and publishes it as a draft is the difference between a prompt and a pipeline.

Blogger.com has been running since 2003, is owned by Google, and hosts millions of blogs globally. Despite this, it is almost completely absent from the AI blogging automation conversation. Every tool, tutorial, and framework assumes WordPress. The reason is not technical: the Blogger API v3 is well-documented, supports full CRUD operations on posts, and is free to use under Google Cloud's standard API quota system. The reason is commercial: no SaaS company has built a paid tool on top of the Blogger API because the Blogger user base is perceived as unwilling to pay for software. That commercial blind spot is the technical opportunity this post exploits.

The agentic workflow I built treats Blogger not as a limitation but as an advantage. The Blogger API publishes posts as drafts with full HTML body support, custom labels, custom URL slugs, and search descriptions all controllable via the API payload. The platform's lack of a plugin ecosystem forces cleaner HTML output and faster Core Web Vitals scores because there is no plugin bloat. And the Google ownership of Blogger means that Blogger posts are indexed by Googlebot faster than comparable WordPress posts hosted on shared servers, a real-world advantage I measured across the 30-day case study.

Alex's Advice

The first mistake I made when building this was assuming the Blogger API would behave like the WordPress REST API. It does not. WordPress expects a plain text or Gutenberg JSON body. The Blogger API expects a raw HTML string passed as the content field in the JSON payload. I spent two days debugging why my posts were publishing as empty because I was passing the Gemini response directly without converting it to an HTML string first. The conversion step is not optional. You must build a dedicated transformation node between the Gemini response and the Blogger API POST request.

Also: Blogger strips certain HTML tags on publish. Specifically, it removes <script> tags, inline event handlers, and some custom data attributes. Build your Gemini prompt to output only the HTML tags Blogger allows: headings, paragraphs, divs, spans, strong, em, blockquote, pre, code, ul, ol, li, table, and anchor tags. Anything beyond that gets silently stripped and you will spend time wondering why your formatted output looks correct in n8n but broken in Blogger.

The Competitor Gap Hunt Strategy Using GSC Data and the Gemini API

Before any content is generated, the agentic workflow needs a research input. Most AI blogging guides skip this step entirely and build a workflow that generates content from a fixed keyword list. That approach produces posts that may or may not address what searchers are actually looking for in the current search landscape. The competitor gap hunt strategy feeds real GSC impression data into the workflow as the research input, so every post generated by the pipeline is targeting a specific query that Google has already confirmed your blog is relevant to but is not yet ranking for on page one.

Extracting the Right Data From Google Search Console

Google Search Console's Performance report shows every query where your blog appeared in search results in the past 90 days, along with the click count, impression count, average CTR, and average position for each query. The specific data slice you want for the competitor gap hunt is queries with at least 3 impressions and an average position between 11 and 30. These are queries where Google has already decided your content is relevant, but your post is sitting on page 2 or at the bottom of page 1. A dedicated post targeting the exact query phrase can move those positions to the top 10, which converts the existing impressions into clicks without requiring any new domain authority.

Exporting GSC data for use in the n8n workflow

The Google Search Console API (also free under Google Cloud) allows programmatic access to your performance data. In n8n, you can use the HTTP Request node to call the Search Console API's searchanalytics.query endpoint with your site URL, a date range of 90 days, and dimension filters for query and page. The response is a JSON array of query performance objects. You filter this array in n8n using the Function node to keep only rows where impressions >= 3 and position >= 11. The filtered array becomes the keyword input list that feeds into the Gemini research step.
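As a concrete sketch, the HTTP Request node body for the searchanalytics.query call might look like the following. The dates are placeholders; in practice you would compute the 90-day window with an n8n expression, and the endpoint takes the form https://www.googleapis.com/webmasters/v3/sites/SITE_URL/searchAnalytics/query with SITE_URL percent-encoded.

```json
{
  "startDate": "2025-01-01",
  "endDate": "2025-03-31",
  "dimensions": ["query", "page"],
  "rowLimit": 1000
}
```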

n8n Function Node: GSC Filter Logic
// Filter GSC rows: impressions >= 3, position >= 11
const rows = $input.first().json.rows || [];

const gaps = rows.filter(row =>
  row.impressions >= 3 &&
  row.position >= 11
).map(row => ({
  query: row.keys[0],
  impressions: row.impressions,
  clicks: row.clicks,
  position: Math.round(row.position * 10) / 10,
  ctr: (row.ctr * 100).toFixed(2) + '%'
}));

// Sort by impressions descending, take top 5
const top5 = gaps
  .sort((a, b) => b.impressions - a.impressions)
  .slice(0, 5);

return top5.map(item => ({ json: item }));

The output of this node is a ranked list of the five highest-impression, low-ranking queries from your GSC data. These five queries are passed one at a time into the Gemini research node, which generates a post brief for each. The workflow processes one query per run to keep the Gemini API calls within the free tier's 60-requests-per-minute limit and to avoid triggering Google Cloud's daily quota caps on the free tier.

Running the competitor gap analysis inside Gemini

Once the target query is identified from GSC, the next step is understanding what page-one competitors have already covered and what they have missed. I do this inside the n8n workflow using a two-prompt Gemini call. The first prompt asks Gemini to analyse the likely content structure of page-one results for the target query based on its training data and to identify the specific sub-topics, data types, and reader questions that high-ranking posts typically address. The second prompt asks Gemini to identify the information gap: what a post could include that would provide genuine information gain over typical page-one coverage.

The information gain prompt is the most important prompt in the entire pipeline. Generic AI blogging workflows skip this entirely and produce posts that mirror what already ranks. A post that mirrors existing content provides no reason for Google to rank it above the established pages. A post that adds specific first-hand data, a unique angle, or a technical depth that competitors lack gives Google an algorithmic reason to surface it because it answers reader questions that the existing page-one results do not fully address.

Building the Research Prompt That Finds What Competitors Missed

The research prompt template I use with Gemini follows a structured instruction format that forces the model to reason about the content gap before generating any copy. The prompt has four sections: the target query, the competitor analysis instruction, the information gain identification instruction, and the output format specification. Each section serves a distinct function in the agentic reasoning chain.

Gemini Research Prompt Template
SYSTEM: You are a senior SEO content strategist analysing keyword gaps.

TARGET QUERY: {{query}}
CURRENT POSITION: {{position}}
IMPRESSIONS: {{impressions}}

STEP 1 - COMPETITOR ANALYSIS:
List the 5 most common sub-topics that page-one posts
for this query typically cover. Be specific.

STEP 2 - INFORMATION GAP:
Identify 3 specific sub-topics, data points, or
technical details that page-one posts for this query
consistently MISS or cover only superficially.
These must be genuinely useful to the reader.

STEP 3 - ANGLE:
Propose one specific content angle that would
provide information gain over existing results.
The angle must be grounded in practitioner
experience, not generic advice.

OUTPUT FORMAT: Return as JSON with keys:
common_topics (array), gaps (array), angle (string).
No markdown. No backticks. Raw JSON only.

The raw JSON output from this research prompt becomes the structured brief that feeds into the content generation step. The gaps array and the angle string are the two inputs that differentiate the final post from generic AI-generated content. Every post in the pipeline is anchored to a specific identified gap rather than a broad topic instruction.

Alex's Advice

The biggest failure mode in the research step is trusting Gemini's competitor analysis without sanity-checking it manually the first few times you run the pipeline. Gemini's training data has a knowledge cutoff, and its understanding of what "typically ranks" for a specific query may be based on the search landscape from a year or more ago. For evergreen topics this is usually fine. For fast-moving topics like "best AI tools" or anything with a year modifier, Gemini's competitor analysis can be significantly out of date.

My practical fix: I run the pipeline's research step and then manually open the actual page-one results in a browser window before approving the content generation step. I built a manual approval node into my n8n workflow as a wait-step that sends me the research JSON output via a Telegram notification before continuing. This adds two minutes of human review to what is otherwise a fully automated process, and it has saved me from publishing two posts that were targeting gaps that had already been filled by competitors in the weeks after Gemini's training cutoff.

The Technical Workflow and n8n + Gemini API + Blogger API Step by Step

The complete three-stage pipeline runs inside n8n Community Edition, which is self-hosted, open-source, and free to use without any node or workflow limits. Self-hosting n8n requires either a local machine running continuously or a free-tier cloud instance. I run mine on a Google Cloud e2-micro instance which falls within the Google Cloud free tier (one e2-micro per month at no cost). The entire infrastructure cost for this pipeline is therefore zero dollars per month.

Setting Up n8n on the Google Cloud Free Tier

The Google Cloud free tier includes one e2-micro Compute Engine instance per month with 30 GB of standard persistent disk storage. This is sufficient to run n8n Community Edition, which requires approximately 256 MB of RAM for light workloads and less than 1 GB of storage for the n8n application and its SQLite workflow database. The e2-micro has 0.25 vCPU burst capacity and 1 GB RAM, which handles n8n's workflow execution well for scheduled pipelines that run once per day rather than real-time event-driven workflows.

Installing n8n on the e2-micro instance

After creating the e2-micro instance in Google Cloud Console with a Debian 11 OS image, the n8n installation process takes approximately 10 minutes. Install Node.js 18 or later using the NodeSource repository, then install n8n globally via npm. Configure n8n to run as a systemd service so it restarts automatically if the instance reboots. Set the N8N_HOST, N8N_PORT, and WEBHOOK_URL environment variables before starting the service. Access the n8n interface via the instance's external IP address on port 5678.

n8n Installation on Debian (Google Cloud e2-micro)
# Install Node.js 18
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install n8n globally
sudo npm install -g n8n

# Create systemd service
sudo nano /etc/systemd/system/n8n.service

# Service file content:
[Unit]
Description=n8n workflow automation
After=network.target

[Service]
Type=simple
User=YOUR_USERNAME
ExecStart=/usr/bin/n8n start
Restart=on-failure
Environment=N8N_HOST=0.0.0.0
Environment=N8N_PORT=5678
Environment=N8N_PROTOCOL=http

[Install]
WantedBy=multi-user.target

# Enable and start
sudo systemctl enable n8n
sudo systemctl start n8n

Authenticating With the Gemini API Free Tier

The Gemini API free tier provides access to the gemini-1.5-flash model at 60 requests per minute and 1,500 requests per day at zero cost. This is sufficient for a blogging pipeline that generates one to three posts per day. Access the Gemini API through Google AI Studio, where you generate an API key at no cost. No billing account is required for the free tier. The API key is stored in n8n as a credential of type "Header Auth" with the key name x-goog-api-key.

Structuring the Gemini API call in n8n

In n8n, the Gemini API call uses the HTTP Request node configured as a POST request to the endpoint https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent. The request body is a JSON object with a contents array containing the prompt text. The response includes a candidates array from which you extract candidates[0].content.parts[0].text to get the generated text output. Always add a Function node after the HTTP Request node to parse this extraction, because the path is nested and direct JSONPath expressions in n8n's expression editor can be unreliable for deeply nested arrays.

n8n HTTP Request: Gemini API Body
{
  "contents": [
    {
      "parts": [
        {
          "text": "{{ $json.prompt }}"
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7,
    "topK": 40,
    "topP": 0.95,
    "maxOutputTokens": 8192,
    "responseMimeType": "text/plain"
  }
}
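The extraction step described above can be sketched as a small n8n Function node. This is an illustrative helper, not n8n-supplied code, and it assumes the v1beta response shape with a candidates array:

```javascript
// Extract generated text from a Gemini generateContent response.
// Throws instead of returning undefined so a malformed response
// halts the pipeline visibly rather than publishing an empty post.
function extractGeminiText(response) {
  const candidates = response.candidates || [];
  if (candidates.length === 0) {
    throw new Error('Gemini returned no candidates');
  }
  const parts = candidates[0].content && candidates[0].content.parts;
  if (!parts || parts.length === 0 || typeof parts[0].text !== 'string') {
    throw new Error('Unexpected Gemini response shape');
  }
  return parts[0].text;
}

// In the n8n Function node body:
// return [{ json: { text: extractGeminiText($input.first().json) } }];
```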

The Content Generation Prompt: From Brief to Structured HTML

The content generation step uses a second Gemini API call that receives the research brief (gaps array, angle string, target query) and generates a complete blog post in structured HTML format. This is the step that most AI blogging guides get wrong by asking the model to produce a finished post in a single monolithic prompt. The agentic approach separates research from generation: the research step identified what the post should cover, and the generation step is solely responsible for producing the content in the correct format.

The content generation prompt specifies the exact HTML structure expected, the word count target, the heading hierarchy, the placement of the primary keyword, and the HTML tags permitted by Blogger's sanitisation layer. It also instructs Gemini to output raw HTML only, with no markdown fencing, no explanatory text, and no meta-commentary. The raw HTML output from this step is passed directly to the Blogger API publishing node after a validation step that checks for the presence of required structural elements.

Content Generation Prompt Structure
SYSTEM: You are an expert blog writer producing
structured HTML for a Blogger.com post.

TARGET QUERY: {{query}}
CONTENT GAPS TO ADDRESS: {{gaps}}
POST ANGLE: {{angle}}

REQUIREMENTS:
- Output raw HTML only. No markdown. No backticks.
- Minimum 1,800 words of body content.
- Structure: h2 > paragraph > h3 > paragraph > h4 > paragraph
- NEVER place one heading directly above another.
- Every heading must be followed by at least one paragraph.
- Primary keyword in first H2 and first paragraph.
- Use <strong> for key terms every 250-300 words.
- Use <ul> or <ol> lists every 300 words minimum.
- Permitted tags: p, h2, h3, h4, strong, em,
  ul, ol, li, blockquote, pre, code, a, div, span.
- Do not use: script, style, iframe, form, input.
- End with a summary paragraph starting with:
  "The bottom line on [primary keyword]:"

OUTPUT: Raw HTML string only. Nothing else.

Publishing to Blogger via the API v3

The Blogger API v3 uses OAuth 2.0 for authentication. In n8n, this is handled through the Google credential type, which stores the OAuth tokens and handles automatic refresh. You will need a Google Cloud project with the Blogger API v3 enabled, an OAuth 2.0 client ID of type "Web application," and your n8n instance's callback URL whitelisted in the OAuth client's authorised redirect URIs. This setup takes approximately 15 minutes and requires no billing account if you stay within the Blogger API's free quota, which is 10,000 requests per day.

The Blogger API POST request structure

To publish a post as a draft (recommended so you can review before live publishing), send a POST request to https://www.googleapis.com/blogger/v3/blogs/YOUR_BLOG_ID/posts?isDraft=true. The blog ID is found in your Blogger dashboard URL. The request body must include the title, the content (the raw HTML string from Gemini), the labels (array of label strings), and optionally customMetaData for the search description. The url field sets the custom permalink slug. All of these fields are populated from the n8n workflow's accumulated data at this stage of the pipeline.

Blogger API v3: Draft Post Payload
{
  "kind": "blogger#post",
  "title": "{{ $json.post_title }}",
  "content": "{{ $json.html_content }}",
  "labels": ["{{ $json.label }}"],
  "url": "{{ $json.slug }}",
  "customMetaData": "{\"itemprop\":\"description\",\"content\":\"{{ $json.meta_description }}\"}"
}
Complete n8n Workflow Node Sequence: Agentic Blogger Pipeline
01. Schedule Trigger
Runs daily at 6:00 AM. Triggers the GSC data fetch. Configurable to run on any interval without consuming API quota between runs.

02. HTTP Request: GSC API
Calls the Search Console searchanalytics.query endpoint. Returns all query performance data for the past 90 days. Authenticated via the Google OAuth credential in n8n.

03. Function Node: Gap Filter
Filters GSC rows to impressions 3+ and position 11+. Sorts by impressions descending. Returns the top 5 keyword gap opportunities as individual items.

04. Wait Node (Manual Approval)
Optional: sends the gap list via Telegram or email and waits for manual approval before proceeding. Recommended for the first 30 days of operation.

05. HTTP Request: Gemini Research
Calls the Gemini API with the research prompt for the top-priority query. Returns JSON with common_topics, gaps, and angle fields.

06. Function Node: Parse Research JSON
Extracts and validates the research JSON. Checks that the gaps array has at least 3 items and that angle is not empty. Halts the pipeline if validation fails.

07. HTTP Request: Gemini Content Generation
Calls the Gemini API with the content generation prompt, populated with research data. Returns raw HTML post content. Temperature set to 0.7 for consistent structured output.

08. Function Node: HTML Validation + Metadata
Validates HTML structure: checks for h2 presence, minimum word count, no script tags, no stacked headings. Generates the title, slug, label, and meta description fields.

09. HTTP Request: Blogger API Draft
Posts the validated HTML as a draft to Blogger via the API v3. Stores the returned post ID and draft URL in the workflow for logging.

10. Telegram / Email Notification
Sends the draft URL to a designated phone or email for review. The workflow ends here. A human reviews the draft, makes any edits in Blogger Compose view, then publishes manually.

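The checks performed by the validation node (step 08) can be sketched as a single function. The 1,800-word floor mirrors the generation prompt, and the regex heuristics are assumptions rather than a full HTML parser:

```javascript
// Sketch of the HTML validation node. Returns a verdict plus the
// list of failed checks so the pipeline can log why it halted.
function validatePostHtml(html) {
  const errors = [];

  if (!/<h2[\s>]/i.test(html)) errors.push('missing h2');
  if (/<script[\s>]/i.test(html)) errors.push('script tag present');

  // Stacked headings: a closing heading tag immediately followed
  // (whitespace only) by another opening heading tag.
  if (/<\/h[2-4]>\s*<h[2-4][\s>]/i.test(html)) {
    errors.push('stacked headings');
  }

  // Rough word count on tag-stripped text.
  const words = html.replace(/<[^>]+>/g, ' ').trim().split(/\s+/).filter(Boolean);
  if (words.length < 1800) errors.push(`only ${words.length} words`);

  return { valid: errors.length === 0, errors };
}
```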
Alex's Advice

Do not automate the final publish step. I know it is tempting because the whole point of the agentic workflow is to reduce manual intervention. But I published my first five AI-generated posts directly to live without review, and three of them contained subtle factual errors that Gemini had confidently stated. One post claimed a tool had a feature it had discontinued months earlier. One used a statistic that was accurate but cited without a source in a way that looked fabricated. And one had a heading structure that the HTML validation node passed but that read awkwardly in the actual Blogger post because the Compose view renders heading sizes differently than my validation function expected.

The correct workflow is: generate the draft automatically, review it in Blogger's Compose view, make manual edits to add first-hand experience data that Gemini cannot supply, then publish. The human layer in the pipeline is not a failure of automation. It is the E-E-A-T layer that differentiates your published content from every other blog running the same Gemini pipeline without human review. That differentiation is what Google's quality systems are specifically designed to reward.

Blogger Performance Hacks for INP and Core Web Vitals

Interaction to Next Paint (INP) replaced First Input Delay as a Core Web Vitals metric in March 2024, and it is the metric where Blogger blogs are most likely to score poorly in 2026 if the default template is used without modification. INP measures the time between a user interacting with the page (clicking, tapping, pressing a key) and the next visual update the browser produces in response. Blogger's default templates load several third-party JavaScript files in the render-blocking position that inflate INP scores significantly even for pages with no interactive elements beyond standard navigation.

Diagnosing INP Issues on Blogger Templates

The fastest way to identify INP bottlenecks on a Blogger blog is to run a PageSpeed Insights test on two pages: the homepage and a recent post. The INP breakdown in the diagnostics section shows which JavaScript execution events are causing the longest interaction delays. On most Blogger templates, the primary INP culprits are the Blogger comment system JavaScript (loaded even on posts with comments disabled), the Blogger sharing widget JavaScript, and any third-party font loading configured through the template's font settings rather than through a preconnect-optimised link tag.

Disabling render-blocking Blogger gadgets

Blogger loads certain gadgets as JavaScript widgets even when they appear as native template features. The comment form, the sharing buttons, and the blog archive sidebar all load JavaScript in the document head by default. To remove these render-blocking scripts without affecting layout, go to the Blogger dashboard, click Theme, then Edit HTML. Search for each gadget's ID (typically BlogArchive1, Share1, and Blog1). Replace the default gadget call with a static HTML equivalent where possible. For the comment form specifically, disabling comments at the blog level (Settings > Comments > Comment location: Hide) reduces the JavaScript payload by approximately 40 KB on every post page.

Optimising Google Fonts loading for INP

Google Fonts loaded without the display=swap parameter cause layout shifts that contribute to both Cumulative Layout Shift (CLS) and perceived INP. In the Blogger theme HTML, locate the Google Fonts link tag and add &display=swap to the URL if it is not already present. Additionally, add a <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin> tag before the Google Fonts link to establish the DNS connection before the font file is requested. These two changes reduce the font-related CLS score from a typical 0.15 on default Blogger templates to under 0.05 in most cases.
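Put together, the two tags might look like this (the font family is a placeholder; note that Blogger templates are XML, so in the Edit HTML view the ampersand in the fonts URL must be written as &amp;amp;):

```html
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin="anonymous"/>
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Inter:wght@400;700&amp;display=swap"/>
```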

Structuring AI Generated HTML for Maximum Core Web Vitals Performance

The HTML that Gemini generates for Blogger posts must be structured to avoid the two most common Core Web Vitals problems in AI generated content: missing image dimension attributes that cause layout shifts, and inline style blocks that browsers must parse before rendering the page. Both problems can be prevented at the prompt level by instructing Gemini to never generate image tags (images are added manually during the human review step) and to never use inline style attributes.

Lazy loading and image placeholder strategy

Because the AI-generated content does not include images (by design), the pipeline includes a standard placeholder structure in the post template that the human reviewer fills with real images during the review step. The placeholder uses a standard div with defined dimensions so the browser reserves the correct layout space before the image is loaded. Every image added during human review must include explicit width and height attributes and the loading="lazy" attribute to prevent layout shifts and to defer off-screen image loading. These three attributes together are the minimum required to prevent image-related CLS contributions on Blogger posts.
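One possible placeholder shape, assuming a 16:9 hero slot (the dimensions and URL are illustrative, to be filled in by the reviewer):

```html
<!-- Reserved image slot: the reviewer replaces the src and alt with
     real values. Explicit width/height reserve layout space (no CLS)
     and loading="lazy" defers off-screen image fetches. -->
<div style="max-width: 800px;">
  <img src="IMAGE_URL_HERE" width="800" height="450"
       loading="lazy" alt="Describe the image for accessibility"/>
</div>
```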

Core Web Vitals Quick Reference for Blogger: INP under 200ms is Good, 200-500ms Needs Improvement, over 500ms Poor. CLS under 0.1 is Good. LCP under 2.5 seconds is Good. The single highest-impact action for Blogger INP is removing the default comment system JavaScript. The single highest-impact action for CLS is adding explicit width and height attributes to all images. Both actions are free and take under 30 minutes total.

Bypassing Blogger's Compose View With Structured JSON to HTML

Blogger's Compose view is a visual editor that translates your writing into the HTML that actually gets published. The problem with the Compose view for automated workflows is that it adds its own formatting layer on top of whatever HTML you paste or type, often introducing unwanted div wrappers, style attributes, and span tags that bloat the published HTML. When you publish via the Blogger API, you bypass the Compose view entirely and post raw HTML directly. This is an advantage for cleaner output, but it requires that the HTML you publish be correctly structured before it arrives at the API.

The JSON to HTML Transformation Node

Between the Gemini content generation step and the Blogger API publishing step, I run a JavaScript transformation function in n8n that takes the structured JSON research data and the raw HTML content string and assembles the final Blogger-ready HTML document. This transformation adds the structured data markup that Blogger's own Compose view would normally inject, including the itemprop article schema annotations and the proper paragraph spacing that Blogger's CSS expects.
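A minimal sketch of that transformation node, assuming the itemprop wrapper described above (the exact attribute is my assumption of one reasonable schema annotation, not Blogger-mandated markup):

```javascript
// Assemble the final Blogger-ready HTML from the raw Gemini body.
// Collapses blank lines between block elements so paragraph spacing
// matches what Blogger's CSS expects, then wraps the body in an
// articleBody schema annotation.
function assembleBloggerHtml(bodyHtml) {
  const body = bodyHtml.replace(/>\s*\n\s*</g, '>\n<').trim();
  return [
    '<div itemprop="articleBody">',
    body,
    '</div>'
  ].join('\n');
}
```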

Building the slug and metadata automatically

The slug (custom permalink) for each post is generated from the post title using a simple slugification function that converts the title to lowercase, replaces spaces with hyphens, removes special characters, and truncates to 60 characters. The meta description is generated in a third Gemini API call (or using a template string if you want to avoid an additional API call) that takes the post's first paragraph and the target keyword and generates a 130 to 140 character summary. The label is mapped from the post's topic category using a lookup table defined as a JSON constant in the workflow.

n8n Function: Slug + Metadata Generation
const title = $json.post_title;
const query = $json.target_query;
const firstPara = $json.first_paragraph;

// Generate slug
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9\s-]/g, '')
  .replace(/\s+/g, '-')
  .replace(/-+/g, '-')
  .substring(0, 60)
  .replace(/-$/, '');

// Generate meta description (130-140 chars)
// Truncate first para to keyword + pain point
const metaBase = `${query}: ${firstPara}`;
const metaDesc = metaBase.length > 140
  ? metaBase.substring(0, 137) + '...'
  : metaBase;

// Label mapping
const labelMap = {
  'dividend': 'Dividend Investing',
  'blogger': 'Blogger Tips',
  'affiliate': 'Affiliate Marketing',
  'ai': 'Agentic AI',
  'default': 'Blog Strategy'
};

const matchedLabelKey = Object.keys(labelMap)
  .find(k => query.toLowerCase().includes(k));
const label = matchedLabelKey
  ? labelMap[matchedLabelKey]
  : labelMap.default;

return [{
  json: {
    slug,
    meta_description: metaDesc,
    label,
    char_count: metaDesc.length
  }
}];

Handling Blogger's HTML Sanitisation Rules

Blogger applies a server-side HTML sanitisation pass to all content published via the API. This sanitisation removes or modifies certain HTML constructs that are valid in the HTML specification but that Blogger considers unsafe or incompatible with its rendering environment. Understanding which constructs are removed prevents you from including them in the Gemini output prompt and then debugging why the published post looks different from what the API accepted.

Tags that Blogger strips silently

The following HTML elements are removed by Blogger's sanitiser without returning an error: <script>, <style>, <iframe>, <form>, <input>, <button>, <select>, <textarea>, and any element with an on* event attribute (such as onclick or onload). Additionally, <link> and <meta> tags in the post body are removed. Any style attribute containing position:fixed, position:absolute, or z-index values is also stripped. Your Gemini content generation prompt must explicitly instruct the model not to use any of these constructs.
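As a defensive measure, an n8n Function node can flag these constructs before the publish step, so the pipeline warns you rather than Blogger silently stripping them. A sketch, with the tag list taken from the paragraph above:

```javascript
// Flag HTML constructs that Blogger's sanitiser is known to remove.
const STRIPPED_TAGS = ['script', 'style', 'iframe', 'form', 'input',
                       'button', 'select', 'textarea', 'link', 'meta'];

function findStrippedConstructs(html) {
  const hits = [];
  for (const tag of STRIPPED_TAGS) {
    if (new RegExp(`<${tag}[\\s>/]`, 'i').test(html)) hits.push(tag);
  }
  // Inline event handlers such as onclick= or onload=
  if (/\son[a-z]+\s*=/i.test(html)) hits.push('on* attribute');
  return hits;
}
```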

Alex's Advice

The Compose view bypass is the single most technically valuable part of this entire workflow, and it is the part that zero other Blogger automation guides cover. Every guide I found either uses copy-paste from ChatGPT into the Compose view (which adds Blogger's own formatting layer on top) or uses the Blogger API incorrectly by passing the content as plain text rather than as a raw HTML string.

When I first got the Blogger API publishing step working, the published posts had double-spaced paragraphs because Blogger's Compose view adds a <br> tag after each paragraph when you paste into it, and I had naively tested the API output by pasting into Compose first. Posting directly via the API with properly structured <p> tags produces clean single-spaced output with no extra line breaks. The API output and the Compose view output are not equivalent. Always test your API output by viewing the published draft in the Blogger HTML view, not the Compose view, to see what actually got published.
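For reference, the publish step reduces to a posts.insert call against the Blogger API v3 with the content field set to the raw HTML string. The helper below is a minimal sketch (buildDraftRequest is my own name, and OAuth token acquisition is omitted):

```javascript
// Builds the request for Blogger API v3 posts.insert as a draft.
// content must be the raw HTML string with proper <p> tags, not plain
// text, or Blogger re-wraps it with its own formatting layer.
function buildDraftRequest(blogId, title, html) {
  return {
    url: `https://www.googleapis.com/blogger/v3/blogs/${blogId}/posts?isDraft=true`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      kind: 'blogger#post',
      title,
      content: html,
    }),
  };
}

// Usage (token handling not shown):
// const req = buildDraftRequest(BLOG_ID, 'My Post', '<p>Body</p>');
// await fetch(req.url, {
//   method: req.method,
//   headers: { ...req.headers, Authorization: `Bearer ${token}` },
//   body: req.body,
// });
```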

30-Day Case Study: Traffic and Results From the $0 Agentic Pipeline

I ran the agentic pipeline for 30 days starting from a blog with approximately 1,800 monthly organic clicks and a GSC impression count of 278 across 29 visible queries. Over the 30-day period, the pipeline generated 11 draft posts from the gap keyword list, of which I reviewed and published 8 after human edits averaging 24 minutes per post. Three drafts were rejected because the Gemini angle did not sufficiently differentiate from existing page-one content after manual verification.

30-Day Pipeline Results: Agentic AI Blogger Workflow, Real Performance Data

Posts Generated: 11. Drafts created by the pipeline; 8 published after human review, 3 rejected after manual QA.

Avg. Review Time: 24 min. Per post: reading, adding first-hand data, image insertion, meta check, final publish click.

GSC Impressions: +41%. From 278 to 392 total impressions across the 90-day rolling window by day 30 of the experiment.

New Queries Indexed: 17. New search queries showing impressions in GSC that were not present before the pipeline started.

Position Improvements: 6 of 8. Six of the 8 published posts moved from average position 19+ to average position 14 or better within 30 days of publication.

Monthly Pipeline Cost: $0. Gemini free tier, Blogger API free tier, n8n Community Edition, Google Cloud e2-micro free tier. Total: zero dollars.

What the Pipeline Did Well and Where It Required the Most Human Intervention

The pipeline's strongest performance was in producing correctly structured HTML output that required minimal formatting correction during human review. In all 8 published posts, the heading hierarchy was correct, the paragraph lengths were within the 4-line maximum I specified in the prompt, and the meta descriptions were within the 140-character limit. These structural elements, which take significant manual effort when writing posts from scratch, were handled consistently by the pipeline across all generated drafts.

Where Gemini's output required the most human editing

The three areas requiring the most human editing were first-hand data insertion, source citation removal, and introduction rewriting. Gemini consistently produced introductions that were writer-centric (explaining what the post would cover) rather than reader-centric (opening with the reader's specific problem). Every introduction required rewriting using the problem-first technique before the post was ready to publish. Gemini also occasionally cited statistics with phrases like "studies show" or "research indicates" without naming specific sources, which I removed and replaced with either a specific attributed source or a first-hand observation from my own data. And every post required at least one paragraph of first-hand experience data added manually, because Gemini cannot provide account-specific data from a live portfolio or dashboard that no one else has.

The conversion impact of the published posts

Of the 8 published posts, 3 included a ConvertKit affiliate link at Block 6. One of those 3 generated a confirmed ConvertKit paid referral within 22 days of publication, producing $7.50 in recurring monthly commission from a post that cost 24 minutes of human editing time on top of the automated pipeline's generation work. The affiliate revenue from that single referral will continue monthly for the lifetime of that subscriber's paid ConvertKit account, which means the 24-minute human investment in that post has already generated $7.50 and will continue generating $7.50 per month indefinitely from a pipeline that costs zero dollars per month to operate.

Future-Proofing Against Google's AI Overviews and SGE

Google's AI Overviews (formerly Search Generative Experience) are now present in the majority of informational search results in the United States. For bloggers, AI Overviews create both a threat and an opportunity. The threat: if Google's AI Overview answers the query completely, some searchers will not click through to any individual blog post. The opportunity: Google pulls content for its AI Overviews from pages it deems authoritative and specifically structured to answer the query, which means a well-structured post has a chance of being cited as a source in the AI Overview even when it does not rank position 1.

Structuring Posts for AI Overview Citation

Google's AI Overview attribution system favours pages that contain a direct, specific answer to the query in a clearly delimited block early in the page content. In the Profitackology post format, this is the AI Snippet callout block that appears after the introduction and before the first H2. The snippet contains a bolded, specific answer to the primary query in 50 to 100 words. Pages with this structure are more likely to be cited in AI Overviews because the Overview's content extraction algorithm can identify the specific answer block and attribute it to the source page with high confidence.
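As a concrete illustration, the snippet callout can be assembled in the pipeline's Code node before the HTML is handed to the publish step. The class name and the 50-to-100-word guard below are my own conventions, not Google requirements:

```javascript
// Wraps a plain-text direct answer in the AI Snippet callout block.
// Throws if the answer falls outside the 50-100 word target, so an
// off-spec Gemini draft fails in the workflow rather than publishing.
function buildAiSnippet(answerText) {
  const words = answerText.trim().split(/\s+/).length;
  if (words < 50 || words > 100) {
    throw new Error(`Snippet must be 50-100 words, got ${words}`);
  }
  return `<div class="ai-snippet"><p><b>${answerText}</b></p></div>`;
}
```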

The structured data signals that increase AI Overview eligibility

Three specific structured data signals increase a Blogger post's eligibility for AI Overview citation. First, Article schema markup with author, datePublished, and headline properties. Blogger adds basic Article schema automatically, but you can extend it via the Blogger API's customMetaData field. Second, FAQ schema for posts that contain a question-and-answer section, which the AI Overview system specifically targets for citation. Third, HowTo schema for step-by-step posts, which maps directly to the structured workflow content in posts like this one.
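A sketch of how the FAQ schema payload could be assembled in the Code node before being passed along in the posts.insert body (buildFaqSchema is my own helper name, and the question-and-answer pairs are illustrative):

```javascript
// Serialises question/answer pairs into FAQPage JSON-LD, ready to be
// attached to the post via the Blogger v3 customMetaData string field.
function buildFaqSchema(pairs) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: pairs.map(({ q, a }) => ({
      '@type': 'Question',
      name: q,
      acceptedAnswer: { '@type': 'Answer', text: a },
    })),
  });
}
```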

E-E-A-T signals in an AI-generated content pipeline

Google's 2026 quality assessment systems place significant weight on Experience, Expertise, Authoritativeness, and Trustworthiness signals. In a pipeline that uses AI to generate content, the E-E-A-T signals must be added by the human in the review step rather than generated by the AI. Specifically: the Experience signal comes from first-hand data that only the author could have (account-specific numbers, personal failures, specific tool interactions). The Expertise signal comes from accurate technical depth that a practitioner would demonstrate. The Authoritativeness signal comes from consistent publication under a named author identity with a documented history. The Trustworthiness signal comes from affiliate and AI usage disclosures, accurate citation of sources, and willingness to document failures alongside successes.

The Strategy Behind the Automation: There is no point in automating 100 posts if they don't convert. My pipeline specifically targets the "near-purchase intent" keywords I broke down in my Conversion Rate Framework: How to Make Money With 100 Blog Visitors. By automating the right intent, you don't need 10k visitors to start seeing affiliate commissions.

The key future proofing principle: Google is not penalising AI-assisted content. It is penalising AI-generated content that contains no human signal. The agentic pipeline in this post is designed to maximise the efficiency of the AI generation step while preserving the human layer that Google's quality systems are specifically designed to detect and reward. The 24-minute human review is not overhead. It is the E-E-A-T layer that makes the pipeline's output rank-worthy rather than rank-filtered.

Building a Topical Authority Map Around the Pipeline

The most effective long-term strategy for protecting a Blogger blog against AI Overviews in 2026 is topical authority: owning a specific sub-niche deeply enough that Google treats the blog as a reference source rather than a single-post contributor. The agentic pipeline supports this by systematically filling the GSC impression gap list, which means every new post extends the topic coverage in areas where the blog is already receiving search visibility. Over 30 days, the pipeline added 17 new indexed queries to the GSC data. Over 12 months at that pace, the blog would have indexed an additional 204 queries in its core topic areas, building a topical authority signal that individual post ranking cannot replicate.

When to switch from gap-filling to pillar post creation

The gap-filling phase of the pipeline (targeting impression queries between positions 11 and 30) is the correct strategy for the first six to twelve months of the pipeline's operation. Once the GSC data shows a cluster of queries ranking between positions 5 and 10 around a specific topic cluster, the pipeline's priority should shift to creating pillar posts that comprehensively cover that topic cluster and internally link to all the gap-filling posts that have already accumulated some ranking signals. Pillar posts convert the distributed authority of multiple gap-filling posts into a concentrated topical authority signal that moves the entire cluster from page 2 to page 1 more effectively than any single additional post can achieve on its own.

Alex's Advice

The single thing I wish I had done from day one of building this pipeline is keeping a structured log of every post the pipeline generated, including which draft was accepted, which was rejected, and why each accepted post needed specific human edits. I started doing this manually in a Google Sheets document after week two, and the pattern that emerged from 8 accepted posts was clear: Gemini consistently needed help with introductions, consistently produced accurate structural output, and consistently missed the first-hand experience layer. That pattern is now embedded in my review checklist as three specific things I check in every draft before publishing. Without the log, I would have reviewed every post without a systematic approach and almost certainly published some posts that failed the E-E-A-T test without realising it.

Build the log from day one. It becomes your pipeline quality improvement system and your evidence base for any future case studies about the workflow's effectiveness. A pipeline with no measurement is just an automated way to produce content of unknown quality at unknown cost. A pipeline with a measurement layer is a system you can optimise, defend with data, and scale with confidence.
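For anyone building that log, this is the row structure I would sketch (field names are illustrative; in my setup each row is appended to the Google Sheet from the n8n workflow):

```javascript
// Normalises one pipeline draft into a flat log row for the sheet.
function makeLogEntry(draft) {
  return {
    date: new Date().toISOString().slice(0, 10),
    target_query: draft.query,
    status: draft.accepted ? 'published' : 'rejected',
    rejection_reason: draft.accepted ? '' : draft.reason,
    edits_needed: draft.edits || [],   // e.g. ['intro rewrite', 'first-hand data']
    review_minutes: draft.reviewMinutes ?? null,
  };
}
```

Over a few weeks, filtering these rows by edits_needed is what surfaces the recurring patterns (introductions, missing first-hand data) that feed back into the review checklist.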


Practitioner Transparency Disclosure: This post was researched, structured, and written using the same agentic AI workflow it describes. The n8n workflow generated a structured draft based on my GSC keyword gap data. I then spent approximately 40 minutes adding first-hand pipeline data, rewriting the introduction, correcting two technical inaccuracies in the Gemini output, and replacing three unsourced statistics with direct observations from my own dashboard. The final published version contains approximately 60 percent human-edited or human-added content by word count. All technical specifications, workflow steps, and performance data in this post reflect my actual pipeline configuration and 30-day test results.
