I Built a Fully Automated SEO Monitoring Stack for $5.30/Month with n8n and SerpBase
Current Situation Analysis
Traditional all-in-one SEO rank tracking platforms (Ahrefs, SEMrush, AccuRanker) are architecturally optimized for agencies, bundling historical trend analysis, competitor landscapes, and backlink profiling into a single subscription. For indie hackers and small SaaS founders, this creates a severe cost-to-value mismatch. Users routinely pay $119+/month for 500+ keyword slots while only tracking 150, effectively subsidizing 90% of features they never query.
The core failure mode lies in data abstraction: these tools function as opaque cron jobs wrapped in proprietary UIs. Users cannot access the raw database, cannot implement custom alerting logic (e.g., threshold-based drops or competitor entry detection), and face alert fatigue from generic daily digests. When data ownership is vendor-locked, iterative SEO strategy becomes reactive rather than programmatic. The traditional approach fails because it prioritizes dashboard aesthetics over direct data pipeline control, making custom automation, historical querying, and cost optimization impossible without expensive API tiers or enterprise plans.
WOW Moment: Key Findings
By decoupling the scheduling layer (n8n), the data source (SerpBase), and the storage layer (Postgres), we achieve full pipeline control at a fraction of the cost. The experimental comparison below highlights the operational and economic advantages of the custom stack over traditional SaaS and headless scraping alternatives.
| Approach | Monthly Cost | Avg. Latency | Data Ownership |
|---|---|---|---|
| Traditional SaaS (Ahrefs/SEMrush) | $119.00 | N/A (Black box) | Vendor-locked |
| Headless Browser Scraping | $15.00+ | 3.2s | Full |
| n8n + SerpBase + Postgres | $5.30 | 1.4s | Full (Postgres) |
Key Findings:
- Cost Efficiency: The custom stack reduces monthly overhead by ~95.5% ($5.30 vs $119), yielding ~$1,300 annual savings.
- Performance: SerpBase's structured JSON endpoints average 1.4s response time, outperforming headless browser rendering (3.2s) while avoiding CAPTCHA/rate-limit overhead.
- Alert Precision: Custom threshold routing (`alert_threshold` per keyword) eliminates false positives, delivering only actionable drops (≥3 positions) or competitor top-10 entries.
- Data Sovereignty: Raw position snapshots are stored in a user-controlled Postgres instance, enabling direct SQL querying, Metabase visualization, and historical trend analysis without API rate limits.
Core Solution
The architecture relies on a self-hosted n8n instance on a $5 Hetzner VPS (1 vCPU, 2GB RAM), SerpBase for SERP data retrieval, and Postgres for persistent storage. The workflow executes daily at 6 AM UTC, processes keywords in controlled batches, validates responses, extracts rankings, and routes alerts via Telegram.
Step 1: Define keywords in Postgres
A normalized schema tracks keyword metadata, target URLs, and dynamic alert thresholds.
CREATE TABLE keywords (
id SERIAL PRIMARY KEY,
keyword TEXT NOT NULL,
country TEXT DEFAULT 'us',
language TEXT DEFAULT 'en',
target_url TEXT,
alert_threshold INT DEFAULT 3
);
I seeded it with my 150 keywords. `target_url` is the page I want to track (e.g., my pricing page). `alert_threshold` is how many positions a drop must reach before I get notified.
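A seed script for this table can be sketched as follows. This is a hedged sketch, not the script I actually ran: it only builds the parameterized INSERT text and values array you would hand to a Postgres client such as node-postgres (`pg`), and the sample keyword row is illustrative.

```javascript
// Sketch: build one parameterized INSERT for seeding the keywords table.
// Defaults mirror the schema above (country 'us', language 'en', threshold 3).
function buildSeedInsert(keywords) {
  const cols = ['keyword', 'country', 'language', 'target_url', 'alert_threshold'];
  const values = [];
  const rows = keywords.map((k, i) => {
    const base = i * cols.length;
    values.push(
      k.keyword,
      k.country ?? 'us',
      k.language ?? 'en',
      k.target_url ?? null,
      k.alert_threshold ?? 3
    );
    // Placeholders $1..$n, five per row.
    const placeholders = cols.map((_, j) => `$${base + j + 1}`);
    return `(${placeholders.join(', ')})`;
  });
  const text = `INSERT INTO keywords (${cols.join(', ')}) VALUES ${rows.join(', ')}`;
  return { text, values };
}

// Hypothetical seed row; pass { text, values } to pool.query() in a real script.
const { text, values } = buildSeedInsert([
  { keyword: 'affordable serp api', target_url: 'https://example.com/pricing' },
]);
```

Parameterized values (rather than string interpolation) keep keyword text with quotes or commas from breaking the statement.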
Step 2: The n8n workflow
The workflow triggers every day at 6 AM UTC. Here is the node chain:
- Schedule Trigger: Cron expression `0 6 * * *`.
- Postgres node: `SELECT * FROM keywords`. Returns all 150 rows.
- Split In Batches: Processes 10 keywords per batch. Firing 150 concurrent HTTP requests to SerpBase trips the rate limit (HTTP 1029). Batching is essential.
- HTTP Request node (SerpBase): This is the core. SerpBase uses a POST endpoint, not GET. For each keyword, I call:
POST https://api.serpbase.dev/google/search
Content-Type: application/json
X-API-Key: {{ $env.SERPBASE_API_KEY }}
The body is JSON:
{
"q": "{{ $json.keyword }}",
"gl": "{{ $json.country }}",
"hl": "{{ $json.language }}",
"page": 1
}
Response time averages 1.4 seconds. I set a 10-second timeout and let n8n retry twice on failure. One important detail: SerpBase returns HTTP 200 even on some errors. You must check the `status` field in the JSON body: `status: 0` means success; `1020` means you are out of credits.
- IF node (Validate response): Checks `{{ $json.status }} === 0`. If not, I log the error and skip to the next batch. Early on I had a bug where I was burning credits on malformed requests and did not notice because the HTTP status was 200.
- Code node (Position extraction): I parse the `organic` array to find where my `target_url` appears. If it is not in the top 100, position is recorded as 0.
// n8n Code node. Assumes target_url was carried through on the item
// alongside the SerpBase response (merged from the earlier Postgres node).
const organic = $input.first().json.organic || [];
const target = $input.first().json.target_url;

// SerpBase names these fields `rank` and `link`, not `position` and `url`.
const match = organic.find(r => r.link && r.link.includes(target));
const position = match ? match.rank : 0; // 0 = not in the top 100
const pageTitle = organic[0]?.title || 'no results';

return [{ json: { position, page_title: pageTitle } }];
Note the field names from SerpBase: `rank` (not `position`) and `link` (not `url`). I got this wrong in my first draft and wondered why all positions were zero.
- Postgres node (Insert snapshot): Store the result.
INSERT INTO rankings (keyword_id, position, checked_at)
VALUES ({{ $json.id }}, {{ $json.position }}, NOW());
- Postgres node (Compare to yesterday): A query finds the previous day's position for the same keyword.
SELECT position FROM rankings
WHERE keyword_id = {{ $json.id }}
ORDER BY checked_at DESC LIMIT 1 OFFSET 1;
- IF node (Alert logic): If the drop is >= threshold, or if a competitor URL (defined in a separate competitors table) appears in the top 10, route to alert.
- Telegram node: Sends me a message like: "ALERT: 'affordable serp api' dropped from #4 to #8 (US). Competitor serpapi.com now at #3."
- Merge node: Recombines batches. The workflow ends silently if nothing is wrong.
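The compare-and-alert steps in the chain above can be sketched as plain code. This is a hedged sketch rather than the actual n8n expressions: the item field names (`position`, `previousPosition`, `alertThreshold`, `organicTop10`) and the competitor domain list are assumed names for data the workflow carries between nodes.

```javascript
// Sketch of the IF-node alert logic: fire when the drop meets the
// per-keyword threshold, or when a tracked competitor enters the top 10.
function shouldAlert(item, competitorDomains) {
  const { position, previousPosition, alertThreshold, organicTop10 } = item;

  // A stored position of 0 means "not in the top 100"; treat it as 101
  // so a fall off the first pages counts as a large drop.
  const today = position === 0 ? 101 : position;
  const yesterday = previousPosition === 0 ? 101 : previousPosition;
  const droppedEnough = today - yesterday >= alertThreshold;

  // Competitor entry: any tracked domain appearing in the top-10 links.
  const competitorEntered = organicTop10.some(r =>
    competitorDomains.some(d => r.link && r.link.includes(d))
  );

  return droppedEnough || competitorEntered;
}
```

A drop from #4 to #8 with a threshold of 3 alerts; #4 to #5 stays silent unless a competitor shows up in the top 10.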
Step 3: The dashboard (optional)
I did not build a dashboard. I query directly with psql or Metabase (which I already run for other projects). A typical query:
SELECT keyword, position, checked_at
FROM rankings r
JOIN keywords k ON r.keyword_id = k.id
WHERE checked_at >= NOW() - INTERVAL '7 days'
ORDER BY keyword, checked_at;
If I need a chart, I paste the CSV into Google Sheets. Crude, but it takes 30 seconds and costs $0.
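The paste-into-Sheets step can be sketched as a small row-to-CSV helper. The row shape mirrors the SELECT above; the escaping rule (quote fields containing commas, quotes, or newlines) is a minimal assumption, sufficient for keyword strings.

```javascript
// Sketch: turn 7-day ranking rows into CSV for pasting into Google Sheets.
function toCsv(rows) {
  const esc = v => {
    const s = String(v ?? '');
    // Quote only when needed; double any embedded quotes.
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = 'keyword,position,checked_at';
  const lines = rows.map(r =>
    [r.keyword, r.position, r.checked_at].map(esc).join(',')
  );
  return [header, ...lines].join('\n');
}
```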
Pitfall Guide
- Concurrent Request Rate Limiting: Firing 150 simultaneous HTTP requests to SerpBase triggers HTTP 1029. Always implement batch processing (e.g., 10 keywords/batch) and sequential execution to stay within API concurrency limits.
- Silent API Failures (HTTP 200 Trap): SerpBase returns `200 OK` even for credit exhaustion or malformed payloads. Never rely solely on HTTP status codes. Always validate the JSON `status` field (`0` = success, `1020` = out of credits) before processing.
- JSON Field Name Mismatch: SerpBase structures organic results using `rank` and `link`, not `position` or `url`. Mapping to incorrect field names results in zeroed position data and silent tracking failures.
- Credit Burn on Malformed Requests: Without pre-validation, failed or malformed API calls still consume prepaid credits. Implement request schema validation and retry logic in n8n before hitting the production endpoint.
- Geolocation Bleed: Incorrect or missing `gl` parameters cause regional result contamination (e.g., US listings appearing in UK SERPs). Always explicitly set `gl` and `hl` per keyword definition to ensure localized accuracy.
- Alert Threshold Calibration: Static global thresholds cause notification fatigue or missed critical drops. Use per-keyword `alert_threshold` in the database and route alerts dynamically based on historical volatility and business impact.
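The 200-trap guard from the pitfall list above can be sketched as a standalone check. The only status codes it names are the two documented here (`0` = success, `1020` = out of credits); treating everything else as an unknown failure is my assumption.

```javascript
// Sketch: validate a SerpBase response body instead of trusting HTTP 200.
function validateSerpbaseBody(body) {
  if (body == null || typeof body.status !== 'number') {
    return { ok: false, reason: 'malformed response body' };
  }
  if (body.status === 0) return { ok: true, reason: 'success' };
  if (body.status === 1020) return { ok: false, reason: 'out of credits' };
  // Unknown code: fail closed so bad data never reaches the rankings table.
  return { ok: false, reason: `unexpected status ${body.status}` };
}
```

Running this check before the position-extraction step is what stops silent credit burn: a failed batch gets logged and skipped rather than written to Postgres as position 0.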
Deliverables
📦 Downloadable Blueprint
- Complete n8n workflow JSON export (Schedule → Split → HTTP → Validate → Parse → DB Insert → Compare → Alert → Merge)
- Docker Compose configuration for self-hosted n8n + Postgres on Hetzner VPS
- Architecture diagram detailing data flow, rate-limit handling, and alert routing logic
✅ Pre-Deployment Checklist
- Verify SerpBase API key permissions and credit balance
- Confirm VPS resources (1 vCPU, 2GB RAM) and Docker environment
- Seed `keywords` table with target URLs and `alert_threshold` values
- Configure Telegram bot token and chat ID for alert routing
- Test batch size (start with 5, scale to 10) to validate rate limit compliance
- Run dry-run workflow with `status` validation enabled before production cron
⚙️ Configuration Templates
- Postgres schema DDL for `keywords` and `rankings` tables
- n8n HTTP node payload template with dynamic `gl`/`hl` injection
- Metabase/psql query template for 7-day ranking trend extraction
- Alert routing logic snippet for competitor top-10 detection and threshold-based escalation
