Killer-Skills

experiment-loop

v1.0.0
GitHub

About this Skill

experiment-loop is a weekly process that tracks content changes, measures their impact, and optimizes traffic and rankings for MD Home Care. It is built for SEO analysis agents that need data-driven optimization of YMYL content.

Features

Tracks content changes and measures their impact on traffic and rankings
Decides whether to keep, iterate, or revert changes based on evaluation windows
Accounts for lag times in YMYL content, such as aged care and disability services
Runs weekly to ensure consistent optimization
Evaluates changes against SEO and AEO lag times, with minimum evaluation windows of 2-3 weeks depending on change type

adscorp100
Updated: 3/7/2026

Quality Score

48 (Excellent, Top 5%), based on code quality & docs
Installation

Universal install (auto-detects Cursor, Windsurf, or VS Code):

```bash
npx killer-skills add adscorp100/mdhomecarebuild/experiment-loop
```

Agent Capability Analysis

The experiment-loop MCP Server by adscorp100 is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for SEO Analysis Agents needing data-driven content optimization capabilities for YMYL content

Core Value

Empowers agents to track content changes, measure their impact on traffic and rankings using SEO-lag and AEO-lag metrics, and make data-informed decisions to keep, iterate on, or revert changes, while respecting the evaluation windows and lag times of sensitive content types such as aged care and disability services.

Capabilities Granted for experiment-loop MCP Server

Automating weekly content performance analysis
Generating data-driven recommendations for service page optimization
Debugging underperforming content changes using AEO lag and SEO lag metrics

Prerequisites & Limits

  • Requires weekly runtime for optimal performance
  • Specifically designed for MD Home Care and YMYL content
  • Evaluation window and lag times must be carefully considered to avoid premature evaluation
Project

  • SKILL.md (6.2 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Experiment Loop for MD Home Care

Tracks content changes, measures their impact on traffic and rankings, and decides whether to keep, iterate, or revert. Runs weekly.

CRITICAL: Lag Times for YMYL Content

YMYL content (aged care, disability services) has longer lag times than SaaS content. Do not evaluate changes too early.

| Change Type | SEO Lag | AEO Lag | Evaluation Window |
| --- | --- | --- | --- |
| Service page optimization | 10-21 days | 3-7 days | 3 weeks minimum |
| Location page creation | 14-21 days | 7-14 days | 3 weeks minimum |
| Blog post publishing | 7-14 days | 3-7 days | 2 weeks minimum |
| Provider comparison addition | 7-14 days | 3-7 days | 2 weeks minimum |
| Trust signal enhancement | 10-21 days | 7-14 days | 3 weeks minimum |
| FAQ addition | 7-14 days | 3-7 days | 2 weeks minimum |

Step 1: Weekly Git Scan

Identify all content changes from the past week:

```bash
cd ~/Projects/mdhomecarebuild

# All content changes in last 7 days
git log --since="7 days ago" --name-only --pretty=format:"%h %s" -- "src/content/**/*.md" "src/content/**/*.mdx"

# Summarize by type
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/blog/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/services/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/providers/*.md" | sort -u | head -20
```

Categorize each change:

  • New page: Completely new content file
  • Major edit: Structural changes (new sections, comparison tables, rewritten H1/H2)
  • Minor edit: Small fixes (typos, link updates, frontmatter changes)
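The three buckets above can be approximated from git metadata. A hypothetical helper; the 30-line threshold is illustrative, not part of the skill:

```python
def categorize_change(is_new_file: bool, lines_changed: int, headings_changed: bool) -> str:
    """Rough mapping of a git change onto the three categories above."""
    if is_new_file:
        return "new_page"
    # Structural edits: heading rewrites, or a large diff (illustrative threshold).
    if headings_changed or lines_changed > 30:
        return "major_edit"
    return "minor_edit"
```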

Step 2: Baseline Measurement

For each changed page, capture the pre-change baseline. If baseline was not captured before the change, use the previous period as proxy.

GSC Baseline

```bash
cd ~/Projects/mdhomecarebuild

# For each changed page, get keyword data
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"
python3 src/scripts/advanced_gsc_analyzer.py --page "/blog/[slug]"
```

Record:

  • Top 10 keywords by clicks
  • Average position for primary keyword
  • Total impressions and clicks (last 7 days)

PostHog Baseline

```bash
# Page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7

# AI referral traffic
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7
```

Record:

  • Total pageviews (last 7 days)
  • AI referral visits to that page
  • Traffic sources breakdown

Step 3: Post-Change Measurement

After the evaluation window has passed (see lag times table), measure again.

```bash
# GSC: same page analysis
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"

# PostHog: same page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7
```

Step 4: Attribution and Decision

Compare Metrics

For each experiment, calculate:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Organic clicks (7d) | X | Y | +/- % |
| Impressions (7d) | X | Y | +/- % |
| Avg position (primary KW) | X | Y | +/- positions |
| AI referral visits (7d) | X | Y | +/- % |
| Total pageviews (7d) | X | Y | +/- % |

Decision Framework

KEEP if:

  • Organic clicks increased >10%
  • OR average position improved by 2+ positions
  • OR AI referral visits increased >20%
  • OR impressions increased >15% (leading indicator)
  • AND no negative impact on other pages (cannibalization check)

ITERATE if:

  • Mixed signals (some metrics up, some flat)
  • OR small positive movement (<10% clicks) that suggests potential
  • OR evaluation window has not fully elapsed
  • Action: Make targeted refinements and re-evaluate after another cycle

REVERT if:

  • Organic clicks decreased >15%
  • AND average position dropped by 3+ positions
  • AND no compensating AI referral increase
  • Action: Restore previous version via git, document what went wrong

WAIT if:

  • Change is too recent (within lag window)
  • Action: Re-evaluate next week
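The four rules above can be collapsed into a single function. The thresholds are copied from the framework; the parameter names are assumptions, and position deltas are positive when the page moved up:

```python
def decide(clicks_pct: float, position_delta: float, ai_referrals_pct: float,
           impressions_pct: float, window_elapsed: bool,
           cannibalization: bool = False) -> str:
    """Return KEEP / ITERATE / REVERT / WAIT per the decision framework."""
    if not window_elapsed:
        return "WAIT"
    # REVERT: all three negative conditions must hold.
    if clicks_pct <= -15 and position_delta <= -3 and ai_referrals_pct <= 0:
        return "REVERT"
    # KEEP: any strong positive signal, provided no cannibalization.
    keep_signal = (clicks_pct > 10 or position_delta >= 2
                   or ai_referrals_pct > 20 or impressions_pct > 15)
    if keep_signal and not cannibalization:
        return "KEEP"
    # Everything else is a mixed or weak signal.
    return "ITERATE"
```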

Step 5: Log to Playbook

Record every experiment result in PLAYBOOK.md:

```markdown
## [Date] - [Experiment Name]

**Category:** [Service page optimization / Location page / Blog post / Comparison / Trust signal / FAQ]
**Page:** [URL path]
**Change:** [Brief description of what was changed]
**Hypothesis:** [What we expected to happen]

**Baseline (pre-change):**
- Organic clicks (7d): X
- Avg position (primary KW): X
- AI referrals (7d): X

**Result (post-change, measured [date]):**
- Organic clicks (7d): Y (+/- %)
- Avg position (primary KW): Y (+/- positions)
- AI referrals (7d): Y (+/- %)

**Decision:** KEEP / ITERATE / REVERT / WAIT
**Lesson:** [What we learned]
```

Experiment Categories

Service Page Optimizations

  • Adding comparison tables
  • Rewriting H1/byline
  • Adding trust signal sections
  • Expanding FAQ sections
  • Adding AI differentiation paragraphs

Location Page Creation

  • New suburb-specific service pages
  • Measure: local keyword rankings, location-specific traffic

Blog Post Publishing

  • New informational content
  • Template/download posts
  • Provider comparison posts
  • Measure: organic clicks, keyword coverage expansion

Provider Comparison Additions

  • New comparison tables on existing pages
  • New "vs" blog posts
  • Measure: comparison keyword rankings, AI referral traffic

Trust Signal Enhancements

  • Adding registration numbers
  • Adding testimonials
  • Adding clinical governance sections
  • Measure: overall page authority signals, position changes

FAQ Additions

  • New FAQ sections
  • Expanding existing FAQs with PAA questions
  • Measure: featured snippet captures, PAA appearances

Weekly Routine

Every week:

  1. Run git scan (Step 1)
  2. For changes past their evaluation window, measure results (Step 3)
  3. Make keep/iterate/revert decisions (Step 4)
  4. Log results to PLAYBOOK.md (Step 5)
  5. Capture baselines for new changes (Step 2)
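The routine above can be sketched as a small driver that takes the measurement, decision, and logging steps as callables. Everything here is hypothetical glue; the real skill runs the git and `python3` commands shown in Steps 1-3:

```python
def weekly_cycle(changes: list[dict], measure, decide, log) -> dict:
    """One pass of the weekly routine over tracked changes.

    For each change past its evaluation window: measure (Step 3),
    decide keep/iterate/revert (Step 4), and log to PLAYBOOK.md (Step 5).
    """
    decisions = {}
    for change in changes:
        if not change["window_elapsed"]:
            continue  # still inside its lag window: WAIT
        result = measure(change)
        decisions[change["page"]] = decide(result)
        log(change["page"], decisions[change["page"]])
    return decisions
```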

Usage

/experiment-loop

Runs the full weekly cycle: scan, measure, decide, log.

/experiment-loop --check "/services/sil-services"

Check status of a specific page experiment.
