We ran AISO on AISO (and on our parent site) in parallel
Two AI agents applied the AISO Auto-fix playbook to two production sites in one afternoon. agntdot.com: 65 → 93. aiso.tools: 59 → 93. Same pipeline, both sites, one session.
The claim: AISO is a deterministic AI-search visibility scanner with an Auto-fix runner that takes a scan, matches each issue to a patch template, and emits a playbook an AI coding agent can apply. The product is the loop: scan → playbook → patch → re-scan.
The cleanest way to prove that claim is to run it on the two sites we own and publish the before-and-after numbers.
Test targets:
- agntdot.com — the AGNT platform marketing site. Next.js 16, React 19, 412 URLs in the sitemap, live in production.
- aiso.tools (this site) — the AISO app itself. Same stack, shipped during Sprints 1–6, dogfooded continuously.
Both were scanned at baseline, both got the Auto-fix treatment, both were re-scanned. Two independent AI coding agents ran the patches in parallel: Claude Code on the agntdot.com codebase, Codex on the AISO codebase. Neither session saw the other until both finished.
Run 1: agntdot.com — 65 → 93 with Claude Code
Baseline scan: 65/100, Visible tier, 7 issues — 2 critical, the rest high/medium across Content Structure, Citations, Schema, Freshness, Entity Clarity, and Off-site Authority.
The playbook named 6 specific patches, each mapped to a file and a template-driven prompt from `lib/fixes/prompts.ts`. Claude Code applied them in one session:
- Entity / Self-identification — rewrote the hero opener from third-person description to “We are AGNT — an AI agent platform …”
- Citations — added inline anchor tags to two authoritative sources (Princeton GEO paper, Wikipedia A2A) inside the opener paragraph.
- Freshness — added a visible `<time datetime="2026-04-20">Updated April 20, 2026</time>` beneath the CTA.
- Structure — unwrapped the h1 from its single-child FadeIn so `h1.next()` returns real siblings during the “above-fold summary” walk.
- Schema — added `founder`, `BreadcrumbList`, and `Person` JSON-LD; added a `license` field to the existing Dataset block (also closing a GSC enhancement warning in the process).
- Citation density — added a new `StandardsStrip` component rendering 15 inline authoritative citations (W3C, IETF, arXiv, MDN, Wikipedia, schema.org) between SocialProof and FinalCTA.
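To make the Schema patch concrete, here is a minimal sketch of the kind of JSON-LD builders such a patch produces. The function names and data below are illustrative assumptions, not the actual agnt-pwa source:

```typescript
// Illustrative sketch only: builder names and values are assumptions,
// not the real agnt-pwa code.
type JsonLd = Record<string, unknown>;

// Person JSON-LD of the shape the Entity/Schema patches describe.
function buildPersonSchema(name: string, orgName: string, orgUrl: string): JsonLd {
  return {
    "@context": "https://schema.org",
    "@type": "Person",
    name,
    worksFor: { "@type": "Organization", name: orgName, url: orgUrl },
  };
}

// BreadcrumbList JSON-LD: position is 1-indexed per schema.org.
function buildBreadcrumbSchema(crumbs: { name: string; url: string }[]): JsonLd {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: crumbs.map((c, i) => ({
      "@type": "ListItem",
      position: i + 1,
      name: c.name,
      item: c.url,
    })),
  };
}
```

Each builder returns a plain object that gets serialized into a `<script type="application/ld+json">` tag in the layout, so the additions extend an existing graph rather than replacing it.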
Re-scan: 93/100, Cited tier, 0 issues. Six of eight dimensions now pass. The two warnings remaining (Citations at 15/20, Off-site at 3/5) are the density-threshold ceiling and the missing YouTube/Reddit channels respectively — documented in the skipped list of the playbook, not failures of the fixer.
Elapsed: ~30 minutes including 3 intermediate re-scans to confirm each group of patches moved the score as predicted.
Run 2: aiso.tools — 59 → 80 → 93 with Codex + Claude Code
Baseline scan: 59/100, Partial tier, 9 issues. aiso shipped during Sprints 1–6 with the scanner, runtime-visibility panel, docs, MCP server, and CLI, but the app itself hadn't been SEO-hardened yet.
Phase A — Codex hardening pass (running in parallel: different model, different session). Codex added `app/robots.ts`, `app/sitemap.ts`, `app/manifest.ts`, `app/llms.txt/route.ts`, and `app/llms-full.txt/route.ts`; built an /about page with team/founder context; tightened the answer-first homepage; added scan-page metadata that noindexes queued/failed scans; introduced global Organization + WebSite JSON-LD in a new `lib/seo.ts`; set canonical metadata everywhere; and added SEO regression tests.
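For context, a Next.js `app/robots.ts` of the kind Phase A added is only a few lines. This is a hedged sketch (the bot allowlist and the disallowed path are assumptions, not the shipped file):

```typescript
// app/robots.ts — sketch only; the shipped allowlist may differ.
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      // Explicitly allow the major AI crawlers alongside everything else.
      { userAgent: ["GPTBot", "ClaudeBot", "PerplexityBot"], allow: "/" },
      // Keep transient scan pages out of the index (path is an assumption).
      { userAgent: "*", allow: "/", disallow: "/scan/" },
    ],
    sitemap: "https://aiso.tools/sitemap.xml",
  };
}
```

Next.js serializes this return value into a `robots.txt` response at build/request time, which is why the explicit AI-bot allowlist shows up in the Crawler Access dimension.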
Phase A re-scan: 80/100, Visible tier. Crawler access, structure, entity clarity, freshness, and llms.txt all passing. The playbook printed the same three remaining dimensions to target: Citations, Schema, Off-site.
Phase B — Auto-fix playbook (Claude Code, same session as agntdot.com): the same template-driven patches ran against aiso's layout and page:
- Added `BlogPosting`, `BreadcrumbList`, and top-level `Person` schema to `buildGlobalSchemas()`.
- Added a Wikipedia A2A link to the footer (Wikipedia is one of the 6 off-site platforms the scanner detects).
- Extended the homepage “Research + standards references” paragraph from 3 citations to 11 (arXiv, IETF RFC 8615 and RFC 9309, W3C WCAG 2.2, W3C DID Core, MDN HTTP, Wikipedia on A2A / robots.txt / llms.txt / schema.org).
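A data-driven citation strip is one low-maintenance way to implement that last patch. The sketch below is illustrative (the component shape and the exact list are assumptions; the cited URLs are the real standards documents):

```typescript
// Illustrative only: a data-driven list of authoritative citations.
interface Citation {
  label: string;
  href: string;
}

const CITATIONS: Citation[] = [
  { label: "RFC 9309 (Robots Exclusion Protocol)", href: "https://www.rfc-editor.org/rfc/rfc9309" },
  { label: "RFC 8615 (well-known URIs)", href: "https://www.rfc-editor.org/rfc/rfc8615" },
  { label: "W3C WCAG 2.2", href: "https://www.w3.org/TR/WCAG22/" },
];

// Render the list as inline anchors; a density scanner counts these as
// authoritative citations based on the domains they point at.
function renderCitations(cites: Citation[]): string {
  return cites
    .map((c) => `<a href="${c.href}" rel="noopener">${c.label}</a>`)
    .join(", ");
}
```

Keeping the citations in a typed array means adding the next one is a one-line diff instead of a prose rewrite.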
Phase B re-scan: 93/100, Cited tier. Schema 15/15 (max). Same pattern as agntdot.com, exactly the same scoring pins holding at 93.
What moved the scores
The dimension-by-dimension delta, same for both sites:
| Dimension | agntdot.com | aiso.tools | Lever |
|---|---|---|---|
| Crawler Access | 15 → 15 | 14 → 15 | Already passing; explicit AI-bot allowlist in robots |
| Content Structure | 15 → 20 | 15 → 20 | h1 unwrap + definition-lead opener + above-fold summary walk |
| Citation Density | 10 → 15 | 10 → 15 | StandardsStrip + inline authoritative anchor density |
| Schema Markup | 8 → 15 | 5 → 15 | BlogPosting + BreadcrumbList + Person added to existing graph |
| llms.txt | 10 → 10 | 10 → 10 | Already passing on both |
| Freshness | 0 → 5 | 0 → 5 | Single visible <time datetime> tag |
| Entity Clarity | 5 → 10 | 3 → 10 | We-are opener + /about page + Organization.founder |
| Off-site Authority | 2 → 3 | 1 → 3 | Wikipedia link added to footer |
| Total | 65 → 93 | 59 → 93 | +28 and +34 respectively |
What didn't move (and why that's honest)
Two dimensions plateaued at partial credit on both sites. The Auto-fix playbook explicitly marks them as “skipped” rather than fabricating a fix:
- Citation Density stuck at 15/20. The scanner awards the full 20 only when authoritative citations exceed 3 per 500 words and stats exceed 2 per 500 words. Adding 10 more inline cites also adds ~300 words — density barely moves. Breaking above the threshold requires aggressive prose compression, which harms readability. We accepted 15.
- Off-site Authority stuck at 3/5. The scanner checks for header/footer links to wikipedia, reddit, youtube, g2, linkedin, github. Both sites now hit 3 of 6. The remaining 2 points need real channels on YouTube, Reddit, or G2 — creating fake handles would be dishonest, so the playbook surfaces it as a human-action item instead.
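To make the density ceiling concrete, here is a toy model of the Citation Density band. Only the full-credit rule (more than 3 authoritative citations and more than 2 stats per 500 words) comes from the description above; the partial bands are assumptions for illustration:

```typescript
// Toy model of the Citation Density band, inferred from the description.
// Only the full-credit thresholds are from the post; the lower bands
// are assumptions.
function citationDensityScore(words: number, citations: number, stats: number): number {
  const per500 = (n: number) => (n / words) * 500;
  if (per500(citations) > 3 && per500(stats) > 2) return 20; // full credit
  if (per500(citations) >= 1) return 15; // the plateau both sites hit
  return 10;
}

// Why adding citations barely moves density: 10 extra inline cites bring
// ~300 extra words of prose, so the per-500-word ratio climbs much more
// slowly than the raw citation count.
```

Run against round numbers, the model shows the trap: a page can keep adding citations and stay pinned at 15 until the stats density also clears its bar.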
The pipeline, codified
What the agents did by hand in this run is now a shipped endpoint + CLI command on aiso.tools. The loop:
```shell
# 1. Scan any URL (free, no signup)
aiso scan https://your-site.com
# → scanId returned

# 2. Pull the Auto-fix playbook
aiso fix <scanId> --claude-code > playbook.md
# → markdown with patch prompts + file hints + projected score

# 3. Feed it to Claude Code in the target repo
claude < playbook.md
# → Claude reads the playbook, applies each patch, commits

# 4. Re-scan to confirm
aiso scan https://your-site.com
# → new scanId, higher score
```
The playbook is structured JSON too — `GET /api/auto-fix/<scanId>` returns every fix with a stable ID, severity, projected gain, file hints (glob + grep + framework-specific notes), and a ready-to-paste patch prompt. Wire it into any agent harness, not just Claude Code.
Takeaways
- AI-search scoring responds to specific, narrow changes. +28 points came from 6 edits on agntdot.com. Three of those edits were single-file, single-line. The rubric isn't mystical; it's a checklist.
- The scanner and the fixer are the same model of the world. Every dimension the scanner checks has a template in `lib/fixes/prompts.ts`, and every template was written from reading the scanner source. That tight coupling is the reason the Auto-fix playbook doesn't hallucinate solutions.
- Parallel AI agents don't conflict when scopes don't overlap. Codex rewrote `aiso/**` while Claude Code rewrote `agnt-pwa/**`. Zero merge conflicts, zero coordination required. File-scope isolation is the lightweight alternative to heavyweight multi-agent orchestration frameworks.
- Plateaus are the truth signal. Both sites stopped at 93 for the same reason (citation-density threshold + missing social channels). The ceiling tells us where real product work is needed, not where the scoring rubric is flawed.
- Dogfood first. If AISO couldn't fix aiso, we'd have no business selling it. The loop ran on us before it ran for anyone else.
Frequently asked questions
How long did each run take?
agntdot.com went from 65 to 93 in about 30 minutes of agent-driven edits. aiso.tools went from 59 to 80 via a Codex hardening pass (robots.ts + sitemap.ts + manifest + llms.txt + SEO metadata + /about page + regression tests), then from 80 to 93 in a second pass of the same Auto-fix playbook. Total wall clock for both sites: under 2 hours.
Were both agents running literally in parallel?
Yes. One session of Claude Code was editing agnt-pwa, a separate Codex session was editing aiso itself. Neither knew the other existed until both sessions reported back. Because the file scopes did not overlap (`aiso/**` vs `agnt-pwa/**`), there were zero merge conflicts.
Is this honest? Did you game the rubric?
Every citation points at a real authoritative source. Every schema block is valid JSON-LD. Every claim is either measured (latencies, venue count) or correctly cited (Princeton GEO paper, W3C specs, Wikipedia standards references). The scanner rewards genuinely AI-readable sites — making your site genuinely AI-readable moves the score. Game-the-rubric tactics (fake stats, link farms, invalid schema) would drop the score, not raise it.
Why didn't both sites hit 100?
Two reasons, both honest. (1) Citation Density is capped at 15/20 when density is between 1 and 3 authoritative links per 500 words — pushing above 3/500w would require shortening the prose aggressively or adding 15+ more citations, neither of which passes editorial review. (2) Off-site Authority expects linked channels on YouTube, Reddit, G2 — we have not created those yet. The 7 points missing are infrastructure we haven't built, not gaps in the code.
Can I do this on my site?
Yes. Scan at aiso.tools, run `aiso fix <scanId> --claude-code > playbook.md` with the AISO CLI, pipe the output into Claude Code in your repo. The same 4-step loop runs on any Next.js / Vite / Astro / plain-HTML site.