Writing a Blog Pre- and Post-OpenClaw
Quality as a Matter of Context and Drift Control
What Is OpenClaw?
OpenClaw is an open-source, self-hosted AI agent launched in 2026 that runs on your own hardware and connects to large language models to perform real tasks — on your computer and across your online accounts. It lives inside chat apps you already use (WhatsApp, Telegram, Discord, Slack, Signal, iMessage) and from there it can run shell commands, control a browser, read and write local files, manage email and calendars, and trigger automations from natural-language instructions — all while keeping memory and configuration stored locally on your machine.
You can install OpenClaw with the command:
curl -fsSL https://openclaw.ai/install.sh | bash
This installs Node.js if needed, pulls OpenClaw, and starts an interactive setup wizard with defaults. During setup you choose whether to run locally or on a remote server, configure an LLM provider (Anthropic, OpenAI, or a local model runtime), and link at least one chat channel so you can talk to your agent from any device. Once configured, OpenClaw runs as a long-lived Node.js service — a gateway between chat apps, models, and your local system.
Or, get OpenClaw in Companion Hub to access your OpenClaw agent remotely.
Abstract
Longform content remains essential for discovery, but its production is burdened by a hidden cost: the constant reconstruction of context. As product details evolve and compliance requirements shift, teams accumulate “context debt,” leading to inconsistent outputs and slow review cycles.
This post explores how OpenClaw, a stateful, agentic workflow system, automates blog production by preserving and validating context across the pipeline. By embedding memory and quality controls into the process, OpenClaw reduces the true cost of content—alignment—while enabling faster, more reliable publishing. The essay itself is generated using this system, serving as both explanation and proof-of-concept.
ContentLoop, the system we developed to write the following post, is a local-first pipeline pattern that separates governance (control plane) from outputs (data plane), then uses a recursive "salvage" mechanism so the system improves itself after each run. ContentLoop is implemented as machine-readable job definitions (jobPacket), conditional routing rules, an artifact registry, structural validators, and two explicit gates (CP1/CP2). The result is a publish process that is auditable, reproducible, and harder to break, while remaining dynamic enough to handle different content types, risk levels, and reader audiences without a brittle SOP maze.
1. The Problem: Content Is a Context Reconstruction Tax
Well-researched content is not just a writing task. It is a chain of hidden work: re-reading brand constraints, checking what legal approved last quarter, validating product specs and version numbers, confirming a statistic's recency and source quality, aligning on a single angle across stakeholders, packaging outputs for multiple channels, and ensuring claims are defensible enough to publish.
On lean teams, these steps don't disappear. They get squeezed into nights, context-switching, and "good enough" decisions that quietly compound into contradictions and rework.
1.1 Context Debt Defined
Context debt is the accumulation of repeated, manual reassembly work required to produce outputs safely. It grows when:
- source-of-truth documents aren't versioned or precisely referenced
- "what must be true" is implied rather than encoded
- teams rely on chat tools that blur cycles and forget prior decisions
- artifacts exist but lack required substance (present-but-wrong)
1.2 Observable Failure Modes
Claim drift. A claim persists after its supporting facts changed.
Recency rot. A stat is accurate historically but outdated for current decisions.
Boundary collapse. Sensitive context gets copied into the wrong place — public drafts, vendor tools, external channels — without a defined scope or trust boundary.
Review spirals. Stakeholders repeatedly request "missing" items: sources, benchmark methods, limitations, approvals. These are almost always context debt showing up late.
Packaging inconsistency. The blog post is clean, but the social copy or metadata artifacts don't match.
2. Where This Started: The 23-Step Pipeline
Before ContentLoop, the production workflow for a single piece of content looked like this:
- Brainstorm ideas
- Stakeholder approval of ideas
- Research the concept in Perplexity
- Gather specs from sources
- Reddit research on community opinion
- Determine appropriate persona to write for
- ChatGPT outline
- Perplexity research on each outlined area
- Incorporate research in ChatGPT
- Determine SEO needs
- Draft rough draft
- Review and critique as brand ambassador persona
- Collect additional brand info or research
- Edit draft in Word — add new research, open a new context window
- Write another draft with SEO incorporated
- Critique as brand ambassador
- Critique as user persona
- Move to Grammarly to edit copy
- ChatGPT to create prompts for visuals
- MidJourney to create visuals
- Adobe Photoshop to apply brand filters to images
- ChatGPT to convert text to .md
- Post in WordPress
- Generate platform-specific posts
- Share to socials
Tools involved: ChatGPT, Perplexity, MidJourney, Stable Diffusion, Adobe Photoshop, Grammarly, Google Drive, WordPress, Yoast SEO — at minimum nine separate platforms, each with its own context window, requiring manual handoffs between every step.
The real cost wasn't any individual step. It was the context reconstruction tax paid at every boundary: re-uploading brand docs, re-explaining the persona, re-grounding the claims. Each tool forgot everything the last one knew.
ContentLoop collapses this to two explicit gates — CP1 and CP2 — with a single persistent workspace running underneath. The steps don't disappear; they get encoded into governance objects that the system reads rather than humans re-invent.
3. Thesis: Treat Content Like a Governed Build, Not a Conversation
If content is a governed build, then:
- each "job" has a definition (inputs, outputs, constraints)
- requirements are computed by routing rules, not remembered by people
- artifacts are validated (structure and minimal truth) before review
- releases are gated at CP1 and CP2
- learnings are salvaged back into the system after each cycle
This is a software pattern more than a writing pattern: version the process, not just the prose.
4. ContentLoop Model: Context Has Half-Life
A major reason pipelines break is treating all context as a single blob. ContentLoop assumes context has different half-lives and must be handled accordingly.
4.1 Four Context Classes
Ephemeral context — high-churn inputs that are true now but not worth persisting past this run. Examples: "don't mention X this week," last-minute deadline changes, ad-hoc angle tweaks, pasted snippets. Rule: capture as ephemeralOverrides in the job, then expire at close. Failure mode: one-off constraints get promoted to "truth" and the pipeline behaves irrationally later.
Situational context — job-specific state that must persist across revision loops but should not become global policy. Examples: research brief, outline, claim inventory, routing results, drafts, council scorecards, salvage pack. Rule: treat as source of truth for this cycle, then archive with the job record.
Static context — operationally stable facts and assets that change occasionally and must stay correct. Examples: product specs and pricing, approved language blocks, packaging templates, supported toolchain versions. Rule: version and reference by ID in the job (e.g., productSpecVersion: 2026-02-15).
Stationary context — long-lived identity-level truth that governs the system. Examples: brand pillars, ICP and personas, voice constraints, "what we will not claim," governance standards. Rule: preload as the highest constraint layer; stationary context overrides situational preferences when they conflict.
This is the first hardening move: it reduces drift by forcing where truth lives and how it gets updated.
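To make the half-lives concrete, here is a minimal sketch in TypeScript. The names (ContextEntry, PRECEDENCE, the helper functions) are illustrative assumptions, not the production objects; the point is that precedence and expiry become machine-enforced rather than implied.

// A minimal sketch, not the production objects: the four context classes
// with explicit precedence and expiry so conflicts resolve mechanically.
type ContextClass = "ephemeral" | "situational" | "static" | "stationary";

interface ContextEntry {
  id: string;
  class: ContextClass;
  value: string;
  version?: string;        // static context carries a version ID, e.g. "2026-02-15"
}

// Stationary context outranks everything; ephemeral ranks lowest.
const PRECEDENCE: Record<ContextClass, number> = {
  stationary: 3,
  static: 2,
  situational: 1,
  ephemeral: 0,
};

// When two entries conflict, the higher-precedence class wins.
function resolveConflict(a: ContextEntry, b: ContextEntry): ContextEntry {
  return PRECEDENCE[a.class] >= PRECEDENCE[b.class] ? a : b;
}

// At job close, ephemeral overrides expire; everything else survives the run
// (situational context is archived with the job record, not deleted).
function expireAtClose(entries: ContextEntry[]): ContextEntry[] {
  return entries.filter((e) => e.class !== "ephemeral");
}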
5. Architecture: Control Plane vs. Data Plane (Recursive by Design)
ContentLoop is a control-plane / data-plane pattern.
The control plane holds governance objects that decide what must be true and what must ship: jobPacket, baselineRules, routingRules, artifactRegistry, structuralValidators, and CP1/CP2 gating policies.
The data plane holds outputs produced under those constraints: drafts, HTML, metadata, social packs, QA checklists, benchmarks, threat models, limitations packets.
5.1 Why Recursive?
Each cycle produces salvage: new templates, updated routing rules, clarified constraints, better validators, refined packaging patterns. This salvage feeds back into the control plane, so the next run starts with fewer missing artifacts, fewer ambiguous decisions, less review churn, and lower risk at the same speed.
The full sequence:
> Intake → jobPacket → routing → artifact checklist → CP1 → parallel lanes → council (bounded) → fix pass → CP2 → publish bundle + QA → retro + salvage → rules and templates update
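As a sketch of that sequence, assuming TypeScript and hypothetical stage names, the run can be modeled as an ordered stage list in which only the two gates may halt forward motion:

// Hypothetical stage model: the pipeline is an ordered list, and only the
// two gates (CP1, CP2) may halt the run.
type Stage =
  | "intake" | "jobPacket" | "routing" | "artifactChecklist"
  | "cp1" | "parallelLanes" | "council" | "fixPass"
  | "cp2" | "publishBundle" | "retroSalvage" | "rulesUpdate";

type Outcome = "approve" | "revise" | "kill" | "done";

const PIPELINE: Stage[] = [
  "intake", "jobPacket", "routing", "artifactChecklist",
  "cp1", "parallelLanes", "council", "fixPass",
  "cp2", "publishBundle", "retroSalvage", "rulesUpdate",
];

const GATES = new Set<Stage>(["cp1", "cp2"]);

async function runPipeline(run: (s: Stage) => Promise<Outcome>) {
  for (const stage of PIPELINE) {
    const outcome = await run(stage);
    // A gate that does not approve stops forward motion; everything else flows.
    if (GATES.has(stage) && outcome !== "approve") {
      return { haltedAt: stage as Stage | null, outcome };
    }
  }
  return { haltedAt: null as Stage | null, outcome: "done" as Outcome };
}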
6. Specialized Vocabulary
These terms have precise meanings inside ContentLoop. They are defined here rather than assumed.
6.1 Lanes — Specialized Agent Branches
A lane is a specialized agent branch that evaluates the draft from one functional perspective before the council consolidates scores. Lanes run in parallel after CP1 is approved, so their feedback arrives at the same time rather than sequentially — a structural choice that compresses the review cycle.
The default lane configuration from the Content Ops Runbook:
Operations (Orchestrator). Owns workflow integrity and artifact completeness. Confirms the artifactChecklist is satisfied before council convenes. This lane cannot be scored out; it is a prerequisite.
Compliance. Pre-lints claims at outline stage (CP1) and runs the final compliance gate at CP2. Inputs: claim inventory and draft. Outputs: compliance flags and required mitigations. Owns the privacySecurity, regulatory, and boundary-collapse failure modes.
Development (Technical). Evaluates feasibility and technical accuracy on product and infrastructure claims. If a post says "runs locally without egress," this lane confirms it. Outputs: corrections and constraints flagged to the draft.
Marketing. Evaluates channel fit, CTA alignment, hook strength, and packaging for each target channel. Outputs: social copy, metadata, and packaging recommendations.
Council (5-seat). The multi-dimensional critique and scored gate (see 6.2). The council is distinct from the lanes — it receives all lane outputs plus the full draft and produces a consolidated readiness signal.
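A minimal sketch of the lane fan-out, in TypeScript with hypothetical names: the Operations lane runs first as a prerequisite, and the remaining lanes run concurrently so their feedback lands at the same time.

// Hypothetical lane fan-out: Operations is a prerequisite, not a scored seat;
// the remaining lanes run concurrently.
interface LaneReport {
  lane: string;
  flags: string[];   // required mitigations, corrections, packaging notes
}

type Lane = (draft: string) => Promise<LaneReport>;

async function runLanes(
  draft: string,
  lanes: { operations: Lane; compliance: Lane; development: Lane; marketing: Lane }
): Promise<LaneReport[]> {
  // Operations confirms the artifactChecklist is satisfied before anything else runs.
  const ops = await lanes.operations(draft);
  if (ops.flags.length > 0) {
    throw new Error("artifactChecklist unsatisfied: " + ops.flags.join(", "));
  }
  // The scored lanes run in parallel so feedback arrives at the same time.
  return Promise.all([
    lanes.compliance(draft),
    lanes.development(draft),
    lanes.marketing(draft),
  ]);
}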
6.2 Council — Scored Multi-Agent Critique Loop
The council is a five-seat scored critique loop whose output is a readiness signal, not a rewrite. Each seat evaluates the draft along one dimension and assigns a score toward the 9/9/9 quality bar (Audience / Information / Compliance) required for publish.
Council seats correspond to roles defined in the RACI: Evidence Lead, Compliance Lead, Operator Lead, Differentiator Lead, Craft Lead. Each seat has a system prompt encoding its evaluation criteria, operates independently from the others, and outputs a score plus a targeted fix list for any failing dimension. The Ops Manager then runs only the failing seats on a targeted rerun rather than convening the full council again — the default loop budget is one full council pass plus one targeted rerun of failing dimensions only.
This is not a human review board. It is a structured multi-agent loop with hard iteration limits: maximum two revision loops, then kill or request exec override.
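A sketch of that bounded loop, assuming TypeScript and hypothetical names: one full pass, at most one targeted rerun of failing seats, then a hard stop with a readiness signal.

// Hypothetical council runner: one full pass, at most one targeted rerun of
// failing seats, then a hard stop with a readiness signal.
interface SeatResult { seat: string; score: number; fixes: string[] }
type Seat = (draft: string) => Promise<SeatResult>;

const PASS = 9;          // the 9/9/9 bar, applied per seat here for brevity
const MAX_RERUNS = 1;    // default budget: full pass + one targeted rerun

async function runCouncil(
  seats: Seat[],
  draft: string,
  applyFixes: (d: string, fixes: string[]) => Promise<string>
) {
  let results = await Promise.all(seats.map((s) => s(draft)));
  for (let rerun = 0; rerun < MAX_RERUNS; rerun++) {
    const failingIdx = results
      .map((r, i) => (r.score < PASS ? i : -1))
      .filter((i) => i >= 0);
    if (failingIdx.length === 0) break;
    // One fix pass, then rerun only the failing seats; passing scores are kept.
    draft = await applyFixes(draft, failingIdx.flatMap((i) => results[i].fixes));
    const rerunResults = await Promise.all(failingIdx.map((i) => seats[i](draft)));
    failingIdx.forEach((i, k) => { results[i] = rerunResults[k]; });
  }
  const ready = results.every((r) => r.score >= PASS);
  return { ready, results, draft };   // not ready after budget: kill or exec override
}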
6.3 The 9/9/9 Quality Bar
The required quality bar for public publishing is 9/9/9 across three dimensions:
- Audience (9): The post serves its stated persona and reader level. Friction is resolved at outline, not during council.
- Information (9): Claims are accurate, sourced, and recency-compliant. No present-but-wrong artifacts pass this gate.
- Compliance (9): No boundary collapse, no unverified comparatives, no absolute language without a defined scope.
A post scoring below 9 on any dimension cannot pass CP2 without exec override.
7. SEO as a Governance Layer, Not an Afterthought
In the 23-step pipeline, SEO was step 10 — after brainstorming, research, persona assignment, outlining, and a full research pass. By that point, keyword targeting, topical authority decisions, and reader-level alignment were either retrofitted into a draft already written for a different shape, or addressed by a separate Yoast pass at the end that couldn't change the structure.
ContentLoop moves SEO into the jobPacket at intake. It is part of the routing matrix, not a post-draft checklist.
7.1 How SEO Is Encoded in the Pipeline
The brand ambassador and persona rules define the SEO strategy. Rather than treating keyword targeting as a separate technical task, ContentLoop encodes SEO intent directly into the persona system. The brand ambassador persona carries the topical authority map: which content pillars the brand owns, which reader levels to target, and which search intents each content type is meant to satisfy. The audience persona rules define reader technicality, vocabulary level, and the types of questions the post must answer to rank for informational queries.
These personas were developed and tracked separately with the marketing team, tested against performance data, and refined across cycles. Crucially, the refinement pattern was mostly subtractive: personas improved more through constraint than addition. The most significant updates were negations — removing language, topics, and claim types that caused audience mismatch or diluted topical focus. This is a useful design principle for anyone building persona prompts: if your persona keeps producing content that misses its target, the problem is usually something it's including, not something it's missing.
Content pillars map to search intent. The pipeline uses three pillar types:
- Problem-connection posts target awareness-stage queries — readers who know they have a problem but haven't named it yet. These are typically informational, long-tail, and persona-David or persona-Sophia audience level. SEO emphasis: semantic coverage of the problem space, internal links to solution posts.
- Solution-generator posts target consideration-stage queries — readers evaluating approaches. These carry the heaviest claim burden (benchmarks, comparisons, limitations) and the most routing flags. SEO emphasis: topical authority signals, structured data, FAQ coverage.
- Product-for-the-problem posts target decision-stage queries. CTA alignment is a CP1 criterion here, not an afterthought. SEO emphasis: conversion-intent keywords, product mention policy compliance.
Reader level is a routing signal. The routing matrix accounts for reader technicality. A post targeting a technical audience (persona AC) triggers the Development lane's benchmark requirements. A post targeting a practitioner audience (persona David) triggers the Marketing lane's packaging and Monday workflow checklist. SEO keyword selection follows the same branching logic: technical posts target precise tooling and architecture terms; practitioner posts target outcome and workflow terms.
Yoast as a render-QA step, not the SEO strategy. In ContentLoop, Yoast operates at the renderQaMd stage — it validates that the HTML output meets on-page technical requirements (meta description length, heading structure, keyword density checks) after the strategic SEO decisions have already been made and locked in the jobPacket. This prevents the common failure mode where Yoast's readability suggestions conflict with deliberate technical vocabulary choices for a technical audience.
7.2 SEO Fields in the jobPacket
The jobPacket carries SEO intent at intake:
{
"pillar": "authority",
"personaPrimary": "david",
"personaSecondary": "sophia",
"channelTargets": ["blog", "linkedIn"],
"thesis": "Most teams don't have a writing problem; they have a context reconstruction problem.",
"decisionOutcome": "Reader can decide whether to pilot a local-first agent pipeline and what proof artifacts are required.",
"pillarMix": { "productPercent": 8 }
}
pillar maps to the three content pillar types and drives which SEO intent the post must satisfy. personaPrimary drives reader level, vocabulary, and keyword class. pillarMix.productPercent enforces the product mention policy to avoid accidentally drifting a solution post into a product post, which would misalign with its target search intent.
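As an illustration of how pillarMix.productPercent could be enforced mechanically (a sketch with hypothetical names, not the production validator), a checker can compare the share of product-mentioning paragraphs against the budget in the jobPacket:

// Hypothetical checker for the product mention policy: compares the share of
// product-mentioning paragraphs against pillarMix.productPercent.
function productMentionPercent(draft: string, productNames: string[]): number {
  const paragraphs = draft.split(/\n\s*\n/).filter((p) => p.trim().length > 0);
  if (paragraphs.length === 0) return 0;
  const mentioning = paragraphs.filter((p) =>
    productNames.some((name) => p.toLowerCase().includes(name.toLowerCase()))
  );
  return (mentioning.length / paragraphs.length) * 100;
}

function checkPillarMix(draft: string, productNames: string[], budgetPercent: number) {
  const actual = productMentionPercent(draft, productNames);
  return actual <= budgetPercent
    ? { pass: true, actual }
    : { pass: false, actual, note: "solution post drifting toward product post" };
}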
8. The Core Object: jobPacket
ContentLoop becomes automatable when each post is a machine-readable job, not a narrative humans reinterpret each cycle. A jobPacket captures:
- Identity: jobId, title, pillar, personas, channel targets
- Intent: thesis, decision outcome, CTA
- Claims inventory: claim types with source requirements and recency rules
- Risk flags: privacySecurity, regulatory, competitive, performance
- Routing outputs: which artifacts this job must produce, computed from the routing matrix
- Gate status: CP1/CP2 state, loop count, council score
The jobPacket is the single file the system trusts for the run. Everything downstream reads from it.
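A minimal type sketch of the jobPacket in TypeScript, with field names taken from Appendix A (the councilScore tuple shape is an assumption):

// Type sketch of the jobPacket; field names follow Appendix A.
interface Claim {
  claimId: string;
  claim: string;
  type: string;                  // e.g. "comparativePerformance"
  sourceRequirement: string;
  recencyRule: string;
}

type GateState = "notStarted" | "draft" | "approve" | "revise" | "kill";

interface JobPacket {
  jobId: string;
  title: string;
  pillar: string;
  personaPrimary: string;
  personaSecondary?: string;
  channelTargets: string[];
  thesis: string;
  decisionOutcome: string;
  cta: string;
  claimInventory: Claim[];
  riskFlags: Record<string, boolean>;
  gateStatus: {
    cp1: GateState;
    cp2: GateState;
    loopCount: number;
    councilScore: [number, number, number] | null;   // audience / information / compliance
  };
}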
9. Dynamic Routing: "If X Then Y"
Static SOPs fail because different posts have different burdens of proof. ContentLoop encodes a routing matrix so requirements are computed, not remembered:
- Comparative performance claim → benchmark report required before CP2
- Privacy or security language → threat model and trust boundary required
- Market statistics → named sources with ≤12 months recency
- Tooling version claims → named sources with ≤6 months recency
- Persona David → Monday workflow packaging and maintenance checklist
- Persona AC → tested vs. planned distinction, benchmark method required
Routing is what keeps the pipeline dynamic without becoming chaotic. Every post gets the right rigor for what it actually claims — not a uniform SOP applied regardless of risk level.
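A sketch of the routing pass itself, in TypeScript with a deliberately simplified matcher (the production dialect in Appendices B and C also supports anyOf and dotted field paths):

// Simplified routing pass: rules fire on claim types or risk flags and emit
// required artifacts. Requirements are computed, never remembered.
interface JobLite {
  claimInventory: Array<{ type: string }>;
  riskFlags: Record<string, boolean>;
}

interface RoutingRule {
  ruleId: string;
  priority: number;
  when: { always?: boolean; claimType?: string; riskFlag?: string };
  then: { requireArtifacts: string[]; cpBlocker?: boolean };
}

function computeRequirements(job: JobLite, rules: RoutingRule[]) {
  const artifacts = new Set<string>();
  const blockers = new Set<string>();
  for (const rule of [...rules].sort((a, b) => b.priority - a.priority)) {
    const fires =
      rule.when.always === true ||
      (rule.when.claimType !== undefined &&
        job.claimInventory.some((c) => c.type === rule.when.claimType)) ||
      (rule.when.riskFlag !== undefined && job.riskFlags[rule.when.riskFlag] === true);
    if (!fires) continue;
    for (const key of rule.then.requireArtifacts) {
      artifacts.add(key);
      if (rule.then.cpBlocker) blockers.add(key);
    }
  }
  return { artifacts, blockers };
}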
10. Structural Validators: Guardrails Against Empty Shells
Templates prevent forgetting structure. Validators prevent the worse failure mode: files that exist but contain no required truth.
Validators catch these before review:
- claim inventory missing required columns
- benchmark report created but containing neither a pending status with a due date nor a completed results table
- CP1/CP2 packet missing an explicit decision line (approve | revise | kill)
- render QA missing pass/fail summary or checklist items
This is what prevents review cycles from devolving into hunting. "Missing artifacts" is almost always a validator gap — the artifact exists, but the validator wasn't there to catch that it was empty.
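A minimal validator runner, sketched in TypeScript against the shape in Appendix D (the file-reading and matching logic are illustrative assumptions):

// Sketch of a substance validator: existence is not enough, the content must
// match at least one required pattern (shape follows Appendix D).
import { readFileSync } from "node:fs";

interface Validator {
  validatorId: string;
  target: string;                // e.g. "benchmarkReportMd#resultsOrPending"
  mustMatchAny: Array<{ containsAll?: string[]; regex?: string }>;
  severity: "error" | "warn";
}

function validateFile(v: Validator, path: string) {
  let content: string;
  try {
    content = readFileSync(path, "utf8");
  } catch {
    // A missing file fails exactly like an empty shell does.
    return { pass: false, blocking: v.severity === "error" };
  }
  const pass = v.mustMatchAny.some((m) =>
    m.containsAll
      ? m.containsAll.every((s) => content.includes(s))
      : m.regex
        ? new RegExp(m.regex).test(content)
        : false
  );
  return { pass, blocking: !pass && v.severity === "error" };
}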
11. Gates: CP1 and CP2
11.1 CP1 — Outline Gate (Before Heavy Writing)
CP1 locks the thesis and scope before drafting begins. A post that fails CP1 is cheaper to kill than a post that fails CP2.
CP1 confirms: thesis and audience fit, channel packaging plan and hook strength, a claim plan (every non-trivial claim has a source plan or is marked opinion), at least one original framework or differentiator per post, and named limitations with mitigations.
The decision line is required and machine-validated: CP1 Decision: approve | revise | kill.
Maximum two CP1 iterations. If the post cannot clear CP1 after two passes, it is killed or re-scoped — not pushed to drafting.
11.2 CP2 — Publish Gate (Before Release)
CP2 confirms that the post can ship. Required artifacts must exist and pass structural validation. Claims must match sources and recency rules. Packaging must be complete across all channel targets. Render QA must pass (HTML, links, formatting, metadata, accessibility). The council score must reach 9/9/9.
The decision line is required: CP2 Decision: approve | revise | kill.
Maximum two CP2 revision loops. Beyond that, exec override is required or the post is killed with salvage outputs captured.
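A sketch of the gate mechanics in TypeScript, mirroring the Appendix G regex and the two-loop budget (function names are hypothetical):

// Sketch of the gate mechanics: parse the decision line (same regex family
// as Appendix G) and enforce the two-loop budget mechanically.
type Decision = "approve" | "revise" | "kill";
const MAX_LOOPS = 2;

function readDecision(packet: string, gate: "CP1" | "CP2"): Decision | null {
  const m = packet.match(new RegExp(gate + " Decision:\\s*(approve|revise|kill)"));
  return m ? (m[1] as Decision) : null;
}

function advanceGate(packet: string, gate: "CP1" | "CP2", loopCount: number) {
  const decision = readDecision(packet, gate);
  if (decision === "approve") return "proceed";
  if (decision === "kill") return "killed";
  // "revise" (or a missing decision line) consumes a loop if budget remains.
  return loopCount < MAX_LOOPS ? "loop" : "killOrExecOverride";
}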
12. Why Local-First Matters (for Builders)
Recursive governance depends on persistent state: templates, rules, job packets, decision logs, inventories, validated artifacts. Chat-only workflows encourage context re-uploading and blur cycle boundaries. Local-first improves:
- Reproducibility: jobs can be re-run deterministically
- Auditability: decisions and proofs are stored as artifacts alongside the content
- Versioning: governance evolves under git — you can diff the routing rules between cycle 1 and cycle 10
- Security: trust boundaries are definable and enforceable; you choose where sensitive context is allowed to go
Local-first doesn't mean no risk. It means risk is choosable.
13. Who This Is For
Pilot ContentLoop if you:
- reuse sensitive context (strategy, pricing, proprietary ops)
- care about claims, recency, and auditability
- can support light hygiene (templates and validators)
- want a system that improves itself across cycles
Don't pilot yet if you:
- need zero maintenance
- can't support secrets handling or local ops discipline
- rely on frontier-only models for most work and can't define hybrid trust boundaries
14. What Changes When You Run the Loop
Reduces:
- manual context reassembly at every cycle boundary
- missing artifacts reaching review
- review churn caused by missing governance objects
- overclaim risk reaching publish
- packaging inconsistency across channels
Adds (intentionally):
- two named checkpoints with explicit decision lines
- a bounded critique loop with hard iteration limits
- repeatable, machine-checkable artifacts
- a salvage mechanism that feeds learnings back into the next run
It is less "process for process's sake" and more "make the hidden costs explicit and put them where they can be managed."
Conclusion
ContentLoop turns content production into a fast, governed, repeatable build: job-defined, rule-routed, validator-checked, and gate-released. The 23-step pipeline didn't have too many steps because the work was wrong. It had too many steps because each tool forgot what the last one knew. A human was required at every handoff to carry the context from platform to platform and keep it consistent and coherent. ContentLoop encodes that knowledge into a control plane that persists across cycles, so the system accumulates learning instead of resetting it.
For open-source builders, the pattern is portable. It is a small set of file-based primitives that can run locally, version cleanly, and scale across projects. The appendix below contains working code examples for each governance object — not reference schemas, but the actual files used in production. Each example is annotated to explain what it controls and why the structure matters.
Of course, we haven't shared the final form here, only the basic primitives as they first took shape. Consider these files half-developed, waiting for your own special sauce, topical focus, and fine-tuning.
If you want to ship content faster and safer, don't just generate more text. Track your human habits and patterns. Harden the loop.
Appendix: Code Examples
The following examples are actual governance objects used in the ContentLoop implementation. They are intentionally minimal — in production, these are split across /rules, /registry, /validators, and per-job folders. Each example is annotated below to explain its purpose and what breaks if it is missing or malformed.
In practice, you'll start with a single flat folder and split as the job volume grows. The important constraint is that the artifactChecklist.json (Appendix H) is always generated fresh per job by applying baselineRules + routingRules + artifactRegistry, and it is never written by hand.
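A sketch of that generation step, in TypeScript with hypothetical names: the routing pass has already decided which rules fired, and the generator resolves artifact keys through the registry and writes the checklist.

// Sketch of checklist generation: fired routing rules + registry in, flat
// checklist out. The job-folder layout is an assumption.
import { mkdirSync, writeFileSync } from "node:fs";

interface RegistryEntry { filename: string; sections: string[] }
interface FiredRule { requireArtifacts: string[]; cpBlocker?: boolean }

function generateChecklist(
  jobId: string,
  firedRules: FiredRule[],                       // output of the routing pass
  registry: Record<string, RegistryEntry>
) {
  const required = new Map<string, RegistryEntry>();
  const cpBlockers = new Set<string>();
  for (const rule of firedRules) {
    for (const key of rule.requireArtifacts) {
      const entry = registry[key];
      if (!entry) throw new Error("artifact key not in registry: " + key);
      required.set(key, entry);                  // the registry makes the key executable
      if (rule.cpBlocker) cpBlockers.add(key);
    }
  }
  const checklist = {
    jobId,
    requiredArtifacts: [...required].map(([key, e]) => ({
      key, filename: e.filename, sections: e.sections,
    })),
    cpBlockers: [...cpBlockers],
  };
  mkdirSync("jobs/" + jobId, { recursive: true });
  writeFileSync("jobs/" + jobId + "/artifactChecklist.json", JSON.stringify(checklist, null, 2));
  return checklist;
}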
Appendix A — jobPacket.json (Minimal Example)
The jobPacket is the control-plane root object for one content job. It encodes intent, claim types, risk flags, and gate status in a single machine-readable file. Every downstream governance object reads from it. Without a valid jobPacket, routing cannot compute which artifacts are required — which is why baseline.alwaysRequireJobPacket is the highest-priority baseline rule.
{
"jobId": "ci-001-openclaw-contentloop",
"title": "Local-First Agent Pipelines for Content Governance",
"pillar": "authority",
"personaPrimary": "david",
"personaSecondary": "sophia",
"channelTargets": ["blog", "linkedIn"],
"thesis": "Most teams don't have a writing problem; they have a context reconstruction problem.",
"decisionOutcome": "Reader can decide whether to pilot a local-first agent pipeline and what proof artifacts are required.",
"cta": "Download the templates + run one pilot cycle",
"claimInventory": [
{
"claimId": "c1",
"claim": "Comparative performance claims require benchmarks before publish.",
"type": "comparativePerformance",
"sourceRequirement": "internalBenchmarkOrThirdParty",
"recencyRule": "methodCurrent"
}
],
"riskFlags": {
"privacySecurity": true,
"regulatory": false,
"competitive": false,
"performance": true
},
"gateStatus": {
"cp1": "draft",
"cp2": "notStarted",
"loopCount": 0,
"councilScore": null
}
}
Appendix B — routingRules.json (Comparative Performance → Benchmark Required)
Routing rules translate claim types and risk flags into required artifact lists. This rule fires whenever any claim in the claimInventory is typed as comparativePerformance. The cpBlocker: true flag means the benchmark artifact must exist and pass validation before CP2 can approve. Without this rule, a post could ship a comparative claim with no supporting evidence — one of the most common credibility failures in technical content.
{
"ruleId": "perf.comparativeRequiresBenchmark",
"priority": 100,
"when": {
"anyClaim": { "field": "type", "equals": "comparativePerformance" }
},
"then": {
"requireArtifacts": ["benchmarkReportMd"],
"cpBlocker": true,
"notes": "Comparative performance claims require a benchmark artifact before CP2."
}
}
Appendix C — routingRules.json (Privacy/Security → Threat Model Required)
This rule fires on either a riskFlags.privacySecurity: true value or any claim typed as privacySecurity. It requires both a threat model artifact and a residual risk section in the limitations document. The requireSections field is the key addition here — it prevents a threat model that exists but is empty, which is the "present-but-wrong" failure mode that structural validators catch at the artifact level.
{
"ruleId": "risk.privacySecurityRequiresThreatModel",
"priority": 95,
"when": {
"anyOf": [
{ "field": "riskFlags.privacySecurity", "equals": true },
{ "anyClaim": { "field": "type", "equals": "privacySecurity" } }
]
},
"then": {
"requireArtifacts": ["threatModelMd"],
"requireSections": [
"threatModelMd#trustBoundary",
"limitationsMd#privacyResidualRisk"
],
"cpBlocker": true
}
}
Appendix D — structuralValidators.json (Benchmark Must Be "Pending" or "Complete")
Validators run against artifact content, not just artifact existence. A benchmark file that contains no results and no pending status is an empty shell — it passes file existence checks but fails the substance check. This validator enforces that the benchmark artifact contains either a pending declaration with a due date, or a completed results table. The severity: "error" means the validator blocks CP2 rather than warning.
{
"validatorId": "benchmarks.resultsOrPending",
"target": "benchmarkReportMd#resultsOrPending",
"mustMatchAny": [
{ "containsAll": ["Status: pending", "DueDate:"] },
{ "containsAll": ["Status: complete", "| metric |", "|"] }
],
"severity": "error",
"why": "Prevents shipping a benchmark file that is present but empty."
}
Appendix E — artifactRegistry.json (Artifact Keys Resolve to Files + Required Sections)
The artifact registry is the lookup table that makes routing rules executable. It translates artifact keys (e.g., benchmarkReportMd) into filenames and required section anchors. Without the registry, routing rules are symbolic — they require artifacts by key but nothing tells the system what file to look for or which headings must exist inside it.
{
"version": "1.2.0",
"registryDialect": "jobPacketArtifacts",
"artifacts": {
"jobPacketMd": {
"filename": "jobPacket.json",
"sections": [
"jobMeta", "intent", "claimInventory",
"riskFlags", "routingOutputs", "gateStatus",
"killThresholds", "salvagePlan"
]
},
"cp1PacketMd": {
"filename": "packets/cp1Packet.md",
"sections": [
"jobMeta", "decision", "thesis", "audienceAndScope",
"outline", "claimIntent", "sourcesPlan", "limitationsPlan",
"channelPackagingPlan", "routingFlags", "artifactChecklistPreview"
]
},
"benchmarkReportMd": {
"filename": "benchmarks.md",
"sections": [
"purpose", "systemUnderTest", "method",
"resultsOrPending", "comparativeClaimAudit"
]
}
}
}
Appendix F — baselineRules.json (Always-On Minimum Contract)
Baseline rules define the minimum viable governance for every job regardless of topic, risk level, or persona. These are not defaults — they are hard floors. The always: true condition means they cannot be routed around. The jobPacket, both checkpoint packets, the claim inventory, limitations, render QA, and named sources are required on every single job. This is what makes the pipeline auditable: there is always a trail.
{
"version": "1.0.0",
"ruleDialect": "jobPacketRouting",
"rules": [
{
"ruleId": "baseline.alwaysRequireJobPacket",
"priority": 1100,
"when": { "always": true },
"then": {
"requireArtifacts": ["jobPacketMd"],
"requireSections": [
"jobPacketMd#jobMeta", "jobPacketMd#intent",
"jobPacketMd#claimInventory", "jobPacketMd#riskFlags",
"jobPacketMd#gateStatus"
],
"requireLanes": ["operations"],
"cpBlocker": true,
"notes": "JobPacket is the control-plane root object; prevents context debt."
}
},
{
"ruleId": "baseline.alwaysRequireCp1Cp2",
"priority": 1000,
"when": { "always": true },
"then": {
"requireArtifacts": ["cp1PacketMd", "cp2PacketMd"],
"requireSections": ["cp1PacketMd#decision", "cp2PacketMd#decision"],
"cpBlocker": true
}
},
{
"ruleId": "baseline.alwaysRequireClaimInventoryLimitationsRenderQaSources",
"priority": 990,
"when": { "always": true },
"then": {
"requireArtifacts": ["claimInventoryMd", "limitationsMd", "renderQaMd", "sourcesMd"],
"requireSections": [
"claimInventoryMd#claimsAll",
"limitationsMd#limitations",
"limitationsMd#residualRisk",
"renderQaMd#passFailSummary",
"sourcesMd#namedSources"
],
"cpBlocker": true
}
}
]
}
Appendix G — Packet Validator (Decision Line Enforcement)
This validator enforces that CP1 and CP2 packets contain an explicit, machine-readable decision line. A packet without a decision line is governance theater — the structure exists but the outcome is unrecorded. The regex catches all three valid states (approve, revise, kill) and fails on anything else, including blank or informal language like "looks good" or "needs work."
{
"validatorId": "packets.mustHaveDecisionLines",
"target": "cp1PacketMd#decision",
"mustMatchAny": [
{ "regex": "CP1 Decision:\\s*(approve|revise|kill)" }
],
"severity": "error"
}
Appendix H — Generated artifactChecklist.json (The "Glue" Object)
The artifact checklist is generated fresh per job by applying baselineRules + routingRules + artifactRegistry. It is the "glue" object: it translates the abstract governance graph into a flat, ordered list of files and sections the agent must produce and validate. The agent reads this list rather than inferring what to do from the rules directly — which is what makes the pipeline executable rather than just documented. The cpBlockers array identifies which artifacts block gate passage if missing or invalid.
{
"jobId": "ci-001-openclaw-contentloop",
"requiredArtifacts": [
{ "key": "jobPacketMd", "filename": "jobPacket.json",
"sections": ["jobMeta", "intent", "claimInventory", "riskFlags", "gateStatus"] },
{ "key": "cp1PacketMd", "filename": "packets/cp1Packet.md",
"sections": ["jobMeta", "decision", "outline", "limitationsPlan"] },
{ "key": "claimInventoryMd", "filename": "claims.md",
"sections": ["claimsAll"] },
{ "key": "sourcesMd", "filename": "sources.md",
"sections": ["namedSources"] },
{ "key": "limitationsMd", "filename": "limitations.md",
"sections": ["limitations", "residualRisk"] },
{ "key": "renderQaMd", "filename": "qa-checklist.md",
"sections": ["passFailSummary"] },
{ "key": "benchmarkReportMd", "filename": "benchmarks.md",
"sections": ["method", "resultsOrPending"] }
],
"cpBlockers": ["benchmarkReportMd"],
"notes": ["benchmark required because claim type includes comparativePerformance"]
}
ContentLoop Whitepaper — Draft v6, built with OpenClaw running Llama 3 70B on the Core Server. Lightly edited, and posted, by a human. For implementation templates, routing rule starters, and the one-week pilot guide, join our Discord for community classes.