
AI Rituals

Repeatable prompting and review habits Guild members use to represent working knowledge and keep AI work grounded, inspectable, and useful.

This page begins the Guild's collection of rituals for working with AI. A ritual is not a magic phrase and not a superstition. It is a repeatable habit that shapes how a human approaches an AI system before, during, and after prompting.

The Guild cares deeply about knowledge representation: how working knowledge gets stored, transmitted, tested, and reused. One way to represent that knowledge is through rituals. A good ritual compresses experience into a form that another craftsperson can repeat without losing the important structure.

The point is not to sound clever. The point is to reduce drift, surface assumptions, improve verification, and keep human judgement active. Prompting without ritual easily turns into vague delegation. Prompting with ritual keeps the craftsperson in the loop.

Here are the first ten rituals in the archive. The initial seed set came from Alex Bunardzic, and the archive now grows through member contribution, including Ritual 07 from Kelly Hohman, Ritual 08 from Jona Heidsick, and Rituals 09 and 10 from Laurie Scheepers.

Starter Rituals

The first ten rituals in the Guild archive. This list is expected to grow as members contribute their working practices.

Ritual 01

Name The Job

Start by telling the model exactly what role it is playing and what job it is doing right now. Not "help me with this." Name the task boundary clearly.

This reduces fuzzy assistance and improves the odds of getting an artifact you can inspect instead of a cloud of adjacent words.

Prompt pattern:
You are helping me as a reviewer / debugger / spec editor. The job is: [one concrete job]. Do only that job.

Ritual 02

Declare Non-Negotiables First

State the constraints before you ask for ideas: architecture boundaries, safety constraints, style limits, files that must not change, evidence requirements, business rules.

AI tends to fill empty space. Constraints should arrive before generation, not after cleanup.

Prompt pattern:
Constraints:
- do not change X
- stay within Y
- preserve Z
Now propose the smallest valid approach.

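When prompts are assembled in code, the constraints-before-generation ordering can be enforced structurally rather than by habit alone. A minimal sketch; the function name and exact wording are illustrative, not a Guild standard:

```python
# Sketch of Ritual 02: constraints are assembled before the request,
# so generation never starts in empty space.

def constrained_prompt(constraints: list[str], request: str) -> str:
    """Build a prompt whose constraints precede the task itself."""
    block = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Constraints:\n{block}\n\n"
        f"Task: {request}\n"
        "Now propose the smallest valid approach."
    )
```

The payoff is positional: the model reads the boundaries before it reads the ask.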
Ritual 03

Ask For The Failure Mode

Before accepting the happy path, ask the model how its own answer could fail. This shifts the interaction from affirmation to examination.

Guild work improves when the model is used to surface breakpoints, missing assumptions, and likely regressions before implementation.

Prompt pattern:
Before you continue, tell me the top 3 ways this answer could be wrong, incomplete, or risky.

Ritual 04

Separate Generation From Judgment

Use one pass to generate options and a second pass to evaluate them. Do not ask the model to invent and certify in the same breath.

This is the prompting equivalent of separating implementation from review.

Prompt pattern:
Pass 1: generate 3 options.
Pass 2: critique each option against these criteria.
Do not merge the two passes.

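In scripted workflows the two passes can be kept separate structurally rather than by discipline alone. A sketch, where `ask_model` is a hypothetical stand-in for whatever model client you actually use:

```python
# Sketch of Ritual 04: generation and judgment as two independent prompts.

def ask_model(prompt: str) -> str:
    # Placeholder: swap in your real model client here.
    raise NotImplementedError

def generate_then_judge(task: str, criteria: list[str], ask=ask_model) -> dict:
    """Run generation and critique as two separate passes."""
    # Pass 1: options only. No evaluation language in the prompt.
    options = ask(f"Generate 3 distinct options for: {task}. Do not evaluate them.")
    # Pass 2: a fresh prompt that treats the options as fixed input.
    bullet_criteria = "\n".join(f"- {c}" for c in criteria)
    critique = ask(
        f"Critique each option below against these criteria:\n{bullet_criteria}\n\n"
        f"Options:\n{options}\n\nDo not propose new options."
    )
    return {"options": options, "critique": critique}
```

Because the critique pass receives the options as text rather than as its own fresh invention, the model cannot quietly certify work it just generated in the same breath.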
Ritual 05

Force Explicit Assumptions

If the model is making assumptions, make it say them out loud. Hidden assumptions are where most prompt drift begins.

Once assumptions are visible, you can confirm, reject, or narrow them before they contaminate the rest of the exchange.

Prompt pattern:
List the assumptions you are making. Mark each as confirmed, inferred, or unknown.

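Once assumptions arrive in the confirmed / inferred / unknown form, they can be sorted mechanically so the unconfirmed ones get challenged first. A minimal sketch, assuming one "label: text" assumption per line; that line format is an assumption of this sketch, not part of the ritual:

```python
# Sketch of Ritual 05: bucket a model's declared assumptions so that
# inferred and unknown ones can be confirmed, rejected, or narrowed
# before work continues.

def bucket_assumptions(reply: str) -> dict[str, list[str]]:
    buckets = {"confirmed": [], "inferred": [], "unknown": []}
    for line in reply.splitlines():
        line = line.strip("- ").strip()          # drop bullet markers
        label, _, text = line.partition(":")
        label = label.strip().lower()
        if label in buckets and text.strip():
            buckets[label].append(text.strip())
    return buckets
```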
Ritual 06

End With Verification

Every meaningful prompting sequence should end with a verification question: what changed, how do we test it, what would falsify it, what still needs human review.

The ritual is complete only when the output is tied back to reality.

Prompt pattern:
Summarize:
1. what changed
2. how to verify it
3. what remains uncertain
4. what a human must still check

Ritual 07

Constraint First Generation

Submitted by Kelly Hohman. This ritual starts by forcing the model back to the actual problem boundary before it races into oversized solutions.

Intent: Reduce scope creep and prevent AI from over-engineering beyond requirements.

Trigger: Use it when AI starts proposing solutions that exceed the original problem scope.

Evidence: It prevents AI from building castles when the user asked for a shed - a common failure mode where helpfulness becomes harmful.

Failure mode: Under-delivering or missing opportunities for elegant solutions.

Prompt pattern:
"Stop. What problem are we actually solving?"
"What is the minimum viable solution?"
"What would make this fail (complexity, performance, maintenance)?"
"Now propose the simplest version that meets core requirements"
"Only after approval, discuss enhancements"

Example in practice:
User: "I need a way to sort these records"
AI: "Stop. What problem are we actually solving? Just sorting? What's the minimum viable solution? A simple sort function? What would make this fail - wrong data types? Now propose the simplest version that meets core requirements."

Ritual 08

Overshoot The Ask

Submitted by Jona Heidsick. Do not ask only for implementation. Ask for the implementation and the evidence that it works, with the proof artifact defined as part of the task.

Intent: Prevent half-finished delivery. When the task boundary is "implement X", the model often stops at the first plausible state. When the boundary is "implement X and prove it", the full cycle has to be completed: build, verify, document.

Trigger: Use it for any task you intend to walk away from, especially agentic workflows, long-running sessions, or delegated coding work where you return later.

Prompt pattern:
Implement [feature]. Then produce evidence that it works: [tests passing, screenshots, curl output, log output, before/after diff, ...]

The evidence type must be concrete and inspectable. "Make sure it works" is not overshooting. It is wishful thinking. Name the artifact.

Evidence: Tasks framed this way tend to arrive complete on return. Tasks framed as pure implementation frequently stall at partial completion and need another round of prompting to finish.

Failure mode: Overshooting too far - asking for implementation, tests, documentation, benchmarks, and a demo in one shot. The ritual works because it extends the boundary one step past done, not because it piles on scope.
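One way to make the "implement X and prove it" boundary mechanical is to track the named evidence artifacts alongside the task itself. This is a sketch only; the class shape and field names are illustrative:

```python
# Sketch of Ritual 08: a task counts as done only when every named
# evidence artifact has actually been delivered.

from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    required_evidence: list[str]           # e.g. ["tests passing", "curl output"]
    delivered_evidence: set[str] = field(default_factory=set)

    def missing_evidence(self) -> list[str]:
        return [e for e in self.required_evidence if e not in self.delivered_evidence]

    def is_done(self) -> bool:
        # "Implement X and prove it": done means evidence exists, not just code.
        return not self.missing_evidence()
```

The point of the structure is that "done" is computed from the proof artifacts, so a plausible-looking partial state cannot pass for completion.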

Ritual 09

Falsify Before You Ship

Submitted by Laurie Scheepers. Before publishing any technical claim, run it through adversarial review. The goal is not to confirm your work is good -- it is to find out where it is wrong.

Intent: Prevent publishing overclaims. Addresses the failure mode where AI-assisted work sounds rigorous but has never been stress-tested. Most AI output optimises for plausibility, not truth. This ritual forces the distinction.

Trigger: Before publishing any technical claim, architectural decision, whitepaper, or public-facing document -- especially one generated with or validated by AI.

Evidence: Applied to a convergence proof paper. A 5-model adversarial council (Gemini, Llama, Qwen) evaluated 6 claims. 4 were killed outright. 2 were weakened. Zero survived intact. The killed claims would have been embarrassing if published -- the weakened claims were genuinely stronger for having been tested. Related work: multi-LLM debate (Du et al., ICML 2024), Karpathy's LLM Council (2025), FVA-RAG (arXiv:2512.07015).

Failure mode: (1) Running the ritual performatively but ignoring results -- you must commit to killing claims that fail. (2) Models are sycophantic unless given explicit adversarial framing -- "try to KILL this" works; "review this" does not. (3) Over-application -- not every Slack message needs a falsification council. Reserve for claims that will be public or consequential.

Prompt pattern:
"Before I publish this, try to kill it. Assume the role of a hostile reviewer. Find 3 ways this claim could be wrong. Search for prior art that undermines novelty. Steelman the null hypothesis. Only what survives gets shipped."

Example in practice:
User: "I've written a paper claiming my convergence kernel is a novel mathematical contribution."
AI (adversarial): "This is fixed-point iteration (Banach, 1922). scipy.optimize.fixed_point does the same thing. The mathematical content is not novel. The application to AI agent workflows may be a reasonable engineering contribution, but the universality claim does not survive."
Result: Claim reframed from "novel algorithm" to "engineering application of known mathematics." Credibility preserved.

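A minimal sketch of the council mechanics, assuming each reviewer is a separate adversarially-framed model call that returns a verdict string; the verdict vocabulary (killed, weakened, survived) follows the evidence paragraph above, and the function shapes are illustrative:

```python
# Sketch of Ritual 09: run each claim past several hostile reviewers
# and keep only what survives.

def council_verdict(claim: str, reviewers) -> str:
    """Return 'killed', 'weakened', or 'survived' for one claim."""
    verdicts = [review(claim) for review in reviewers]
    if "killed" in verdicts:
        return "killed"        # one fatal objection is enough
    if "weakened" in verdicts:
        return "weakened"      # the claim stands only in reduced form
    return "survived"

def falsify_before_ship(claims, reviewers) -> dict:
    """Map every claim to its council verdict before anything is published."""
    return {claim: council_verdict(claim, reviewers) for claim in claims}
```

Committing to the verdicts in advance is what keeps the ritual from being performative: a "killed" result removes the claim, regardless of how much work went into it.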
Ritual 10

Cost-Aware Retrieval

Submitted by Laurie Scheepers. Before searching, pause and ask: do I already know this? AI agents and humans alike default to the most expensive tool when a cheaper one would suffice. This is the software equivalent of a cache hierarchy (L1, L2, RAM, disk) -- the pattern dates to the 1960s; the application to LLM token budgets is recent.

Intent: Prevent the "reach for the biggest tool first" failure mode. Expensive operations (full codebase scans, web searches, agent delegation) consume context budget that cannot be recovered. Cheap operations (recalling from conversation, checking a known file) cost almost nothing. The ritual creates a habit of checking cheap sources first.

Trigger: Any time you are about to ask the model to search, fetch, scan, or explore for information.

Evidence: Measured 440x token cost difference between the cheapest retrieval tier (~200 tokens for session recall) and the most expensive (~88,000 tokens for full codebase exploration). Two documented violations of this discipline cost ~368,000 tokens combined -- roughly 40% of a session's context budget, wasted. The cache hierarchy pattern is well-established in computing (CPU cache tiers since the 1960s, CDN edge caching, RAGCache — arXiv:2404.12457, 2024). The contribution here is applying it as a deliberate human discipline, not an infrastructure concern.

Failure mode: Over-caching leads to stale context. The ritual must include an escape hatch: if confidence drops below threshold, escalate despite cost. This ritual should not prevent necessary expensive lookups -- only unnecessary ones.

Prompt pattern:
"Before searching:
1. Do I already know this from our conversation?
2. Is it in a file I've already read?
3. Can I find it with a targeted lookup?
Only escalate to expensive operations after cheaper ones fail. Each escalation needs a reason."

Example in practice:
User: "What channel should I post this update to?"
Expensive approach: Launch a codebase exploration agent to search all config files. Cost: ~88,000 tokens.
Cost-aware approach: "We discussed this 10 messages ago -- the channel is #grip-updates." Cost: ~0 tokens.
Savings: 440x. Applied across a session, this discipline recovers 30-40% of the context budget.
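The tiered lookup can be sketched as a loop that tries the cheapest source first and records each escalation. The tier names, token costs, and function shape here are illustrative, loosely based on the figures quoted in this ritual:

```python
# Sketch of Ritual 10: try retrieval tiers cheapest-first, keeping a
# trail of escalations so each one has a visible reason and cost.

def cost_aware_lookup(query: str, tiers):
    """tiers: list of (name, token_cost, lookup_fn); lookup_fn returns a value or None."""
    spent = 0
    trail = []
    for name, cost, lookup in tiers:
        spent += cost
        trail.append(name)
        result = lookup(query)
        if result is not None:
            return {"result": result, "tokens_spent": spent, "tiers_tried": trail}
    # Escape hatch: even the most expensive tier found nothing.
    return {"result": None, "tokens_spent": spent, "tiers_tried": trail}
```

The escape hatch matters: the loop still reaches the expensive tiers when the cheap ones fail, so the ritual blocks only unnecessary lookups, not necessary ones.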

How To Add A Ritual

The archive should grow by contribution, not by random accumulation.

A Guild ritual should be specific enough to repeat and concrete enough to test. If a ritual cannot be demonstrated in practice, it does not belong here yet.

In that sense, a ritual is a knowledge artifact. It captures not just what to type, but how to think, what to check, and where failure usually hides.

Suggested contribution format, following the fields used by Rituals 07 through 10:

- Name: a short, memorable label for the ritual
- Intent: the failure mode the ritual prevents
- Trigger: when to reach for it
- Evidence: how you know it works in practice
- Failure mode: how the ritual itself can misfire
- Prompt pattern: the repeatable wording
- Example in practice: one concrete exchange

Over time this page can evolve from a simple list of prompting habits into a real Guild practice manual for human-AI collaboration.

Continue The Archive

Use this page as the beginning of a shared Guild vocabulary for prompting discipline, review habits, and human-in-the-loop AI practice.