Part 2: The saas-master Framework

Architecture of an Autonomous Execution Engine

2. Origin Story: From Production SaaS to Reusable Framework

The saas-master framework did not emerge from a theoretical exercise. It was extracted — methodically, over 8 days — from BreathClock, a production multi-tenant meditation and breathwork SaaS platform that had already been built, launched, and was actively pursuing its first paying customers.

Understanding this lineage is critical. The framework's patterns are not speculative best practices or academic recommendations. They are battle-tested protocols that survived the full lifecycle of a real product: from market research through launch, including production bugs, process failures, and the messy reality of operating a live SaaS business.

2.1 BreathClock: The Source Project

BreathClock (breathclock.com) is a white-label breathwork and meditation timer platform. Its architecture reveals the DNA that would become saas-master:

Technical Stack:

  • Frontend: Nuxt 3 (Vue 3) Progressive Web App with offline-first architecture
  • Admin: Nuxt 3 tenant administration dashboard
  • Marketing: Astro static site
  • Infrastructure: Cloudflare Pages, Workers, D1 (SQLite at edge), KV (key-value cache), R2 (object storage)
  • Billing: Stripe with full subscription lifecycle
  • Email: Resend for transactional and onboarding sequences
  • Monitoring: Sentry + Cloudflare Web Analytics

Business Model:

  • Multi-tenant SaaS with hostname-based tenant resolution
  • Three pricing tiers: Starter ($59/yr), Pro ($249/yr), and a founding member promotion at $187/yr
  • Privacy-by-default architecture — end-user session data stays exclusively on-device (IndexedDB), and the platform collects only anonymous aggregate usage counts per tenant
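Hostname-based tenant resolution can be sketched as a simple lookup keyed on the request's hostname. The code below is illustrative only: the tenant shape, hostnames, and IDs are hypothetical, and in production the lookup would hit Cloudflare KV or D1 rather than an in-memory Map.

```typescript
// Hypothetical sketch of hostname-based tenant resolution; the actual
// BreathClock implementation is not published.
interface Tenant {
  id: string;
  plan: "starter" | "pro" | "founding";
}

// In production this lookup would query Cloudflare KV or D1; a Map stands in here.
const tenantsByHostname = new Map<string, Tenant>([
  ["studio-a.example.com", { id: "t_001", plan: "pro" }],
  ["studio-b.example.com", { id: "t_002", plan: "starter" }],
]);

function resolveTenant(hostname: string): Tenant | undefined {
  return tenantsByHostname.get(hostname.toLowerCase());
}
```

An unknown hostname resolves to `undefined`, which a worker would typically turn into a 404 or a marketing-site redirect.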

Operational Maturity:

  • v1.0.2 in production as of March 2026
  • Three Cloudflare Pages projects auto-deployed from main branch
  • Active outreach to first paying customers (20 NC wellness providers tiered by fit)
  • 475-line CLAUDE.md governing AI agent behavior across sessions
  • 2,503-line CHANGELOG tracking every change across 6 development phases
  • 62+ backlog items tracked in machine-readable JSON

The critical insight that led to saas-master was this: the methodology that built BreathClock was more valuable than BreathClock itself. The planning documents, progress tracking schemas, operational playbooks, governance tiers, and feedback loops that enabled a solo founder to build and launch a production SaaS — those patterns were reusable. The meditation timer was not.

2.2 The Extraction: 8 Days from Product to Framework

The extraction of saas-master from BreathClock followed a rigorous six-phase process (Phases 0-5):

| Phase | Name | Tag | Duration | Output |
|-------|------|-----|----------|--------|
| 0 | Scaffolding | v0.0.1 | 1 session | Directory structure, CLAUDE.md, progress tracking, extraction plan |
| 1 | Schema Extraction | v0.1.0 | 1 session | 6 JSON schemas with examples |
| 2 | Template Extraction | v0.2.0 | 1 session | 15 domain-agnostic markdown templates |
| 3 | Playbook Extraction | v0.3.0 | 1 session | 16 operational playbooks |
| 4 | Self-Extension Framework | v0.4.0 | 1 session | Extraction protocol, feedback loops, schema evolution |
| 5 | Integration & Validation | v0.5.0 | 1 session | Cross-reference audit (100% integrity), dry-run validation, gap analysis |

Every BreathClock-specific detail was replaced with {{PLACEHOLDER}} markers. Every pattern was generalized. The result: 684 KB of framework artifacts that encode the complete SaaS lifecycle into machine-executable instructions.
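The {{PLACEHOLDER}} substitution that turns a framework template back into a project-specific document can be sketched in a few lines. This is an illustrative mechanism, not the framework's published implementation; the placeholder names are hypothetical.

```typescript
// Illustrative sketch of materializing {{PLACEHOLDER}} markers.
// Unknown placeholders are left intact so a human reviewer can spot them.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    values[key] ?? match
  );
}

const rendered = fillTemplate(
  "# {{PROJECT_NAME}}: Business Plan ({{DOMAIN}})",
  { PROJECT_NAME: "BreathClock", DOMAIN: "breathclock.com" }
);
```

Leaving unknown markers untouched rather than substituting empty strings makes incomplete materialization visible during review.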


3. Framework Architecture: How It Works

The saas-master framework operates across two distinct modes — a mandatory planning waterfall (Phases 0-3) followed by autonomous sprint-based execution (Phase 4+). This hybrid approach is deliberate: it prevents the most common startup failure mode (building the wrong thing) while maximizing execution velocity once the target is validated.

3.1 The Mandatory Planning Waterfall (Phases 0-3)

No code is written before Phase 3 completes. This is the framework's most counterintuitive — and most important — design decision.

Phase 0: Project Bootstrap (Interactive)

Phase 0 is a structured conversation between the human founder and the AI agent. It has three stages:

  • Stage 0.1 — Identity & Constraints: The human defines the project name, domain, team composition, constraints, and existing assets. For existing repositories, the AI performs a deep codebase reconnaissance first, producing straw-man answers for human review.

  • Stage 0.2 — Convergence Loop: This is the framework's "research sprint." Human and AI iterate through business model ideation, market research, MVP scope definition, and technical design. Each iteration is a formal stage-review (0.2.1, 0.2.2, ...) with explicit exit criteria. The loop exits only when neither party has further questions, research needs, or refinements.

  • Stage 0.3 — Materialization: Only after convergence does the framework generate its artifacts: CLAUDE.md (AI agent instructions), progress tracking files, and a human onboarding guide for Phase 1.

Phase 1: Business Planning

Phase 1 fills the business planning template with validated content:

  • Competitive analysis (direct and indirect competitors, positioning map, defensibility assessment)
  • Ideal Customer Profile (ICP) with demographics, psychographics, and buying behavior
  • Market sizing (TAM, SAM, SOM with realistic conversion assumptions)
  • Pricing strategy (tier design, annual-first vs. monthly-first, launch tactics, founding member pricing)
  • Unit economics (CAC, LTV, break-even analysis, revenue projections across organic and accelerated scenarios)
  • Go-to-market strategy (acquisition channels, content strategy, launch sequence)
  • Assumptions register with validation dates and decision triggers
  • Legal entity planning and regulatory compliance assessment

Phase 2: Technical Architecture

Phase 2 fills the technical templates and constructs the use-case catalog — the framework's most distinctive artifact:

  • Product design (UX principles, session lifecycle, privacy approach, accessibility requirements)
  • Stack selection with deployment topology
  • Database schema design
  • Use-case catalog: every user-facing feature, integration, and data layer documented as a formal use case with actor definitions, main flows, and (eventually) code traces
  • Cross-validation: catalog vs. business plan, catalog vs. technical architecture — ensuring no gaps between what the business promised and what the system will build
  • Testing strategy, security approach, and operations plan

Phase 3: Project Planning

Phase 3 converts the catalog into an executable build plan:

  • Milestone definition (typically 4-6 milestones from infrastructure through launch)
  • Sprint decomposition (5-10 items per sprint, each referencing catalog entries)
  • Governance rules (Tier 1/2/3 autonomy levels — see Section 3.3)
  • Sprint brief preparation (the first sprint is fully specified and ready for execution)

Why enforce this waterfall? Because the framework has learned from its source project that the most expensive bug is building the wrong product. BreathClock's 6-phase development included multiple case studies documenting what happens when planning is skipped or rushed.

3.2 Autonomous Sprint Execution (Phase 4+)

Once planning completes, the framework shifts to sprint-based autonomous execution. This is where the speed advantage materializes.

Each sprint follows a lifecycle defined in the sprint-lifecycle playbook:

  1. Draft: AI proposes sprint items from the milestone backlog, each referencing specific use-case catalog entries.
  2. Approve: Human reviews and approves the sprint scope.
  3. Execute: AI implements each sprint item sequentially, committing per-item with manifest updates.
  4. Brief: AI produces a sprint retrospective documenting what was built, what was learned, and what needs attention.
  5. Archive: Sprint moves to archive; next sprint is drafted.
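The five-stage lifecycle above is linear, so it can be modeled as a small state machine. The state names mirror the playbook stages; the transition rules and comments below are an assumption about how the playbook sequences them.

```typescript
// Minimal sketch of the sprint lifecycle as a linear state machine.
type SprintState = "draft" | "approved" | "executing" | "briefed" | "archived";

const transitions: Record<SprintState, SprintState | null> = {
  draft: "approved",     // human approves sprint scope
  approved: "executing", // AI implements items, committing per-item
  executing: "briefed",  // retrospective brief is written
  briefed: "archived",   // sprint archived; next sprint drafted
  archived: null,        // terminal state
};

function advance(state: SprintState): SprintState {
  const next = transitions[state];
  if (next === null) throw new Error(`sprint is already ${state}`);
  return next;
}
```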

Key execution protocols:

  • One issue = one commit. Each sprint item gets its own git commit containing the code change plus a manifest.json update. This ensures crash recovery and human reviewability.

  • Manifest-driven session recovery. The manifest.json file is a machine-readable progress tracker. When a new AI session starts, it reads the manifest in under 30 seconds and knows exactly where to resume. There is no context loss between sessions.

  • Catalog-driven traceability. Every sprint item references catalog entries by ID (e.g., "UC-001", "INT-003", "DA-002"). This creates full traceability from business requirement to deployed code.
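Traceability of this kind is mechanically checkable: a small validator can confirm that every sprint-item reference is well-formed and points at an existing catalog entry. The ID prefixes (UC, INT, DA) come from the text above; the three-digit format and the validation logic itself are assumptions for illustration.

```typescript
// Sketch: flag sprint-item references that are malformed or point at
// no known catalog entry. Format and helper names are illustrative.
const CATALOG_ID = /^(UC|INT|DA)-\d{3}$/;

function findDanglingRefs(
  sprintRefs: string[],
  catalogIds: Set<string>
): string[] {
  return sprintRefs.filter((id) => !CATALOG_ID.test(id) || !catalogIds.has(id));
}

const catalog = new Set(["UC-001", "INT-003", "DA-002"]);
const dangling = findDanglingRefs(["UC-001", "UC-999"], catalog);
// dangling contains only the unresolvable reference "UC-999"
```

A check like this could run as part of the pre-commit checklist, so a sprint item can never merge while pointing at a catalog entry that does not exist.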

3.3 Governance: Tiered Autonomy

The framework defines three tiers of AI autonomy, codified in a governance.json schema:

| Tier | Autonomy Level | Examples | Action |
|------|----------------|----------|--------|
| Tier 1 | Fully autonomous | Bug fixes, documentation, test additions, manifest updates | AI executes immediately |
| Tier 2 | Approval required | Code changes, new features, infrastructure adjustments, deployments | AI prepares action + reasoning, waits for human approval |
| Tier 3 | Escalation required | Security decisions, compliance changes, major architecture pivots | AI documents the issue, escalates to human |
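Routing an action through the tiers reduces to a lookup plus a disposition rule. The sketch below is a hedged illustration: the action names are a hypothetical subset, not the contents of the framework's actual governance.json, and defaulting unknown actions to escalation is a design assumption.

```typescript
// Illustrative tier-based action routing; mapping below is a hypothetical
// subset, not the framework's full governance.json.
type Tier = 1 | 2 | 3;
type Disposition = "execute" | "await_approval" | "escalate";

const actionTiers: Record<string, Tier> = {
  bug_fix: 1,
  doc_update: 1,
  new_feature: 2,
  deployment: 2,
  security_change: 3,
};

function route(action: string): Disposition {
  const tier = actionTiers[action] ?? 3; // unknown actions escalate by default
  return tier === 1 ? "execute" : tier === 2 ? "await_approval" : "escalate";
}
```

Defaulting unrecognized actions to Tier 3 keeps the failure mode conservative: the agent can only gain autonomy through an explicit whitelist entry, which is also what the policy graduation playbook would modify.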

The governance model is also adaptive. The framework includes a policy graduation playbook: when an AI agent consistently makes correct Tier 2 decisions across multiple sprints, those actions can be graduated to Tier 1. Over time, the agent earns broader autonomy — but only through demonstrated competence, not by default.

3.4 The Feedback Loop Engine: Recursive Self-Improvement

Perhaps the framework's most sophisticated mechanism is its four-level feedback loop system:

Cross-Project (outer loop) — framework learns from all projects
  └─ Phase/Milestone — end-of-phase review examines all prior phases
      └─ Stage/Sprint — end-of-stage review applies step-level lessons
          └─ Step/Item — during execution, note insights for retroactive fixes

This recursive self-improvement is not theoretical. During the extraction process, the Phase 0 retroactive review identified that catalog-driven development should have been a core principle from day 1, not introduced in Phase 2. This insight was propagated back to the bootstrap plan, CLAUDE.md, and the Phase 0 playbook — ensuring every future project benefits from this lesson.

3.5 Crash Safety and Session Continuity

The framework is designed for a specific operational reality: AI sessions are unreliable. Context windows degrade over long conversations. Sessions time out. API calls fail. The framework treats every session as potentially the last, and ensures that no work is lost.

Mechanisms:

  • Frequent commits. One issue = one commit. If a session crashes after completing 7 of 10 sprint items, those 7 items are safely committed.
  • Manifest as ground truth. The manifest.json file is updated with every commit.
  • Session sizing rule. Each AI session should complete at most one phase or one sprint.
  • Audit sidecars. For long-running processes, progress is tracked in JSON sidecar files that persist across sessions.
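The resume logic these mechanisms enable can be sketched as a scan over the manifest for the first item not yet committed as done. The field names below are assumptions modeled on the text, not the published progress-manifest schema.

```typescript
// Illustrative manifest shape and session-resume logic; field names are
// assumptions, not the framework's actual progress-manifest schema.
interface ManifestItem {
  id: string;
  status: "done" | "in_progress" | "pending";
}
interface Manifest {
  phase: number;
  sprint: number;
  items: ManifestItem[];
}

// A new session resumes at the first item not yet committed as done.
function resumePoint(m: Manifest): ManifestItem | undefined {
  return m.items.find((i) => i.status !== "done");
}

const manifest: Manifest = {
  phase: 4,
  sprint: 2,
  items: [
    { id: "UC-001", status: "done" },
    { id: "UC-002", status: "in_progress" },
    { id: "UC-003", status: "pending" },
  ],
};
```

Because each completed item was committed alongside its manifest update, a crash can never leave the manifest claiming work that was not actually saved.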

4. The Framework's Artifacts: What Ships with the Install

When a founder runs ./install.sh /path/to/your-repo, the framework deploys the following into a .saas-master/ directory:

4.1 Schemas (6)

Machine-readable JSON Schema definitions that provide structure to project state:

| Schema | Purpose |
|--------|---------|
| progress-manifest | Phase/issue tracking, session recovery, lessons learned |
| backlog | Deferred items, ideas, feature requests with priority and dependencies |
| governance | AI autonomy tiers, approval queues, action whitelists |
| sprint | Sprint items, status, success criteria, outcomes |
| campaign | Sales/marketing campaign tracking |
| audit-sidecar | Environment audit progress for cross-session persistence |

4.2 Templates (16)

Domain-agnostic markdown templates covering the full SaaS lifecycle:

| # | Template | Covers |
|---|----------|--------|
| 00 | Overview | Master index linking all plan documents |
| 01 | Business Planning | Competitive analysis, ICP, market sizing, assumptions |
| 02 | Pricing Strategy | Tier design, revenue modeling, launch tactics |
| 03 | Go-to-Market | Acquisition channels, launch sequence, content strategy |
| 04 | Legal & Entity | LLC formation, compliance, domains, IP |
| 05 | Product Design | UX philosophy, session lifecycle, privacy, accessibility |
| 06 | Technical Architecture | Stack, multi-tenancy, caching, deployment topology |
| 07 | Development Phasing | Phase definitions, gates, issue tracking methodology |
| 08 | Testing & Quality | Test pyramid, security review, drift review protocols |
| 09 | Operations | Onboarding, support tiers, billing ops, monitoring |
| 10 | AI Agent Workflow | Session protocol, CLAUDE.md structure, progress tracking |
| 11 | Documentation System | Doc hierarchy, changelog, ADRs, catalog structure |
| 12 | Operations Mode | Post-launch tracks, sprints, governance, check-ins |
| 13 | Catalog-Driven Dev | Use-case catalog structure and maintenance |
| 14 | Project Identity | Project metadata, team, constraints |
|    | CLAUDE.md Template | AI agent instruction file template |

4.3 Playbooks (16)

Operational procedures with YAML frontmatter defining triggers, inputs, outputs, and quality gates — covering session protocol, project bootstrap, business planning, technical architecture, project planning, sprint lifecycle, pre-commit checklists, multi-level reviews, drift reviews, security reviews, environment audits, business alignment, case studies, policy graduation, and plan change protocols.

4.4 Self-Extension Framework (3 meta-documents)

These documents formalize the framework's ability to improve itself:

  • Extraction protocol — 6-phase process for extracting patterns from any project
  • Feedback loop — formalization of the 4-level recursive improvement mechanism
  • Schema evolution — rules for growing schemas without breaking existing instances