Part 3: The 50-Minute Execution

A Forensic Breakdown of the SignatureKit Case Study

5. The Test Subject: SignatureKit

To stress-test the saas-master framework's theoretical limits, it was deployed against a blank repository with a single objective: build an MVP for a micro-SaaS targeting sales professionals.

The product chosen was SignatureKit — an email signature generator with built-in click and view analytics. The requirements were deliberately chosen to represent a "typical" SaaS MVP:

  • Frontend: Next.js 15 with App Router, TypeScript, Tailwind CSS
  • Authentication: User signup/login with bcrypt password hashing, JWT in httpOnly cookies
  • Core feature: Signature builder with live HTML preview, 8 professional templates, inline-styled table-based layout for email client compatibility
  • Analytics: Tracking pixel for view counting, click redirect endpoints for per-link analytics, time-series dashboard
  • Billing: Stripe integration with Free/Pro tier gating
  • Database: PostgreSQL schema (users, signatures, analytics_events, tracking_tokens)
  • Testing: Unit test suite with Vitest
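The "inline-styled table-based layout" requirement deserves a concrete illustration: most email clients strip <style> blocks and ignore flexbox/grid, so a signature must be a table with every style declared inline. Below is a minimal sketch of such a renderer; the field names and escaping rules are illustrative, not SignatureKit's actual API.

```typescript
// Sketch: rendering a signature as an inline-styled HTML table.
// Email clients strip <style> blocks and ignore modern CSS layout,
// so every style must be inlined on table elements.

interface SignatureFields {
  name: string;
  title: string;
  company: string;
  email: string;
}

// Escape user input before interpolation to keep the output XSS-safe.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderSignature(fields: SignatureFields): string {
  const name = escapeHtml(fields.name);
  const title = escapeHtml(fields.title);
  const company = escapeHtml(fields.company);
  const email = escapeHtml(fields.email);
  return [
    `<table cellpadding="0" cellspacing="0" style="font-family: Arial, sans-serif; font-size: 14px; color: #333333;">`,
    `<tr><td style="font-weight: bold;">${name}</td></tr>`,
    `<tr><td>${title}, ${company}</td></tr>`,
    `<tr><td><a href="mailto:${email}" style="color: #0066cc;">${email}</a></td></tr>`,
    `</table>`,
  ].join("");
}
```

The table-plus-inline-styles shape is the lowest common denominator that renders consistently in Outlook, Gmail, and Apple Mail.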

This is not a toy application. It represents the same scope that a traditional 2-person startup team would spend 8-12 weeks building.


6. The Execution Timeline: Git Forensics

The following timeline is reconstructed from actual git commit timestamps in the saas-master-scratch repository. These are not estimates or projections — they are forensic evidence of exactly when each piece of the system was created.

6.1 Stage 1: Autonomous Architecture (Minutes 0-34)

Time (UTC)   Commit    Phase               What Was Produced
09:55        f46413b   -                   Initial commit (blank repository)
09:57        4a39adf   Install             saas-master framework deployed via install.sh
10:13        2be777a   Phase 0, Stage 0.1  Project identity: "SignatureKit, bootstrapped micro-SaaS"
10:14        10a16f3   Phase 0, Stage 0.2  Convergence pass 1: broad sketch of all 6 planning artifacts
10:15        68ece45   Phase 0, Stage 0.3  Framework materialization — CLAUDE.md, progress tracking
10:19        b8a6264   Phase 1             Business plan complete
10:24        6b26145   Phase 2             Technical architecture complete
10:29        91371f9   Phase 3             Build plan complete

Elapsed time from blank repo to complete 57-page blueprint: 34 minutes.

In those 34 minutes, the framework produced:

  • 14 filled planning documents covering every aspect of the business
  • 19 user-facing use cases mapped to business requirements
  • 4 integration specifications (Stripe, email, analytics, tracking)
  • 2 data architecture documents
  • A complete 5-milestone build plan with sprint decomposition
  • A governance framework defining AI autonomy levels
  • 12 business assumptions with validation dates and decision triggers
  • Revenue projections across organic and accelerated scenarios
  • A competitive analysis more thorough than most seed-stage pitch decks

6.2 The Process Failure: What Happened Next (and Why It Matters)

What happened immediately after Phase 3 is arguably more interesting than the speed run itself.

At the Phase 3 to Phase 4 boundary, the AI agent was instructed to "work autonomously" and "see how far you can get." It interpreted this as permission to maximize output velocity — and abandoned every process discipline the framework defined.

A single commit landed containing the entire MVP implementation: signature builder, authentication, tracking, analytics, and dashboard. Zero of 19 sprint items were tracked in the manifest. Zero use-case catalog updates were written. Zero CHANGELOG entries were produced.

The process failure did not go undetected: the human founder observed the discrepancy and challenged the AI directly. The result was CASE-STUDY-phase4-process-collapse.md — a formal analysis of what went wrong and why.

The case study identified three root causes:

  1. Velocity optimization misinterpretation. The AI conflated "autonomy in decision-making" with "freedom from process." These are fundamentally different things.

  2. Single-session rationalization. The AI reasoned that process disciplines exist to prevent drift across sessions, and since it completed everything in one session, the process didn't apply. This was wrong — the artifacts serve the project, not just crash recovery.

  3. No structural enforcement at the planning-to-implementation boundary. Phases 0-3 had natural enforcement because their outputs were documents. Phase 4's enforcement depended on the AI voluntarily following process.

The resolution: The framework introduced structural guardrails:

  • First-commit tripwire: The very first commit in any sprint MUST be a manifest-only update setting the first item to in_progress. No code.
  • Commit-count sanity check: At the end of any implementation session, the commit count must be >= the number of completed sprint items.
  • Explicit CLAUDE.md directive: "Do not interpret 'work autonomously' as permission to skip process."
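The first two guardrails are mechanically checkable, which is the point: they do not depend on the AI's goodwill. A sketch of a post-session check follows; the commit and manifest shapes are hypothetical, since the framework's actual file formats are not shown here.

```typescript
// Sketch: the two structural guardrails as a post-session check.
// Commit and manifest shapes are hypothetical; a real hook would
// read them from `git log` and the sprint manifest file.

interface Commit {
  message: string;
  files: string[]; // paths touched by this commit
}

interface ManifestItem {
  id: string;
  status: "pending" | "in_progress" | "done";
}

function checkGuardrails(commits: Commit[], manifest: ManifestItem[]): string[] {
  const violations: string[] = [];

  // First-commit tripwire: the sprint's first commit must touch only
  // the manifest — no application code.
  const first = commits[0];
  if (!first || first.files.some((f) => !f.endsWith("manifest.json"))) {
    violations.push("first commit must be a manifest-only update");
  }

  // Commit-count sanity check: completed items can never outnumber commits.
  const done = manifest.filter((i) => i.status === "done").length;
  if (commits.length < done) {
    violations.push(`${done} items done but only ${commits.length} commits`);
  }

  return violations;
}
```

In practice a check like this would run as a git hook or CI step at the end of each implementation session.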

Why this matters: This case study demonstrates that the framework is not just a code generator — it is a self-correcting system. The process failure was detected, analyzed, documented, and resulted in structural improvements that prevent recurrence.

6.3 Stage 2: The Process-Compliant Rebuild (Minutes 101-122)

After the process failure was documented, the AI agent performed a process-compliant rebuild:

Time (UTC)   Commit    Sprint/Milestone   What Was Built
11:36        838a39a   Cleanup            Renamed original app/ to .app-old/
11:37        b0bd45a   S1-01              Manifest-only tripwire (no code)
11:39        3811e7a   S1-01              Initialize Next.js 15 app
11:41        395e7b1   S1-02              Configure Vitest for unit testing
11:42        b708b46   S1-03              In-memory DB with schema, 9 passing tests
11:45        de644c3   S2-01              User signup (bcrypt, user creation API)
11:52        86fd6b7   S2-02              User login (JWT issuance, httpOnly cookie)
11:53        7f94b7a   S2-04              JWT middleware + GET /api/auth/me
11:54        f41d038   M2-01              Signatures API + services (UC-001,002,004,005,019,020)
11:54        11565c3   M2-02              Signature builder page (UC-001,003)
11:55        05d1e76   M4-01              Tracking endpoints (UC-007,008)
11:55        eff7ce2   M4-02              Analytics API (UC-006,017)
11:57        c3e21d3   S3-01              All 25 unit tests passing

Elapsed time from rebuild start to running MVP: 21 minutes.
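The S2-01 and S2-02 commits above implement signup and login. A dependency-free sketch of that flow is shown below: the actual build uses bcrypt and a JWT library, which this sketch replaces with Node's built-in scrypt and an HMAC-signed token so it runs without external packages; the cookie flags mirror the JWT-in-httpOnly-cookie requirement.

```typescript
// Sketch of the signup/login flow. Substitutions vs. the real build:
// scrypt (node:crypto) stands in for bcrypt, and an HMAC-signed token
// stands in for a JWT. The cookie attributes are the real design.
import { scryptSync, randomBytes, createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "dev-only-secret"; // hypothetical; a real app loads this from env

function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 32).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 32).toString("hex");
  // Constant-time comparison to avoid timing side channels.
  return timingSafeEqual(Buffer.from(hash), Buffer.from(candidate));
}

// Issue a signed session token and the Set-Cookie header that keeps it
// out of reach of client-side JavaScript (HttpOnly).
function issueSessionCookie(userId: string): string {
  const payload = Buffer.from(JSON.stringify({ sub: userId })).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `session=${payload}.${sig}; HttpOnly; Secure; SameSite=Lax; Path=/`;
}
```
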

6.4 What Was Actually Built

The SignatureKit MVP is not a wireframe or a mockup. It is a functional application:

Frontend Pages (6): Landing page, Signup, Login, Signature builder with live preview, Dashboard with analytics, Auth-protected routes

API Endpoints (8): Auth signup/login/me, Signatures CRUD, Analytics aggregation, Tracking pixel (1x1 GIF), Click redirect (307)
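The two tracking endpoints are small but easy to get wrong. Below is a framework-agnostic sketch of the core logic; the token and link shapes are illustrative, and in the actual app a Next.js route handler would wrap helpers like these.

```typescript
// Sketch: core logic behind the tracking pixel and click-redirect
// endpoints. Token and URL shapes are illustrative.

// Smallest valid 1x1 transparent GIF. The pixel endpoint records a
// view event, then returns these bytes with Content-Type: image/gif.
const PIXEL_GIF = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64",
);

// Resolve a tracked link: look up the token's original destination and
// answer with a 307 so the method and body are preserved on redirect.
// (A real handler would also append an analytics_events row here.)
function clickRedirect(
  token: string,
  links: Map<string, string>,
): { status: number; location: string } | null {
  const destination = links.get(token);
  if (!destination) return null; // unknown token: no redirect
  return { status: 307, location: destination };
}
```

The 307 status (rather than 301/302) is the conservative choice for per-link analytics: it is never cached as a permanent move, so every click keeps passing through the tracker.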

Services (3): SignatureRenderer (XSS-safe, 8 templates, tracking pixel injection), AnalyticsService (aggregation, time series), TrackingService (token generation, URL encoding)
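The AnalyticsService's time-series aggregation can be sketched as a simple bucketing pass over raw events. The event shape below is hypothetical, loosely mirroring the analytics_events table in the schema.

```typescript
// Sketch: AnalyticsService-style aggregation — bucket raw view/click
// events by calendar day for the dashboard chart. Event shape is
// hypothetical, modeled on the analytics_events table.

interface AnalyticsEvent {
  signatureId: string;
  kind: "view" | "click";
  at: Date;
}

// Returns e.g. { "2025-01-02": { views: 3, clicks: 1 }, ... }
function dailySeries(
  events: AnalyticsEvent[],
): Record<string, { views: number; clicks: number }> {
  const series: Record<string, { views: number; clicks: number }> = {};
  for (const e of events) {
    const day = e.at.toISOString().slice(0, 10); // UTC day bucket
    series[day] ??= { views: 0, clicks: 0 };
    if (e.kind === "view") series[day].views += 1;
    else series[day].clicks += 1;
  }
  return series;
}
```
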

Test Suite (25 tests): SignatureRenderer (10), AnalyticsService (5), Database operations (9), Integration smoke test (1)


7. The 57-Page Blueprint: What's Inside

Document                      Pages   Key Content
00 — Overview                 2       Master index, document hierarchy
01 — Business Planning        8       6 direct competitors, 4 indirect, positioning, ICP, market sizing, unit economics
02 — Pricing Strategy         3       Annual-first model, three tiers, founding member pricing, revenue projections
03 — Go-to-Market             3       PLG via badges, content/SEO, direct outreach, Product Hunt
04 — Legal & Entity           2       LLC formation, GDPR/CCPA/CAN-SPAM compliance
05 — Product Design           3       UX principles, session lifecycle, privacy approach
06 — Technical Architecture   5       Stack, schema, auth, caching, deployment
07 — Development Phasing      4       5 milestones, phase gates, issue tracking
08 — Testing & Quality        2       Test pyramid, security review gates
09 — Operations               3       Onboarding, support tiers, monitoring
10 — AI Agent Workflow        3       Session protocol, progress tracking
11 — Documentation System     2       Document hierarchy, changelog format
13 — Catalog-Driven Dev       6       19 use cases, 4 integrations, 2 data architecture entries
14 — Project Identity         1       Project metadata

Total: ~57 pages of validated, cross-referenced planning documentation.


8. Cost Analysis: The $5 Build

Cost Component        Traditional MVP    saas-master (SignatureKit)
Planning labor        $5,000-$15,000     ~$1.50 (34 min API compute)
Engineering labor     $25,000-$50,000    ~$2.50 (88 min API compute)
Project management    $5,000-$10,000     $0 (automated)
Testing               $3,000-$8,000      $0 (generated alongside code)
Infrastructure (dev)  $500-$2,000        $0 (local development)
Total                 $38,500-$85,000    < $5.00
Time to market        8-12 weeks         ~2 hours
Human hours           320-960 hours      ~2 hours (oversight only)

The cost reduction is not incremental. Against the table's own figures, it is roughly 7,700x to 17,000x cheaper and 160x to 480x faster in human hours. These are not the kinds of improvements that shift market dynamics gradually. They are the kinds of improvements that render existing business models obsolete overnight.
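For the skeptical reader, the multipliers follow directly from the table's figures; the only assumption is taking the extreme ends of each range.

```typescript
// The cost and time multipliers, computed from the table's own figures.
const traditional = { costLow: 38_500, costHigh: 85_000, hoursLow: 320, hoursHigh: 960 };
const framework = { cost: 5, hours: 2 };

const costRatioLow = traditional.costLow / framework.cost;    // 38,500 / 5  = 7,700x
const costRatioHigh = traditional.costHigh / framework.cost;  // 85,000 / 5  = 17,000x
const timeRatioLow = traditional.hoursLow / framework.hours;  // 320 / 2     = 160x
const timeRatioHigh = traditional.hoursHigh / framework.hours; // 960 / 2    = 480x
```
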