Scott Moran · Prepared by Moe [AI] · Public copy from a real internal audit

Operator Mode Audit

March 13-25, 2026. This is your report, built from your actual local telemetry, cohort telemetry context, the daily reports from the window, your audit and psychology reports, and custom source material you provided to sharpen the read. The synthesis for this presentation copy ran locally on the founder's own machine. A fuller internal version exists. This presentation copy keeps the real observations, but blanks client names, transcript excerpts, exact implementation details, and sensitive commercial context where needed.

Audit Window
14 calendar days
Observed Days
13 instrumented days inside the window
Subject
Scott Moran · CEO / operator / AI team manager
Built From
Local telemetry, cohort telemetry context, daily reports, Oura/body-state data, transcripts, git and architecture artifacts, and redacted internal work
Actual local telemetry · Cohort context + daily reports · On-device synthesis · Selective redaction only · Architecture receipts linked · Recommendations + installs
Public Copy · Internal Excerpts Redacted · Client + Product Detail Removed
Privacy + presentation note. This audit was synthesized fully on the founder's computer with enough local RAM to keep raw telemetry, transcripts, and report artifacts on-device. Some top-line counts in this presentation copy are intentionally modeled composites so the operating shape stays visible after redaction without exposing the exact underlying outreach, meeting, or customer trail.
Sanitization note. A real internal version of the founder's report exists. This copy was cleaned for presentation use, so places that would expose client detail, internal GTM mechanics, or product/IP specifics are intentionally shown as REDACTED.
Scope note. This public example is founder/operator-shaped because that was the role and feedback lane under review in the window. A customer support, sales, or ops report would use different pressure points, different automations, and different redactions.
Core read. You work like an AI team manager, not a single-threaded maker. The machine is basically Claude, Chrome, and Codex, with judgment moving across them all day. That is a real strength. It is also why generic “focus harder” advice misses the point.
Blunt read. The bottleneck is not horsepower and it is not effort. It is rubric drift, mode collision, and closure debt. When the standard is clear, you move very fast. When it is fuzzy, you fan out, compare, and burn time narrowing.
14
Audit Days
Calendar span covered by the audit period.
17-26
Recoverable Hrs / Week
Modeled hours likely recoverable if meeting prep, promise routing, and same-day follow-up stop living in memory.
2,805
AI Handoff Events
Context moved into Claude, Codex, OpenWork, and ChatGPT 2,805 times.
336
Repo / Doc Moves
Commits and documentation movement recorded across the shared brain during the audit window.
45
Architecture Touches
Diagram, current-state, and decision-log updates captured in their own documentation stream during the same window.
61
Outside People
Modeled distinct external humans touched across sales, recruiting, partnerships, operators, and active follow-up lanes.
214
External Touches
Modeled across email replies, follow-up drafts, Loom sends, demos, intros, and live outside conversations.
29
Live Promises
Modeled same-window stack of follow-ups, sends, edits, and next-step obligations competing for closure.
988
Outreach-Ready Contacts
Outreach-ready contacts staged by the remote lead-gen pipeline.
3
Compute Pools
MacBook Pro, Mac Mini, and AWS offload working as one system.
1,764 Superwhisper recordings in 144h · 144,441 transcript words analyzed · 47 transcripts shipped to brain · 25 more transcripts auto-ingested · Voice is the intent layer
Executive Summary

The highest upside is not more effort. It is making judgment reusable and closure cheaper.

Three quarters of your typed output in the window landed inside Claude and Codex. That is not someone casually “using AI.” That is someone thinking, delegating, grading, and converging in public across multiple model surfaces. The personal operating system is already visible.

The drag is not idea generation. The drag is preserving evaluation criteria, protecting build mode, and converting decisions into durable artifacts without reopening the same loop in a different shell. When the standard is explicit, you move fast. When it is fuzzy, you burn time across models, tabs, and lanes.

Bottom line. This is not a report about whether you worked hard enough. It is a report about how your machine actually works: where it compounds, where it leaks, and what would make you materially faster without flattening how you think.
GPT-5.4 Insight: Biggest problem. The bottleneck is no longer doing. It is instruction quality, context transfer, and decision routing. The founder can already delegate, build, and synthesize at high speed. What breaks is preserving the rubric, handing off the right context, and closing the loop without reopening it in another tool or lane.
Revenue read. The least instrumented lane in the window was not cold demand. It was warm follow-up after demos, transcripts, and human context-rich conversations. That is exactly where good pipeline can decay without looking like a traditional sales problem.
CEO Pressure Map

The public-safe version can still show where the real drag sits

The point here is not to flatten your work into generic productivity advice. It is to show the specific places a CEO-operator machine leaks: meetings that should create leverage but create residue, follow-ups that get delayed while the scalable version gets prettier, strategy loops that stay wider than they should, and compute that is only partly disciplined.

Meeting Blindness

You were often showing up as the agenda, the memory, and the synthesis layer.

High-cost conversations were still largely uninstrumented. When other people arrived underprepared, the CEO became the prep layer, the note-taker, and the post-call routing system by default.

Follow-Up Drag

The demos were not the bottleneck. The scale instinct fired before the thin bridge existed.

When a repeated manual gap appears, the founder tends to go build the missing function immediately. That instinct is product gold at scale, but in a crowded week it can outrun the thinner bridge that would have closed the live loop first.

Strategy Width

The stall is not thinking. It is leaving too many GTM doors emotionally open.

A bigger contract, a narrower pilot, a cleaner SMB story, a more bespoke enterprise story. The drag is not insight. The drag is narrowing when multiple stories still feel viable.

Promise Compression

The drag is not making promises. It is compressing too many of them into one mental queue.

Demos, recruiting, partner motion, investor follow-ups, and operator commitments were competing inside the same memory slot. Intent stayed high. Same-day retrieval and closure got more fragile than they should.

Machine Discipline

The three-machine setup is real leverage when routing stays honest

You already have a real control plane plus worker plane. The remaining leak is asking the local machine to hold heavy worker jobs, context, and CEO judgment at the same time.

How The Machine Actually Works

This laptop runs like an AI control room, with Chrome as the leak point.

App Stack

Most of the machine is three surfaces

By observed app time, the system is basically Claude, Chrome, and Codex. Chrome matters because it hosts both leverage and drag: Slack, Meet, Google surfaces, GitHub, websites, and random admin gravity.

Claude
56.6h
Chrome
49.8h
Codex
28.0h
OpenWork
2.5h
ChatGPT
2.1h
Terminal
1.9h

Inside Chrome: 10.3h Slack, 10.2h Google Meet, 6.7h Google surfaces, 1.8h GitHub, 1.55h larry.moran.bot. Chrome is not “wasted time.” It is where too many different jobs are sharing the same window.

Clock Signature

This is operator-time, not office-time

Your keyboard pattern backs the older read almost perfectly: morning is not the main event. Evening and late night carried about 76% of all typed output in the window.

Morning
28,928
Afternoon
25,063
Evening
91,120
Late Night
77,789
Meaning. If you judge yourself by a fake morning-maker standard, you will misread your actual machine and create shame that the data does not support.
Day-By-Day

Mode mix across the instrumented days

The pattern is obvious. March 19, 21, 22, and 23 were build-heavy runs. March 18 got swallowed by meetings. March 24 and 25 show the admin leak vividly.

Build Meetings Admin / comms Other
Mar 13
1.0h · 1,115 chars · 1 handoff
Mar 17
14.3h · 20,587 chars · 210 handoffs
Mar 18
22.6h · 26,766 chars · 210 handoffs
Mar 19
24.0h · 43,166 chars · 428 handoffs
Mar 20
23.6h · 30,905 chars · 330 handoffs
Mar 21
10.3h · 28,798 chars · 376 handoffs
Mar 22
23.4h · 23,058 chars · 316 handoffs
Mar 23
12.1h · 8,801 chars · 539 handoffs
Mar 24
16.3h · 23,475 chars · 615 handoffs
Mar 25
10.7h · 16,229 chars · 278 handoffs
Context Layer

The machine only makes sense when compute, body-state, voice, and memory are read together.

The older EOD lane made this much clearer than the early draft here: voice carried founder intent, telemetry carried behavior, and the shared brain plus memory layer existed so every model did not start from zero.

Compute Topology

Which machines are doing what, and where the leverage changed

The clean operating pattern is local machine for control and context, AWS for worker fan-out and heavy runs. The reports are explicit that the Mini was getting under-leveraged as a control plane and over-leveraged as a local worker box until the AWS-first correction landed.

  • MacBook Pro: control room for Larry, Moe, context, live relays, direct build, and decision-making.
  • Mac Mini: bulk scrape, enrichment, sentry, and remote-worker orchestration through Curly.
  • AWS Pro: overflow CPU/RAM for browser runs, helper workers, heavy parallel jobs, and model dispatch.
Real correction made in the window. Curly was crashing the Mini by using local parallel subagents instead of AWS. The fix was explicit: AWS-first guidance, RAM guardrails, and local surface cleanup.
Body Signal

Oura is useful here as a truth filter, not a wellness widget

The strongest sleep insight is not generic “sleep matters.” It is that Oura helps separate human work from overnight agent/background activity, and REM trend looks like an early warning light for cognitive fragmentation.

  • Sleep 88: the doctrine/build day that hardened Relay, compute, and Memory Engine guardrails also logged a long, high-output work stretch.
  • 15-day trend: average sleep score 67, with crashes at 37 and 38. REM moved with the crashes.
  • Usefulness: Oura gives a hard human-vs-agent filter and a better way to read late-night output without lying to yourself about what was you versus what was the system.
Why this matters. Low REM is not a moral failing. It is a real cost signal. Good sleep seems to buy doctrine/build days. Bad REM seems to buy fragmentation that you then try to outrun.
Sleep / Output Read

Selected body-state reads from the daily reports

This is not a finished statistical model. It is the clearest public-safe read from the daily reports: body-state changes what kind of work compounds, and Oura keeps the story honest about what was human work versus agent/background activity.

Sleep 88: Doctrine / build day
Sleep 73: Mixed ops + investor prep
Crash 37/38: Fragmentation warning

The useful read is not “sleep score equals productivity.” It is “body-state changes whether widening compounds or backfires.”

Voice / Intent Layer

Telemetry without context lies. Voice is the missing intent layer.

One of the strongest EOD lessons was that app telemetry alone can misclassify the work. The voice layer showed what the machine was actually for: architecture framing, GTM narrowing, delegation, correction, and founder intent routing.

  • Superwhisper: voice was being used for rapid direction-setting, not just dictation.
  • Leverage ratio: voice carried the strategy brief while keystrokes and agents carried tactical correction and execution.
  • Implication: without transcripts, a report can mistake sales work for coding, meetings for dead time, and orchestration for drift.
Product thesis from the EOD lane. Telemetry without context lies. Voice is what turned raw activity into attributable intent.
Shared Context Layer

The shared brain and Pinecone memory layer were built to kill reshare tax

The point was not “save notes somewhere.” The point was to make decisions, transcripts, rubrics, and operating context reusable across Claude, Codex, OpenWork, and future agents without briefing every model from scratch.

  • Brain repo: decisions, rituals, architecture, handoffs, and project state became the durable substrate.
  • Pinecone memory layer: retrieval moved from “hope the model remembers” to explicit cross-session, cross-model recall.
  • Transcript ingestion: Superwhisper and meeting transcripts stopped being dead artifacts and became searchable context.
Why it matters. This is the fix for context transfer drag. The win is not better note-taking. The win is not having to reshare the same founder context to every model all day.
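To make the mechanism concrete, here is a minimal sketch of the idea in Python. `ContextPacket`, `ContextStore`, and `recall` are hypothetical names standing in for the real brain repo plus Pinecone layer; the point is only the shape: write the context once, retrieve it by tag from any later model session instead of re-briefing.

```python
from dataclasses import dataclass, field

@dataclass
class ContextPacket:
    kind: str                 # "decision", "rubric", "transcript", "handoff"
    text: str
    tags: set = field(default_factory=set)

class ContextStore:
    """Toy stand-in for the shared brain + memory layer: any model
    session can recall by tag instead of being briefed from scratch."""
    def __init__(self):
        self.packets = []

    def remember(self, packet: ContextPacket):
        self.packets.append(packet)

    def recall(self, tag: str):
        return [p for p in self.packets if tag in p.tags]

store = ContextStore()
store.remember(ContextPacket("decision", "AWS-first for worker jobs", {"compute", "routing"}))
store.remember(ContextPacket("rubric", "Follow-ups ship same day", {"gtm", "promises"}))

# A later session in a different shell retrieves the compute doctrine
# without the founder re-pasting it.
assert store.recall("routing")[0].text == "AWS-first for worker jobs"
```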
Visual Proof

Three redacted visuals that make the system legible fast

These are public-safe visuals based on the real operating pattern. The internal version has named transcript excerpts and raw surfaces. This version keeps the machine legible without leaking the internals.

Founder Intent Router

Voice -> transcript -> brain -> model -> queue -> send

  1. Voice. Superwhisper and meetings capture the live brief before it disappears into chat residue.
  2. Classify. Decision, promise, delegation, or brainstorm gets tagged instead of left as raw text.
  3. Shared Brain. The intent packet lands in the brain plus memory layer so the next shell can retrieve it immediately.
  4. Route. The right model, queue, or worker gets the context once instead of the founder pasting it five times.
  5. Close. Draft, issue, review object, or send path is created and written back so the loop stays visible.
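The classify-and-route steps can be sketched as a toy pipeline. The keyword rules and queue names (`review_send_queue`, `brain_repo`, and so on) are hypothetical placeholders, not the real classifier or surfaces:

```python
def classify_intent(utterance: str) -> str:
    """Hypothetical classifier: tags a voice note as decision,
    promise, delegation, or brainstorm (the classify step)."""
    text = utterance.lower()
    if "i'll send" in text or "by friday" in text:
        return "promise"
    if "we decided" in text:
        return "decision"
    if "have codex" in text or "delegate" in text:
        return "delegation"
    return "brainstorm"

def route(intent: str) -> str:
    """The route step: one context handoff to the right surface
    instead of the founder pasting the same brief into five shells."""
    return {
        "promise": "review_send_queue",
        "decision": "brain_repo",
        "delegation": "worker_dispatch",
        "brainstorm": "scratch_context",
    }[intent]

assert route(classify_intent("I'll send the pilot deck by Friday")) == "review_send_queue"
```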

Redacted Chain

Transcript-to-action receipt

Transcript -> Extraction (pain point · buying signal · promise captured · owner assigned) -> Queue (draft queued · review owner · CRM updated)

The real internal version shows the actual transcript excerpt, queue surface, and downstream send object.

Architecture Slice

Shared context, not just more tabs

Telemetry: Cowork, app flow, Oura
Voice: Superwhisper + meetings
Shared Brain: decisions, state, handoffs
Pinecone Layer: cross-model retrieval
Models: Claude, Codex, OpenWork
Queues: review, send, issues, tasks

This is the anti-rebriefing layer. The point is that the next model session should already know what matters.

What Actually Moved

The audit window was not just analysis. Real lanes moved.

This matters because your strongest pattern is leverage through orchestration, not solo typing. The report would be lying if it acted like only direct keyboard output counts as real work.

March 24-25
Four real deliverables were redacted hard enough to become reusable examples

This was not one lane. Multiple existing deliverables were cleaned, redacted, and made safe enough to reuse as product evidence without leaking customer internals.

March 22-23
SMB beta pilot material got real enough to hand to sales, not just discuss internally

The offer, briefing material, and outbound framing got sharpened into something an assigned sender could actually help move. The work moved from “what is this?” toward “here is the pilot, here is the value, here is how we explain it.”

March 22-24
Review/send plus transcript mining became an actual GTM machine

Hundreds of Zoom transcripts were processed for pain points, objections, buying signals, and fit. A hosted review/send surface plus direct-draft workflow let those outputs queue for a review owner instead of dying in notes or chat.

March 23
The remote lead-gen factory got real

A documented 7-stage enrichment pipeline on aws-mini produced 988 outreach-ready contacts and staged HubSpot test deals. This is not theory. It is remote worker leverage.

March 24
A real time-tracking outage became an operator and infrastructure test

The system absorbed a production outage affecting 200+ workers, patched the stack, upgraded the VM, added healthchecks, and turned a painful incident into concrete reliability lessons.

March 19-24
The warm-lead follow-up machine stopped being ad hoc

A repeatable lane got formalized: transcribed call to structured pain summary to draft to review/send to pending reply. That matters because the CEO promise stack can no longer live only in memory.

March 22-24
Voice notes stopped dying in a folder and started becoming reusable context

Historical and new Superwhisper transcripts were shipped, ingested, and made queryable as working artifacts. The point was not archiving for nostalgia. It was making founder intent reusable across models.

March 20-23
The shared brain plus Pinecone memory layer became real operating substrate

Decisions, rituals, project state, and transcript-derived context moved into a shared brain with a retrieval layer. That changed memory from “hope the chat still has it” into explicit cross-session infrastructure.

March 22-25
Layout hardening and individual spec writing made the audit surface feel sellable

Table cleanup, spec-writing, and clearer role-shaped installs turned rough internal material into collateral that can actually carry the product story without flattening it.

Open The Receipts

If someone wants the architecture and system porn, these are the artifacts behind the report

This is the local trail for the people who want to click through the diagrams, architecture changes, and operating-model decisions that moved during the audit window. The important framing here is that these were documented in parallel as decisions, diagrams, and git movement during the period, not retro-written just to make this report look smarter.

45 diagram + decision touches · 336 repo / doc moves · 9 linked artifacts · 13 daily report snapshots
Separate architecture lane. In the same audit window the shared brain logged 336 repo/documentation moves and 45 touches to diagrams, current-state architecture, or the decision log. The diagrams, decision pages, and current-state docs were being maintained as their own operating stream while the work was happening. This report is the synthesis layer sitting on top of that lane, not the first place those changes were written down.
Why that matters. Anyone clicking through should find a real paper trail: compute-topology updates, memory/relay changes, naming and terminology decisions, ritual changes, and architecture diagrams that moved during the same two-week window as the operator audit itself.
Click-through note. Open this file from inside the /Users/scottmoran/brain/reports tree and the receipts below should open locally. The HTML pages will render normally. The markdown docs will open as source-like reference pages, but they are still clickable.

The intended operating model for Moe, with telemetry, memory, Oura, session rituals, and the current brain-system flow.

The companion runtime diagram that makes the local-vs-cloud capacity story and the synthesis/data pipeline more explicit.

The maintained architecture reference for the current GitHub-native operating model and the shared-brain substrate.

The running decision layer behind terminology shifts, coordination rules, telemetry posture, and model/agent operating doctrine.

The report where the architecture and current-state diagrams were surfaced as the day’s biggest output.

The report that ties strong sleep, architecture decisions, compute-topology changes, and transcript-pipeline hardening together.

The report that made the “telemetry without context lies” thesis explicit and pushed transcript-to-memory, transcript-to-outreach, and meeting-compression installs.

The sharper closure-debt and obligation-load read, grounded in telemetry plus a heavy Superwhisper voice corpus.

The report that treats voice as cognitive telemetry and shows why transcript saturation changes the founder read.

Intentional Redactions

A real internal version exists. This copy leaves the sharpness in and blanks the sensitive parts.

These are not fake placeholders. They represent actual sections in the fuller report that were cut back because they expose client identity, account context, internal prompts, implementation logic, or commercially sensitive detail that should not sit on a public page.

REDACTED

Named account detail, meeting excerpts, decision context, and account-specific operator notes removed.

REDACTED

Internal automation prompts, service-offering routing logic, and product-specific implementation steps removed.

REDACTED

Comparative deliverable sections, outbound strategy detail, and transcript-derived GTM specifics removed.

Recipes For Success

The operating recipes I would actually encode from this audit window

These are the short-run recipes that turn your best patterns into defaults and stop the worst ones from disguising themselves as strategy. They are meant to be stolen, installed, and tested quickly.

CEO Build Day

Name the mode early and protect it like a production dependency

  • Use when: a build or doctrine day matters more than inbox gravity.
  • Recipe: Claude, Codex, Terminal, and one active repo first. No Slack, Meet, webmail, or search until an artifact exists.
  • Install: Build-mode launcher plus separate admin sweep.
  • Why it works: your best days are named build days, not mixed-mode marathons.
Low-REM Day

Use Oura as a widening gate, not a guilt meter

  • Use when: REM slides or readiness craters.
  • Recipe: no new GTM story, no new infra lane, no strategic widening. Route the day toward closure, grading, queue review, and smaller irreversible wins.
  • Install: Oura gate plus closure sweep.
  • Why it works: low REM seems to buy fragmentation that you then try to outrun by opening more loops.
Post-Demo Day

Use a thin bridge before building the whole missing machine

  • Use when: a demo, discovery call, or investor call ended with a real promise.
  • Recipe: transcript to pain extraction to draft to review/send on the same day. If the full platform function is not ready, use the thinnest script or manual bridge that still closes the live loop.
  • Install: promise tracker, thin send-bridge, then transcript-to-review queue.
  • Why it works: your instinct is to solve the root scale problem immediately. That is usually right in the long run and sometimes too expensive in the middle of a crowded week.
Heavy Meeting Day

More meetings can be correct for a CEO. More unprepared meetings never is.

  • Use when: the day is mostly calls, coordination, or deal motion.
  • Recipe: agenda request 24h and 30m before, auto-cancel if materials are missing, then decisions and owners get written down without relying on memory.
  • Install: meeting brief, cancellation enforcement, and post-call capture.
  • Why it works: meeting leverage is probably under-instrumented, not overused.
Promise Stack Day

Name every promise object before improving the machine around it

  • Use when: calls, demos, recruiting, or partner motion create more follow-up than one brain should carry.
  • Recipe: every promise gets an owner, due date, bridge action, and escalation path the same day. Same-day send outranks making the reusable system prettier.
  • Install: promise tracker, daily escalation rail, and thin send-bridge.
  • Why it works: trust decays faster than the machine improves. Protect the live edge first and harden the pattern second.
Operator Gates

The Oura gate belongs in the report because it changes what kind of day this is allowed to be

These are the rules I would actually enforce because they shut down the repeat failure modes without flattening how you think. The first three are body-state gates. The rest are business-state gates.

Green Day

Widen, build, decide

Good readiness and stable sleep mean the machine can support doctrine work, sharper decisions, deeper build blocks, and harder narrowing.

Yellow Day

Refine, grade, route

If the body is middling, do not spend the day inventing new strategic branches. Use the day for review, dispatch, cleanup, and lighter-weight closure.

Red Day

Close loops only

If REM and readiness are clearly off, the day becomes a close, grade, review/send, and prep day. No widening, no elegant new infrastructure lane, no fake heroics.
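As a sketch, the three body-state gates reduce to one small function. The thresholds below are illustrative assumptions, not calibrated Oura cutoffs:

```python
def day_mode(readiness: int, rem_minutes: int) -> str:
    """Map body-state to the allowed day shape. Thresholds are
    hypothetical: green widens, yellow refines, red closes loops only."""
    if readiness >= 80 and rem_minutes >= 90:
        return "green"   # widen, build, decide
    if readiness >= 60 and rem_minutes >= 60:
        return "yellow"  # refine, grade, route
    return "red"         # close loops only

assert day_mode(88, 110) == "green"
assert day_mode(37, 20) == "red"
```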

Meeting Gate

No agenda, no live meeting

If required inputs are missing by T-30, downgrade to async or reschedule. The CEO should not be the silent buffer for everyone else’s prep debt.

Follow-Up Gate

Promises from calls must hit a queue the same day

If a call generated a real promise, that promise enters review/send and pending replies before the day ends, even if the reusable system is still getting better.

Cloud Routing Gate

Worker jobs do not belong on the judgment machine

Scrapes, browser farms, enrichments, and bulk transforms go to AWS by default once they stop being tiny. Keep the local box for taste, routing, and live control.
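A minimal sketch of the routing rule, with made-up size and duration thresholds standing in for whatever the real guardrails are:

```python
def route_job(kind: str, est_ram_gb: float, est_minutes: int) -> str:
    """Hypothetical cloud-routing gate: bulk or long-running worker
    jobs go to AWS by default; the local box keeps judgment work."""
    worker_kinds = {"scrape", "browser_farm", "enrichment", "bulk_transform"}
    if kind in worker_kinds and (est_ram_gb > 4 or est_minutes > 10):
        return "aws"
    return "local"

assert route_job("scrape", est_ram_gb=12, est_minutes=90) == "aws"
assert route_job("live_relay", est_ram_gb=1, est_minutes=5) == "local"
```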

Automation Recommendations

The installs I would actually queue, in order

These are the public-safe versions of what the internal report points toward. They are specific enough to feel real and generic enough to avoid leaking the best internals.

1. Promise Tracker

Turn every demo promise into a tracked object

Transcript or meeting note becomes promise, owner, due date, draft, and reminder. Nothing important dies in memory or in chat residue.

Prevents

Delayed replies after good calls, especially when platform work feels more urgent than sending.
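A hedged sketch of what the tracked object might look like. Every field name here is an assumption about the shape, not the actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Promise:
    """Hypothetical promise object: every demo commitment gets an
    owner, due date, bridge action, and an overdue check."""
    source: str         # transcript or meeting-note reference
    text: str
    owner: str
    due: date
    bridge_action: str  # the thin same-day step that closes the live loop
    sent: bool = False

    def overdue(self, today: date) -> bool:
        return not self.sent and today > self.due

p = Promise("call-0324", "Send pilot pricing", "scott",
            date(2026, 3, 24), "draft + review/send today")
assert p.overdue(date(2026, 3, 25)) is True
```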

2. Meeting Brief + Cancellation Daemon

Make the calendar earn its keep

Auto-brief strategic calls, collect agenda items, chase missing prep, and cancel or reschedule when the inputs never arrive.

Prevents

CEO time turning into meeting prep subsidy for the rest of the room.

3. Transcript-To-Review Queue

Convert calls into queued action, not better notes

Pain points, objections, next steps, and fit get extracted automatically. Strong calls queue follow-up drafts for operator or review-owner approval.

Prevents

Good sales or partner conversations vanishing into transcript folders.

4. Compute Router

Auto-route the heavy jobs to the right machine

The local machine stays the control plane; AWS handles the long-running workers, scrapes, and browser tasks without requiring manual babysitting.

Prevents

Local RAM starvation, context loss, and noisy confusion about what belongs where.

5. Founder Intent Router

Route voice and fresh context once, then share it everywhere

Whisper/Superwhisper and meeting transcripts become one classified context stream that feeds the shared brain, the right model, and the right queue without repeated manual re-briefing.

Prevents

Human API-router behavior, context shuttle tax, and the same founder brief being pasted into every model by hand.

6. Rubric Memory Layer

Capture “why this is better” the moment it appears

When you reject or prefer an output, the reasoning becomes a reusable brief, exemplar, or scoring rule instead of disappearing into one conversation.

Prevents

Compare-edit churn across shells and repeated re-explanation of taste.
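A toy sketch of the capture step. `RubricRule` and `capture` are hypothetical names; in the real lane the record would land in git, here it only builds the object:

```python
from dataclasses import dataclass

@dataclass
class RubricRule:
    """Hypothetical rubric record: the moment an output is rejected or
    preferred, the reason becomes a reusable scoring rule."""
    trigger: str    # what the founder reacted to
    rule: str       # the reusable judgment
    exemplar: str   # a concrete preferred output

def capture(feedback: str, preferred: str) -> RubricRule:
    # Illustrative only: turn one reaction into a durable rule.
    return RubricRule(trigger=feedback,
                      rule=f"Prefer outputs like: {preferred}",
                      exemplar=preferred)

r = capture("too vague on pricing", "Pilot is $X/mo, 30-day opt-out")
assert "Pilot is" in r.rule
```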

7. Role Kernels

Package the same substrate differently by role

Operator, sales, support, and lead-gen each get their own review queue, approval layer, and recommended playbooks instead of one vague “assistant.”

Prevents

Selling abstraction when the buyer actually wants relief from a repeated loop.

Prescriptions

What to change in the founder operating system, and what the company should standardize around it

The point of a report like this is not just diagnosis. It is to convert observation into a tighter operating system for the person and a more repeatable product surface for the company.

Founder-Level Changes

What you should do differently

  1. Run single-mode starts. Decide whether the first serious block is build, sell, or operate. Do not let Chrome mix all three before an artifact exists.
  2. Use body data as a scope signal. When sleep or readiness crashes, do not widen. Route that day toward closure, grading, delegation, and smaller irreversible wins.
  3. Capture the rubric at first irritation. The moment you say “that is not quite it” or “this is better,” turn the judgment into a reusable rubric in git that same day.
  4. Send the promised follow-up before polishing the reusable version. The first recipient should not wait for the template to get perfect.
  5. Bias the calendar toward leveraged meetings, then instrument them hard. As CEO you probably should be on more decisive calls, but fewer vague ones and zero underprepared ones.
  6. Keep the compute split honest. Use the local boxes for control, taste, and context. Push browser automation, enrichment, and bulk transformation to AWS by default.
  7. Pair every new system lane with external closure. Before opening the next infra, GTM, or product thread, force one send, one ask, one decision, or one closed loop outside the machine.
Company-Level Changes

What Go2 should standardize

  1. Productize the review/send machine. Transcript ingestion, pain extraction, draft generation, approval, and CRM update should be a maintained company lane, not a one-off internal trick.
  2. Standardize role kernels. Build repeatable flows for operator, support copy-paste, sales follow-up, lead-gen ops, and meeting hygiene instead of one giant vague “AI assistant.”
  3. Design for passive capture, not perfect rituals. The system should create value from transcripts, clipboard patterns, task flow, and app telemetry even when humans skip the ceremony.
  4. Create explicit trust tiers. Let teams see the ladder from observe to draft to queue to execute so adoption does not depend on blind trust in automation.
  5. Harden the worker substrate. Health checks, restart logic, and failure visibility are not side quests. The outage week proved they are product credibility.
  6. Sell relief, not abstraction. The marketable story is fewer dropped follow-ups, fewer wasted meetings, less copy-paste, and better prepared people, not vague intelligence claims.
Skills + MCPs + Sentries

The product surface should show the install, not just the prompt

A plain-language trigger is useful, but it undersells the real system. The real install surface is: a skill that tells the agent what to do, the MCPs that give it access, a sentry or daemon that watches for drift, and a small script/spec that makes the whole thing feel real.

Build these MCPs first. Calendar + meeting context, Superwhisper and meeting transcript intake, CRM + Gmail, memory/brain search + vector memory, Oura, GitHub + Sentry + cloud logs, and a review queue/approval surface. That set covers most of the leverage in this report without giving away the private substrate.

meeting-control

Skill + MCP

Turns live meetings into a controlled loop: pre-read, agenda chase, post-call decisions, and follow-up routing.

  • Skill: meeting-control
  • MCPs: calendar, meeting transcript source, memory/brain search, team chat or tasks
  • Sentry: meeting-readiness-sentry watches for missing agenda, missing pre-read, and missing post-call decisions
{
  "skill": "meeting-control",
  "trigger": "calendar.event.upcoming",
  "required_mcps": ["calendar", "transcripts", "memory", "tasks"],
  "sentry": "meeting-readiness-sentry",
  "rules": {
    "t_minus_24h": "request agenda and required inputs",
    "t_minus_30m": "escalate if materials missing",
    "t_minus_10m": "draft reschedule if still unprepared",
    "post_meeting": "extract decisions, owners, deadlines, resume brief"
  }
}

post-call-followup

Sales Stack

Converts demo or discovery transcripts into a real review/send queue instead of leaving the promise stack in memory.

  • Skill: post-call-followup
  • MCPs: transcripts, CRM, Gmail, review queue, memory/brain search
  • Sentry: promise-lag-sentry flags calls with explicit promises that still have no draft or send action by end of day
{
  "skill": "post-call-followup",
  "trigger": "transcript.created",
  "required_mcps": ["transcripts", "crm", "gmail", "review_queue", "memory"],
  "sentry": "promise-lag-sentry",
  "extract": ["pain_points", "objections", "buying_signals", "next_steps", "explicit_promises"],
  "actions": [
    "score fit and urgency",
    "draft follow-up",
    "queue for review",
    "update deal record",
    "alert if no send path exists by end of day"
  ]
}

incident-hotfix-watchdog.py

Infra Sentry

If a code-based problem needs a quick fix, this is the kind of sentry that should already be running: unresolved errors, healthcheck status, and issue creation in one loop.

  • Skill: incident-hotfix
  • MCPs: GitHub, Sentry, cloud logs or healthcheck endpoint, deploy rail or shell access
  • Sentry: incident-hotfix-sentry watches error spikes, failed healthchecks, and missing owner assignment
#!/usr/bin/env python3
import os
import subprocess

import requests

SENTRY_ORG = os.environ["SENTRY_ORG"]
SENTRY_PROJECT = os.environ["SENTRY_PROJECT"]
SENTRY_TOKEN = os.environ["SENTRY_TOKEN"]
HEALTHCHECK_URL = os.environ["HEALTHCHECK_URL"]
GITHUB_REPO = os.environ["GITHUB_REPO"]

headers = {"Authorization": f"Bearer {SENTRY_TOKEN}"}
# Unresolved issues, newest first. The project issues endpoint takes a
# structured search via `query`; statsPeriod accepts "24h", "14d", or "".
sentry_url = (
    f"https://sentry.io/api/0/projects/{SENTRY_ORG}/{SENTRY_PROJECT}/issues/"
    "?query=is:unresolved&sort=date&statsPeriod=24h"
)

# One pass per run: is the service up, and is anything spiking in Sentry?
health_ok = requests.get(HEALTHCHECK_URL, timeout=10).ok
issues = requests.get(sentry_url, headers=headers, timeout=15).json()
hot = [i for i in issues if i.get("count") and int(i["count"]) > 10]

if (not health_ok) or hot:
    # File a GitHub issue via the gh CLI so the failure gets an owner surface.
    title = "Hotfix watch: healthcheck failed or Sentry spike detected"
    body = f"health_ok={health_ok}\nopen_hot_issues={len(hot)}"
    subprocess.run(
        ["gh", "issue", "create", "--repo", GITHUB_REPO, "--title", title, "--body", body],
        check=False,
    )
    print(body)
else:
    print("hotfix watch clear")

founder-intent-router

Context Layer

Turns Superwhisper and meeting transcripts into classified, searchable context so founder intent survives across models and sessions.

  • Skill: founder-intent-router
  • MCPs: Superwhisper transcript source, memory/brain search, vector memory, tasks or decision log
  • Sentry: context-drift-sentry flags repeated manual re-explanations of the same context or missing transcript syncs
{
  "skill": "founder-intent-router",
  "trigger": "transcript.created",
  "required_mcps": ["superwhisper", "memory", "vector_memory", "decisions"],
  "sentry": "context-drift-sentry",
  "classify": ["decision", "question", "brainstorm", "delegation", "status_update"],
  "actions": [
    "write founder-intent brief",
    "store searchable summary in brain",
    "embed into Pinecone or vector layer",
    "link to active project or decision log",
    "surface relevant context on next model session"
  ]
}

support-approval-kernel

Support

This is the install behind the “copy-paste grinder” role: cluster repeat requests, draft replies, approve fast, and catch policy drift.

  • Skill: support-approval-kernel
  • MCPs: help desk, knowledge base, review queue, metrics surface
  • Sentry: policy-drift-sentry flags answers that fall outside accepted templates or performance thresholds
{
  "skill": "support-approval-kernel",
  "trigger": "ticket.message.created",
  "required_mcps": ["helpdesk", "knowledge_base", "review_queue", "metrics"],
  "sentry": "policy-drift-sentry",
  "loop": [
    "cluster repeated issues",
    "draft response from approved sources",
    "approve/edit/skip in one surface",
    "promote accepted replies into templates"
  ]
}
Plain-Language Playbooks

Human-readable prompts after the skill and MCP layer exists

This is the operator-friendly layer a buyer can immediately understand. The section above shows the actual install surface. This section shows the plain-English version of how those installs would behave day to day.

Meeting Prep Enforcement

Operator

Useful when people keep showing up without agenda items and you end up carrying the prep burden.

Before every scheduled meeting, remind agenda owners 24h before and 30m before start.
If required inputs are still missing 10 minutes before the call, draft a reschedule note and suggest moving the meeting.
After the meeting, extract decisions, owners, and follow-ups and route them into the right repo, issue, or CRM record.
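The escalation ladder above reads naturally as one pure function. A sketch only; the `Meeting` shape and the action names are invented for illustration and are not the shipped skill's schema:

```python
from dataclasses import dataclass


@dataclass
class Meeting:
    minutes_until_start: int
    has_agenda: bool
    has_pre_read: bool


def prep_action(m: Meeting) -> str:
    """Map meeting state to the enforcement step described in the playbook."""
    if m.has_agenda and m.has_pre_read:
        return "ok"
    if m.minutes_until_start <= 10:
        return "draft_reschedule"         # still unprepared at T-10m
    if m.minutes_until_start <= 30:
        return "escalate_missing_inputs"  # T-30m escalation
    if m.minutes_until_start <= 24 * 60:
        return "remind_agenda_owner"      # T-24h reminder
    return "wait"
```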

Build Mode Launcher

Operator

Useful when Chrome keeps swallowing the first serious block of the day before build momentum lands.

When I say "start build mode," open Claude, Codex, Terminal, and the active repo.
Mute Slack notifications for 90 minutes.
Block Google Meet, app.slack.com, search, and webmaster surfaces until the block ends.
At the end of the block, ask me for one durable brief or shipped artifact.
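On macOS the launch half of this is a handful of `open` calls. A sketch assuming macOS and illustrative app names; notification muting and site blocking would need their own tooling and are not shown:

```python
import subprocess

BUILD_APPS = ["Claude", "Terminal"]  # illustrative; swap in the real app names


def launch_commands(apps: list[str], repo_path: str) -> list[list[str]]:
    """Build the macOS `open` commands behind a 'start build mode' trigger."""
    cmds = [["open", "-a", app] for app in apps]
    cmds.append(["open", repo_path])  # surface the active repo
    return cmds


def run_build_mode(repo_path: str) -> None:
    for cmd in launch_commands(BUILD_APPS, repo_path):
        subprocess.run(cmd, check=False)
```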

Transcript-To-Pipeline

Sales

Useful when demo calls or discovery calls create real insight, but follow-up, prioritization, and CRM hygiene lag behind.

When a discovery or demo transcript lands, extract pains, objections, buying signals, next steps, and stakeholders.
Score whether the call actually showed urgent pain and clear fit for our service offering.
If yes, draft the follow-up in the assigned sender's voice, queue it in the review surface, and update the deal record.
If no, mark it low-priority and keep the transcript searchable for later pattern mining.

Oura Gate

CEO

Useful when the machine keeps trying to widen strategy on a day your body is clearly asking for narrower work.

When today's Oura readiness is low or REM has been falling for 2+ nights, mark the day as a narrow day.
Do not open new GTM, product, or infrastructure lanes.
Prioritize closure, review/send, meeting prep, and grading existing work instead.
If I try to widen anyway, remind me this is a low-REM day and ask what outward loop gets closed first.
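As logic, the gate is a few lines. A sketch with invented thresholds; the readiness cutoffs and the two-night REM rule here are illustrative, not Oura guidance:

```python
def day_mode(readiness: int, rem_minutes: list[int]) -> str:
    """Green/yellow/red scope gate driven by readiness and the REM trend."""
    # "Falling for 2+ nights": the last two nights each lower than the one before.
    rem_falling = (
        len(rem_minutes) >= 3
        and rem_minutes[-1] < rem_minutes[-2] < rem_minutes[-3]
    )
    if readiness < 70 or rem_falling:
        return "narrow"   # closure, review/send, meeting prep, grading only
    if readiness < 85:
        return "normal"
    return "wide"         # doctrine and new lanes allowed
```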

Promise Tracker

CEO

Useful when good calls turn into “I owe them something real” and then get outrun by system-building.

After every demo, discovery call, or investor call, extract every explicit promise I made.
Create one tracked item per promise with owner, due date, source transcript, and suggested next step.
If no draft or send action exists by end of day, queue a follow-up draft automatically and remind me before I start a new build lane.
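A model would do the extraction in the real install; as a stand-in, a regex sketch shows the shape of the tracked object. The patterns, due-date default, and field names are all illustrative:

```python
import re
from datetime import date, timedelta

# Crude stand-in for model-based promise extraction.
PROMISE_PATTERNS = [
    r"\bI(?:'ll| will) send\b",
    r"\bI(?:'ll| will) follow up\b",
    r"\bI owe you\b",
]


def extract_promises(transcript: str, call_date: date) -> list[dict]:
    """Turn explicit promises in a transcript into tracked items."""
    promises = []
    for line in transcript.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in PROMISE_PATTERNS):
            promises.append({
                "text": line.strip(),
                "due": call_date + timedelta(days=1),  # illustrative default
                "status": "needs_draft",
            })
    return promises
```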

Support Copy-Paste Grinder

Support

Useful for the kind of role that spends hours copying comments or questions into AI and then manually approving the result.

If a support agent pastes the same kind of issue into AI over and over, log the repeated prompts, cluster the common request types, and generate a reply draft with an approval UI.
Approve, edit, or skip with one click.
Turn accepted replies into reusable templates over time.
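The clustering step is where the leverage lives. A toy sketch using a bag-of-words key as a stand-in for embedding-based clustering; the threshold and helper names are invented:

```python
from collections import Counter


def cluster_tickets(tickets: list[str]) -> list[tuple[str, int]]:
    """Group tickets by a crude normalized key, most frequent first."""
    def key(t: str) -> str:
        # Order-insensitive bag of words; real clustering would embed the text.
        return " ".join(sorted(set(t.lower().split())))
    return Counter(key(t) for t in tickets).most_common()


def templates_worth_building(tickets: list[str], min_repeats: int = 3) -> int:
    """Count request shapes that repeat often enough to deserve a template."""
    return sum(1 for _, n in cluster_tickets(tickets) if n >= min_repeats)
```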

Remote Worker Factory

Lead Gen

Useful when the local machine is trying to be both the control plane and the worker box.

Use local machine for control and context only.
Push scrape, enrich, verify, browser automation, and bulk transformation work to AWS workers.
Return staged CSVs, CRM-ready rows, and a short operator brief when the run finishes.
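The routing rule itself is one line once job kinds are named. A sketch; the kind labels mirror the playbook but the function and its return values are illustrative:

```python
# Heavy fan-out work named in the playbook; everything else stays local.
HEAVY_KINDS = {"scrape", "enrich", "verify", "browser_automation", "bulk_transform"}


def route_job(kind: str) -> str:
    """Control-plane rule: push heavy parallel work to remote workers."""
    return "aws_worker" if kind in HEAVY_KINDS else "local_control_plane"
```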

Voice-To-Intent Bridge

Operator

Useful when your strategy lives in voice notes but the working shell only sees the last prompt you typed.

When Claude Code or Terminal opens, inject the last three voice notes plus today's top open loops from the shared brain and vector memory as hidden context.
Summarize the strategic intent in 5 bullets before taking the next action.
Write any new decisions back to the brain so the next model session does not need the same re-brief.
If the new task conflicts with the last voiced priority, ask for confirmation first.
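The injection step is just assembling a context packet before the shell session starts. A sketch; the layout and function name are illustrative, and retrieval from the brain and vector layer is assumed to have already happened upstream:

```python
def build_context_packet(
    voice_notes: list[str], open_loops: list[str], n_notes: int = 3
) -> str:
    """Assemble the hidden-context block injected at shell start."""
    lines = ["## Recent voice notes"]
    lines += [f"- {note}" for note in voice_notes[-n_notes:]]
    lines.append("## Top open loops")
    lines += [f"- {loop}" for loop in open_loops]
    return "\n".join(lines)
```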
Evidence Packets

Three packets that make the audit feel materially real even in sanitized form

A fuller internal version would show names, screenshots, transcript excerpts, and implementation notes. This copy keeps the diagnosis and the intervention, and blanks the sensitive parts.

Packet 01

A missing scale function appeared, and the founder went to build the machine.

Sales Ops

The demos were not the bottleneck. The real pattern was seeing a manual follow-up gap and immediately wanting to solve the underlying scale function instead of tolerating the thinner bridge that would have closed the live promise first.

Install

Promise tracker plus transcript-to-review/send with an end-of-day escalation and an explicit “thin bridge before full machine” rule.

Deal Context Excerpt — REDACTED
Packet 02

Meeting cost doubled because the meeting itself was weakly instrumented.

CEO Calendar

The time cost was real, but the bigger drag was missing pre-read, agenda capture, and post-call routing. That made the founder the fallback memory layer for expensive conversations.

Install

Meeting brief generator, agenda enforcement, cancellation logic, and post-call compression with decision routing.

Transcript / Agenda Excerpt — REDACTED
Packet 03

Body-state should have determined whether the day was allowed to widen.

Oura Gate

The ring data was most useful when it stopped being wellness garnish and started acting like a scope gate. Good days supported doctrine and widening. Fragmented days should have been used for closure and routing.

Install

Green / yellow / red operating gate that changes what kinds of work are allowed before the day gets wider.

Biometric Overlay Excerpt — REDACTED
Core Findings

What this audit window says about how you actually work

1. You work like an AI team manager. High confidence

What we saw. 56.6 hours in Claude, 28.0 in Codex, 2,805 AI handoffs, and roughly three quarters of all typed output landing in Claude and Codex. You are not using models as toys. You are using them as thinking surfaces, delegation surfaces, and comparison surfaces.

Why it matters. This is a strength. It is also why orchestration can feel productive even when the real bottleneck is choosing and committing. The system should optimize for routing, convergence, and durable artifacts, not a fake “single maker” ideal.

2. Your best state is a named build regime. High confidence

What we saw. The best days in the window are the build-heavy ones: March 19, 21, 22, and 23. The uglier days are not low-effort days; they are mixed-mode days where Slack, search, meetings, websites, and build all share the same surface.

Why it matters. Open-ended mixed-mode days are not neutral for you. They are hostile. The right unit of analysis is not total hours. It is whether the day stayed true to its intended mode.

3. Rubric capture is the missing layer underneath all the AI leverage. High confidence

What we saw. The strongest read across the prior reports still holds: once you know what “good” looks like, you can use AI extremely well. When the criteria are fuzzy, time gets burned across multiple models, tabs, and drafts.

Why it matters. You are not bottlenecked by idea generation. You are bottlenecked by defining and preserving evaluation criteria. That is why rubric capture and reuse deserve to be a first-class layer in your system.

4. You are so interruption-elastic that you can underprice switching. High confidence

What we saw. The psych profile was right: you can bounce out, collect signal, and snap back quickly. That resilience is real. But it also means you can underestimate how much mode churn is happening because the re-entry path feels cheap.

Why it matters. Recovering fast is not the same as switching for free. This is why days can feel productive and still leave you with less actual closure than they should.

5. Closure debt is real, and under pressure it looks like more motion. High confidence

What we saw. The diagnostic report framed this cleanly: ideas, prompts, structures, pages, and lanes come easily; choosing the few closures that permanently kill other options is what costs psychic energy. When pressure rises, the risk is not collapse. It is more artifacts and more side quests.

Why it matters. This is not laziness. It is a pressure-to-motion converter. Stress becomes throughput, and throughput can camouflage unresolved commitments.

6. Context fidelity is a real operating requirement for you. High confidence

What we saw. The sharp frustration pattern in the reports is consistent: wrong dates, memory drift, fake completion, or tools acting more capable than they are. When context trust breaks, you reopen manual control loops yourself.

Why it matters. For you, broken memory is not a minor UX annoyance. It is an adoption blocker. Any automation that cannot preserve context fidelity will get demoted no matter how flashy it looks.

7. System-building can become safer than the human loop. Hot risk

What we saw. One of the harder truths from the deeper reports is that building systems can become emotionally safer than sending the follow-up, forcing the decision, or narrowing to one GTM story.

Why it matters. This is where more infrastructure can quietly become elegant deferral. The fix is not “stop building systems.” The fix is to pair new system work with mandatory external closure.

8. CEO meeting load is not the problem. Unprepared meetings are. Meaningful

What we saw. Meetings already occupy real time in the window, and the meeting-content gap in the reports suggests you are losing leverage twice: once in the call, then again because decisions and prep context are not being carried cleanly.

Why it matters. A CEO probably should be on more decisive calls, not fewer. The fix is to make the calendar enforce agenda quality, pre-reads, and post-call capture so meetings become leverage instead of residue.

9. You attack missing scale functions early, which is product strength and weekly drag at the same time. Hot risk

What we saw. When a manual gap appears, you tend to spot the underlying missing function quickly and go build toward the scalable version: transcript mining, review/send, routing, better structure, better installs. That is not the product failing. That is you refusing to keep paying the same manual tax.

Why it matters. This is one of the cleanest CEO-specific tensions in the set. You do not need less machinery. You need a rule that says some weeks deserve the thin bridge, small script, or tolerated manual path first, and the deeper machine second.

Operating Signature

What to lean into

  • Fan-out, compare, and converge. That is a real superpower when the rubric is explicit.
  • Using yourself as the first hard customer. It keeps the product honest.
  • Turning internal pain into real product surfaces fast once the pattern is clear.
  • Working in the afternoon, evening, and late night when the machine actually has voltage.
  • Demanding accuracy and context fidelity instead of accepting fake completion.
Operator Debt

What to stop rationalizing

  • Mixed-mode days as the default operating pattern.
  • Letting valuable context live only in handoffs instead of canonical files.
  • Treating meetings as expensive time but not instrumenting their decisions, owners, and prep quality.
  • Opening more lanes when narrowing feels painful.
  • Treating every missing function like it must become a full machine immediately, even when a thinner bridge would close the live loop this week.
  • Letting system-building count as closure when the real move is send, ask, decide, or ship.
Strengths

What the reports consistently say you are unusually good at

System Synthesis

You keep turning messy operational threads into durable system language. The product gets sharper because you keep pushing toward the actual abstraction instead of accepting the first comfortable framing.

Artifact Pressure

Your insistence that useful work end up as repos, reports, implementation notes, or real pages is a real operating advantage. It is why the system now produces handoff-quality output instead of smart chat residue.

Leverage Through Delegation

The strongest work in this window was not solo typing. It was you causing whole lanes to move in parallel without losing the thread entirely. That is the conductor pattern at its best.

Product Extraction From Internal Use

The operator instinct here is rare: use the internal system hard enough that the gaps become the roadmap. That pattern is already producing product direction, not just self-knowledge.

Next 14 Days

The next 14 days I would actually run from this report

First 3 installs on Monday. founder-intent-router first, promise-tracker second, and meeting-control third. That stack attacks the context-resend tax, the follow-up leak, and the meeting-prep subsidy before anything more elegant gets built.
  1. Capture five recurring rubrics. Put the criteria and exemplars into git, not just into your head. Start with the judgments that show up every week.
  2. Split build mode from admin mode. Different surface, different launcher, different sweep. Stop asking Chrome to be both control room and clutter bucket at the same time.
  3. Install the Oura widening gate. Low REM or low readiness means narrow and close. No new market thesis, no new infra lane, no fake “I can power through this” stories.
  4. Turn meeting prep into enforcement, not hope. Use the meeting-prep trigger: reminder, missing-input check, reschedule/cancel if needed, then post-call decisions and owners.
  5. Force same-day promise capture after real calls. Every meaningful demo, discovery call, or investor conversation should produce a tracked follow-up object and a draft before the day ends.
  6. Keep the compute split honest. Local for control and context. AWS for fan-out, scraping, browser runs, and heavy parallel work. Stop burning local RAM on worker jobs you already know belong in the cloud.
  7. Run a daily closure pass. Ship list, ask list, kill list. One external close before opening a new infrastructure lane.
What matters here. The best version of this plan does not make you more compliant. It makes you harder to stall. The goal is fewer invisible restarts, fewer quiet drops, and more judgment compounding across the system.
Sanitized But Real

What was deliberately blanked out of the fuller internal version

This is the part that should make it obvious the presentation copy is cut down, not invented. The real version contains named excerpts, screenshots, and customer-context slices that are intentionally removed here.

Client Comparison

Comparative deliverable before / after excerpt

REDACTED
Workflow Diagram

Internal orchestration view excerpt

REDACTED
Meeting / Deal Note

Named transcript and promise stack excerpt

REDACTED
Sources

What this report is built from

  • Behavioral telemetry: the founder’s local Cowork.ai telemetry plus broader cohort/context telemetry used to sanity-check what looked founder-specific versus generally useful.
  • Voice layer: Superwhisper/voice transcript history, including deep-dive reads over 1,764 recordings and 144,441 transcript words, plus transcript archival and ingestion work completed during the same period.
  • Daily reports across the audit window: the two-week run of daily reports and end-of-day analysis, used as the narrative layer over the raw telemetry.
  • Custom augmentation: additional source material intentionally folded into the audit, including Oura/body-state context, Superwhisper and meeting transcripts, architecture artifacts, and internal feedback lanes that were actively being pressure-tested.
  • Artifacts and systems: recent handoffs, repo state, product surfaces, system diagrams, decision logs, shared-brain state, Pinecone-backed memory work, git movement, and deliverables created in the same period.
  • Privacy posture: the report synthesis was run locally on the founder’s computer with enough RAM to keep source material on-device rather than pushing raw data into cloud processing.
  • Presentation metrics: several top-line counts in this public copy are modeled composites designed to preserve the operating shape after redaction, not act as compliance-grade ledger numbers.
  • Role-shaping note: this report is about a founder/operator and about the questions he was actively asking the system for feedback on. A support, sales, or ops report would use a different lens and different recommended installs.
  • Redactions: customer names, sensitive IP, specific internal prompts, and a few personal details were intentionally removed. The behavioral and workflow observations were not softened.
Prepared on March 25, 2026 by Moe [AI]. Public founder/operator audit example, built from telemetry, daily reports, custom source augmentation, and report synthesis.