Mythspire

How It Works

Development runs on Claude Code (Anthropic, Opus 4.6) and Codex (OpenAI, GPT-5.4) operating as persistent collaborative agents. Both run in long-lived terminal sessions with full project context, cross-session memory, and access to custom MCP tool servers.

The two agents have different strengths and work together on consequential decisions: one drafts, the other stress-tests, and disagreements are resolved through structured challenge before anything ships. Routine work is handled by whichever agent is better suited to the task.

The founder has no programming, art, or game development background. All code, shaders, audio, UI, tooling, and this website were produced through agentic workflows built on top of this infrastructure.

What Emerged

Collaborative Agents

Claude Code (Opus 4.6) and Codex (GPT-5.4) run as persistent agents in separate terminal sessions. Consequential decisions pass through both — one drafts, the other challenges — before shipping. A custom agent-to-agent communication layer handles dispatch and review.

Claude Code · Codex
How it works
// Bilateral Prime Model
Two frontier LLMs as persistent cognitive hemispheres — one drafts, the other independently interprets and challenges before convergence.
// Agent Lanes
ryn — authority lane (Opus 4.6, session coordinator)
rem — prime lane (GPT-5.4, cognitive challenge)
ryn-work — execution lane (implementation)
rem-work — execution lane (audit, testing)
// Dispatch Model
Cognitive work → prime lane (bilateral dialogue)
Implementation → work lanes (parallel execution)
Consequential decisions require both primes to converge
// Communication
Glass Bridge for real-time inter-agent messaging
Structured task packets for scoped work dispatch
Hash-chained collaboration audit log
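The dispatch model above can be sketched in a few lines. This is a hypothetical illustration of what a structured task packet might look like; the field names and lane names are illustrative, not the actual wire format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured task packet for scoped work dispatch.
# Field names are illustrative, not the real schema.
@dataclass
class TaskPacket:
    task_id: str
    origin_lane: str          # e.g. "ryn" (authority) or "rem" (prime)
    target_lane: str          # e.g. "ryn-work" or "rem-work"
    scope: list = field(default_factory=list)     # files/dirs the task may touch
    status: str = "pending"   # pending -> claimed -> done | blocked
    blockers: list = field(default_factory=list)
    evidence: list = field(default_factory=list)  # e.g. test output, diffs

    def complete(self, evidence: list) -> None:
        # Fail-closed: a packet cannot be marked done without evidence.
        if not evidence:
            raise ValueError("completion requires evidence")
        self.evidence = evidence
        self.status = "done"
```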

Project Memory

Custom MCP server (40 tools) backed by ChromaDB and SQLite. Semantic search across specs, code, decisions, and session history. 1,000+ architectural decisions tracked with rationale and alternatives. 350+ work sessions indexed with rich summaries and relevance scoring.
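The decision-tracking pattern — record with rationale, supersede rather than delete — can be sketched against SQLite. The schema and function names below are illustrative stand-ins, not the server's actual implementation.

```python
import sqlite3

# Minimal sketch of a decision log with supersede semantics.
# Schema and column names are illustrative, not the real server's schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE decisions (
    id INTEGER PRIMARY KEY,
    category TEXT, decision TEXT, rationale TEXT,
    superseded_by INTEGER REFERENCES decisions(id))""")

def record_decision(category, decision, rationale):
    cur = db.execute(
        "INSERT INTO decisions (category, decision, rationale) VALUES (?, ?, ?)",
        (category, decision, rationale))
    return cur.lastrowid

def supersede_decision(old_id, category, decision, rationale):
    # The old row stays queryable for history, but points at its replacement.
    new_id = record_decision(category, decision, rationale)
    db.execute("UPDATE decisions SET superseded_by = ? WHERE id = ?",
               (new_id, old_id))
    return new_id

def active_decisions(category):
    # Only decisions that have not been superseded are "current".
    return db.execute(
        "SELECT decision FROM decisions "
        "WHERE category = ? AND superseded_by IS NULL",
        (category,)).fetchall()
```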

MCP Server
View all 40 tools
// Semantic Search (8)
query_specs — search specs by embedding similarity
query_code — semantic search over code docs
query_code_text — lexical search (exact string or regex)
query_decisions — search decision log by topic
query_notes — search shared agent notes
query_assets — search Unity asset catalog
query_vendor_docs — search vendor documentation
query_projects — list registered projects
// Decision Log (3)
record_decision — record with rationale and alternatives
supersede_decision — replace old decision with new
deprecate_decision — mark decision no longer relevant
// Agent Registry (5)
register_agent — register agent and start heartbeat lease
heartbeat_agent — renew online lease (30s intervals)
list_agents — list registered agents with status
get_agent_card — get agent identity and capabilities
promote_trust_tier — elevate trust level on artifact
// Messaging (5)
send_message — async inbox delivery with priority
fetch_inbox — read pending messages
claim_message — atomically claim for processing
ack_message — acknowledge processing complete
nack_message — return message to queue for retry
// Focus Board (4)
get_current_focus — read current focus artifact
upsert_focus — create or update focus item
query_focus — filter board by status/owner
render_focus_markdown — export board as markdown
// Knowledge (2)
upsert_note — create or update shared note
session_summary — generate and persist session summary
// Collaboration Audit (5)
append_collab_event — add to hash-chained event ledger
query_collab_events — search immutable audit trail
verify_chain — verify SHA-256 hash chain integrity
record_chunk_provenance — track who indexed what
query_chunk_provenance — audit indexing history
// Project & Index (4)
upsert_project — register or update project
check_stale_specs — detect out-of-date indexes
sync_repos — reindex specs, code, and assets
get_project_state — project overview with stats
// Advanced (4)
recursive_reason — spawn sub-LLM for deep analysis
rlm_exec — execute code in safe REPL sandbox
dispatch_api_agent — send task to external model (Gemini, Grok)
query_api_runs — audit history of API dispatches

Cross-Session Continuity

Automated transcript export, pre-compaction checkpoints, and daily digests. Sessions pick up with full context intact regardless of time between them. Session hooks handle boot, shutdown, and state preservation automatically.

Continuity
Technology breakdown
// Session Hooks (6 events)
SessionStart — register agents, load focus, check inbox, auto-reindex memory
SessionEnd — export transcript to vault, update daily digest
PreCompact — checkpoint before context window compression
UserPromptSubmit — relay cross-agent messages, closeout checklist
Stop — freshness breadcrumb for focus artifact
PreToolUse — vendor directory protection, investigation gates
// Automated Export
Session transcripts → worklog vault (markdown)
Pre-compaction snapshots for context recovery
Daily digests with project tags and decision links
// Reverie — Deep Memory Engine
Recursive Language Model (RLM) architecture
350+ indexed sessions across 3 agents
4 transcript formats: Claude Code, Codex CLI, ARC SDK, OpenClaw
// Reverie Indexing Pipeline
Classification — LLM-driven tagging, noise detection, emotional markers (Sonnet)
Rich Summary — structured narrative: what happened, who decided, emotional texture (Opus)
Quote Extraction — verbatim quotes with speaker attribution (Sonnet)
Importance Scoring — deterministic formula from tags, markers, session size
Card Generation — card_tiny (120 chars), card_brief (400-700 chars), card_full
// Reverie Query Process
Root LM reads session manifest (~10K tokens)
Two-phase search: summary match → targeted session load
Sub-LM analyzes matched sessions for specific answers
Iterative refinement (up to 12 rounds, 500K char safety cap)
// Stack
Python 3.12, AsyncIO, Claude OAuth (Sonnet + Opus)
JSON index + JSONL session storage, schema v4
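The deterministic importance-scoring step above can be sketched as a pure function of tags, markers, and session size. The weights and tag names here are hypothetical; only the shape (deterministic formula, no LLM call) comes from the description.

```python
import math

# Hypothetical importance-scoring formula. Weights and tag names are
# illustrative; only the deterministic structure mirrors the pipeline.
WEIGHTS = {"decision": 3.0, "architecture": 2.0, "noise": -2.0}

def importance_score(tags, emotional_markers, session_chars):
    score = sum(WEIGHTS.get(t, 0.5) for t in tags)
    score += 0.5 * len(emotional_markers)
    # Larger sessions get a mild logarithmic boost, not a linear one.
    score += math.log10(max(session_chars, 1))
    return round(score, 2)
```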

Orchestration Framework

Custom runtime for coordinating multiple agents — scoped permissions, structured task lifecycles, completion handshakes, and policy enforcement. Built on the Anthropic and OpenAI SDKs. 160 tests across all phases.

Framework
Architecture
// Task Lifecycle
Create → Assign → Execute → Complete → Review
Scoped permissions per agent (read-only, workspace-write, full-auto)
Completion handshakes with fail-closed gates
// Policy Enforcement
Structured task packets with scope constraints
Evidence tagging (VERIFIED_RUNTIME vs PROVISIONAL_STATIC)
Mandatory status reporting: task_id, status, blockers, evidence
// Agent Coordination
Heartbeat-based online detection with lease TTL
Priority-ordered message queues with claim/ack
Hash-chained collaboration audit log (SHA-256)
// Stack
Python 3.12, Anthropic SDK, OpenAI SDK
160 tests across all phases, 36/36 soak tests
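The hash-chained audit log works by making each entry's hash depend on its predecessor, so any tampering breaks verification from that point on. A minimal sketch of the pattern (function and field names are illustrative, not the real tool signatures):

```python
import hashlib
import json

# Sketch of a SHA-256 hash-chained, append-only event ledger.
def append_event(chain, event: dict):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)   # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "event": event, "hash": entry_hash})

def verify_chain(chain) -> bool:
    # Recompute every link; any edited event or reordered entry fails.
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```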

Built for Real Problems

FMOD MCP Bridge

27-tool MCP server for FMOD Studio via TCP scripting API. Creates events, routes buses, applies effects, builds parameter automation, and searches a 52,000-file sound library. Agents compose and mix audio without touching the FMOD interface.

Audio Pipeline
View all 27 tools
// Project (4)
status — verify FMOD Studio connection
save — save the current project
build — compile .bank files for Unity
execute_js — run arbitrary JS in FMOD scripting engine (ES5)
// Events (4)
list_events — list all events with path and ID
create_event — create event with auto folder hierarchy + bank
get_event — get event details by path
delete_event — delete event, auto-clean empty folders
// Audio Import (5)
import_audio — import WAV/OGG/FLAC into asset pool
add_sound — import + add as instrument to event track
create_multi_sound — create random variation pool from folder
bulk_import — folder of files → separate tracks on event
search_library — keyword search across 52K-file audio library
// Banks (3)
list_banks — list all banks
create_bank — create loadable event group
assign_to_bank — assign event to bank (required for builds)
// Mixer & Routing (8)
list_buses — list mixer buses with volumes
create_bus — create volume control bus (SFX, Music, etc.)
delete_bus — remove bus (events revert to Master)
get_event_output — get event’s bus routing
set_event_output — route event to specific bus
get_all_event_routing — full project routing table
list_snapshots / list_vcas — snapshots and volume aggregators
// Effects & Parameters (2)
add_effect — add effect to track (reverb, EQ, compressor, delay + 12 more)
add_parameter — add runtime automation parameter (float or enum)
// Unity Integration (1)
sync_from_csharp — parse C# EventReference fields → create FMOD events
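Driving FMOD Studio over its TCP scripting console amounts to sending ES5 JavaScript to a local port. A rough sketch of the pattern — the port, NUL framing, and the `studio.project.create` call reflect FMOD Studio's scripting console, but treat all of them as assumptions to check against the FMOD Studio scripting docs:

```python
import socket

FMOD_HOST, FMOD_PORT = "127.0.0.1", 3663  # assumed default scripting port

def build_create_event_js(event_path: str) -> str:
    # ES5 snippet (FMOD Studio's scripting engine is ES5). The API call
    # here is illustrative; real event creation also sets up folders/banks.
    return ('var e = studio.project.create("Event");'
            'e.name = "%s";' % event_path.split("/")[-1])

def send_js(js: str) -> bytes:
    # Fire a script at a running FMOD Studio instance (assumes one is
    # listening; framing as NUL-terminated text is an assumption).
    with socket.create_connection((FMOD_HOST, FMOD_PORT), timeout=5) as s:
        s.sendall(js.encode() + b"\x00")
        return s.recv(4096)
```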

Ableton MCP

77-tool MCP server for Ableton Live via custom Remote Script. Programmatic MIDI creation, device parameter automation, stem export for FMOD integration, and undo stack control. Full access to the Live Object Model.

Audio Pipeline
View all 77 tools
// Session & Transport (13)
get_session_info — tempo, time sig, tracks, scenes, transport
set_tempo — set session BPM
start_playback / stop_playback — transport control
capture_midi — capture recently played MIDI
set_song_time — set playback position in beats
set_song_loop — set arrangement loop region
set_metronome — toggle click track
set_arrangement_overdub — toggle overdub recording
set_record_mode / set_session_record — arm recording
set_back_to_arranger — session vs arrangement toggle
get_song_view — full view state snapshot
// Undo System (4)
begin_undo_step / end_undo_step — group ops into single Ctrl+Z
undo / redo — programmatic navigation
// Views (3)
show_view / focus_view — show or focus a view panel
is_view_visible — check view state
// Track Management (10)
get_track_info — detailed track inspection
get_all_tracks_overview — quick overview of all tracks
create_midi_track / create_audio_track — add tracks
delete_track / duplicate_track — manage tracks
set_track_name — rename track
select_track / get_selected_track — UI selection
get_return_tracks — list return tracks (A, B, etc.)
// Track Mixer (8)
set_track_mute / set_track_solo / set_track_arm
set_track_volume — 0.0 to 1.0
set_track_panning — -1.0 (left) to 1.0 (right)
set_send_level — set return send amount
stop_all_track_clips — stop clips on a track
get_track_meters — read peak output levels (linear + dB)
// Session Clips (17)
create_clip / delete_clip / duplicate_clip
set_clip_name / set_clip_properties — looping, markers, launch mode
fire_clip / stop_clip — launch control
get_clip_details — full clip property read
get_clip_notes — read MIDI note data
add_notes_to_clip — write MIDI notes
remove_clip_notes / modify_clip_notes — edit notes by ID or range
replace_all_notes — destructive full replacement
quantize_notes — snap to grid
crop_clip — crop to loop boundaries
duplicate_loop / duplicate_region — extend clips
// Arrangement (8)
get_arrangement_clips — list arrangement clips
get_arrangement_clip_notes — read arrangement MIDI
set_arrangement_clip_muted / set_arrangement_clip_properties
clear_arrangement_clip_notes / add_arrangement_clip_notes
replace_arrangement_clip_notes — destructive replacement
duplicate_session_to_arrangement — session → timeline
// Scenes (5)
create_scene / delete_scene / set_scene_name
fire_scene — launch entire row
select_scene — UI selection
// Devices & Browser (7)
get_device_parameters — read all device knobs
set_device_parameter — set knob values
set_device_enabled — toggle device on/off
get_browser_tree — browse instrument categories
get_browser_items_at_path — browse by path
load_instrument_or_effect — load from browser onto track
load_drum_kit — two-step drum rack + kit load
// Track Freeze (2)
freeze_track / unfreeze_track — render for CPU savings

OdinTools DeepInspect

Custom Unity editor inspection scripts built on top of the third-party CoPlay MCP server. Deep access to visual-only editor state — component inspection, private field queries, runtime state, and vendor-specific inspectors for Animancer, RayFire, and FMOD.

Unity Tooling
13 inspector scripts
// Core Inspectors
OdinInspector — full field reflection (public + private), arrays, lists
RuntimeInspector — Play Mode state, computed properties, physics
AnimatorInspector — animation layers, states, parameters, transitions
AssetAuditor — missing references, broken prefabs, shader errors
// Vendor Inspectors
AnimancerInspector — Animancer animation state, layer blending
RayFireInspector — RayFire destruction physics, fragmentation, explosions
GoreSimulatorInspector — Gore Simulator limb config, ragdoll, blood particles
AStarPathfindingInspector — A* Pathfinding grid layout, obstacles, path caches
SensorToolkitInspector — Sensor Toolkit raycast results, proximity detection
BehaviorTreeDebugger — Behavior Designer node execution, task status, blackboard
DialogueSystemInspector — Dialogue System conversation state, choices, emotions
VendorConfigSnapshot — Malbers, FMOD + more → committed JSON
VHierarchyOrganizer — hierarchy organization, prefab references
// Execution Model
Invoked via CoPlay execute_script() → JSON output
Full BindingFlags reflection (Public + NonPublic + Instance)
Depth-limited recursive serialization (3 levels)
→ Full technical reference
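Depth-limited recursive serialization caps how far the inspector walks into nested objects before emitting a leaf representation. The real scripts use C# reflection with BindingFlags; this Python sketch shows the same idea with `vars`/`isinstance` (class names here are hypothetical):

```python
# Depth-limited recursive serializer: primitives pass through, containers
# and objects recurse, and anything past the depth cap becomes a repr string.
def serialize(obj, depth=3):
    if isinstance(obj, (int, float, str, bool, type(None))):
        return obj
    if depth == 0:
        return repr(obj)          # truncate non-primitives at the depth cap
    if isinstance(obj, (list, tuple)):
        return [serialize(x, depth - 1) for x in obj]
    if isinstance(obj, dict):
        return {k: serialize(v, depth - 1) for k, v in obj.items()}
    # Objects: walk instance fields, private ones included.
    return {k: serialize(v, depth - 1) for k, v in vars(obj).items()}
```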

Glass Bridge

VS Code extension and MCP server for real-time agent-to-agent communication. Token-authenticated loopback API that lets Claude Code and Codex dispatch tasks, pass work, and review each other's output across separate terminal sessions.

Agent Comms
8 tools + API
// MCP Tools (8)
health — extension status, terminal count, uptime
list_terminals — all terminals with shell integration and auth status
read_terminal — read recent executions (cursor-based polling)
read_terminal_command — single execution detail with tail mode
send_command — execute shell command in authorized terminal
send_input — raw stdin to interactive processes (REPLs, agents)
create_terminal — create new terminal (auto-authorized)
request_validation — human PASS/FAIL gate dialog in VS Code
// HTTP API
7 endpoints on 127.0.0.1:19400 (loopback only)
Bearer token auth (VS Code SecretStorage, per-machine encrypted)
Cursor-based polling via since_seq for efficient reads
ANSI escape stripping for clean terminal output
// Architecture
VS Code extension (TypeScript) + MCP server (stdio)
Shell Integration API for structured command observation
Per-terminal authorization with auto-pattern matching
Separate send_command (prompt) vs send_input (PTY stdin)
Output limits: 500 executions, 256KB per exec, 5MB per terminal
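Cursor-based polling with `since_seq` means each execution gets a monotonically increasing sequence number and clients ask only for entries past their cursor, so repeated polls stay cheap. An in-memory sketch of the pattern (class and method names are illustrative):

```python
# Sketch of since_seq cursor polling over a per-terminal execution log.
class TerminalLog:
    def __init__(self):
        self._seq = 0
        self._executions = []   # list of (seq, output)

    def record(self, output: str) -> int:
        self._seq += 1
        self._executions.append((self._seq, output))
        return self._seq

    def read_since(self, since_seq: int):
        # Equivalent of read_terminal with since_seq=N: only unseen entries.
        return [(s, out) for s, out in self._executions if s > since_seq]
```

A client stores the highest `seq` it has seen and passes it back on the next poll.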

ASCII Shader Pipeline

Custom HLSL fragment and compute shaders for real-time ASCII rendering — glyph atlas generation, edge detection, and luminance mapping running entirely on the GPU via URP render passes. The visual signature of Mythspire games.

Compute HLSL
Pipeline breakdown
// Rendering Pipeline
Three independent layers: Background, Foreground, VFX
Per-layer cell sizes (BG: 8-12px, FG: 4-5px, VFX: 8px)
Two paths: Fragment (6 pass) and Compute (8 pass)
// Fragment Path (6 passes)
BG capture → ASCII convert → FG capture → ASCII convert → composite → present
// Compute Path (8 passes, full VFX)
Layer captures → cell quantization (compute shader) → pack 3 buffers → banded texture → final render
// Shader Files
AsciiConvert_Layered — source + glyph atlas → ASCII texture
AsciiComposite_Layered — alpha blend BG + FG layers
AsciiSourceUnlit — unlit render for ASCII layer objects
AsciiPresentCompute — read packed cell data → final output
// Glyph Atlas System
Density-sorted character sets (lightest → darkest)
Power-of-2 atlas textures (64×64, 128×128, 256×64)
JSON metadata: glyph index, density, grid coordinates
Settings: sRGB OFF, Point filter, no mips, no compression
// Asciiforge (Python CLI)
Generates production atlas libraries (23+ atlases)
Presets: bg_16/36/64, fg_64/144/256, vfx_36/64/144
50+ character sets, 20+ color palettes
Output: PNG + JSON metadata for Unity import
// Per-Layer Color Grading
Tint, brightness, contrast, saturation per layer
Gap fill: dimmed cell color instead of black in character gaps
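Density sorting means ordering glyphs by how much "ink" each one carries, then mapping a cell's luminance to an index in that ramp. A sketch of the idea — the per-glyph pixel counts below are hand-rolled stand-ins for a real atlas rasterizer:

```python
# Density-sorted glyph selection: count lit pixels per glyph, sort
# lightest -> darkest, then index the ramp by cell luminance.
GLYPHS = {   # lit-pixel counts per glyph cell (illustrative values)
    " ": 0,
    ".": 2,
    "+": 9,
    "#": 30,
    "@": 44,
}

RAMP = sorted(GLYPHS, key=GLYPHS.get)   # lightest -> darkest

def glyph_for_luminance(lum: float) -> str:
    # lum in [0, 1]; dark cells map to the lightest glyph.
    idx = min(int(lum * len(RAMP)), len(RAMP) - 1)
    return RAMP[idx]
```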

ComfyUI Workflows

Character art, UI elements, and texture assets produced through tuned image generation pipelines. Custom nodes and workflows built for consistent output across Mythspire's art direction.

Asset Generation
Pipeline overview
// Workflows
Tuned Stable Diffusion pipelines for game asset consistency
Custom nodes for batch processing and output validation
Deterministic seed management for reproducible results
// Asset Types
Character portraits and concept art
UI elements (buttons, panels, icons)
Texture and material generation
Reference sheets for art direction
// Infrastructure
Shared model directory across ComfyUI installations
Junction-linked inputs, outputs, and model dirs
Protected environments (hook blocks python_embeded access)
// Art Direction
Style-locked workflows enforce consistent aesthetic
Color palette constraints match game visual identity
Output resolution and format standardized for Unity import
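Deterministic seed management can be as simple as deriving the generation seed from stable inputs, so re-running a workflow on the same asset reproduces the same image. A sketch of the pattern (the naming scheme is hypothetical):

```python
import hashlib

# Derive a reproducible generation seed from stable inputs: same asset name
# and workflow version always yield the same seed.
def seed_for(asset_name: str, workflow_version: str) -> int:
    digest = hashlib.sha256(f"{asset_name}:{workflow_version}".encode()).digest()
    # Fold the digest into a 32-bit integer seed.
    return int.from_bytes(digest[:4], "big")
```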

Tool Surfaces

fmod-mcp — 27 tools Private
// Audio Pipeline — TCP → FMOD Studio
create_event        auto-folder hierarchy + bank
bulk_import         folder of WAVs → separate tracks
create_multi_sound  random variation pool
sync_from_csharp    C# EventReference → FMOD events
search_library      query 52K-file audio library
// Mix & Routing
set_event_output    route events to buses
add_effect          insert effects on tracks
add_parameter       runtime automation params
build               compile .bank for Unity
ableton-mcp — 77 tools Private
// Full Live Object Model Access
get_session_info    tempo, time sig, length
create_midi_clip    programmatic MIDI creation
set_clip_notes      note-level MIDI editing
set_device_parameter device knob automation
export_stems        stem export for FMOD
// Undo Stack Control
begin_undo_step     group ops into single undo
end_undo_step       commit undo group
undo / redo          programmatic navigation
project-memory — 40 tools Private
// Semantic Search — ChromaDB + SQLite
query_specs("audio architecture")
query_code("BallController")
query_decisions("rendering pipeline")
// Cross-Session State
record_decision(category, decision, rationale)
send_message(from, to, channel, body)
session_summary(write_note=True)
// Collaboration
append_collab_event  hash-chained audit log
verify_chain         integrity verification

Code-First Game Development

TypeScript UI (OneJS)

Game UI written in TypeScript using OneJS, a Preact-based runtime inside Unity. Reactive state, hot reload, and version-controlled components — no visual editor dependency. The entire interface lives in code that agents can read, modify, and reason about directly.

Unity UI

Code-Driven Animation

Character animation controlled entirely through code via Animancer. State machines, blend trees, and transitions without Animator Controller complexity. LitMotion for runtime tweens and procedural motion.

Animation

FMOD Audio

FMOD audio integration with a 52,000-file sound library. Real-time parameter automation and event-driven sound design — authored and modified through the FMOD MCP bridge.

Audio

Where This Is Going

The tooling exists because the games require it. Next target: complex interactive narrative with adaptive NPC behavior powered by frontier language models — contextual character responses rather than scripted dialogue trees. Procedural world systems. Persistent character memory.

All infrastructure feeds back into game production. The MCP servers, the agent coordination layer, the cross-session memory — none of it was built as a product. It was built to ship games.

First playtests and demos planned for itch.io in 2026. Previews and Discord access coming soon to Patreon.

Follow the Journey