The Stack

Sidespace needs to do three things well: run native terminals with real PTY sessions, maintain a rich data layer with vector search and real-time updates, and render a fast, reactive UI. A fourth requirement (running AI agents with tool loops, streaming, and scheduled pipelines) needs a dedicated compute layer. No single framework covers all four, so the stack is composed from purpose-fit pieces with a clear portability boundary between them.

What's Built

Why These Technologies

Tauri 2.0 provides the native desktop shell. It gives us real PTY terminals (rendered with xterm.js), OS-level file access, and a small binary. Electron could do the same job, but at several times the resource cost. The Rust backend handles system-level operations like CLI detection and process management.

Railway is the compute layer. A Hono server on Railway handles all server-side AI work: the Hoshi brain (Vercel AI SDK with streamText), 54 MCP tools served via a stateless HTTP endpoint, agent pipelines (Umbra, Kosmos, hoshi-review), and cron scheduling via node-cron. Chat responses stream to the frontend via SSE.
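
Chat chunks reach the frontend as Server-Sent Events. As a minimal sketch of the wire format (the `sseEvent` helper and the `chunk` event name are illustrative; the actual framing in Sidespace is handled by the Hono/AI SDK integration), each SSE event is a `data:` line terminated by a blank line:

```typescript
// Frame a payload as a Server-Sent Event. The event name is optional;
// each event ends with a blank line per the SSE specification.
function sseEvent(payload: unknown, event?: string): string {
  const lines: string[] = [];
  if (event) lines.push(`event: ${event}`); // optional named event type
  lines.push(`data: ${JSON.stringify(payload)}`);
  return lines.join("\n") + "\n\n"; // blank line terminates the event
}

// Example: framing one streamed text delta for the frontend
const frame = sseEvent({ delta: "Hello" }, "chunk");
```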

Supabase is the data layer. Sidespace relies on PostgreSQL features that go well beyond basic CRUD: pgvector for semantic search over memories, Realtime for live UI updates, Row Level Security for access control, and custom RPCs for complex queries. Only three edge functions remain on Supabase (embedding generation, memory health checks, and system integrity checks); all other compute has moved to Railway.

React + TypeScript powers the frontend, with Vite for bundling, Tailwind for styling, and shadcn/ui for component primitives.

The Portability Principle

The most important architectural decision is where the boundary sits. Tauri is a thin shell. The real product lives in two places: the Sidespace MCP server (54 tools served from a Railway HTTP endpoint) and Supabase (the database). All data flows through MCP tools -- not through Tauri commands, not through direct database calls from the frontend.

This means the tool surface is portable. Hoshi uses the MCP server. Claude Code in Squad View uses the same MCP server. Umbra and Kosmos call the same Supabase tables from Railway pipelines. If Tauri were replaced tomorrow, the data layer and tool surface would survive intact. Railway itself is the compute layer that could be swapped for any other host. The portability boundary is MCP plus Supabase.

Agent Execution Environments

Each agent runs where it makes sense:

| Agent | Runtime | Why |
| --- | --- | --- |
| Hoshi | Railway (AI SDK streamText, SSE to frontend) | Needs tool loops, streaming, multi-model fallback |
| Claude Code | Terminal PTY (Squad View) | It is a CLI tool; it needs a real terminal |
| Umbra | Railway pipelines | Background research pipeline, triggered by node-cron, no UI needed |
| Kosmos | Railway pipelines | Batch memory operations on a schedule |
| Atlas | GitHub Actions | Documentation agent, triggered by dispatch or cron |

Hoshi uses the Vercel AI SDK with Gemini as primary model and Claude as automatic fallback via provider swap. When Gemini fails (503, timeout, rate limit), the SDK immediately retries with Claude. No feature flags, no dual-engine configuration.
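
The fallback behavior can be sketched as a plain retry wrapper (names and error codes here are illustrative; in the real app this logic lives inside the Vercel AI SDK's provider handling, not application code):

```typescript
// A model call takes a prompt and resolves to generated text.
type ModelCall = (prompt: string) => Promise<string>;

// Only transient failures trigger the fallback; other errors surface.
const RETRYABLE = new Set(["503", "timeout", "rate_limit"]);

async function withFallback(
  primary: ModelCall,   // Gemini in Sidespace's setup
  fallback: ModelCall,  // Claude as the automatic fallback
  prompt: string,
): Promise<string> {
  try {
    return await primary(prompt);
  } catch (err) {
    const code = err instanceof Error ? err.message : String(err);
    if (!RETRYABLE.has(code)) throw err; // non-transient: do not retry
    return fallback(prompt);
  }
}
```

The point of the design is that the swap is invisible to callers: one function, no feature flags, no dual-engine configuration.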

Data Layer

Supabase serves as the database and persistence layer. Background compute has moved to Railway.

| Feature | What it does in Sidespace |
| --- | --- |
| pgvector | HNSW indexes over the memories table for semantic search |
| Realtime | Pushes live updates for projects, tasks, and todos tables to the UI |
| RLS | Row-level security policies on all tables |
| Custom RPCs | get_cost_summary, search_items_in_container, and others |
| Edge functions | Three remaining: embed (embedding generation), memory-health-check, system-integrity-check |
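
To make the pgvector row concrete, this is the ranking a semantic-search query performs, reduced to in-memory TypeScript (illustrative only; in production this runs as an HNSW-indexed query inside Postgres, not in application code):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank memories by similarity to the query embedding, return top-k ids.
function topK(
  query: number[],
  memories: { id: string; embedding: number[] }[],
  k: number,
): string[] {
  return memories
    .map((m) => ({ id: m.id, score: cosineSim(query, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((m) => m.id);
}
```

The HNSW index exists precisely so Postgres can approximate this ranking without scanning every row.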

Most scheduling has moved from pg_cron to node-cron on Railway, and all agent pipelines run on Railway rather than as edge functions. pg_cron and pg_net remain available for database-level tasks but are no longer the primary execution mechanism.

All schema changes go through local migration files deployed via supabase db push. Direct SQL against the remote database is prohibited: it creates orphaned migration history entries that break subsequent pushes.

Frontend Architecture

React Query handles all data fetching with stale-time and garbage-collection caching. When a Hoshi tool mutates data, it emits a Tauri event; a bridge hook (useTauriEventBridge) maps 9 event types to React Query cache invalidations. Supabase Realtime serves as a backup path for the same updates.
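
The bridge pattern can be sketched as a static lookup table (the event names and query keys below are invented for illustration; only the pattern of mapping Tauri events to React Query invalidations comes from the app):

```typescript
// Hypothetical event-to-cache mapping in the spirit of useTauriEventBridge.
// One emitted event may invalidate several query keys.
const EVENT_TO_QUERY_KEYS: Record<string, string[][]> = {
  "task-updated": [["tasks"], ["projects"]],
  "memory-created": [["memories"]],
  "todo-completed": [["todos"]],
};

// Resolve which caches to invalidate for an incoming event. Unknown
// events invalidate nothing rather than flushing the whole cache.
function keysToInvalidate(event: string): string[][] {
  return EVENT_TO_QUERY_KEYS[event] ?? [];
}
```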

Zustand manages client-side state across 10 stores: tab navigation, squad panel state, floating panel position, right panel visibility, Hoshi chat state, active project, view filters, user preferences, diagram state, and toast notifications. Each store persists to localStorage or sessionStorage depending on whether the state should survive app restarts.

shadcn/ui provides the component primitives. The design system uses Tailwind utility classes with a consistent type scale.

Squad View: Multi-Agent Cockpit

Squad View enables running, monitoring, and coordinating multiple CLI agents simultaneously in a three-zone layout: task sidebar, focused terminal, and thumbnail strip. Each terminal gets a constellation name (Orion, Lyra, Vela) for identity across sessions and memories. The CLI registry auto-detects available tools and launches them with environment variables that tag their outputs.
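
Name assignment might look like the sketch below. The name list beyond Orion, Lyra, and Vela and the tie-breaking rule are assumptions; the doc only states that each terminal gets a stable constellation identity:

```typescript
// Illustrative constellation pool; only the first three names appear in
// the doc, the rest are assumed here.
const CONSTELLATIONS = ["Orion", "Lyra", "Vela", "Cygnus", "Draco"];

// Pick the first unused name so identities stay stable across sessions.
function nextConstellation(inUse: string[]): string {
  const free = CONSTELLATIONS.find((n) => !inUse.includes(n));
  // Assumed fallback: numbered suffix once every base name is taken.
  return free ?? `${CONSTELLATIONS[0]}-${inUse.length + 1}`;
}
```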

Design Principles

These eight principles guide implementation decisions across the codebase:

| Principle | In practice |
| --- | --- |
| Cost-tiered model selection | Haiku for scanning, Sonnet for evaluation, Gemini for primary chat with Claude fallback via AI SDK provider swap, Opus reserved for high-stakes |
| Soft deletes over hard deletes | stage=archived instead of DELETE. Data is preserved and reversible. |
| Agent separation of concerns | Each agent has a narrow scope. Hoshi manages, Umbra researches, Kosmos grooms. |
| Staging before promotion | External content (research findings) passes through a staging layer before entering the knowledge base |
| Local-first where appropriate | Device-specific state (panel positions, tab order) stays in localStorage, not the database |
| Lazy over eager | Views mount lazily. React Query caches aggressively. No unnecessary network calls. |
| Progressive consolidation | Fewer surfaces, richer surfaces. Hub replaced separate Desk and Feed views. |
| Idempotent operations | Tools and migrations are safe to retry. State is tracked, not assumed. |
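
Two of these principles, soft deletes and idempotency, compose naturally. A minimal sketch (the `stage` field follows the doc; the record shape is illustrative):

```typescript
// Illustrative record shape; real rows live in Supabase tables.
interface Item {
  id: string;
  stage: string;
}

// Archiving is a stage transition, not a DELETE, so the data survives
// and the operation is reversible. Repeating it is a no-op, so a
// retried tool call cannot corrupt state.
function archive(item: Item): Item {
  if (item.stage === "archived") return item; // already done: safe to retry
  return { ...item, stage: "archived" };
}
```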

Where It's Heading

The MCP server continues to expand as the universal tool surface. Recent additions include Squad MCP tools (for creating terminals, navigating previews, and opening Squad View programmatically) and a diagram generation tool that produces Mermaid SVG diagrams into the Whiteboards tab. Near-term work includes tools for headless browser control so agents can run automated tests and scraping within the app.