A movement-learning platform built on AI pose estimation and 3D motion analysis.
Founded a B2B SaaS platform for movement-based practitioners — martial arts schools, yoga and Pilates studios, dance academies, and physical-therapy practices. Designed and shipped the platform end-to-end: mobile capture app, multi-camera 3D motion-capture pipeline, GPU-accelerated server-side media pipeline, multi-camera video player, 3D pose viewer, SaaS web app, and programmatic video editor. Built SOMA on top: AI-based coaching analysis of 3D movement.
PNXPlayer is a 29K-line TypeScript video player distributed via CDN and npm — a manager-based architecture spanning 28+ subsystems, with synchronized seek across stacked video elements, AirPlay and Chromecast, A/B loop, and transcript search with word-level timing. PNXCamera, its mobile counterpart for iOS and Android, synchronizes 4K capture across N devices over network channels using NTP-style time sync and audio-beep cross-correlation, achieving sub-millisecond frame alignment and cutting lobby-state network traffic 77% via change-detection broadcasting. Photogrammetry-grade capture meant patching react-native-vision-camera at the native layer — Swift/AVFoundation on iOS, Kotlin/Camera2Interop on Android — for deterministic focus, exposure, and white-balance lock.
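The audio-beep alignment above can be sketched as a brute-force cross-correlation: given two mono sample buffers from different devices that both captured the same sync beep, find the lag that maximizes their correlation. This is a minimal illustration of the general technique, not PNXCamera's actual API; all names here are hypothetical.

```typescript
// Find the lag (in samples) that maximizes the cross-correlation of
// two mono buffers. At 48 kHz, 48 samples of lag ≈ 1 ms of offset.
function bestLag(a: Float32Array, b: Float32Array, maxLag: number): number {
  let best = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i < a.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < b.length) score += a[i] * b[j];
    }
    if (score > bestScore) {
      bestScore = score;
      best = lag;
    }
  }
  return best;
}

// Synthetic "beep": a Gaussian-windowed sinusoid; b is a copy of a
// delayed by 100 samples, so the recovered lag should be 100.
const n = 1024;
const delay = 100;
const a = new Float32Array(n);
const b = new Float32Array(n);
for (let i = 0; i < n; i++) {
  a[i] = Math.sin(i * 0.3) * Math.exp(-((i - 200) ** 2) / 5000);
}
for (let i = delay; i < n; i++) b[i] = a[i - delay];
console.log(bestLag(a, b, 200)); // → 100
```

A production version would run this on short windows around the detected beep and convert the sample lag into a clock offset to combine with the NTP-style estimate.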
To manage the engineering work behind all of this, I built a (soon-to-be-open-sourced) agentic development workflow on the Claude Agent SDK with permission policies, per-chat cost caps, and multi-engine routing across Claude tiers and a local Ollama instance — anchored by a 450-document hybrid RAG (pgvector HNSW + Postgres tsvector, 70/30 semantic-plus-full-text with issue-ID exact-match boosting), exposed to LLM clients via custom MCP and UTCP servers.
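The 70/30 hybrid ranking can be sketched as a simple score blend: weight the semantic (vector) score at 0.7 and the full-text score at 0.3, then boost documents containing an exact issue-ID match from the query. Only the 70/30 split and the issue-ID boost come from the description above; the interfaces, the ID pattern, and the boost weight are assumptions for illustration.

```typescript
// Candidate document with pre-computed retrieval scores (assumed to be
// normalized to [0, 1] by the vector and full-text stages).
interface Doc {
  id: string;
  text: string;
  semantic: number; // pgvector cosine-similarity score (illustrative)
  fullText: number; // Postgres ts_rank score (illustrative)
}

function hybridRank(query: string, docs: Doc[]): Doc[] {
  // Issue IDs like "PNX-142" in the query trigger an exact-match boost.
  const issueIds = query.match(/\b[A-Z]+-\d+\b/g) ?? [];
  const score = (d: Doc): number => {
    let s = 0.7 * d.semantic + 0.3 * d.fullText; // 70/30 blend
    if (issueIds.some((id) => d.text.includes(id))) s += 1; // assumed boost weight
    return s;
  };
  return [...docs].sort((x, y) => score(y) - score(x));
}

const docs: Doc[] = [
  { id: "a", text: "player seek bug", semantic: 0.9, fullText: 0.2 },
  { id: "b", text: "fix PNX-142 loop range", semantic: 0.4, fullText: 0.6 },
];
// "b" wins despite a lower blended score, because it matches the issue ID.
console.log(hybridRank("PNX-142 a/b loop", docs).map((d) => d.id));
```

In the real pipeline the two score columns would come from an HNSW vector query and a `tsvector` full-text query merged in SQL; this sketch only shows the blending step.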
Outcome: shipping product, custom AI infrastructure, full-stack founder engineering