The current wave of AI consulting is a gold rush — thick with people who discovered LLMs during the past year. My first applied-AI work came just over two decades ago: the AI Group at Countrywide Bank in the early 2000s, building expert systems for automated property valuation and loan appraisal. That was followed in 2005 by a Guinness World Record for ChessBrain — a distributed chess engine running across 2,070 machines in 50+ countries. Chess was the benchmark AI measured itself against for decades; I was on the building side of it. In the years since, the work I've been involved with has tracked the field as it widened: real-time pose analysis, speech-to-text transcription at scale, RAG pipelines, and LLM infrastructure — modalities that weren't on the table in 2005.
What that means today: when the AI-accelerated prototype starts hitting its limits, I can debug the distributed system underneath, untangle the architectural shortcut, and tell you which parts of the stack will hold up and which will break under realistic load. If you shipped fast with Cursor, Claude Code, Lovable, or v0 and you're starting to wonder whether the foundation will survive real users — I can share what I think holds up, what needs to be rebuilt, and in what order.
For businesses trying to figure out where AI actually fits — not the pitch-deck version, but the one you have to ship, support, and measure — I separate signal from theater, identify the workflows where AI moves the needle, and sequence pilots before you commit the roadmap. Same two-plus decades of applied AI, pointed at a different problem.