But wait, the Architect diagram says "Claude", and it later references an MCP package from/for Anthropic/Claude Code. Is this ultimately just running CC in that VM? I'm not sure this is "local" as people typically understand it. Are they calling it local because the agent harness doesn't run "in the cloud"?
The title is technically correct; eventually models will run on local machines. We're just in another cycle where terminals aren't yet powerful enough and need to connect to a server.
Oh, the agent running locally, not the LLM itself. We'll see. The amount of prompting I do with Claude Code via the app, away from my desk, is way more than I ever would have thought. Flash of inspiration for a thing while I'm on the bus? I open the app on my phone and start a session right there, to check when I next get to a computer.