haiku-45-for-routine-pipeline-stages-not-gpt4o-mini

When routing LLM calls across pipeline stages, prefer Haiku 4.5 over GPT-4o-mini or Gemini Flash for routine, low-reasoning stages, so all calls stay within the Anthropic ecosystem. This avoids cross-provider API key management, aligns with the existing Max subscription, and keeps model governance centralized. Opus 4.6 handles the 8 stages that require deep reasoning; Haiku 4.5 handles the 15 routine stages.
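
The two-tier split above can be sketched as a single routing table, so the model choice lives in one place instead of being scattered across stage code. This is a minimal sketch: the `Tier` enum and the model ID strings are illustrative assumptions, not confirmed API values — check the Anthropic model list for the exact identifiers.

```python
from enum import Enum

class Tier(Enum):
    """Reasoning tier of a pipeline stage."""
    ROUTINE = "routine"  # the 15 routine stages
    DEEP = "deep"        # the 8 deep-reasoning stages

# Hypothetical model ID strings -- verify against the current Anthropic model list.
MODEL_BY_TIER = {
    Tier.ROUTINE: "claude-haiku-4-5",
    Tier.DEEP: "claude-opus-4-6",
}

def model_for_stage(tier: Tier) -> str:
    """Return the model ID for a stage, keeping governance centralized here."""
    return MODEL_BY_TIER[tier]
```

Adding a new stage then only requires tagging it with a `Tier`; no stage ever names a model directly, which is what keeps the routing policy enforceable.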