Haiku 4.5 for routine AutoResearchClaw stages, not GPT-4o-mini

For AutoResearchClaw's routine stages (topic initialization, resource planning, export, formatting), use Claude Haiku 4.5 as the fallback model rather than GPT-4o-mini or Gemini Flash. Staying within the Anthropic ecosystem avoids cross-provider API key management. AutoResearchClaw natively supports llm.primary and llm.fallback model configs. Reserve Claude Opus 4.6 for the eight critical reasoning stages (e.g., hypothesis generation, experiment design, paper writing, peer review simulation).
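As a sketch, the split could be expressed through the llm.primary/llm.fallback configs. Only those two key names come from the text above; the file layout, YAML syntax, and exact model ID strings are assumptions:

```yaml
# Hypothetical AutoResearchClaw config sketch -- llm.primary and llm.fallback
# are the documented keys; the surrounding structure and model IDs are assumed.
llm:
  primary: claude-opus-4-6     # critical reasoning stages (assumed model ID string)
  fallback: claude-haiku-4-5   # routine stages: topic init, resource planning,
                               # export, formatting (assumed model ID string)
```

Keeping both entries on Anthropic models means one API key and one rate-limit policy to manage, rather than separate credentials per provider.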