haiku-4-5-for-routine-researchclaw-stages-not-gpt4o-mini
For AutoResearchClaw’s routine stages (topic init, resource planning, export, formatting), use Claude Haiku 4.5 as the fallback model, not GPT-4o-mini or Gemini Flash, to stay within the Anthropic ecosystem and avoid cross-provider API key management. AutoResearchClaw natively supports llm.primary and llm.fallback model configs. Reserve Claude Opus 4.6 for the 8 critical reasoning stages (e.g., hypothesis generation, experiment design, paper writing, peer-review simulation).
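A minimal config sketch of the split above. The llm.primary / llm.fallback keys follow the note; the model ID strings and the stage-routing keys are assumptions, not confirmed AutoResearchClaw schema, so check the project's config reference before copying:

```yaml
llm:
  primary: claude-opus-4-6     # critical reasoning stages (model ID string is an assumption)
  fallback: claude-haiku-4-5   # routine stages; single Anthropic API key covers both
# Hypothetical stage routing; AutoResearchClaw may express this differently
stages:
  critical: [hypothesis_generation, experiment_design, paper_writing, peer_review_simulation]
  routine: [topic_init, resource_planning, export, formatting]
```

Keeping both models under one provider means one API key, one rate-limit budget, and one billing surface to monitor, which is the main operational argument against mixing in GPT-4o-mini or Gemini Flash here.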
Related
- hybrid-llm-model-pattern-opus-haiku-pipeline-stages
- researchclaw-hybrid-llm-haiku45-for-routine-stages
- autoresearchclaw-hybrid-llm-routing-opus-haiku
- researchclaw-hybrid-llm-opus-haiku-stage-split
- hybrid-opus-haiku-llm-strategy-for-researchclaw-pipeline
- hybrid-llm-routing-opus-critical-haiku-routine-for-researchc
- hybrid-llm-opus-critical-haiku-routine-anthropic-only
- researchclaw-hybrid-llm-haiku-not-gpt4o-mini
- hybrid-llm-routing-opus-critical-haiku-routine-in-pipeline