Graphiti add_memory queues but worker fails on OpenAI auth placeholder

Pattern

graphiti-mcp.add_memory returns success/queued, but no episode appears in get_episodes or search_nodes. Container logs show the background queue worker failing during episode processing with an OpenAI 401 on a placeholder-style API key (not-need...auth). Embedding calls to gemini-embedding-2 succeed, so the current blocker is the Graphiti LLM extraction client's auth/config, not the embedding model.

Evidence

  • mcporter call "https://graphiti-mcp.arjtech.in/mcp.get_status" → OK / connected to falkordb.
  • mcporter call "https://graphiti-mcp.arjtech.in/mcp.add_memory" ... group_id="researcher" → queued.
  • mcporter call "https://graphiti-mcp.arjtech.in/mcp.get_episodes" group_ids='["researcher"]' → no episodes.
  • docker logs --since 9h oracle-graphiti-mcp → OpenAI Authentication Error: 401 Incorrect API key provided...; the queue worker logs Failed to process episode None for group researcher.
  • Search path logs show gemini-embedding-2:batchEmbedContents HTTP 200.
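The failing LLM path and the healthy embedding path can be separated with a single log filter. A sketch using the signatures above as sample input (in practice the same grep would be fed from `docker logs --since 9h oracle-graphiti-mcp 2>&1`):

```shell
# Sample lines standing in for the container log output; the grep pattern
# is the part of interest, not the printf.
printf '%s\n' \
  'OpenAI Authentication Error: 401 Incorrect API key provided' \
  'Failed to process episode None for group researcher' \
  'gemini-embedding-2:batchEmbedContents HTTP 200' \
| grep -iE 'authentication error|failed to process episode'
# prints the two failure lines; the HTTP 200 embedding line is filtered out
```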

Impact

Vault↔Graphiti dual-write integrity checks cannot pass while this remains active. Any agent relying on Graphiti as the mirror of vault submissions will see stale or empty memory even while vault files are being written.

Root cause narrowed — 2026-05-04T15:23Z

The config intends to route LLM traffic through the local Claude OAuth proxy via its OpenAI-compatible API:

  • /opt/oracle/graphiti/config.yaml has llm.provider: openai, model: claude-haiku-4-5-20251001, providers.openai.api_url: http://host.docker.internal:18208/v1.
  • The proxy is healthy/reachable from both host and container (/health returns backend: claude-oauth-proxy; /v1/models includes claude-haiku-4-5-20251001).
  • Code inspection found the likely defect: the OpenAI LLM branch in /opt/oracle/graphiti/repo/mcp_server/src/services/factories.py creates CoreLLMConfig(...) without base_url=config.providers.openai.api_url, so the client falls back to the real OpenAI endpoint with the placeholder key and fails with 401. The embedder and Groq branches already pass their base URLs correctly.
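A minimal sketch of the suspected defect and the one-line fix, using a stand-in dataclass for graphiti_core's LLM config (the field names api_key/model/base_url and the config.providers.openai layout are assumptions drawn from the inspection above; the real factories.py signature may differ):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CoreLLMConfig:
    """Stand-in for graphiti_core's LLM config; field names are assumptions."""
    api_key: str
    model: str
    base_url: Optional[str] = None  # None => client defaults to api.openai.com


def make_openai_llm_config(config) -> CoreLLMConfig:
    """Patched OpenAI LLM branch: thread the proxy URL through."""
    openai_cfg = config.providers.openai
    return CoreLLMConfig(
        api_key=openai_cfg.api_key,
        model=config.llm.model,
        # The fix: without base_url the client falls back to the real
        # OpenAI endpoint, where the placeholder key yields a 401.
        base_url=openai_cfg.api_url,
    )
```

With base_url omitted (the current code), the resulting client targets api.openai.com and the placeholder key fails exactly as seen in the logs; with it present, requests route to http://host.docker.internal:18208/v1.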

Fix direction

Patch the OpenAI LLM branch to pass base_url=config.providers.openai.api_url, rebuild/restart oracle-graphiti-mcp, then verify that add_memory still queues and that get_episodes/search_nodes return the inserted episode. The change is reversible but service-affecting, so route it through NOVA/CTO maintenance rather than applying it silently from heartbeat.

Related handoff: handoffs/pending/researcher-to-nova-20260503-152531-cro-heartbeat-drift.md