ORACLE — Research Engine Design Specification (v1.0)

Status: DEPLOYED (2026-04-03). Containers: oracle-hermes, oracle-mirofish, oracle-graphiti-mcp, oracle-falkordb. Archived from: workspace oracle-approach-note.md — moved to vault by NOVA Night Shift (2026-04-06). Primary author: NOVA


1. ORIGIN REQUIREMENT

Context

The NOVA enterprise needed an autonomous research capability that could handle complex, multi-dimensional queries without manual orchestration. The existing research pipeline (CRO → CMO → CCO chains) works well for content creation but doesn’t provide deep, simulation-based strategic analysis.

Core Requirements

  1. Autonomous Research — Trigger via Telegram, receive structured report without manual CXO routing
  2. Multi-Dimensional Analysis — Break complex queries into parallel research streams
  3. Evidence-Based Conclusions — Validate hypotheses through simulation + AutoResearch loops
  4. Full Traceability — Every conclusion must be traceable to simulation evidence + validation results
  5. Structured Output — Executive conclusion + evidence matrix + hypothesis confidence scores

Vision

A /research <query> command in Telegram that initiates a 5-layer autonomous research pipeline, delivering a board-ready analysis within minutes to hours depending on depth.


2. SYSTEM ARCHITECTURE — 5-Layer Stack

┌─────────────────────────────────────────────────────────────────┐
│  LAYER 5: DELIVERY (Telegram Interface)                        │
│  → Formatted report with conclusion, evidence, recommendations │
├─────────────────────────────────────────────────────────────────┤
│  LAYER 4: SYNTHESIS (Report Compiler)                          │
│  → Integrate simulations + validation + reasoning into report  │
├─────────────────────────────────────────────────────────────────┤
│  LAYER 3: REASONING (Sequential Thinking MCP)                  │
│  → ACH (Analysis of Competing Hypotheses) on validated claims  │
├─────────────────────────────────────────────────────────────────┤
│  LAYER 2: VALIDATION (AutoResearch)                            │
│  → Evidence loops to verify/refute simulation hypotheses       │
├─────────────────────────────────────────────────────────────────┤
│  LAYER 1: SIMULATION (MiroFish)                                │
│  → Parallel agent-based scenarios per research dimension       │
├─────────────────────────────────────────────────────────────────┤
│  ORCHESTRATOR (Hermes)                                         │
│  → Decompose, coordinate, monitor, compile                     │
└─────────────────────────────────────────────────────────────────┘
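The layered flow above can be sketched as a staged pipeline: the orchestrator decomposes the query, fans out one simulation per dimension, validates the resulting hypotheses, reasons over the validated set, and compiles the report. All names here (`ResearchRun`, `run_pipeline`) and the stage signatures are illustrative, not the deployed Hermes API:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchRun:
    """Accumulates the artifacts of one research run as it moves down the stack."""
    query: str
    dimensions: list = field(default_factory=list)
    simulations: dict = field(default_factory=dict)
    validated: dict = field(default_factory=dict)
    conclusion: str = ""

def run_pipeline(query, decompose, simulate, validate, reason, compile_report):
    """Orchestrator loop: decompose, fan out simulations, validate, reason, compile."""
    run = ResearchRun(query=query)
    run.dimensions = decompose(query)                                     # Orchestrator
    run.simulations = {d: simulate(d) for d in run.dimensions}            # Layer 1
    run.validated = {d: validate(h) for d, h in run.simulations.items()}  # Layer 2
    run.conclusion = reason(run.validated)                                # Layer 3
    return compile_report(run)                                            # Layers 4-5
```

Each stage is injected as a callable, so the sketch says nothing about transport (MCP, container RPC) between layers.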

3. LAYER 1: MIROFISH SIMULATION ENGINE

Purpose

Agent-based scenario modeling: spawn hundreds to thousands of agents in parallel scenarios and observe emergent behavior patterns.

Architecture

  • Parallel Instances: One MiroFish simulation per research dimension
  • Agent Counts by Depth:
    • Quick: 200 agents, 20 rounds
    • Standard: 500 agents, 40 rounds
    • Deep: 1,000 agents, 100 rounds
  • Max Parallel: 4 simulations concurrently
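A minimal sketch of the depth presets and the 4-simulation concurrency cap, using a bounded thread pool. `run_simulations` and the `simulate` callback are hypothetical names; only the agent counts, round counts, and parallelism limit come from the spec:

```python
from concurrent.futures import ThreadPoolExecutor

# Depth presets from the spec (agents per scenario, simulation rounds).
DEPTH_PRESETS = {
    "quick":    {"agents": 200,  "rounds": 20},
    "standard": {"agents": 500,  "rounds": 40},
    "deep":     {"agents": 1000, "rounds": 100},
}
MAX_PARALLEL = 4  # at most 4 MiroFish simulations run concurrently

def run_simulations(dimensions, depth, simulate):
    """Run one simulation per research dimension, at most MAX_PARALLEL at a time."""
    preset = DEPTH_PRESETS[depth]
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        futures = {d: pool.submit(simulate, d, **preset) for d in dimensions}
        return {d: f.result() for d, f in futures.items()}
```

The `max_workers` bound enforces the cap directly; a semaphore would achieve the same under a different executor.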

4. LAYER 2: AUTORESEARCH VALIDATION

Purpose

Separate simulation artifacts from verified facts. Run evidence loops on MiroFish hypotheses.

Validation Categories

  • ✅ Verified Claims — Evidence supports hypothesis
  • ⚠️ Partially Verified — Evidence mixed/inconclusive
  • ❌ Disproven Claims — Evidence contradicts hypothesis
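One way to implement the three buckets is a threshold rule over aggregated supporting and refuting evidence scores. The thresholds below reuse the 0.75 `confidence_threshold` from the configuration in section 8, but the actual AutoResearch rule is not specified here:

```python
def classify_claim(support: float, refute: float) -> str:
    """Bucket a hypothesis by evidence balance (scores in [0, 1]).

    Thresholds are illustrative; the deployed AutoResearch logic may differ.
    """
    if support >= 0.75 and refute < 0.25:
        return "verified"
    if refute >= 0.75 and support < 0.25:
        return "disproven"
    return "partially_verified"  # mixed or inconclusive evidence
```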

5. LAYER 3: SEQUENTIAL THINKING REASONING

Default Method: ACH (Analysis of Competing Hypotheses)

  • Generate → Evidence → Evaluate hypotheses
  • Confidence-calibrated conclusion with bias check
  • Persists to ST decision journal
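Classic ACH scores every hypothesis against every piece of evidence and selects the hypothesis with the fewest inconsistencies, rather than the one with the most confirmations. A minimal sketch, assuming a caller-supplied `consistency(h, e)` score (the interface to Sequential Thinking is not shown):

```python
def ach_select(hypotheses, evidence, consistency):
    """ACH core: pick the least-inconsistent hypothesis.

    consistency(h, e) returns +1 (consistent), 0 (neutral), or -1 (inconsistent).
    Ties resolve to the first hypothesis listed.
    """
    scores = {
        h: sum(1 for e in evidence if consistency(h, e) < 0)  # count inconsistencies
        for h in hypotheses
    }
    best = min(hypotheses, key=lambda h: scores[h])
    return best, scores
```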

6. LAYER 5: DELIVERY — TELEGRAM INTERFACE

Commands:

| Command | Description | Duration |
| --- | --- | --- |
| `/research <query>` | Standard research | ~30-60 min |
| `/deep <query>` | Deep research | ~2-4 hours |
| `/quick <query>` | Quick scan | ~15 min |
| `/status <id>` | Check progress | Instant |
| `/history` | List researches | Instant |
| `/cancel <id>` | Cancel a running research | Instant |
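Parsing the six commands can be sketched with a single regular expression; the webhook plumbing and the dispatch to the orchestrator are omitted, and the function name is hypothetical:

```python
import re

# Matches "/<command>" optionally followed by whitespace and an argument.
COMMAND_RE = re.compile(r"^/(research|deep|quick|status|history|cancel)(?:\s+(.*))?$")

def parse_command(text: str):
    """Return (command, argument) for a recognized command, else None."""
    m = COMMAND_RE.match(text.strip())
    if not m:
        return None
    return m.group(1), (m.group(2) or "").strip()
```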

7. QUERY DECOMPOSITION TEMPLATES

```python
DECOMPOSITION_TEMPLATES = {
    "technology_adoption": ["technical_capability", "organizational_readiness", "market_maturity", "competitive_implications"],
    "strategic_decision": ["option_a_simulation", "option_b_simulation", "option_c_simulation", "second_order_effects"],
    "market_analysis": ["competitor_behavior", "customer_response", "regulatory_impact", "ecosystem_shifts"],
    "communication_strategy": ["message_reception_sales", "message_reception_ops", "message_reception_leadership", "narrative_amplification"]
}
```
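Given these templates, decomposition reduces to a lookup capped at `max_dimensions` (6, per the configuration in section 8). How a query is classified into a template category is not specified here, so the classifier is assumed to exist upstream; the fallback dimension name is also illustrative:

```python
def decompose(category: str, templates: dict, max_dimensions: int = 6) -> list:
    """Map a classified query category to its research dimensions.

    Unknown categories fall back to a single catch-all dimension
    ("general_analysis" is a hypothetical placeholder).
    """
    dims = templates.get(category, ["general_analysis"])
    return dims[:max_dimensions]
```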

8. CONFIGURATION

```yaml
hermes:
  mode: orchestrator
  decomposition:
    max_dimensions: 6
    default_depth: standard
  mirofish:
    max_parallel: 4
    default_rounds: { quick: 20, standard: 40, deep: 100 }
    agent_counts: { quick: 200, standard: 500, deep: 1000 }
  autoresearch:
    max_iterations: 10
    confidence_threshold: 0.75
    parallel_hypotheses: 5
  sequential_thinking:
    default_method: ach
    require_bias_check: true
    record_decisions: true
  vault:
    write_all_research: true
    path_template: "oracle/{date}/{id}.md"
```
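Rendering the vault `path_template` is a plain `str.format` call; `vault_path` is an illustrative helper, not part of the deployed config loader:

```python
from datetime import date

def vault_path(template: str, research_id: str, on: date) -> str:
    """Render the vault path template for one research run (illustrative helper)."""
    return template.format(date=on.isoformat(), id=research_id)

# vault_path("oracle/{date}/{id}.md", "r-042", date(2026, 4, 3))
# → "oracle/2026-04-03/r-042.md"
```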

9. INTEGRATION POINTS

| Component | Integration |
| --- | --- |
| CRO (Researcher) | ORACLE handles autonomous research; CRO handles CXO-requested research |
| Vault | All research traces written to `vault/oracle/{date}/{id}.md` |
| AutoResearch | Evidence validation layer (deployed v2.3.0) |
| Sequential Thinking | Reasoning layer (42 tools via MCP) |
| MiroFish | Simulation layer (oracle-mirofish container) |
| Telegram | Primary interface (webhook + bot API) |

10. DIFFERENTIATION FROM EXISTING SYSTEMS

| Capability | ORACLE | CRO Chain | AutoResearch Solo |
| --- | --- | --- | --- |
| Trigger | Telegram command | NOVA delegates | Direct invocation |
| Dimensions | Parallel simulations | Sequential research | Single optimization |
| Evidence | MiroFish + AutoResearch | Web + X-Pulse | Self-validation |
| Reasoning | ST (ACH framework) | ST (when needed) | ST (on request) |
| Use case | Strategic decisions | Content creation | Optimization/tuning |