ORACLE — Research Engine Design Specification (v1.0)
Status: DEPLOYED (2026-04-03). Containers:
oracle-hermes, oracle-mirofish, oracle-graphiti-mcp, oracle-falkordb. Archived from: workspace oracle-approach-note.md — moved to vault by NOVA Night Shift (2026-04-06). Primary author: NOVA
1. ORIGIN REQUIREMENT
Context
The NOVA enterprise needed an autonomous research capability that could handle complex, multi-dimensional queries without manual orchestration. The existing research pipeline (CRO → CMO → CCO chains) works well for content creation but doesn’t provide deep, simulation-based strategic analysis.
Core Requirements
- Autonomous Research — Trigger via Telegram, receive structured report without manual CXO routing
- Multi-Dimensional Analysis — Break complex queries into parallel research streams
- Evidence-Based Conclusions — Validate hypotheses through simulation + AutoResearch loops
- Full Traceability — Every conclusion must be traceable to simulation evidence + validation results
- Structured Output — Executive conclusion + evidence matrix + hypothesis confidence scores
Vision
A /research <query> command in Telegram that initiates a 5-layer autonomous research pipeline, delivering a board-ready analysis within minutes to hours depending on depth.
2. SYSTEM ARCHITECTURE — 5-Layer Stack
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 5: DELIVERY (Telegram Interface) │
│ → Formatted report with conclusion, evidence, recommendations │
├─────────────────────────────────────────────────────────────────┤
│ LAYER 4: SYNTHESIS (Report Compiler) │
│ → Integrate simulations + validation + reasoning into report │
├─────────────────────────────────────────────────────────────────┤
│ LAYER 3: REASONING (Sequential Thinking MCP) │
│ → ACH (Analysis of Competing Hypotheses) on validated claims │
├─────────────────────────────────────────────────────────────────┤
│ LAYER 2: VALIDATION (AutoResearch) │
│ → Evidence loops to verify/refute simulation hypotheses │
├─────────────────────────────────────────────────────────────────┤
│ LAYER 1: SIMULATION (MiroFish) │
│ → Parallel agent-based scenarios per research dimension │
├─────────────────────────────────────────────────────────────────┤
│ ORCHESTRATOR (Hermes) │
│ → Decompose, coordinate, monitor, compile │
└─────────────────────────────────────────────────────────────────┘
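The layer hand-offs in the diagram can be sketched end to end. This is a minimal illustration only: every function name here (decompose, simulate, validate, reason, compile_report) is hypothetical, and each stub merely stands in for the real component named in its comment, not the actual Hermes, MiroFish, or AutoResearch API.

```python
# Illustrative 5-layer flow; all functions are stand-in stubs.

def decompose(query):                      # Orchestrator (Hermes): split into dimensions
    return [f"{query}::dim{i}" for i in range(1, 4)]

def simulate(dimension):                   # Layer 1 (MiroFish): scenario per dimension
    return {"dimension": dimension, "hypotheses": [f"H-{dimension}"]}

def validate(sim):                         # Layer 2 (AutoResearch): evidence loops
    return {**sim, "status": "verified"}

def reason(validated):                     # Layer 3 (Sequential Thinking): ACH on claims
    return [v for v in validated if v["status"] == "verified"]

def compile_report(query, claims):         # Layer 4 (Synthesis): assemble the report
    return {"query": query, "claims": claims}

def run_pipeline(query):
    sims = [simulate(d) for d in decompose(query)]
    validated = [validate(s) for s in sims]
    return compile_report(query, reason(validated))  # Layer 5 then delivers via Telegram
```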
3. LAYER 1: MIROFISH SIMULATION ENGINE
Purpose
Agent-based scenario modeling. Spawn hundreds to thousands of agents in parallel scenarios to observe emergent behavior patterns.
Architecture
- Parallel Instances: One MiroFish simulation per research dimension
- Agent Counts by Depth:
- Quick: 200 agents, 20 rounds
- Standard: 500 agents, 40 rounds
- Deep: 1,000 agents, 100 rounds
- Max Parallel: 4 simulations concurrently
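The depth presets and the 4-simulation cap above can be expressed as a small scheduler sketch. The preset table mirrors the listed agent counts and rounds exactly; the asyncio semaphore pattern is an assumption about how the cap might be enforced, not the actual MiroFish scheduler.

```python
import asyncio

# Presets copied from the spec: agents and rounds per depth level.
DEPTH_PRESETS = {
    "quick":    {"agents": 200,  "rounds": 20},
    "standard": {"agents": 500,  "rounds": 40},
    "deep":     {"agents": 1000, "rounds": 100},
}

MAX_PARALLEL = 4  # spec: at most 4 simulations run concurrently

async def simulate(dimension: str, depth: str, sem: asyncio.Semaphore) -> dict:
    preset = DEPTH_PRESETS[depth]
    async with sem:                 # blocks when 4 simulations are already running
        await asyncio.sleep(0)      # placeholder for the actual MiroFish run
        return {"dimension": dimension, **preset}

async def run_all(dimensions: list[str], depth: str) -> list[dict]:
    sem = asyncio.Semaphore(MAX_PARALLEL)
    return await asyncio.gather(*(simulate(d, depth, sem) for d in dimensions))
```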
4. LAYER 2: AUTORESEARCH VALIDATION
Purpose
Separate simulation artifacts from verified facts. Run evidence loops on MiroFish hypotheses.
Validation Categories
- ✅ Verified Claims — Evidence supports hypothesis
- ⚠️ Partially Verified — Evidence mixed/inconclusive
- ❌ Disproven Claims — Evidence contradicts hypothesis
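The three categories map naturally onto evidence tallies. The counting rule below is an assumption for illustration (the actual AutoResearch classification logic is not specified here): uncontested support verifies, uncontested contradiction disproves, and anything mixed or empty stays partially verified.

```python
# Assumed tally-based classifier; not the real AutoResearch rule.

def classify(supporting: int, contradicting: int) -> str:
    """Map evidence counts for one hypothesis onto a validation category."""
    if supporting > 0 and contradicting == 0:
        return "verified"            # ✅ evidence supports, nothing contradicts
    if contradicting > 0 and supporting == 0:
        return "disproven"           # ❌ evidence contradicts, nothing supports
    return "partially_verified"      # ⚠️ mixed or inconclusive evidence
```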
5. LAYER 3: SEQUENTIAL THINKING REASONING
Default Method: ACH (Analysis of Competing Hypotheses)
- Generate → Evidence → Evaluate hypotheses
- Confidence-calibrated conclusion with bias check
- Persists to ST decision journal
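The ACH step can be illustrated with a toy scoring matrix. In the classic ACH method, hypotheses are ranked by how little evidence contradicts them rather than how much supports them; the data shapes here are illustrative, not the Sequential Thinking MCP's actual representation.

```python
# Toy ACH ranking: matrix[hypothesis][evidence] is -1 (contradicts),
# 0 (neutral), or +1 (supports). Hypotheses with the fewest
# contradictions rank first, per the standard ACH heuristic.

def ach_rank(matrix: dict[str, dict[str, int]]) -> list[tuple[str, int]]:
    scores = {
        h: sum(1 for v in ev.values() if v < 0)  # count contradicting evidence
        for h, ev in matrix.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1])
```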
6. LAYER 5: DELIVERY — TELEGRAM INTERFACE
Commands:
| Command | Description | Duration |
|---|---|---|
| /research <query> | Standard research | ~30-60 min |
| /deep <query> | Deep research | ~2-4 hours |
| /quick <query> | Quick scan | ~15 min |
| /status <id> | Check progress | Instant |
| /history | List past research runs | Instant |
| /cancel <id> | Cancel a running research | Instant |
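Routing the commands in the table is straightforward string dispatch. A minimal sketch, assuming the command names above map onto the three depth levels from Section 3; the handler return shapes are illustrative, not the bot's actual protocol.

```python
# Illustrative command router for the table above.

def parse_command(text: str) -> tuple[str, str]:
    """Split '/research latest AI trends' into ('research', 'latest AI trends')."""
    parts = text.strip().split(maxsplit=1)
    cmd = parts[0].lstrip("/")
    arg = parts[1] if len(parts) > 1 else ""
    return cmd, arg

# Assumed mapping of the three trigger commands onto depth presets.
DEPTH_BY_COMMAND = {"quick": "quick", "research": "standard", "deep": "deep"}

def route(text: str) -> dict:
    cmd, arg = parse_command(text)
    if cmd in DEPTH_BY_COMMAND:
        return {"action": "start", "query": arg, "depth": DEPTH_BY_COMMAND[cmd]}
    if cmd in ("status", "cancel"):
        return {"action": cmd, "id": arg}
    if cmd == "history":
        return {"action": "history"}
    return {"action": "unknown"}
```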
7. QUERY DECOMPOSITION TEMPLATES
DECOMPOSITION_TEMPLATES = {
    "technology_adoption": ["technical_capability", "organizational_readiness", "market_maturity", "competitive_implications"],
    "strategic_decision": ["option_a_simulation", "option_b_simulation", "option_c_simulation", "second_order_effects"],
    "market_analysis": ["competitor_behavior", "customer_response", "regulatory_impact", "ecosystem_shifts"],
    "communication_strategy": ["message_reception_sales", "message_reception_ops", "message_reception_leadership", "narrative_amplification"]
}

8. CONFIGURATION
hermes:
  mode: orchestrator
  decomposition:
    max_dimensions: 6
    default_depth: standard
mirofish:
  max_parallel: 4
  default_rounds: { quick: 20, standard: 40, deep: 100 }
  agent_counts: { quick: 200, standard: 500, deep: 1000 }
autoresearch:
  max_iterations: 10
  confidence_threshold: 0.75
  parallel_hypotheses: 5
sequential_thinking:
  default_method: ach
  require_bias_check: true
  record_decisions: true
vault:
  write_all_research: true
  path_template: "oracle/{date}/{id}.md"

9. INTEGRATION POINTS
| Component | Integration |
|---|---|
| CRO (Researcher) | ORACLE handles autonomous research; CRO handles CXO-requested research |
| Vault | All research traces written to vault/oracle/{date}/{id}.md |
| AutoResearch | Evidence validation layer (deployed v2.3.0) |
| Sequential Thinking | Reasoning layer (42 tools via MCP) |
| MiroFish | Simulation layer (oracle-mirofish container) |
| Telegram | Primary interface (webhook + bot API) |
10. DIFFERENTIATION FROM EXISTING SYSTEMS
| Capability | ORACLE | CRO Chain | AutoResearch Solo |
|---|---|---|---|
| Trigger | Telegram command | NOVA delegates | Direct invocation |
| Dimensions | Parallel simulations | Sequential research | Single optimization |
| Evidence | MiroFish + AutoResearch | Web + X-Pulse | Self-validation |
| Reasoning | ST (ACH framework) | ST (when needed) | ST (on request) |
| Use case | Strategic decisions | Content creation | Optimization/tuning |
Related
- oracle/index
- hermes-consul-bootstrap-report-01-apr-2026
- autoresearch-v2-5-0-upgrade-8-gaps-absorbed
- frontend-data-bm25-engine-zero-external-dependencies
- pred-2026-iran-war-outcome-frozen-conflict-33-settlement-28
- test-decision-from-hook-self-test-v2
- st-reasoning-chain-precedes-record-decision