
Technical Report · V1 · 23 March 2026

By Federico Bottino, Carlo Ferrero, and Nicholas Dosio (Kakashi Ventures Accelerator) and Pierfrancesco Beneventano (Massachusetts Institute of Technology)

16 pages · 14 min read

OIDA: Organizational Knowledge in the Age of Associative Intelligence

Organizational corpora used by AI agents typically fail to distinguish between decisions, hypotheses, stale observations, and unresolved contradictions. OIDA is a framework that makes these distinctions computable — giving every unit of knowledge an epistemic class, a confidence score, a decay profile, and a contradiction status that AI agents can reason over directly.

↓ Download the full technical report (PDF)

The problem: epistemically flat knowledge

The quantity of information available to modern organisations is not the binding constraint on decision quality. The binding constraint is the inability to determine, at any given moment, what that information means: which claims are well-supported, which are contested, which have become obsolete, and which stand in direct contradiction to active commitments.

When AI agents attempt to reason over organisational knowledge, this gap becomes operationally consequential. Without explicit epistemic structure, an agent cannot distinguish a verified decision from an abandoned hypothesis, cannot identify that a strategic assumption is contradicted by recent evidence, and treats an open question as equivalent to a settled conclusion — because nothing in the substrate told it otherwise.

Better retrieval cannot close this gap on an epistemically flat substrate: the distinctions between what is decided, hypothesised, stale, or contested are simply not represented there. Answering such questions requires epistemic structure, not better retrieval.

From addressed lookup to associative retrieval

The paper identifies a deeper structural reason why better retrieval alone is insufficient. For decades, data structures and access algorithms have been optimised for a single consumer: the CPU executing addressed, sequential instructions. Today, the primary consumers of organisational knowledge are neural networks — LLMs, embedding models, agent systems — and the humans who must decide from what those systems surface. Both operate through associative retrieval: contextual activation of related representations, weighted by relevance, recency, and structural importance.

OIDA does not propose new data structures. It proposes a new access layer on top of established infrastructure — PostgreSQL, pgvector, graph queries — designed to prioritise knowledge by its utility to associative consumers rather than by its proximity to a query string.

The Knowledge Object

At the core of OIDA is the Knowledge Object (KO) — a typed unit of organisational knowledge carrying its epistemic class, confidence, temporal validity, and contradiction status as computable properties. Every KO is assigned to one of nine epistemic classes, drawn from two orthogonal axes: epistemic commitment strength (from explicit ignorance to binding commitments) and temporal behaviour under absence of reinforcement (from non-decaying to inversely decaying).
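The shape of a Knowledge Object can be sketched as a plain record. The field names below are illustrative, not the paper's published schema; only the four computable properties named above (class, confidence, temporal validity, contradiction status) are taken from the source.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class EpistemicClass(Enum):
    DECISION = "decision"
    CONSTRAINT = "constraint"
    EVIDENCE = "evidence"
    NARRATIVE = "narrative"
    PLAN = "plan"
    EVALUATION = "evaluation"
    OBSERVATION = "observation"
    HYPOTHESIS = "hypothesis"
    QUESTION = "question"

@dataclass
class KnowledgeObject:
    koc: str                      # seven-axis coordinate, immutable identifier
    content: str
    epistemic_class: EpistemicClass
    confidence: float             # 0..1, assigned at ingestion
    importance: float             # K-score, recomputed each gravity cycle
    created_at: datetime          # anchor for temporal validity / decay
    contradicted_by: list[str] = field(default_factory=list)  # KOCs of conflicting KOs
```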

The nine epistemic classes

DECISION — Formalized choice, valid until superseded. Never decays.

CONSTRAINT — Non-negotiable structural boundary. Never decays.

EVIDENCE — Verifiable supporting or refuting data. Half-life ~365 days.

NARRATIVE — Persistent contextual anchor. Never decays.

PLAN — Structured intention with time horizon. Half-life ~69 days.

EVALUATION — Informed qualitative assessment. Half-life ~198 days.

OBSERVATION — Weak signal not yet interpreted. Half-life ~90 days.

HYPOTHESIS — Unverified testable claim. Half-life ~50 days.

QUESTION — Open question requiring resolution. Urgency grows over time.

The QUESTION class is the dual of knowledge: it models what is not known. Unresolved questions become more urgent, not less, as the organisation continues making decisions in their shadow. This inverse decay is a distinctive design choice — most knowledge systems model only what is known; OIDA also models what is not.
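The class-specific half-lives above translate directly into per-cycle decay factors. A minimal sketch, assuming simple exponential half-life decay and linear urgency growth for QUESTION — the report publishes the half-lives but not the exact decay law, so both functional forms here are assumptions:

```python
# Half-lives in days, from the class list above; None = never decays.
HALF_LIFE_DAYS = {
    "DECISION": None, "CONSTRAINT": None, "NARRATIVE": None,
    "EVIDENCE": 365, "EVALUATION": 198, "OBSERVATION": 90,
    "PLAN": 69, "HYPOTHESIS": 50,
}

def decayed_importance(k: float, epistemic_class: str, days_elapsed: float) -> float:
    """Importance after days_elapsed without reinforcement."""
    half_life = HALF_LIFE_DAYS.get(epistemic_class)
    if half_life is None:
        return k  # non-decaying classes keep their importance
    return k * 0.5 ** (days_elapsed / half_life)

def question_urgency(base: float, days_open: float, growth: float = 0.01) -> float:
    """QUESTION decays inversely: urgency grows while unresolved.
    The growth rate here is an illustrative assumption."""
    return base * (1 + growth * days_open)
```

Under these assumptions a HYPOTHESIS left unreinforced loses half its importance in 50 days, while a DECISION holds its score until explicitly superseded.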

Every KO also carries a seven-axis Knowledge Object Coordinate (KOC) — an immutable identifier encoding entity, domain, class, epoch, relational depth, author, and variant. The KOC enables O(1) structural similarity computation between any two Knowledge Objects without database access.
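Because the KOC is a fixed seven-axis tuple, structural similarity between two KOs reduces to constant-time axis comparison with no database round trip. A minimal sketch, assuming equal axis weighting (the report does not specify how axes are weighted):

```python
KOC_AXES = ("entity", "domain", "class", "epoch", "depth", "author", "variant")

def koc_similarity(a: dict, b: dict) -> float:
    """O(1) structural similarity: fraction of matching KOC axes.
    Both coordinates are already in hand, so no lookup is needed."""
    matches = sum(1 for axis in KOC_AXES if a[axis] == b[axis])
    return matches / len(KOC_AXES)

ko1 = {"entity": "kva", "domain": "legal", "class": "HYPOTHESIS",
       "epoch": 3, "depth": 2, "author": "fb", "variant": 0}
ko2 = {**ko1, "class": "OBSERVATION", "author": "cf"}
# five of seven axes agree
assert abs(koc_similarity(ko1, ko2) - 5 / 7) < 1e-9
```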

The Knowledge Gravity Engine

The Knowledge Gravity Engine (KGE) is the deterministic update rule at the heart of OIDA. Every six hours, it recomputes the importance score K for every active Knowledge Object. The computation decomposes into three forces: momentum (carrying forward current importance), injection (new signals from usage, evidence, and graph propagation), and negative forces (class-specific decay and contradiction penalties).

The core design decision is that epistemic maintenance is computational, not editorial. No language model decides what is obsolete. No human curator manually flags contradictions. The framework computes epistemic state — importance, decay, contradiction pressure — cycle by cycle, from class-specific parameters and real organisational signals.
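The three forces combine into a deterministic per-cycle update. A sketch of one KGE cycle for a single KO; the default momentum, the linear decay term, and the additive combination are illustrative assumptions, not the paper's calibrated parameters:

```python
def kge_update(k: float,
               usage_signal: float,
               evidence_signal: float,
               graph_propagation: float,
               decay_rate: float,
               contradiction_pressure: float,
               momentum: float = 0.9) -> float:
    """One Knowledge Gravity Engine cycle for a single Knowledge Object.
    Momentum carries forward current importance; injection adds new signals
    from usage, evidence, and graph propagation; negative forces subtract
    class-specific decay and contradiction penalties."""
    carried = momentum * k
    injection = usage_signal + evidence_signal + graph_propagation
    negative = decay_rate * k + contradiction_pressure
    return max(0.0, carried + injection - negative)
```

A KO that is used keeps or gains importance; one that is neither used nor supported drifts down at its class-specific rate, and contradiction pressure accelerates the drop.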

KOs are classified by their current K-score into four disjoint memory zones: Core Memory (always injected into agent context), Working Memory (retrieved when query-relevant), Peripheral (targeted queries only), and Dormant (excluded from active computation). No KO is ever deleted from the historical record — only excluded from active computation.
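Zone assignment then reduces to threshold comparison on the current K-score. The cutoffs below are placeholders — the report does not publish its configured thresholds:

```python
def memory_zone(k: float) -> str:
    """Map a K-score to one of the four disjoint memory zones.
    Thresholds are illustrative, not OIDA's configured values."""
    if k >= 0.8:
        return "core"        # always injected into agent context
    if k >= 0.5:
        return "working"     # retrieved when query-relevant
    if k >= 0.2:
        return "peripheral"  # targeted queries only
    return "dormant"         # excluded from computation, never deleted
```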

Hybrid retrieval

In standard RAG architectures, all retrieved chunks are treated as epistemically equivalent. In OIDA, every retrieved Knowledge Object carries a computable epistemic weight. The hybrid retrieval score combines three independent similarity layers: structural similarity (computed from KOC axis alignment), semantic similarity (cosine similarity over embedding vectors), and topological similarity (inverse hop distance in the epistemic graph). At retrieval time, the hybrid similarity is multiplied by the contextual importance of each KO, producing a final ranking that reflects both relevance and epistemic weight.

Default similarity weights: structural (α) = 0.30 · semantic (β) = 0.50 · topological (γ) = 0.20
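With the default weights above, the final retrieval score can be sketched as a weighted blend of the three similarity layers, scaled by each KO's contextual importance:

```python
def hybrid_score(structural: float, semantic: float, topological: float,
                 importance: float,
                 alpha: float = 0.30, beta: float = 0.50,
                 gamma: float = 0.20) -> float:
    """Hybrid similarity (structural + semantic + topological), multiplied
    by contextual importance so ranking reflects both relevance and
    epistemic weight. Default weights are the values stated above."""
    similarity = alpha * structural + beta * semantic + gamma * topological
    return similarity * importance
```

Two KOs with identical semantic similarity to a query can thus rank very differently if one is a high-importance DECISION and the other a decayed, contradicted HYPOTHESIS.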

Contradiction as a first-class signal

Relationships between Knowledge Objects are typed from a closed vocabulary of ten edge types, each carrying a signed semantic coefficient. Positive edges (SUPPORTS, BASED_ON, IMPLEMENTS, SUPERSEDES, REFINES, DERIVES_FROM, ENABLES, PRECEDES) propagate importance through the graph. Negative edges — BLOCKS and CONTRADICTS — actively suppress it. When two pieces of evidence conflict, the contradiction is computationally visible to any agent querying the system. Most knowledge systems optimise for surfacing supporting evidence; OIDA models contradiction explicitly.
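A sketch of how signed edge coefficients might propagate importance through the graph. The ten edge types and their signs come from the paper; the coefficient magnitudes here are assumptions:

```python
# Signed semantic coefficients per edge type (magnitudes are illustrative).
EDGE_COEFFICIENTS = {
    "SUPPORTS": 0.6, "BASED_ON": 0.5, "IMPLEMENTS": 0.5, "SUPERSEDES": 0.4,
    "REFINES": 0.4, "DERIVES_FROM": 0.4, "ENABLES": 0.3, "PRECEDES": 0.2,
    "BLOCKS": -0.5, "CONTRADICTS": -0.7,
}

def propagated_signal(neighbors: list[tuple[str, float]]) -> float:
    """Net importance signal flowing into a KO from its typed neighbours.
    Each pair is (edge type, source KO's K-score); negative edges
    (BLOCKS, CONTRADICTS) actively suppress the target."""
    return sum(EDGE_COEFFICIENTS[edge] * k for edge, k in neighbors)

# A supported hypothesis that is also contradicted by strong evidence:
signal = propagated_signal([("SUPPORTS", 0.8), ("CONTRADICTS", 0.9)])
assert signal < 0  # the contradiction outweighs the support here
```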

End-to-end: an organisational decision under epistemic pressure

The paper illustrates the system through a concrete case from KVA's operations. A hypothesis about B2B SaaS legal compliance being a high-growth segment sits alongside a supporting evaluation and a market observation. When new contradicting evidence arrives — a potential customer reports regulatory blocking of AI tool adoption — the ingestion layer classifies it as an OBSERVATION and creates a CONTRADICTS edge to the hypothesis.

At the next KGE cycle, the hypothesis's importance drops. A team member creates a QUESTION KO — “should the legal compliance investment thesis be revised?” — whose urgency will increase each cycle it remains unresolved. When an agent is asked “what is our current position on the legal compliance segment?”, the retrieval layer returns ranked KOs with epistemic metadata: the evaluation, the contradicting observation, the weakened hypothesis, and the blocking question flagged with increasing urgency. The agent's response reflects the full epistemic state — not just the most semantically similar documents.

Preliminary observations from deployment

OIDA is deployed as the operational knowledge infrastructure of KVA's venture studio. The paper reports internal observations from approximately 500 Knowledge Objects across five ventures, three client engagements, and internal strategy — observed over four weeks of KGE cycles, with sources migrated from Notion, Google Calendar, and Slack.

~500 Knowledge Objects in the live corpus · 10–15% of KOs settled in Core Memory · 4 weeks of observed KGE cycles · 50 evaluation queries from real team requests

The hybrid retrieval system produced noticeably more relevant results for causal queries (“why was this decided?”) and relational queries (“what supports this claim?”), where structural and topological similarity contribute most. For simple factual lookups, performance was comparable to vector-similarity-only baselines. These are qualitative team assessments, not controlled experiments.

Limitations and honest assessment

The paper is deliberately transparent about what the system can and cannot guarantee. Classification at ingestion is LLM-assisted and therefore fallible; all subsequent maintenance and retrieval is deterministic. The configured parameters are working heuristics from operational experience at KVA — not empirically validated optima. Whether the full coupled system converges under dynamic graph conditions with coupled K-scores has not been formally proven. Systematic calibration against retrieval quality metrics is the next major technical milestone.

What remains hard

Cold start — new deployments begin without K-score history or gravity calibration
Parameter sensitivity — many configurable parameters whose interactions are not fully characterised
Taxonomy completeness — the nine-class taxonomy is operationally motivated but not proven minimal
Ingestion classification quality — the entire downstream system amplifies classification errors

The contribution

The current implementation and single-site deployment support three provisional claims. First, the architecture is deployed and operational. Second, computational epistemic maintenance — typing, decay, contradiction propagation — is tractable and produces a more informative retrieval substrate than unstructured alternatives. Third, design requirements for organisational epistemic infrastructure can be articulated from building experience, and these requirements are non-obvious: that contradiction handling matters more than agreement surfacing, that class-specific decay solves problems retrieval improvements cannot, and that manual epistemic hygiene does not scale.

The contribution is infrastructural, not theoretical. We have built a system that models epistemic state computationally and reported what we learned from building it. OIDA should be understood as a computational hypothesis under active validation.

A forthcoming companion report will present a controlled empirical evaluation of OIDA against current state-of-the-art retrieval and knowledge management systems — including structured RAG baselines and agent memory architectures — with quantitative metrics for contradiction detection, epistemic ranking accuracy, and decision traceability.


Federico Bottino, Carlo Ferrero, and Nicholas Dosio are affiliated with Kakashi Ventures Accelerator. Pierfrancesco Beneventano is affiliated with the Massachusetts Institute of Technology (MIT).


For enquiries about the framework or partnership opportunities, contact the KVA team.

Epistemic knowledge · Knowledge graphs · RAG · Contradiction detection · Organizational AI · Knowledge Gravity Engine · Associative retrieval

Kakashi Ventures Accelerator Srl · Turin, Italy
Published 23 March 2026