
Adaptive Reasoning Graph (ARG) Standard

A reliable protocol for agent decision-making with evolving long-term memory.

Combining deterministic ontology and bounded vector acceleration for safe, scalable automation.

Introduction

This project defines an architecture and protocol for agents with a strong focus on:

  • long-term memory,
  • adaptive/evolving memory (able to refine and consolidate over time),
  • robust decision-making for automation agents.

The goal is not to build yet another enhanced RAG pipeline, but to provide a coherent protocol that allows agents to reason, act, remember, and evolve without sacrificing reliability, governance or auditability.

Scope

This protocol covers:

  1. An operational ontology

    • structured through a taxonomy (clusters/labels)
    • and explicit relationships (edges)
    • that constrain navigation and decision-making.
  2. A “leaf-oriented” reasoning graph

    • where nodes are terminal units of information, decision or action,
    • and edges form the grammar of movement between these units.
  3. A two-speed memory system

    • online episodic memory (capturing events and weak signals),
    • offline semantic consolidation (controlled promotion into stable knowledge).
  4. Fast classification and routing

    • driven by lexical and vector signals,
    • but strictly bounded by the taxonomy.
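The first two items above can be sketched as a small data structure: leaves as terminal units carrying taxonomy labels, and typed edges forming the grammar of movement between them. This is a minimal illustration, not part of the ARG specification; all identifiers and relation names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Leaf:
    """Terminal unit of information, decision or action (illustrative)."""
    leaf_id: str
    cluster: str       # taxonomy cluster this leaf belongs to
    labels: frozenset  # taxonomy labels constraining navigation

@dataclass
class ReasoningGraph:
    leaves: dict = field(default_factory=dict)  # leaf_id -> Leaf
    edges: dict = field(default_factory=dict)   # (src, relation) -> dst

    def add_leaf(self, leaf: Leaf) -> None:
        self.leaves[leaf.leaf_id] = leaf

    def connect(self, src: str, relation: str, dst: str) -> None:
        # Typed edges form the "grammar of movement" between leaves.
        if src not in self.leaves or dst not in self.leaves:
            raise KeyError("both endpoints must exist before connecting")
        self.edges[(src, relation)] = dst

    def step(self, src: str, relation: str):
        """Deterministic navigation: follow one typed edge, or None."""
        return self.edges.get((src, relation))

g = ReasoningGraph()
g.add_leaf(Leaf("refund.check_eligibility", "billing", frozenset({"refund"})))
g.add_leaf(Leaf("refund.issue", "billing", frozenset({"refund", "action"})))
g.connect("refund.check_eligibility", "on_success", "refund.issue")
print(g.step("refund.check_eligibility", "on_success"))  # refund.issue
```

Because navigation is a lookup over explicit typed edges rather than a similarity search, every transition is deterministic and auditable.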

This framework targets enterprise-grade agents that require traceability and safe evolution across support, operations, compliance and workflow automation.

The problem we are solving

Agents that rely solely on LLM generation and/or vector-only RAG typically fail in four ways:

  • non-governable decisions (implicit logic),
  • unstable memory (redundant or contradictory facts),
  • structure drift over time,
  • false quality signals driven by user silence.

The main risk is building a system that “often works” but is not reliable, not auditable and not safe to evolve.

Core idea

We combine two complementary strengths:

  • the stability and unlimited structural scalability of graphs,
  • with the flexibility and speed of vector retrieval.

In this design:

  • taxonomy + graph provide durable structure and deterministic navigation,
  • vector search accelerates detection, classification and local retrieval.

Crucially, vector systems are treated as approximators, not as truth-makers.
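The "approximator, not truth-maker" rule can be illustrated in a few lines: vector similarity proposes and ranks candidates, but a deterministic taxonomy check has the final word. The label set, scores and function names below are assumptions made for the example, not prescribed by ARG.

```python
# Deterministic arbiter: only taxonomy-approved labels may be selected.
ALLOWED_LABELS = {"billing/refund", "billing/invoice", "support/password_reset"}

def validate(label: str) -> bool:
    """Taxonomy validity check; this is the final decision authority."""
    return label in ALLOWED_LABELS

def route(vector_candidates):
    """vector_candidates: list of (label, similarity) pairs, e.g. from an
    ANN index. Vector scores only narrow the search space; a candidate
    outside the taxonomy is rejected no matter how high its score."""
    for label, score in sorted(vector_candidates, key=lambda c: -c[1]):
        if validate(label):
            return label
    return None  # no taxonomy-valid candidate: escalate, don't guess

# The top-scoring candidate is rejected because it is not in the taxonomy.
candidates = [("billing/chargeback", 0.93), ("billing/refund", 0.88)]
print(route(candidates))  # billing/refund
```

Returning `None` instead of the best raw match is deliberate: an out-of-taxonomy hit is treated as a routing failure to handle explicitly, never as an answer.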

Why not rely on a vector database alone?

Vector retrieval is excellent for fast semantic matching but it is not a safe long-term substitute for structure.

As the number of indexed items grows inside a shared latent space:

  • semantic neighborhoods become noisier,
  • disambiguation becomes harder,
  • the system accumulates near-duplicates,
  • and memory quality degrades unless strong constraints exist.

In contrast, a well-designed graph can keep growing through explicit branching, typed relationships and controlled evolution (see ARG Core).

Therefore, the architecture intentionally avoids infinite, unconstrained vector growth. Instead, it bounds the vector layer with taxonomy constraints and prunes it through offline consolidation.

Why not rely on a knowledge graph alone?

A knowledge graph provides durable structure, explicit relationships, and strong auditability.
However, a graph-only approach is often too slow and too rigid for real-world agent routing and intent capture at scale.

Typical limitations of a graph-only stack include:

  • slower or more expensive early-stage intent detection without a semantic shortcut layer,
  • higher manual modeling cost to cover long-tail phrasing and evolving user language,
  • weaker performance on fuzzy matching when the taxonomy is still young,
  • more friction for cold-start scenarios.

This is why the architecture combines a deterministic graph-and-taxonomy backbone with a vector layer for fast semantic matching.

The vector layer remains an approximator, not a structure-definer, and its growth is governed by taxonomy constraints and offline evolution rules (see ARG Core and Guides).

Design principles

This project is built on a few non-negotiable principles:

  • Taxonomy validity always wins
    A deterministic validator is the final arbiter of allowed labels and paths.

  • Vector signals narrow the search space
    They never define the structure by themselves.

  • Memory is a controlled system, not a dump
    Online writes are conservative; semantic promotion happens offline.

  • Silence is not confirmation
    Weak signals are tracked separately from confirmed success.

  • Evolution is staged and versioned
    Changes follow lifecycle rules (ACTIVE → DEPRECATED → REMOVED)
    with alias tables to protect both reasoning and memory.
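The last principle, staged lifecycle states plus alias tables, can be sketched as follows. The states and the resolution rule follow the lifecycle above; the data structures themselves are illustrative assumptions, not part of the specification.

```python
# Labels move through ACTIVE -> DEPRECATED -> REMOVED; an alias table
# redirects deprecated labels so old memories and edges keep resolving.
labels = {
    "billing/refund": "ACTIVE",
    "billing/reimbursement": "DEPRECATED",  # superseded by billing/refund
}
aliases = {
    "billing/reimbursement": "billing/refund",  # protects reasoning and memory
}

def resolve(label: str):
    """Map any historical label to its current ACTIVE form, or None."""
    seen = set()
    while label in aliases and label not in seen:
        seen.add(label)  # guard against accidental alias cycles
        label = aliases[label]
    return label if labels.get(label) == "ACTIVE" else None

print(resolve("billing/reimbursement"))  # billing/refund
print(resolve("billing/refund"))         # billing/refund
print(resolve("billing/chargeback"))     # None (REMOVED or never existed)
```

Resolving through the alias table at read time means no stored memory or graph edge ever dangles when a label is deprecated.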

What you will find here

  • The ARG protocol (online inference + offline refinement).
  • The Context Weaver architecture for taxonomy-safe classification under tight latency budgets.
  • The Policy Manager kernel for pre/post governance.
  • A structured approach to memory write, deduplication and consolidation.
  • Guides for ontology construction, auditing and safe lifecycle evolution.
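The memory write-and-consolidation approach listed above follows the two-speed model from the scope: conservative online episodic writes, with semantic promotion deferred to an offline pass. The promotion threshold and field names below are illustrative assumptions.

```python
episodic = []   # online: append-only event log (weak signals included)
semantic = {}   # offline: consolidated, deduplicated stable knowledge

def write_online(event: dict) -> None:
    """Online writes are conservative: record the event, never overwrite."""
    episodic.append(event)

def consolidate_offline() -> None:
    """Offline pass: promote repeated, consistent facts into semantic
    memory. A single occurrence stays episodic (a weak signal)."""
    counts = {}
    for e in episodic:
        key = (e["subject"], e["fact"])
        counts[key] = counts.get(key, 0) + 1
    for (subject, fact), n in counts.items():
        if n >= 2:  # illustrative promotion threshold
            semantic[subject] = fact

write_online({"subject": "user_42", "fact": "prefers email"})
write_online({"subject": "user_42", "fact": "prefers email"})
write_online({"subject": "user_7", "fact": "asked about refunds"})
consolidate_offline()
print(semantic)  # {'user_42': 'prefers email'}
```

Splitting write and promotion keeps the online path cheap and reversible, while deduplication and conflict handling happen where there is time to do them carefully.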

If you are building agents that must remain reliable as domains, products, and user behavior evolve, this framework is designed to offer a practical middle path between:

  • purely probabilistic LLM reasoning,
  • and fully hand-engineered decision trees.

It aims to deliver fast online behavior, deterministic structure, and safe long-term learning.