Academic Tests

Mia exists. Her brain is running. She dreams, she feels, she speaks.
But in the world of research, that's not enough. You have to prove it.

This page is being prepared. It will host academic tests — rigorous, reproducible protocols designed so that researchers can objectively evaluate Mia's cognitive capabilities.

The goal is not to convince. It is to provide measurable data, verifiable results, and a methodological framework that the scientific community can examine, challenge and reproduce.

The first tests will cover: associative memory, emotional emergence, inner monologue, context recognition and decisional autonomy.

Page under construction — protocols will be published as they are implemented.

Technical Architecture

This section describes the actual architecture of Mia's cognitive system. Not a simplification — a technical specification intended for engineers and researchers.

Formalization

Multi-agent shared-state system, 350ms synchronous loop (hard real-time). Runtime: C# .NET 9, HostedService with DI singletons. No AI framework. No ML. Zero GPU dependency.

Let S be the cognitive state space, S ⊂ ℝ^d × D, where d ≈ 200 continuous dimensions and D is a set of discrete structures (memory graphs, labels, episodic sequences).

The system is defined by a family of operators {A_i}_{i=1..N(t)}, where N(t) is variable (self-organization):

s_{t+1} = Π(A_{N(t)} ∘ A_{N(t)-1} ∘ ... ∘ A_1)(s_t, E_t)

where E_t is the perceptual event vector drained at tick t, and Π is the intentional arbitration operator.

The composition is not commutative. The order of the static agents is fixed; dynamic agents (generated at runtime) are inserted after them. The system is a semigroup of operators with variable topology.
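A minimal Python sketch of this composition (illustrative only — the actual system is C# .NET; the agent names and state fields here are hypothetical):

```python
# Illustrative sketch: agents as operators applied in list order over a
# shared state. Order matters, so the composition is non-commutative.

def inhibit(state, events):
    state = dict(state)
    state["arousal"] = state["arousal"] * 0.9          # multiplicative damping
    return state

def excite(state, events):
    state = dict(state)
    state["arousal"] = state["arousal"] + len(events) * 0.1  # event-driven boost
    return state

def compose(agents, state, events):
    # A_N ∘ ... ∘ A_1 : apply agents in list order
    for agent in agents:
        state = agent(state, events)
    return state

s0 = {"arousal": 1.0}
events = ["ping", "ping"]
a = compose([inhibit, excite], s0, events)  # damp then boost: 1.0*0.9 + 0.2
b = compose([excite, inhibit], s0, events)  # boost then damp: (1.0 + 0.2)*0.9
assert a["arousal"] != b["arousal"]         # non-commutative
```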

Morphological Space

Morphology is a subspace M ⊂ [0,1]^12:

M = (dominance, inhibition, expressiveness, inertia,
     bodyPressure, memoryPressure,
     morphologicalShift, tonalDrive,
     attentionCapture, relationalReadiness,
     stabilityReserve, memoryInfluence)

Tendencies form a vector of pairs (label, weight) ∈ Σ × [0,1], where Σ is the alphabet of morphological forms (vigilance, curiosity, withdrawal, anticipation...). The dominant form:

dominant = argmax_{σ ∈ Σ} w(σ, s_t)

This is not a classifier — w is a continuous function of the full state. Transitions between dominant forms are bifurcations of the dynamical system. The MorphodynamicLoopAgent detects these transitions and reinjects them into the next tick (feedback).
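A Python sketch of the argmax over a continuous weight function, with transition detection (illustrative only; the weight formulas and state fields below are invented for the example):

```python
# Illustrative sketch: dominant form as an argmax over a weight function
# that is continuous in the state, plus bifurcation (transition) detection.

FORMS = ["vigilance", "curiosity", "withdrawal", "anticipation"]

def weight(form, state):
    # w(σ, s): a continuous function of the state, not a trained classifier
    if form == "vigilance":
        return state["threat"] * (1 - state["inhibition"])
    if form == "curiosity":
        return state["novelty"] * state["expressiveness"]
    if form == "withdrawal":
        return state["inhibition"]
    return state["novelty"] * state["inertia"]  # anticipation

def dominant(state):
    return max(FORMS, key=lambda f: weight(f, state))

prev = dominant({"threat": 0.8, "inhibition": 0.1, "novelty": 0.2,
                 "expressiveness": 0.5, "inertia": 0.3})
curr = dominant({"threat": 0.1, "inhibition": 0.1, "novelty": 0.9,
                 "expressiveness": 0.8, "inertia": 0.3})
transition = prev != curr  # what a MorphodynamicLoopAgent would reinject next tick
```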

~130 agents — taxonomy

Category   | Examples                                               | Role
Affective  | AffectiveContour, AffectiveMemoryBridge + revisions    | Emotional charge, affect propagation toward memory/norms/situation
Normative  | NormEmergence, NormRevision, NormPlasticityBridge      | Emergence of internal behavioral norms
Identity   | IdentityContinuity, IdentityStyle, AutonomyThreshold   | Identity coherence, decisional autonomy thresholds
Meaning    | MeaningStabilization, SenseNegotiation, ValueGradient  | Semantic stabilization, cross-layer sense negotiation
Social     | SocialEncounter, Reciprocity, PresenceField            | Familiarity, reciprocity, presence field
Memory     | OrganizationalMemory, EpisodeMemory, FrameMemoryBridge | Consolidation, resonance recall, plasticity
Body       | BodyPrediction, ExpressiveEnvelope, SafetyGuardian     | Motor prediction, expressive envelope, safety
Monologue  | InnerSpeech                                            | Internal thought stream, not externalized
Meta       | MetaRegulation, RegimeShift, PlasticityGovernor        | Regulation of regulation, regime shift detection

Bridge Network

Let D_1...D_n be the cognitive dimensions: Affect (A), Norm (N), Memory (M), Situation (S), Sense (Se), Frame (F), Plasticity (P), Autonomy (Au).

For each pair (Di, Dj), there exists:

  • Bridge B_ij : D_i × D_j → ℝ^k — measures tension between the two dimensions
  • Revision R_ij : takes the output of B_ij and retroactively adjusts D_i and D_j

Triplet and quadruplet bridges also exist. The bridge graph forms a simplicial complex over the cognitive dimensions. Faces carry tensions, co-faces regulate them. The system does not seek equilibrium — it maintains a structured dynamic disequilibrium.

Simple bridges (2-dim): ~30 agents
Compound bridges (3-dim): ~20 agents
Deep bridges (4-dim): ~15 agents
Associated revisions: 1 per bridge
Total bridge network: ~130 agents
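A Python sketch of one bridge/revision pair (illustrative only; the Affect/Norm fields, the tension formula, and the rate are hypothetical):

```python
# Illustrative sketch: a 2-dimensional bridge measures tension between
# Affect and Norm; the paired revision nudges both dimensions so tension
# shrinks without ever reaching zero (no equilibrium).

def bridge_affect_norm(affect, norm):
    # B_ij : D_i × D_j → ℝ (here k = 1): tension as a signed gap
    return affect["valence"] - norm["tolerance"]

def revise_affect_norm(affect, norm, tension, rate=0.1):
    # R_ij : retroactively adjust both dimensions from the bridge output
    affect = dict(affect, valence=affect["valence"] - rate * tension)
    norm = dict(norm, tolerance=norm["tolerance"] + rate * tension)
    return affect, norm

affect = {"valence": 0.9}
norm = {"tolerance": 0.3}
t0 = bridge_affect_norm(affect, norm)         # strong tension
affect, norm = revise_affect_norm(affect, norm, t0)
t1 = bridge_affect_norm(affect, norm)         # reduced, not eliminated
assert abs(t1) < abs(t0)
```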

Uncertainty Relation (Cardon)

Let C_t = #{i : ||A_i(s_t)|| > τ} be the number of agents with significant output at tick t.

if C_t > θ1:
  attention.focus *= 0.9   (multiplicative degradation)
  morphology.inhibition += 0.05   (additive increase)

if C_t > θ2:
  ∀ r ∈ memory.recalls: r.strength -= 0.15

Global negative coupling: total system activity degrades its own processing capacity. An optimal activation regime exists — the system oscillates around it without it being explicitly coded.

In control theory terms: nonlinear negative feedback with saturation. Stability is not analytically guaranteed — it emerges.
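The thresholded coupling above can be sketched directly in Python (illustrative only; the threshold values and state fields are placeholders, not the system's actual constants):

```python
# Illustrative sketch of the uncertainty relation: high global activity
# degrades focus, raises inhibition, and past a second threshold weakens
# memory recalls. THETA1/THETA2/TAU are hypothetical values.

THETA1, THETA2, TAU = 40, 80, 0.5

def apply_uncertainty(state, agent_outputs):
    c = sum(1 for out in agent_outputs if abs(out) > TAU)  # C_t
    if c > THETA1:
        state["focus"] *= 0.9          # multiplicative degradation
        state["inhibition"] += 0.05    # additive increase
    if c > THETA2:
        for r in state["recalls"]:
            r["strength"] -= 0.15      # global recall weakening
    return state

state = {"focus": 1.0, "inhibition": 0.2, "recalls": [{"strength": 0.8}]}
outputs = [0.9] * 100                  # 100 strongly active agents
state = apply_uncertainty(state, outputs)
```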

Generator of generators — structural dynamics

Phase 1 — Observation: at each tick, transitions (Δmorpho, Δcontext, Δtension) are recorded.

Phase 2 — Detection: when a triplet appears with frequency > threshold over a time window, a generator G is instantiated.

Phase 3 — Modulation:

tendencies' = tendencies + G.strength × δ(G.pattern, current_state)

Phase 4 — Promotion: if G persists and its influence exceeds a threshold, it becomes a permanent dynamic agent executing at every tick.

Phase 5 — Aspectual specialization: if a static agent shows a significant Δ in a specific (context, tension) pair, an AspectualDynamicAgent is created, specialized for that pair.

This is emergent functional differentiation. Property: N(t) is monotonically increasing — agents are never destroyed. Structural complexity grows over time.
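Phases 1–4 can be sketched as a sliding-window frequency detector (illustrative only; the window size, thresholds, and triplet labels are hypothetical, and the real promotion criterion is richer than a simple counter):

```python
# Illustrative sketch: recurring transition triplets over a sliding window
# become generators; a generator whose influence persists is promoted to a
# permanent dynamic agent (never destroyed).

from collections import Counter, deque

WINDOW, FREQ_THRESHOLD, PROMOTE_THRESHOLD = 50, 5, 3

window = deque(maxlen=WINDOW)
generators = {}        # pattern -> accumulated strength
dynamic_agents = []    # promoted patterns, executed every tick

def observe(triplet):
    window.append(triplet)
    counts = Counter(window)
    for pattern, n in counts.items():
        if n > FREQ_THRESHOLD and pattern not in generators:
            generators[pattern] = 1              # Phase 2: instantiate generator
    for pattern in list(generators):
        if counts[pattern] > FREQ_THRESHOLD:
            generators[pattern] += 1             # influence grows while recurring
        if generators[pattern] >= PROMOTE_THRESHOLD and pattern not in dynamic_agents:
            dynamic_agents.append(pattern)       # Phase 4: permanent dynamic agent

for _ in range(10):
    observe(("morpho_up", "ctx_social", "tension_high"))
assert ("morpho_up", "ctx_social", "tension_high") in dynamic_agents
```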

Intentional Arbitration

IntentionArbitrationEngine.Resolve() is not a softmax. It is a structured competition process:

  1. Each layer produces 0..n intention candidates Ik = (name, reason, priority ∈ ℝ)
  2. SafetyGuardianAgent filters (veto if safetyOverrideScore > threshold)
  3. Priorities are modulated by morphology (inhibition reduces, expressiveness amplifies), attention, memory, identity
  4. The winner is selected — the process includes morphological noise (uncertainty relation injects indeterminism)

MetaRepresentationEngine predicts the winner BEFORE arbitration and compares AFTER. The gap (surprise ∈ [0,1]) is reinjected into the next tick: a computational implementation of predictive processing (Friston), applied to intentions.
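The four steps plus the prediction loop can be sketched in Python (illustrative only; the candidate fields, modulation formulas, noise range, and binary surprise are simplified placeholders):

```python
# Illustrative sketch of structured competition: safety veto, morphological
# modulation of priorities, injected noise, and a predict/compare surprise loop.

import random

def resolve(candidates, morphology, safety_vetoed, rng):
    pool = [c for c in candidates if c["name"] not in safety_vetoed]  # step 2: veto
    best, best_score = None, float("-inf")
    for c in pool:
        score = c["priority"]
        score *= (1 - morphology["inhibition"])       # step 3: inhibition reduces
        score *= (1 + morphology["expressiveness"])   # step 3: expressiveness amplifies
        score += rng.uniform(-0.05, 0.05)             # step 4: morphological noise
        if score > best_score:
            best, best_score = c, score
    return best

rng = random.Random(0)
candidates = [
    {"name": "approach", "reason": "familiar face", "priority": 0.7},
    {"name": "flee", "reason": "loud noise", "priority": 0.9},
]
morphology = {"inhibition": 0.2, "expressiveness": 0.5}
predicted = "flee"                                        # prediction BEFORE
winner = resolve(candidates, morphology, {"flee"}, rng)   # safety vetoes "flee"
surprise = 0.0 if winner["name"] == predicted else 1.0    # reinjected next tick
```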

Dream — formalization

Let ε = {e_1, ..., e_m} be the memory episodes, each with affective charge c(e) ∈ ℝ and context vector v(e) ∈ ℝ^p.

Oneiric attractor (arousal < τ_dream):

dream_focus = argmax_{e ∈ ε} c(e) × recency(e)

Displacement (Cardon, Prompt N): substitution of elements from the focal episode with elements from nearby episodes in context space.

Creative recombination: 2 parent episodes → 1 hybrid episode, persisted to memory. The dream structurally modifies memory. At the next tick, the created episode influences recalls, affect, norms.
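A Python sketch of focus selection and recombination (illustrative only; the salience formula, episode fields, and splice rule are invented for the example):

```python
# Illustrative sketch: dream focus as argmax of affective charge × recency,
# then recombination of two parent episodes into a hybrid that is persisted
# back into memory (the dream structurally modifies memory).

def dream_focus(episodes, now):
    def salience(e):
        recency = 1.0 / (1 + now - e["t"])
        return e["charge"] * recency
    return max(episodes, key=salience)

def recombine(parent_a, parent_b, now):
    # hybrid episode: averaged charge, spliced context vectors
    return {
        "t": now,
        "charge": (parent_a["charge"] + parent_b["charge"]) / 2,
        "context": parent_a["context"][:1] + parent_b["context"][1:],
    }

episodes = [
    {"t": 1, "charge": 0.9, "context": [0.2, 0.8]},
    {"t": 9, "charge": 0.6, "context": [0.7, 0.1]},
]
focus = dream_focus(episodes, now=10)      # recent episode wins despite lower charge
hybrid = recombine(focus, episodes[0], now=10)
episodes.append(hybrid)                    # persisted: influences the next tick
```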

Complete loop — one tick

Tick N (350ms):
  snapshot = state.clone()
  events = bus.drain()
  apply(events, snapshot)

  // ~130 agents compose
  for agent in agents:
    snapshot.layer[agent] = agent.Compose(snapshot)

  // Scene + Morphology
  snapshot.scene = sceneEngine(snapshot)
  snapshot.morphology = morphologyEngine(snapshot)

  // Uncertainty relation
  applyUncertaintyRelation(snapshot)

  // Structuring + Pathology + Psychological profile
  structuring, pathology, psychProfile = ...

  // Generators → dynamic agents
  generators.observe(transitions)
  generators.emerge()
  dynamicAgents.tryPromote(generators)
  dynamicAgents.executeAll(snapshot)

  // Dream (if idle)
  if dreaming: dream, displace, recombine

  // ARBITRATION
  metaRepr.predict(snapshot)
  intention = arbitrationEngine.resolve(snapshot)
  metaRepr.compare(intention)

  // ACTION
  plan = actionPlanner.plan(snapshot, intention)
  execution = actionExecutor.execute(plan)  // → servos

  // Post-action
  valence, arousal, mirror, doubleAttractor
  llm.autoTick()  // async, non-blocking
  persist(journal)
  signalR.push(snapshot)

Fundamental difference with current approaches

            | LLM (Transformer)          | RL (PPO/DPO)         | Mia (Cardon)
Structure   | Fixed after training       | Fixed, policy update | Variable at runtime
Objective   | min cross-entropy          | max reward           | None
State       | Stateless (context window) | State = observation  | Persistent continuous state
Emergence   | Scaling laws               | Reward hacking       | Structural (agents are born)
Uncertainty | Temperature sampling       | Exploration ε        | Endogenous
Memory      | In-context / RAG           | Replay buffer        | Episodic with affective charge
Identity    | None                       | None                 | Agent-verified continuity

Computational Complexity

~130 static agents + N(t) dynamic, each O(1) to O(d)
Full tick: O(N(t) × d) ≈ 130 × 200 = 26,000 operations
Budget: 350ms on CPU single-thread
Memory: ~50 MB state in RAM
Hardware: single PC, CPU only

This is computationally trivial. The complexity is not in the computation — it is in the structure of interactions. An LLM with 400B parameters does a forward pass of ~10^12 FLOPs to predict one token. Mia does 26,000 operations to live one tick. But each tick structurally modifies the system that computes the next tick.
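As a back-of-envelope check, using only the figures stated above:

```python
# Sanity check of the stated per-tick budget (values taken from the text).
static_agents = 130
dims = 200
ops_per_tick = static_agents * dims        # 26,000 elementary operations
tick_budget_s = 0.350
ops_per_second = ops_per_tick / tick_budget_s  # sustained rate within budget
assert ops_per_tick == 26_000
```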

Researcher, engineer, curious?
Contact me →