How does Mia work?

Mia is a replicant under construction. Her software gives her the ability to perceive, feel, think and act — like a living being, but entirely hand-built.

Embodied cognitive architecture: 109 agents organized in 6 engines, a real-time 350 ms loop, and an operational implementation of the artificial psychic system project.

👁

Mia sees

A camera films her surroundings. Python software analyzes images in real time to detect faces. Mia knows if someone is there, how many people, where they are.
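In code, that scene summary could look like this minimal sketch. The `summarize_faces` helper and the left/center/right split are illustrative assumptions, not the project's actual code; it takes face bounding boxes as a detector would output them and answers the three questions above.

```python
def summarize_faces(boxes, frame_width):
    """Turn face bounding boxes (x, y, w, h) into a simple scene summary:
    is someone there, how many people, and roughly where they are."""
    def side(x, w):
        # Classify a face by the horizontal position of its center.
        center = x + w / 2
        if center < frame_width / 3:
            return "left"
        if center < 2 * frame_width / 3:
            return "center"
        return "right"

    return {
        "present": bool(boxes),
        "count": len(boxes),
        "positions": [side(x, w) for (x, y, w, h) in boxes],
    }
```

For example, two faces near the edges of a 600-pixel-wide frame would summarize as two people, one left and one right.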

💜

Mia feels

Mia has emotions. Not human emotions, but an equivalent: an internal state that colors her perception of the world. She can be in a state of calm, openness, alertness or restraint — and that state shapes all of her behavior.
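One simple way such a state can "color" perception is as a gain applied to incoming signals. This sketch is an assumption: the gain values and the `color_perception` helper are illustrative, not the project's actual numbers or code.

```python
# Illustrative gains: each internal state scales how strongly
# perceptions register (values invented for the example).
STATE_GAINS = {
    "calm": 0.8,
    "openness": 1.2,
    "alertness": 1.5,
    "restraint": 0.5,
}

def color_perception(salience, state):
    """Scale a raw perceptual salience by the current emotional state."""
    return salience * STATE_GAINS[state]
```

The same face at the same distance thus registers more strongly in alertness than in restraint, which is one way a single internal variable can change all downstream behavior.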

🧠

Mia thinks

Every 350 milliseconds, her brain completes a full cycle: analyze the situation, generate ideas, compare them, choose. It's a continuous loop — like a heartbeat, but for thought.
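The heartbeat itself can be sketched as a fixed-cadence loop: do one cycle of work, then sleep away whatever remains of the 350 ms budget. `run_loop` and its `step` callback are illustrative names, not the project's API.

```python
import time

CYCLE_S = 0.350  # one full think cycle: analyze, generate, compare, choose

def run_loop(step, cycles):
    """Call step() at a fixed cadence, sleeping off the unused budget."""
    for _ in range(cycles):
        start = time.monotonic()
        step()
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CYCLE_S - elapsed))
```

Using a monotonic clock keeps the cadence stable even if the system clock is adjusted while the loop runs.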

Mia decides

Several desires constantly compete: curiosity, safety, desire to interact... An internal arbiter chooses which one prevails at each moment. Like a consciousness arbitrating between several impulses.
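A plausible sketch of such an arbiter: rank the competing desires by strength, and when the top two are nearly tied, pick between them at random so behavior does not become rigid. The `margin` threshold is an assumed parameter, invented for the example.

```python
import random

def arbitrate(intentions, margin=0.1, rng=random):
    """Pick the winning intention from a {name: strength} dict.
    If the top two strengths are within `margin`, choose between
    them at random (controlled randomness for close calls)."""
    ranked = sorted(intentions.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return rng.choice([ranked[0][0], ranked[1][0]])
    return ranked[0][0]
```

With a clear leader the choice is deterministic; only near-ties are left to chance.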

🤖

Mia acts

Result: movements. Her eyes move, her head turns, her expressions change. Each gesture is commanded by mini-motors (servos) connected to her software brain via a microcontroller.
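The actual protocol between the software brain and the microcontroller is not documented here; this sketch only shows the idea of encoding one servo command as a compact, fixed-width message that a microcontroller can parse easily. The wire format is invented for the example.

```python
def servo_command(channel, angle_deg):
    """Encode one servo command as a short text message,
    e.g. channel 3 at 90 degrees -> 'S03:090\n'.
    (Illustrative wire format, not the project's protocol.)"""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo angle out of range")
    return f"S{channel:02d}:{angle_deg:03d}\n"
```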

📚

Mia learns

Mia memorizes her experiences. The more she interacts, the more she recognizes patterns and adjusts her reactions. This learning is saved: she doesn't forget everything when turned off.
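Persistence can be as simple as writing learned state to disk at shutdown and reading it back at boot. This JSON-based sketch is an assumption, not the project's actual storage format.

```python
import json
from pathlib import Path

def save_memory(memory, path):
    """Persist learned patterns so they survive a shutdown."""
    Path(path).write_text(json.dumps(memory))

def load_memory(path):
    """Reload saved patterns; start empty on first boot."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else {}
```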

  • 350 ms cognitive loop
  • 109 cognitive agents
  • 6 engines
  • 28 head servos
  • 24 emergent generators

Technical stack

  • Interface: real-time dashboard, cognition visualization
  • Cognition: cognitive agents organized in sequential engines, real-time loop, organizational memory
  • Vision: face detection & recognition, dedicated service communicating with the cognitive engine
  • Hardware: Wi-Fi microcontroller — motor interface, head servos, camera

The cognitive loop — 350 ms

Each cycle, the engines execute in sequence. The loop runs continuously, even at rest.

  • Perception: aggregates perceptions (camera, servos, internal state) into a unified scene representation
  • Morphology: agents transform the scene into a morphological field — a map of tensions and influences
  • Emergence: generates behavioral responses from detected patterns
  • Arbitration: arbitrates between competing intentions; introduces controlled randomness when intentions are close
  • Planning: translates the winning intention into a plan of motor and cognitive actions
  • Execution: executes motor commands, memory consolidation, learning
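The sequence above amounts to a pipeline: each engine takes the working state produced by the previous one and transforms it. A minimal sketch, where `run_cycle` and the engine callables are illustrative names:

```python
def run_cycle(scene, engines):
    """Run one cycle: each engine transforms the working state in turn,
    Perception through Execution."""
    state = scene
    for engine in engines:
        state = engine(state)
    return state

# Usage sketch (placeholder engines):
# result = run_cycle(raw_scene,
#                    [perception, morphology, emergence,
#                     arbitration, planning, execution])
```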

The 5 project instances — implemented in Mia

The project is based on an artificial psychic system with 5 instances. Until now the concept had remained purely theoretical, never implemented or tested. Mia is the first known implementation — with 109 functional agents and a real-time loop.

Unconscious

Theory: low-level processing, primary drives, raw affect

Mia: affect, security and drive agents — curiosity, sociality, refocusing, expansion, withdrawal, protection, questioning

Pre-conscious

Theory: intermediate processing, filtering, shaping

Mia: morphological engine + multiple regulators (cooldown, refocusing, norms, security...)

Conscious

Theory: integration, decision, intentionality

Mia: arbitration engine + action planner — multi-factor decision with element of randomness

Emotional center

Theory: emotions as modulators of global processing

Mia: affective tonality system + emotional contour — emotions continuously modify the morphological field

Systemic loop

Theory: global feedback, regulation of the entire system

Mia: feedback and self-regulation mechanisms

Hardware — 28 head servos

Facial servos

  • Eyes: 6 servos
  • Eyelids: 2 servos
  • Eyebrows: 4 servos
  • Smile: 6 servos
  • Lips: 3 servos
  • Jaw: 1 servo
  • Tongue: 3 servos
  • Neck: 3 servos

Microcontroller

  • Wi-Fi microcontroller
  • Motor abstraction interface
  • Real-time communication with cognitive engine

Vision

  • Camera
  • Dedicated detection service
  • Face detection + recognition
  • Distance estimation

Fabrication

  • 30+ kg of 3D printed PLA
  • Gears + ball bearings
  • Latex skin (improving)
  • CAD (~80% complete)

Technical roadmap

✓ Done
Project cognitive architecture

Cognitive agents organized in sequential engines, real-time loop. Living memory, self-regulation — everything is implemented and functional.

⚙ In progress
Head hardware — 28 servos

CAD being finalized, electronic wiring up next, and a motor interface to drive all facial axes.

→ Next
Body awareness

The robot learns its own morphology through random movements + camera feedback — expressions are discovered, not programmed.
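This idea, often called motor babbling, can be sketched as a search over random poses scored by camera feedback. Every callable below (`move`, `observe`, `score`) is a placeholder for the robot's real I/O; only the 28-servo pose size comes from the project.

```python
import random

def babble(move, observe, score, trials, rng=random):
    """Try random head poses, watch the result through the camera,
    and keep the best-scoring pose. A discovered expression is a
    pose that scored well, not one that was programmed."""
    best_pose, best_score = None, float("-inf")
    for _ in range(trials):
        pose = [rng.uniform(0, 180) for _ in range(28)]  # 28 head servos
        move(pose)                 # command the servos
        s = score(observe())       # camera feedback -> fitness
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score
```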

◇ Future
LLM integration — language and culture

An LLM as an async layer: natural language interpretation, speech generation, semantic memory enrichment. Mia keeps her real-time cognition — the LLM responds when ready. Architecture combining embodied cognition + LLM.
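The asynchronous split can be sketched with a worker thread: the real-time loop posts prompts and keeps cycling at 350 ms, and replies arrive on a queue whenever the model is done. `ask_llm` stands in for any language-model call; the function names are illustrative.

```python
import queue
import threading

def llm_layer(ask_llm, requests, replies):
    """Run the LLM off the real-time path. The cognitive loop puts
    prompts on `requests` and never blocks; answers show up on
    `replies` when ready. Posting None shuts the worker down."""
    def worker():
        while True:
            prompt = requests.get()
            if prompt is None:
                break
            replies.put(ask_llm(prompt))

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The key property is that a slow model call never stalls the 350 ms loop; the loop simply checks `replies` each cycle.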

Why Mia is unique

No existing cognitive architecture combines all these elements — not SOAR, ACT-R, LIDA, or AKOrN.

  • Project cognition implemented and operational
  • Embodied emotions (tonality + affective contour)
  • Real physical embodiment (servos, camera, microcontroller)
  • Persistent learning across sessions
  • Morphology as cognitive control principle
  • Living memory and self-regulation