Mia sees
A camera films her surroundings. Python software analyzes images in real time to detect faces. Mia knows if someone is there, how many people, where they are.
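The source does not show the detection code itself; assuming the face detector (e.g. an OpenCV cascade) returns bounding boxes, a minimal sketch of how raw boxes become what Mia "knows" (presence, count, positions) could look like this. The `Percept` type and `summarize_faces` name are illustrative, not the project's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    present: bool                       # is anyone there?
    count: int                          # how many people?
    centers: list = field(default_factory=list)  # (x, y) face centers in pixels

def summarize_faces(boxes):
    """Turn raw (x, y, w, h) face boxes into a percept:
    presence, head count, and where each face is."""
    centers = [(x + w // 2, y + h // 2) for (x, y, w, h) in boxes]
    return Percept(present=bool(boxes), count=len(boxes), centers=centers)

# Two hypothetical detections from one camera frame.
percept = summarize_faces([(10, 20, 40, 40), (100, 50, 30, 30)])
```

The rest of the architecture can then consume a stable `Percept` structure regardless of which detector produced the boxes.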
Mia is a replicant under construction. Her software gives her the ability to perceive, feel, think and act — like a living being, but entirely hand-built.
Embodied cognitive architecture: 109 agents organized into 6 engines, a real-time 350 ms loop, and an operational implementation of the artificial psychic system project.
Mia has emotions. Not human emotions, but an equivalent: an internal state that colors her perception of the world. She can be in a state of calm, openness, alertness or restraint — and it changes all her behavior.
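One way to read "an internal state that colors her perception" is as a set of gains that an affect state applies to candidate behaviors. This is a sketch under that assumption; the four state names come from the text, but the gain values and the `modulate` function are illustrative.

```python
# Illustrative gains per affect state; the real values are the project's, not these.
AFFECT_GAINS = {
    "calm":      {"approach": 0.5, "caution": 0.2},
    "openness":  {"approach": 0.9, "caution": 0.1},
    "alertness": {"approach": 0.3, "caution": 0.8},
    "restraint": {"approach": 0.1, "caution": 0.6},
}

def modulate(action_scores, affect):
    """Scale each candidate action's score by the current affect state,
    so the same situation yields different behavior in different moods."""
    gains = AFFECT_GAINS[affect]
    return {a: s * gains.get(a, 1.0) for a, s in action_scores.items()}

scores = modulate({"approach": 1.0, "caution": 1.0}, "alertness")
```

The point of the design is that affect is not a separate output but a multiplier on everything else.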
Every 350 milliseconds, her brain completes a full cycle: analyze the situation, generate ideas, compare them, choose. It's a continuous loop — like a heartbeat, but for thought.
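The fixed-period cycle described here (analyze, generate, compare, choose, every 350 ms) can be sketched as a simple fixed-rate loop. The phase functions below are stand-ins; only the 350 ms period and the four-step sequence come from the text.

```python
import time

CYCLE_S = 0.350  # one full cognitive cycle every 350 ms

def run_cycles(n, phases):
    """Run `n` cycles, executing each phase in order, then sleeping
    off the remainder so the loop keeps a steady 350 ms heartbeat."""
    for _ in range(n):
        start = time.monotonic()
        state = None
        for phase in phases:
            state = phase(state)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CYCLE_S - elapsed))

# Trivial stand-in phases that just record their execution order.
log = []
def analyze(s):  log.append("analyze");  return {"faces": 1}
def generate(s): log.append("generate"); return s
def compare(s):  log.append("compare");  return s
def choose(s):   log.append("choose");   return s

run_cycles(2, [analyze, generate, compare, choose])
```

Sleeping off the remainder (rather than a fixed sleep) is what keeps the period constant even when the phases' cost varies.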
Several desires constantly compete: curiosity, safety, desire to interact... An internal arbiter chooses which one prevails at each moment. Like a consciousness arbitrating between several impulses.
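The arbiter described above can be sketched, under the simplest assumption, as a selection over current drive pressures. The drive names echo the text; the numeric pressures and the `arbitrate` function are illustrative.

```python
def arbitrate(drives):
    """Pick the drive with the highest current pressure.
    `drives` maps drive name -> pressure in [0, 1]."""
    return max(drives, key=drives.get)

# Hypothetical snapshot of competing drives at one instant.
drives = {"curiosity": 0.7, "safety": 0.4, "interaction": 0.6}
winner = arbitrate(drives)
```

In a fuller model the pressures would decay or grow over time, so a drive that loses repeatedly eventually wins.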
Result: movements. Her eyes move, her head turns, her expressions change. Each gesture is commanded by mini-motors (servos) connected to her software brain via a microcontroller.
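The text says gestures are commanded via servos behind a microcontroller but does not specify the wire protocol, so this sketch assumes a hypothetical text protocol (`"<channel>:<pulse_us>\n"` over serial) and the standard 1000 to 2000 µs servo pulse range. Both are assumptions, not the project's actual interface.

```python
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Map a 0-180 degree servo angle to a pulse width in microseconds,
    clamping out-of-range angles."""
    angle_deg = max(0, min(180, angle_deg))
    return int(min_us + (max_us - min_us) * angle_deg / 180)

def frame(channel, angle_deg):
    """Hypothetical command frame the software brain would send to the
    microcontroller over a serial link."""
    return f"{channel}:{angle_to_pulse_us(angle_deg)}\n".encode()

# E.g. center the servo on channel 3.
cmd = frame(3, 90)
```

Keeping the angle-to-pulse mapping on the Python side leaves the microcontroller as a thin pulse generator.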
Mia memorizes her experiences. The more she interacts, the more she recognizes patterns and adjusts her reactions. This learning is saved: she doesn't forget everything when turned off.
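"She doesn't forget everything when turned off" implies memory is serialized to disk between sessions. A minimal sketch of such persistence, assuming a JSON file (the format and the `patterns` structure are illustrative):

```python
import json
import os
import tempfile

def save_memory(memory, path):
    """Persist the learned state so it survives a shutdown."""
    with open(path, "w") as f:
        json.dump(memory, f)

def load_memory(path):
    """Restore memory at startup; start fresh if none exists yet."""
    if not os.path.exists(path):
        return {"patterns": {}}
    with open(path) as f:
        return json.load(f)

# Round trip: what was learned in one session is there in the next.
memory = {"patterns": {"greeting": 3}}
path = os.path.join(tempfile.gettempdir(), "mia_memory_demo.json")
save_memory(memory, path)
restored = load_memory(path)
```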
On each cycle, the engines execute in sequence; the loop runs continuously, even at rest.
The project is based on an artificial psychic system with 5 instances. Until now the concept had remained purely theoretical, never implemented or tested. Mia is the first known implementation, with 109 functional agents and a real-time loop.
Instance 1
Theory: low-level processing, primary drives, raw affect
Mia: affect, security and drive agents — curiosity, sociality, refocusing, expansion, withdrawal, protection, questioning

Instance 2
Theory: intermediate processing, filtering, shaping
Mia: morphological engine + multiple regulators (cooldown, refocusing, norms, security...)

Instance 3
Theory: integration, decision, intentionality
Mia: arbitration engine + action planner — multi-factor decision with an element of randomness

Instance 4
Theory: emotions as modulators of global processing
Mia: affective tonality system + emotional contour — emotions continuously modify the morphological field

Instance 5
Theory: global feedback, regulation of the entire system
Mia: feedback and self-regulation mechanisms
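The arbitration engine's "multi-factor decision with an element of randomness" can be sketched as scored candidates perturbed by small noise, so the best-scored option usually, but not always, wins. The function, weights, and temperature value are illustrative assumptions.

```python
import random

def decide(candidates, weights, temperature=0.2, rng=random.Random(0)):
    """Add a small random perturbation to each multi-factor score,
    then pick the highest. temperature=0 makes it purely greedy."""
    noisy = [w + rng.uniform(0, temperature) for w in weights]
    return candidates[noisy.index(max(noisy))]

# With noise, a close second can occasionally win.
choice = decide(["look", "greet", "wait"], [0.50, 0.45, 0.40])
```

This keeps behavior mostly predictable while avoiding the rigidity of always picking the top score.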
Cognitive agents organized in sequential engines, real-time loop. Living memory, self-regulation — everything is implemented and functional.
CAD is being finalized, electronic wiring is upcoming, and the motor interface will drive all facial axes.
The robot learns its own morphology through random movements + camera feedback — expressions are discovered, not programmed.
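"Random movements + camera feedback" is classic motor babbling. A sketch under that reading: try random poses, score each one with visual feedback, keep the best. The `score` callback stands in for the real camera-based self-observation; everything else is illustrative.

```python
import random

def babble(score, n_axes=3, trials=50, rng=random.Random(1)):
    """Random motor babbling: sample random poses (one angle per facial
    axis), rate each via camera feedback, and keep the best-rated pose."""
    best_pose, best_score = None, float("-inf")
    for _ in range(trials):
        pose = [rng.uniform(0, 180) for _ in range(n_axes)]
        s = score(pose)
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose

# Toy feedback: pretend the camera rates poses near mid-range as a
# "neutral face". In reality the score comes from image analysis.
target = [90, 90, 90]
best = babble(lambda p: -sum((a - t) ** 2 for a, t in zip(p, target)))
```

The expression is discovered by search, not hand-coded, which is exactly the distinction the text draws.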
An LLM as an async layer: natural language interpretation, speech generation, semantic memory enrichment. Mia keeps her real-time cognition — the LLM responds when ready. Architecture combining embodied cognition + LLM.
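The key property described here is that the real-time loop never blocks on the LLM. A minimal sketch of that decoupling, using a worker thread and queues; the simulated latency and message format are placeholders for a real LLM call.

```python
import queue
import threading
import time

requests, replies = queue.Queue(), queue.Queue()

def llm_worker():
    """Stand-in for a slow LLM call; it answers whenever it is ready."""
    while True:
        prompt = requests.get()
        if prompt is None:          # shutdown signal
            break
        time.sleep(0.05)            # simulated LLM latency
        replies.put(f"reply to: {prompt}")

threading.Thread(target=llm_worker, daemon=True).start()

requests.put("hello")
cycles, answer = 0, None
while answer is None:
    cycles += 1                     # the cognitive loop keeps running...
    time.sleep(0.01)
    try:
        answer = replies.get_nowait()  # ...and picks up the reply when ready
    except queue.Empty:
        pass
requests.put(None)                  # stop the worker
```

Real-time cognition continues at its own pace while language arrives asynchronously, which is the combination the architecture claims.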
No existing cognitive architecture combines all these elements — not SOAR, ACT-R, LIDA, or AKOrN.