Done — The brain
The software that makes Mia think is operational. 109 agents work together to perceive, feel, decide and act. The loop runs every 350 milliseconds. Emotions, memory, dreaming — everything works.
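To make the timing concrete, here is a minimal sketch of how a fixed-period perceive/feel/decide/act loop can be structured in Python. The `make_engine` and `cognitive_loop` names, the shared-dict state, and the engine composition are illustrative assumptions, not Mia's actual code.

```python
import time
from typing import Callable

TICK = 0.350  # seconds: the 350 ms cognitive cycle

# An "agent" is modeled here as a function that reads and mutates shared state.
Agent = Callable[[dict], None]

def make_engine(agents: list[Agent]) -> Agent:
    """An engine runs its agents in sequence over the shared state."""
    def run(state: dict) -> None:
        for agent in agents:
            agent(state)
    return run

def cognitive_loop(engines: list[Agent]) -> None:
    """Run perceive -> feel -> decide -> act on a fixed period."""
    state: dict = {"tick": 0}
    while True:
        start = time.monotonic()
        for engine in engines:
            engine(state)  # sequential engines share one state
        state["tick"] += 1
        # Sleep out the remainder so the cycle holds 350 ms even
        # when the engines finish early.
        time.sleep(max(0.0, TICK - (time.monotonic() - start)))
```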
From a head to a full replicant — what's done, what's in progress, and what comes next.
Mia's head is being finalized. The design calls for 27 motors to animate her expressions. 3D modeling is almost complete; electronic wiring will follow. Soon, Mia will be able to smile, look around, and express what she feels.
A recent technique allows a robot to learn to control its own face without programming each expression. Through random movements and camera feedback, an approach known as motor babbling, Mia will discover for herself how to smile or frown.
One day, Mia will speak. A language model (like Claude or ChatGPT) will be integrated to understand words and respond. But Mia won't become a chatbot: she'll keep her own brain, emotions, and personality. Language will be an additional tool, not a replacement.
For now, Mia is a head. The long-term goal is a full 1.5 m android with arms, a torso, and the ability to stand. Each new body part will enrich her interaction capabilities.
The ultimate goal isn't a robot that performs tasks. It's a replicant: an artificial being that exudes presence, an impression of life. As in Blade Runner, the question isn't whether she's "real", but whether it matters that she isn't.
Cognitive agents organized into sequential engines, driven by a real-time loop. Living memory and self-regulation are implemented and functional.
Dedicated service — real-time face detection and recognition, distance estimation, integration with the scene engine.
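As a rough illustration of the detection and distance-estimation half of such a service, here is a sketch using OpenCV's stock Haar cascade and a pinhole-camera distance estimate. The focal-length and face-width constants are uncalibrated assumptions, recognition and scene-engine integration are omitted, and the real service may use entirely different models.

```python
import cv2

# Assumed constants for the pinhole distance estimate; both would need
# calibration against the real camera.
FOCAL_LENGTH_PX = 600.0   # camera focal length in pixels (assumption)
FACE_WIDTH_M = 0.16       # typical human face width in metres (assumption)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return (x, y, w, h, distance_m) for each face found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Pinhole model: apparent width shrinks linearly with distance.
        distance = FOCAL_LENGTH_PX * FACE_WIDTH_M / w
        results.append((x, y, w, h, distance))
    return results
```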
CAD nearing completion (~80%), with electronic wiring up next and a motor interface to drive all facial axes. The latex skin is under continuous improvement.
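As an illustration of what a motor interface for named facial axes might look like, here is a sketch that maps clamped axis angles onto standard 1000–2000 µs servo pulses. The axis names, channel numbers, and ranges are hypothetical, and the code stops short of any real servo driver.

```python
from dataclasses import dataclass

@dataclass
class FacialAxis:
    """One controllable axis of the face, mapped to a servo channel.
    Names, channels, and ranges below are illustrative, not Mia's."""
    name: str
    channel: int
    min_deg: float
    max_deg: float

# A hypothetical subset of the 27 planned axes.
AXES = [
    FacialAxis("brow_left", 0, -20.0, 20.0),
    FacialAxis("jaw", 1, 0.0, 30.0),
    FacialAxis("eye_pan", 2, -35.0, 35.0),
]

def to_pulse_us(axis: FacialAxis, angle_deg: float) -> int:
    """Clamp an angle to the axis range and map it onto a standard
    1000-2000 microsecond servo pulse width."""
    angle = min(max(angle_deg, axis.min_deg), axis.max_deg)
    fraction = (angle - axis.min_deg) / (axis.max_deg - axis.min_deg)
    return int(1000 + fraction * 1000)
```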
The robot learns its own morphology through random movements and camera feedback: expressions are discovered, not programmed.
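A minimal sketch of the idea, reduced to best-of-N motor babbling: try random poses, score each one through the camera, keep the best. The `set_axes` and `observe` callables are assumptions standing in for the real actuation and vision feedback, and an actual implementation would learn a model of the face rather than keep a single pose.

```python
import random

def babble(set_axes, observe, n_trials=1000):
    """Motor babbling: try random poses, keep what the camera scores well.

    set_axes(pose) -> applies a dict of normalized axis angles to the face.
    observe()      -> returns a score from camera feedback, e.g. similarity
                      of the resulting image to a target expression.
    Both callables are assumptions for this sketch.
    """
    best_pose, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Hypothetical axis names; a random pose in normalized units.
        pose = {axis: random.uniform(-1.0, 1.0)
                for axis in ("brow", "jaw", "lip")}
        set_axes(pose)
        score = observe()
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose
```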
LLM as an asynchronous layer: natural-language interpretation, speech generation, semantic memory enrichment. Mia keeps her real-time 350 ms cognition; the LLM responds when ready. A unique architecture combining embodied cognition with an LLM.
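Here is a minimal sketch of one way such an asynchronous layer can sit beside a fixed-period loop, using a background thread and two queues. The names (`llm_worker`, `cognitive_tick`, `generate`) and the queue-based hand-off are assumptions for illustration, not Mia's actual integration.

```python
import queue
import threading

prompts: "queue.Queue[str]" = queue.Queue()
replies: "queue.Queue[str]" = queue.Queue()

def llm_worker(generate) -> None:
    """Background thread: feeds prompts to the (slow) language model.
    `generate` stands in for any LLM call and is an assumption here."""
    while True:
        prompt = prompts.get()
        replies.put(generate(prompt))

def cognitive_tick(state: dict) -> None:
    """Called from inside the 350 ms loop: hands speech off to the LLM
    and picks up finished replies, but never blocks waiting for one."""
    if "heard_speech" in state:
        prompts.put(state.pop("heard_speech"))
    try:
        state["llm_reply"] = replies.get_nowait()  # only if already done
    except queue.Empty:
        pass  # the 350 ms cognition continues regardless
```

The worker would be started once, e.g. `threading.Thread(target=llm_worker, args=(some_model_call,), daemon=True).start()`, and the real-time loop then calls `cognitive_tick` every cycle, so language arrives whenever it is ready without ever stalling the cognition.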
Extending the mechanical structure beyond the head: torso, arms, hands. Each segment adds degrees of freedom and new interaction modalities. Same cognitive architecture, more sensors and actuators.
No existing cognitive architecture combines all these elements.