Mia's Future

From a head to a full replicant — what's done, what's in progress, and what comes next.

Mia will learn to control her own face, then speak, then have a full body.

Mia's progress

✓ Software brain: 90%
✓ Vision: 100%
⚙ Head hardware: ~80%
→ Body awareness: upcoming
◇ Language (LLM): future

Done — The brain

The software that makes Mia think is operational. 109 agents work together to perceive, feel, decide and act. The loop runs every 350 milliseconds. Emotions, memory, dreaming — everything works.
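As an illustration only, here is a minimal sketch of that kind of fixed-rate loop: agents run in a fixed order each tick, and the tick is padded out to 350 ms. The agent names and the toy state they pass around are hypothetical, not Mia's actual architecture.

```python
import time

TICK_SECONDS = 0.350  # one cognitive tick, as in the 350 ms loop

# Placeholder agent stages (hypothetical): each reads and updates shared state.
def perceive(state):
    state["percepts"] = state.get("inputs", [])
    return state

def feel(state):
    state["emotion"] = "curious" if state["percepts"] else "calm"
    return state

def decide(state):
    state["action"] = "look" if state["emotion"] == "curious" else "idle"
    return state

def act(state):
    state["history"] = state.get("history", []) + [state["action"]]
    return state

PIPELINE = [perceive, feel, decide, act]

def run_loop(state, ticks):
    for _ in range(ticks):
        start = time.monotonic()
        for agent in PIPELINE:
            state = agent(state)
        # Sleep out the remainder of the tick so the loop stays at ~350 ms.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, TICK_SECONDS - elapsed))
    return state
```

The real system sequences 109 agents rather than four, but the principle is the same: every tick runs the full perceive-feel-decide-act chain, and timing is enforced at the loop level rather than inside each agent.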

In progress — The face

Mia's head is being finalized. 27 motors are planned to animate her expressions. 3D modeling is almost complete, electronic wiring will follow. Soon, Mia will be able to smile, look around and express what she feels.


Next — Learning her own face

A recent technique allows a robot to learn to control its own face without programming each expression. Through random movements and camera feedback, Mia will discover for herself how to smile or frown.
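A toy sketch of that idea, often called motor babbling: issue random motor commands, observe the result through the camera, and keep whatever command best matches a target expression. Everything here is hypothetical (the camera is simulated, and the error metric is a simple mean squared error); it only illustrates the discover-rather-than-program loop.

```python
import random

def camera_feedback(motor_cmd):
    # Stand-in for the real camera: here the "observed expression" is just
    # the command itself, since there is no physical face to photograph.
    return motor_cmd

def expression_error(observed, target):
    # Mean squared error between observed and target expressions.
    return sum((o - t) ** 2 for o, t in zip(observed, target)) / len(target)

def babble(target, n_motors=27, trials=500, seed=42):
    """Try random motor commands and keep the one closest to the target."""
    rng = random.Random(seed)
    best_cmd, best_err = None, float("inf")
    for _ in range(trials):
        cmd = [rng.uniform(0.0, 1.0) for _ in range(n_motors)]
        err = expression_error(camera_feedback(cmd), target)
        if err < best_err:
            best_cmd, best_err = cmd, err
    return best_cmd, best_err

smile = [0.8] * 27  # hypothetical target "smile" posture across 27 motors
cmd, err = babble(smile)
```

In practice the feedback would come from comparing camera frames (or facial landmarks) to a reference, and the search would be smarter than uniform random sampling, but the structure is the same: movement, observation, correction.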


Future — Mia will speak

One day, Mia will speak. A language model (like Claude or ChatGPT) will be integrated to understand words and respond. But Mia won't become a chatbot — she'll keep her own brain, emotions and personality. Language will be an additional tool, not a replacement.


Future — A full body

For now, Mia is a head. The long-term goal is a full 1.5m android — with arms, a torso, the ability to stand. Each new body part will enrich her interaction capabilities.


The vision — A replicant

The ultimate goal isn't a robot that performs tasks. It's a replicant — an artificial being that exudes presence, an impression of life. Like in Blade Runner, the question isn't whether she's "real", but whether it matters that she isn't.

Technical roadmap

✓ Done
Complete cognitive architecture

Cognitive agents organized in sequential engines, real-time loop. Living memory, self-regulation — everything is implemented and functional.

✓ Done
Vision pipeline

Dedicated service — real-time face detection and recognition, distance estimation, integration with the scene engine.
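For the distance-estimation part, a common approach is the pinhole camera model: distance scales inversely with the apparent size of a face in the image. A minimal sketch, where the focal length and average face width are assumed calibration values, not Mia's actual parameters:

```python
FOCAL_LENGTH_PX = 600.0  # camera focal length in pixels (assumed calibration)
FACE_WIDTH_M = 0.16      # average adult face width in metres (assumed)

def estimate_distance_m(face_width_px):
    """Estimate distance to a detected face from its bounding-box width.

    Pinhole model: distance = focal_length_px * real_width / pixel_width.
    """
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / face_width_px
```

A face 96 px wide would be estimated at 1.0 m with these constants; a larger bounding box means the person is closer.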

⚙ In progress
Head hardware — 27 servos

CAD in finalization (~80%), electronic wiring upcoming, motor interface to drive all facial axes. Latex skin in continuous improvement.
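One small piece of such a motor interface is mapping a facial-axis angle to a servo PWM pulse width. The 500-2500 µs range over 0-180° below is a common hobby-servo convention, used here only as an assumption; the real limits depend on the servos chosen.

```python
PULSE_MIN_US = 500     # pulse width at 0 degrees (assumed servo convention)
PULSE_MAX_US = 2500    # pulse width at 180 degrees (assumed)
ANGLE_MAX_DEG = 180.0

def angle_to_pulse_us(angle_deg):
    """Convert a servo angle in degrees to a PWM pulse width in microseconds."""
    # Clamp to the mechanical range so a bad command can't over-drive an axis.
    angle = max(0.0, min(ANGLE_MAX_DEG, angle_deg))
    span = PULSE_MAX_US - PULSE_MIN_US
    return PULSE_MIN_US + span * angle / ANGLE_MAX_DEG
```

The clamp matters more than it looks: on a face, each of the 27 axes has hard mechanical limits, and the interface is the last place to enforce them.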

→ Next
Body awareness

The robot learns its own morphology through random movements + camera feedback — expressions are discovered, not programmed.

◇ Future
LLM integration — language and culture

LLM as an async layer: natural language interpretation, speech generation, semantic memory enrichment. Mia keeps her real-time 350ms cognition — the LLM responds when ready. Unique architecture combining embodied cognition + LLM.
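The key property of that design is that the real-time loop never blocks on language. A minimal sketch of the pattern (queues and a worker thread stand in for the actual LLM service; the 1 s latency is simulated):

```python
import queue
import threading
import time

requests, replies = queue.Queue(), queue.Queue()

def llm_worker():
    # Stand-in for a slow language-model call, far slower than one 350 ms tick.
    while True:
        prompt = requests.get()
        if prompt is None:
            break
        time.sleep(1.0)  # simulated LLM latency
        replies.put(f"reply to: {prompt}")

threading.Thread(target=llm_worker, daemon=True).start()

requests.put("hello")
ticks_waited, reply = 0, None
while reply is None:
    # One non-blocking poll per 350 ms tick; cognition continues meanwhile.
    try:
        reply = replies.get_nowait()
    except queue.Empty:
        ticks_waited += 1
        time.sleep(0.350)
requests.put(None)  # shut the worker down cleanly
```

Each tick does one non-blocking poll of the reply queue, so the cognitive loop runs at full rate for the several ticks the LLM needs, and the answer is integrated on whatever tick it arrives.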

◇ Future
Full body — 1.5m android

Extending the mechanical structure beyond the head: torso, arms, hands. Each segment adds degrees of freedom and new interaction modalities. Same cognitive architecture, more sensors and actuators.

Why Mia is unique

No existing cognitive architecture combines all these elements.

Artificial psychic system implemented and operational
Embodied emotions (tonality + affective contour)
Real physical embodiment (servos, camera, ESP32)
Persistent learning across sessions
Morphology as cognitive control principle
Living memory and self-regulation
Future: embodied cognition + LLM (novel hybrid architecture)