Real buildings do not behave like labs or warehouses.
Most robots perform well in controlled environments. Buildings where people live are different: narrow hallways, elevators, residents who pause and turn unpredictably, building-specific workflows, and social norms that actually matter for task completion.
The failure mode is rarely dramatic. It is supervision: the moment a human has to watch closely, step in, or work around the robot, autonomy is already breaking down. At scale, that friction becomes the product.
The Behavioral AI platform for robots in buildings where people live.
Every robot company deploying into lived spaces will need the same thing — a behavioral intelligence layer trained on how robots actually perform useful work around people. That data does not exist at meaningful scale. We are generating it through live deployments, building by building, task by task.
The system runs across three layers:

- See: spatial search that identifies what matters across floors, rooms, and dynamic indoor spaces.
- Think: spatial reasoning that infers, adapts, and responds.
- Act: spatial execution that turns perception into real-world action: navigation, handoffs, retries, and human fallback when needed.
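As a rough sketch of how these three layers might compose at runtime, here is a minimal see-think-act loop with bounded retries and human fallback. Every name, type, and probability below is an illustrative assumption, not the deployed system:

```python
import random
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    DONE = auto()
    RETRY = auto()
    HUMAN_FALLBACK = auto()

@dataclass
class Observation:
    # See layer output: what matters in the scene right now (assumed fields).
    path_blocked: bool
    person_in_path: bool

@dataclass
class Plan:
    # Think layer output: a route adapted to the current observation.
    route: list[str]
    yield_to_person: bool

def see() -> Observation:
    # Stub spatial search; a real system would fuse sensor data here.
    return Observation(path_blocked=random.random() < 0.2,
                       person_in_path=random.random() < 0.3)

def think(obs: Observation) -> Plan:
    # Reroute around blockages; yield rather than push past residents.
    route = ["corridor_B", "elevator_2"] if obs.path_blocked else ["corridor_A"]
    return Plan(route=route, yield_to_person=obs.person_in_path)

def act(plan: Plan) -> Outcome:
    # Stub execution: yielding triggers a retry; otherwise mostly succeed.
    if plan.yield_to_person:
        return Outcome.RETRY  # wait, then re-perceive and re-plan
    return Outcome.DONE if random.random() < 0.9 else Outcome.RETRY

def run_task(max_retries: int = 3) -> Outcome:
    # See -> Think -> Act, re-perceiving on every attempt.
    for _ in range(max_retries + 1):
        outcome = act(think(see()))
        if outcome is Outcome.DONE:
            return outcome
    return Outcome.HUMAN_FALLBACK  # escalate instead of looping forever

if __name__ == "__main__":
    print(run_task())
```

The design point the sketch encodes is that human fallback is a first-class outcome, not an error state: the loop escalates after a bounded number of retries rather than retrying indefinitely.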
Each deployment feeds all three layers. Every completed task strengthens the models. The system becomes more reliable with every building it operates in.
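One way that per-task feedback could be structured, assuming a simple log-and-route pipeline (the schema fields and the three dataset buckets are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    # Hypothetical record for one task; field names are illustrative.
    building_id: str
    task_type: str
    observations: list[dict]  # what See surfaced
    decisions: list[dict]     # what Think chose, including adaptations
    actions: list[dict]       # what Act executed: steps, retries, fallbacks
    completed: bool

def route_to_training(rec: TaskRecord, datasets: dict[str, list]) -> None:
    # One completed task becomes supervision for all three layers at once.
    datasets["see"].append({"building": rec.building_id,
                            "observations": rec.observations})
    datasets["think"].append({"observations": rec.observations,
                              "decisions": rec.decisions})
    datasets["act"].append({"decisions": rec.decisions,
                            "actions": rec.actions,
                            "succeeded": rec.completed})

# Usage: accumulate records per building, then retrain per layer.
datasets = {"see": [], "think": [], "act": []}
route_to_training(TaskRecord("bldg_17", "delivery", [], [], [], True), datasets)
```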
Lily is a reference design, not the business. The hardware specification will be made freely available. The behavioral AI system running on it is the product.