When We Played With NeRFs
At AugustMille, our journey to build the Spatial Graph, the infrastructure layer that lets robots see, think, and act in real buildings, has taken us down some fascinating experimental paths. One of those was Neural Radiance Fields (NeRFs).
For context: AugustMille’s Spatial Search is the “see” layer of our platform. It gives robots a searchable, dynamic understanding of indoor environments that span rooms, floors, and constantly changing layouts. To get there, we tested many different spatial representations, from traditional SLAM maps to 3D voxel grids. But we were also curious about NeRFs.
NeRFs promise something compelling: the ability to synthesize novel, unseen views of a space from a relatively small set of images. In theory, a robot could recreate an entire hallway from a few snapshots and interpret it visually, capturing lighting, textures, and subtle context cues. That level of detail could make spatial search far more intuitive: “What’s the coordinate of the red chair in the lobby so that the robot can <accomplish chair-related goals>?” or “Which door has unit #2305 on it?”
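To make that concrete, here is a minimal sketch of the core NeRF idea, with the caveat that this is illustrative PyTorch under simplifying assumptions, not our production code (the `TinyNeRF` name and its parameters are ours for this post): an MLP maps a 3D point and a viewing direction to a color and a volume density, and a renderer then composites those values along camera rays to produce new views.

```python
# Minimal, illustrative sketch of a NeRF-style network (not production code).
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """Map each coordinate to [sin(2^k x), cos(2^k x)] features so the MLP
    can represent high-frequency detail like textures and edges."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                    # (..., dims, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                 # (..., dims * 2 * num_freqs)

class TinyNeRF(nn.Module):
    """MLP mapping a 3D point plus a viewing direction to (RGB, density)."""
    def __init__(self, pos_freqs: int = 10, dir_freqs: int = 4, width: int = 256):
        super().__init__()
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        pos_dim = 3 * 2 * pos_freqs
        dir_dim = 3 * 2 * dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.density_head = nn.Linear(width, 1)       # volume density (sigma)
        self.color_head = nn.Sequential(              # view-dependent color
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.density_head(h))      # density must be >= 0
        d = positional_encoding(view_dir, self.dir_freqs)
        rgb = self.color_head(torch.cat([h, d], dim=-1))
        return rgb, sigma
```

Rendering a single pixel means querying this network at many sample points along a camera ray and alpha-compositing the results; multiply that by every pixel in every view and the compute bill below starts to make sense.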
Our experiments were promising, but they also revealed limitations. NeRFs are beautiful, but they are heavy: slow to train, compute-intensive to render, and brittle in dynamic spaces. None of those traits fit robotics, which demands real-time, resilient, adaptable behavior. This is why we evolved toward hybrid approaches that blend NeRF-like embeddings with lightweight, queryable graph structures.
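As a rough illustration of that hybrid direction (hypothetical names and a deliberately simplified design, not our actual Spatial Graph API), the key property is that the expensive visual learning happens offline, while the robot-facing query is a cheap embedding lookup over a graph of places:

```python
# Simplified sketch of the hybrid idea: a lightweight graph whose nodes carry
# learned embeddings. SpatialNode/SpatialGraph are illustrative names only.
from dataclasses import dataclass, field

import numpy as np

@dataclass
class SpatialNode:
    node_id: str
    position: np.ndarray                  # (x, y, z) in the building frame
    embedding: np.ndarray                 # NeRF-like visual/semantic feature
    neighbors: list[str] = field(default_factory=list)  # traversable edges

class SpatialGraph:
    def __init__(self) -> None:
        self.nodes: dict[str, SpatialNode] = {}

    def add(self, node: SpatialNode) -> None:
        self.nodes[node.node_id] = node

    def query(self, query_embedding: np.ndarray, k: int = 3) -> list[SpatialNode]:
        """Return the k nodes whose embeddings best match a query embedding
        (e.g. from a text encoder, for 'red chair in the lobby')."""
        def score(n: SpatialNode) -> float:
            return float(
                n.embedding @ query_embedding
                / (np.linalg.norm(n.embedding) * np.linalg.norm(query_embedding))
            )
        return sorted(self.nodes.values(), key=score, reverse=True)[:k]
```

In a setup like this, “red chair in the lobby” becomes an embedding, the graph returns candidate nodes in milliseconds, and the robot plans a path over the `neighbors` edges, with no neural rendering in the loop.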
Still, our NeRF explorations shaped how we think about the searchability of 3D spaces. They nudged us toward the idea that what matters most is not just visual fidelity but semantic utility. Robots do not need to admire the marble flooring; they need to know how to get past the concierge desk, into the elevator, over to the red chair in the lobby, and up to the apartment door.
In short:
NeRFs were a fun, important chapter in building Spatial Search. They showed us what is possible and clarified what is practical. And like most experiments, they left us with a deeper conviction: the future of indoor autonomy will not be solved by any one model, but by orchestration across them.