The real-world model for physical AI

Robots, agents, and autonomous systems need world models grounded in physics and geometry, not imagination

Establishing ground truth for physical AI

We're building for the 80% of economic activity that happens beyond screens — so that delivery robots can navigate city streets safely, agents can inspect critical infrastructure, and field teams can operate where GPS is interrupted. Trained on billions of images along with terrain, drone, and satellite data, our models power three core capabilities: reconstruct, localize, understand.

Reconstruct

We build geometrically accurate digital twins from smartphones, 360° cameras, drones, satellites, and LiDAR — output as meshes and Gaussian splats. We created SPZ, the open-source file format for Gaussian splats now used across the industry.
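To make the splat output concrete, the sketch below shows the per-splat attributes a 3D Gaussian splatting scene typically stores: position, rotation, scale, opacity, and spherical-harmonic color. The field names and layout are illustrative only; they are not the SPZ specification.

    # Illustrative sketch only: these are the standard per-splat fields used by
    # 3D Gaussian splatting renderers. Names and layout are hypothetical and do
    # not describe the actual SPZ file format.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class GaussianSplat:
        position: Tuple[float, float, float]  # (x, y, z) center in scene coordinates
        rotation: Tuple[float, float, float, float]  # unit quaternion (w, x, y, z)
        scale: Tuple[float, float, float]     # per-axis extent (often stored as log-scale)
        opacity: float                        # alpha used when compositing splats
        sh_coeffs: List[float]                # spherical-harmonic color coefficients

    @dataclass
    class SplatScene:
        splats: List[GaussianSplat] = field(default_factory=list)

    # Usage: a one-splat "scene" to show the shape of the data.
    scene = SplatScene([GaussianSplat(
        position=(0.0, 1.2, -3.5),
        rotation=(1.0, 0.0, 0.0, 0.0),
        scale=(0.05, 0.05, 0.02),
        opacity=0.9,
        sh_coeffs=[0.8, 0.2, 0.1],  # DC-only color for brevity
    )])
    print(len(scene.splats), "splat(s)")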

Localize

Our Visual Positioning System turns aerial and ground capture devices into precise positioning sensors, delivering accurate localization almost anywhere, including where GPS fails — powered by 50 million neural networks and 150 trillion parameters.
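As a rough illustration, a visual positioning query takes a camera image and returns a 6-DoF pose — rotation and translation in a geo-referenced map frame — plus a confidence score. The function and field names below are hypothetical and do not reflect the actual VPS API.

    # Hypothetical sketch of a visual-positioning query and its result.
    # Names are illustrative, not a real API.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PoseEstimate:
        rotation: Tuple[float, float, float, float]  # camera orientation as a unit quaternion (w, x, y, z)
        translation: Tuple[float, float, float]      # position in meters in the map frame
        confidence: float                            # 0..1, strength of the match against the map

    def localize(image_bytes: bytes) -> PoseEstimate:
        # Placeholder: a real system would match image features against a
        # pre-built 3D map and solve for camera pose (e.g., PnP + RANSAC).
        return PoseEstimate(rotation=(1.0, 0.0, 0.0, 0.0),
                            translation=(0.0, 0.0, 0.0),
                            confidence=0.0)

    pose = localize(b"")  # an application would pass a camera frame here
    print(pose.confidence)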

Understand

We are building spatial intelligence: an AI interface to the physical world that you can talk to — one that can understand, measure, and reason about every point in 3D. This marks the shift from physical AI that navigates the world to physical AI that interprets it: understanding what it sees, why it matters, and what to do next.

Our research is at the frontier of physical AI

We’re building the foundational models that enable AI to operate in the real world, powering precision, automation, and real-time understanding across physical operations.

DoubleTake: Geometry Guided Depth Estimation

ECCV 2024

Completes full offline scene reconstruction in 13.8 seconds on edge hardware, with no cloud dependency, and supports persistent scene models that update as ground truth changes — directly relevant to ISR workflows in dynamic or denied environments.
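As a conceptual illustration of geometry-guided depth, the sketch below blends a per-frame depth prediction with depth rendered from an existing scene model, which is one way a persistent reconstruction can be refined as new observations arrive. The function and weighting scheme are assumptions made for illustration, not the DoubleTake architecture.

    # Conceptual sketch: fuse a network-predicted depth map with depth rendered
    # from prior scene geometry. Purely illustrative of the general idea.
    import numpy as np

    def fuse_depth(predicted: np.ndarray, prior: np.ndarray,
                   prior_valid: np.ndarray, prior_weight: float = 0.5) -> np.ndarray:
        """Blend predicted depth with depth rendered from the existing scene model.

        predicted    HxW depth from the current frame's estimator
        prior        HxW depth rendered from prior geometry
        prior_valid  HxW boolean mask of pixels where prior geometry exists
        """
        fused = predicted.copy()
        mask = prior_valid
        fused[mask] = prior_weight * prior[mask] + (1.0 - prior_weight) * predicted[mask]
        return fused

    # Toy usage on a 2x2 "image".
    pred = np.array([[2.0, 2.1], [1.9, 2.0]])
    prior = np.array([[2.2, 0.0], [1.8, 0.0]])
    valid = np.array([[True, False], [True, False]])
    print(fuse_depth(pred, prior, valid))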

Trusted by industries that demand real-world intelligence