Object Goal Navigation, which requires an agent to locate a specific object in an unseen environment, remains a core challenge in embodied AI. Although recent Vision-Language Model (VLM)-based agents have demonstrated promising perception and decision-making abilities through prompting, none has yet established a fully modular world-model design that reduces risky and costly interactions with the environment by predicting the future state of the world. We introduce WMNav, a novel World Model-based Navigation framework powered by VLMs. It predicts possible outcomes of decisions and builds memories that provide feedback to the policy module. To retain the predicted state of the environment, WMNav proposes an online-maintained Curiosity Value Map as part of the world-model memory, providing dynamic configuration for the navigation policy. By decomposing the task according to a human-like thinking process, WMNav effectively mitigates model hallucination by making decisions based on the discrepancy between the world model's plan and the actual observation. To further boost efficiency, we implement a two-stage action proposer strategy: broad exploration followed by precise localization. Extensive evaluation on HM3D and MP3D validates that WMNav surpasses existing zero-shot baselines in both success rate and exploration efficiency (absolute improvements: +3.2% SR and +3.2% SPL on HM3D, +13.5% SR and +1.1% SPL on MP3D).
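As a rough illustration of the Curiosity Value Map mentioned above, the sketch below keeps a per-direction curiosity score and fuses newly predicted scores with the previous map. The class and method names (`CuriosityValueMap`, `merge`, `best_direction`) and the exponential-moving-average fusion rule are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np


class CuriosityValueMap:
    """Hypothetical online-maintained map of per-direction curiosity scores.

    Higher scores mean the world model expects a direction is more likely
    to lead toward the goal object.
    """

    def __init__(self, num_directions: int = 8, decay: float = 0.9):
        # s_{t-1}: one curiosity score per panoramic direction
        self.scores = np.zeros(num_directions)
        self.decay = decay

    def merge(self, predicted_scores: np.ndarray) -> np.ndarray:
        """Fuse the world model's predicted scores into the current map s_t."""
        # Assumed fusion rule (exponential moving average); the paper's
        # actual merge operator may differ.
        self.scores = self.decay * self.scores + (1.0 - self.decay) * predicted_scores
        return self.scores

    def best_direction(self) -> int:
        """Index of the direction with the highest curiosity value."""
        return int(np.argmax(self.scores))
```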
The WMNav framework. After acquiring the RGB-D panoramic image and pose at step \(t\), PredictVLM first predicts the state of the world, which is merged with the curiosity value map \(s_{t-1}\) from the previous step to obtain the current curiosity value map \(s_t\). The updated map then projects the score of each direction back onto the panoramic image, and the direction with the highest score is selected. Next, given the image of the selected direction, PlanVLM determines the new subtask and the goal flag, which are stored in memory as the cost \(c_t\); the memory \(h_t\) is composed of \(s_t\) and \(c_t\). Finally, the two-stage action proposer annotates the candidate action sequence on the selected image and sends it to ReasonVLM to obtain the final polar-coordinate action \(a_t\) for execution. Note that PlanVLM and ReasonVLM are configured by the cost \(c_{t-1}\) from the previous step.
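The following sketch mirrors the per-step flow described in the caption, reusing the `CuriosityValueMap` sketch above. `predict_vlm`, `plan_vlm`, `reason_vlm`, and `propose_actions` are placeholder callables standing in for the prompted VLM modules and the two-stage action proposer; their signatures and data layouts are assumptions, not the paper's API.

```python
def wmnav_step(panorama, pose, c_prev, cvmap,
               predict_vlm, plan_vlm, reason_vlm, propose_actions):
    """One hypothetical WMNav step at time t (names and signatures assumed).

    panorama: list of per-direction RGB-D views; pose: agent pose;
    c_prev: cost c_{t-1} from the previous step; cvmap: CuriosityValueMap.
    """
    # 1. World model predicts per-direction scores and merges them into s_t.
    predicted_scores = predict_vlm(panorama, pose, c_prev)
    s_t = cvmap.merge(predicted_scores)

    # 2. Project the scores back onto the panorama and pick the best direction.
    direction = cvmap.best_direction()
    view = panorama[direction]

    # 3. PlanVLM (configured by c_{t-1}) yields the new subtask and goal flag,
    #    stored as cost c_t; memory h_t combines s_t and c_t.
    subtask, goal_flag = plan_vlm(view, c_prev)
    c_t = {"subtask": subtask, "goal_flag": goal_flag}
    h_t = {"curiosity_map": s_t, "cost": c_t}

    # 4. Two-stage proposer annotates candidate actions on the selected view;
    #    ReasonVLM (also configured by c_{t-1}) returns the polar-coordinate
    #    action a_t to execute.
    stage = "localize" if goal_flag else "explore"
    annotated_view, candidates = propose_actions(view, stage=stage)
    a_t = reason_vlm(annotated_view, candidates, c_prev)

    return a_t, h_t
```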