InternVLA-N1 is a state-of-the-art navigation foundation model built on a multi-system design. Within this framework, it introduces a **dual-system** approach that jointly trains **System 2** for high-level reasoning and **System 1** for low-level action and control. This asynchronous architecture enables smooth, efficient, and robust instruction-following navigation in both simulated and real-world environments.
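To make the dual-system idea concrete, here is a minimal sketch of an asynchronous planner/controller loop. All names, rates, and the string stand-in for a latent plan are illustrative assumptions, not the released API: System 2 emits plans at a slow cadence while System 1 consumes the freshest plan at a faster control rate.

```python
import threading
import queue
import time

class DualSystemNavigator:
    """Toy asynchronous dual-system loop (illustrative, not the real model)."""

    def __init__(self):
        self.plan_queue = queue.Queue(maxsize=1)  # holds only the latest plan
        self.actions = []
        self.stop = threading.Event()

    def system2_planner(self):
        """Low-frequency reasoning: periodically emit a new latent plan."""
        step = 0
        while not self.stop.is_set():
            latent_plan = f"plan-{step}"          # stand-in for a latent plan
            if self.plan_queue.full():            # drop stale plan, keep freshest
                try:
                    self.plan_queue.get_nowait()
                except queue.Empty:
                    pass
            self.plan_queue.put(latent_plan)
            step += 1
            time.sleep(0.05)                      # slow "reasoning" cycle

    def system1_controller(self, n_steps=20):
        """High-frequency control: act on the most recent plan available."""
        current_plan = self.plan_queue.get(timeout=2.0)  # wait for first plan
        for _ in range(n_steps):
            try:
                current_plan = self.plan_queue.get_nowait()  # refresh if newer
            except queue.Empty:
                pass                              # reuse the last plan
            self.actions.append(current_plan)
            time.sleep(0.01)                      # fast control cycle

    def run(self):
        planner = threading.Thread(target=self.system2_planner, daemon=True)
        planner.start()
        self.system1_controller()
        self.stop.set()
        planner.join()
        return self.actions

if __name__ == "__main__":
    actions = DualSystemNavigator().run()
    print(len(actions), actions[0])
```

Because the controller never blocks on the planner after the first plan arrives, control continues at its own rate even while reasoning lags behind, which is the property the asynchronous design above relies on.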
## Key Features

- 🧩 **Modular Multi-System Support**: Combines **System 2** (reasoning/planning) with **System 1** (action/control) in an asynchronous framework, delivering the first **Dual-System Vision-Language Navigation (VLN) Foundation Model**.
- 🚀 **Zero-Shot Sim2Real Generalization**: Trained exclusively on simulation data (**InternData-N1**) while generalizing effectively to real-world deployments.
- 🏆 **State-of-the-Art Performance**: Achieves leading results on multiple VLN benchmarks, including **VLN-CE R2R/RxR** and **VLN-PE**.
- ⚡ **Asynchronous Inference**: Enables smooth execution and dynamic obstacle avoidance during navigation, with optimized end-to-end performance and faster convergence; uses RGB observations.
The previously released version is now called **InternVLA-N1-wo-dagger**. The latest official release is recommended for best performance.
## Usage

For inference, evaluation, and the Gradio demo, please refer to the [InternNav repository](https://github.com/InternRobotics/InternNav).
## Citation

If you find our work helpful, please consider starring this repository 🌟 and citing:

```bibtex
@misc{internvla-n1,
  title = {{InternVLA-N1: An} Open Dual-System Navigation Foundation Model with Learned Latent Plans},
  author = {InternVLA-N1 Team},
  booktitle = {arXiv},
  year = {2025},
}

@misc{internnav2025,
  title = {{InternNav: InternRobotics'} open platform for building generalized navigation foundation models},
  author = {InternNav Contributors},
  howpublished = {\url{https://github.com/InternRobotics/InternNav}},
  year = {2025},
}

@misc{wei2025groundslowfastdualsystem,
  title = {Ground Slow, Move Fast: A Dual-System Foundation Model for Generalizable Vision-and-Language Navigation},
  author = {Meng Wei and Chenyang Wan and Jiaqi Peng and Xiqian Yu and Yuqiang Yang and Delin Feng and Wenzhe Cai and Chenming Zhu and Tai Wang and Jiangmiao Pang and Xihui Liu},
  year = {2025},
  eprint = {2512.08186},
  archivePrefix = {arXiv},
  primaryClass = {cs.RO},
  url = {https://arxiv.org/abs/2512.08186},
}
```