GeoSense: Internalizing Geometric Necessity Perception for Multimodal Reasoning

Ruiheng Liu1, *, Haihong Hao1, *, Mingfei Han1 2, *, Xin Gu3, Kecheng Zhang1, Changlin Li4, Xiaojun Chang1 2, ✉
1 University of Science and Technology of China     2 Mohamed bin Zayed University of Artificial Intelligence
3 University of Chinese Academy of Sciences     4 Stanford University
* Equal contribution     ✉ Corresponding author
Figure 1: Adaptive Geometric Reasoning with GeoSense. GeoSense introduces an adaptive mechanism that requests geometric features only when necessary.

Abstract

Advancing toward artificial superintelligence requires rich and intelligent perceptual capabilities. A critical frontier in this pursuit is overcoming the limited spatial understanding of Multimodal Large Language Models (MLLMs), for which geometric information is essential. Existing methods often address this by rigidly injecting geometric signals into every input, ignoring whether they are actually needed and adding computational overhead.

In contrast to this paradigm, our framework endows the model with an awareness of its own perceptual insufficiency, empowering it to autonomously engage geometric features in reasoning when 2D cues are deemed insufficient. To achieve this, we first introduce an independent geometry input channel into the model architecture and conduct alignment training, enabling effective utilization of geometric features. Subsequently, to instill perceptual awareness, we curate a dedicated spatial-aware supervised fine-tuning dataset. This activates the model's latent internal cues, empowering it to autonomously determine the necessity of geometric information. Experiments across multiple spatial reasoning benchmarks validate this approach, demonstrating significant gains in spatial reasoning without compromising 2D visual reasoning capabilities, and offering a path toward more robust, efficient, and self-aware multimodal intelligence.

Methodology

GeoSense dynamically modulates its reliance on geometric input. The training framework consists of two main stages: (1) geometric alignment, in which an independent geometry input channel is added to the architecture and aligned with the language model so that geometric features can be used effectively; and (2) spatial-aware supervised fine-tuning, which teaches the model to judge when geometric information is actually necessary.

Figure 2: Architectural Overview of GeoSense. We integrate a 3D visual geometry encoder alongside a standard 2D visual encoder.
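The adaptive mechanism described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`Features`, `needs_geometry`, `assemble_context`) are hypothetical, and the keyword heuristic stands in for the learned necessity judgment that GeoSense acquires through spatial-aware fine-tuning.

```python
# Hypothetical sketch of adaptive geometry gating (all names assumed,
# not from the paper): 2D visual tokens are always fed to the LLM, while
# tokens from the 3D geometry encoder are engaged only when the model
# deems 2D cues insufficient.

from dataclasses import dataclass

@dataclass
class Features:
    visual_2d: list      # tokens from the standard 2D visual encoder
    geometry_3d: list    # tokens from the 3D visual geometry encoder

def needs_geometry(question: str) -> bool:
    """Stand-in for the learned necessity judgment. Here it is a crude
    keyword heuristic; in GeoSense this decision emerges from
    spatial-aware supervised fine-tuning."""
    spatial_cues = ("depth", "distance", "behind", "closer", "farther", "3d")
    return any(cue in question.lower() for cue in spatial_cues)

def assemble_context(feats: Features, question: str) -> list:
    """Build the token sequence fed to the language model: 2D tokens
    always, geometry tokens only when judged necessary."""
    tokens = list(feats.visual_2d)
    if needs_geometry(question):
        tokens += feats.geometry_3d  # engage the geometry channel
    return tokens

feats = Features(visual_2d=["v1", "v2"], geometry_3d=["g1", "g2"])
print(len(assemble_context(feats, "What color is the car?")))         # 2D only -> 2
print(len(assemble_context(feats, "Which object is farther away?")))  # +geometry -> 4
```

The point of the gate is that purely 2D questions skip the geometry channel entirely, avoiding the computational overhead of unconditional geometric injection.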

Key Results

Our proposed model significantly outperforms the baseline Qwen2.5-VL-3B on spatial reasoning benchmarks under comparable training data scales.

BibTeX

@article{geosense2026,
  title={GeoSense: Internalizing Geometric Necessity Perception for Multimodal Reasoning},
  author={Ruiheng Liu and Haihong Hao and Mingfei Han and Xin Gu and Kecheng Zhang and Changlin Li and Xiaojun Chang},
  journal={Under review by the International Conference on Machine Learning (ICML)},
  year={2026}
}