Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied Question Answering

Kaixuan Jiang1, Yang Liu1*, Weixing Chen1, Jingzhou Luo1, Ziliang Chen2, Ling Pan3, Guanbin Li1,2, Liang Lin1,2
1Sun Yat-sen University 2Pengcheng Laboratory 3Hong Kong University of Science and Technology *Corresponding Author

Q: What could I do to enhance the lighting in the bedroom?
A: You could turn on the lamp next to the bed.

Q: Where did I place the vase in the dining room?
A: It's placed in the center of the dining table.

Q: I invited four friends over. Are there enough seats for everyone in the living room?
A: Yes, there are enough seats for everyone.

Q: How can I practice drumming?
A: You can use the drum set in the corner of the practice room.

Q: How many lab coats are hanging in the laboratory?
A: There are two.

Q: What color is the front door in the entry hallway?
A: The front door is green.

Q: Is there a plant on the window sill in my workspace?
A: No, there isn't a plant.

Q: What exercise equipment can I use in the living room?
A: You can use the treadmill.

Q: Is the curtain in the living room drawn all the way?
A: No, the curtain is partially drawn.

Q: Where did I put the plates?
A: They're on a shelf in the storage room.

Q: What is the black appliance near the window in my kitchen?
A: It's a microwave.

Q: What is the composition of the kitchen floor's surface?
A: The kitchen floor is made up of tiles.

Abstract

Embodied Question Answering (EQA) is a challenging task in embodied intelligence that requires agents to dynamically explore 3D environments, actively gather visual information, and perform multi-step reasoning to answer questions. However, current EQA approaches suffer from critical limitations in exploration efficiency, dataset design, and evaluation metrics: existing datasets often introduce biases or prior knowledge that lead to disembodied reasoning, while frontier-based exploration strategies struggle in cluttered environments and fail to ensure fine-grained exploration of task-relevant areas. To address these challenges, we construct the EXPloration-awaRe Embodied queStion anSwering Benchmark (EXPRESS-Bench), the largest dataset designed specifically to evaluate both exploration and reasoning capabilities. EXPRESS-Bench consists of 777 exploration trajectories and 2,044 question-trajectory pairs. To improve exploration efficiency, we propose Fine-EQA, a hybrid exploration model that integrates frontier-based and goal-oriented navigation to guide agents toward task-relevant regions more effectively. We also introduce a novel evaluation metric, Exploration-Answer Consistency (EAC), which ensures faithful assessment by measuring the alignment between answer grounding and exploration reliability. Extensive comparisons with state-of-the-art EQA models demonstrate the effectiveness of EXPRESS-Bench in advancing embodied exploration and question reasoning.

EXPRESS-Bench

Comparison of EXPRESS-Bench with other EQA datasets. The orange trajectory in the top-down map shows a complete exploration path from EXPRESS-Bench, with observation images at key waypoints (top right); the data for this path appears in the orange box. The blue trajectory simulates OpenEQA's episodic memory, which passes near the target but does not end there. The yellow box illustrates how multiple-choice data is generated in HM-EQA, which lacks an exploration path. For each question, answers are based on the visual observations at the trajectory endpoint and scored according to each dataset's evaluation method. Unlike HM-EQA and OpenEQA, which may assign high scores based on answer similarity alone, EXPRESS-Bench lowers the scores of incorrect or fabricated answers by grounding them in the agent's observations.
The construction process of EXPRESS-Bench. It consists of three key steps: trajectory generation, question-answer pair generation, and data filtering.
Overview of EXPRESS-Bench statistics. EXPRESS-Bench contains 777 trajectories and a total of 2,044 question-trajectory pairs. Questions predominantly fall into seven categories: object, existence, attribute, location, state, knowledge, and counting.
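For illustration, a single question-trajectory pair might be stored as the following Python record. The field names and structure here are assumptions for exposition, not the released data format:

# Hypothetical layout of one EXPRESS-Bench question-trajectory pair.
# Field names and structure are illustrative assumptions, not the released schema.
example_pair = {
    "scene_id": "scene-0042",        # assumed scene identifier
    "question_type": "location",     # one of the seven categories above
    "question": "Where did I place the vase in the dining room?",
    "answer": "It's placed in the center of the dining table.",
    "trajectory": [                  # ordered agent poses along the exploration path
        {"position": [1.2, 0.0, -3.4], "yaw_deg": 90.0},
        {"position": [1.6, 0.0, -2.9], "yaw_deg": 75.0},
        # ... more waypoints, each paired with an observation image
    ],
}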

Evaluation

Exploration-Answer Consistency metric. We jointly assess answer correctness and grounding, offering a more rigorous evaluation of model performance.
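As a rough illustration of the idea, a consistency-aware score could be computed as below. This is a minimal sketch assuming a multiplicative form; it is not the paper's exact formulation:

def eac_score(answer_score: float, grounding: float) -> float:
    """Illustrative EAC-style score (assumed multiplicative form,
    not the paper's exact formula).

    answer_score : answer correctness in [0, 1], e.g. a VLM judge's
                   1-5 rating mapped linearly to [0, 1].
    grounding    : degree to which the agent's observations along its
                   trajectory actually support the answer, in [0, 1].
    """
    # A fluent but ungrounded answer (grounding near 0) earns little
    # credit, unlike metrics based on answer similarity alone.
    return answer_score * grounding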

Model

The Fine-EQA framework operates as follows: The agent initially performs coarse-grained exploration using a frontier-based strategy, then switches to goal-oriented fine-grained exploration once task-relevant regions are identified. A maximum exploration limit per region prevents excessive searching, prompting the agent to either return to frontier-based exploration or focus on the next most promising region. Throughout this process, the VLM continuously evaluates the relevance and completeness of the acquired information, guiding the agent's decision to either continue exploration or generate answers based on the most recent visual inputs.
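The loop below sketches this control flow in Python. All helper names (agent.frontier_step, vlm.can_answer, and so on) are hypothetical placeholders for the framework's components, not the released implementation:

def fine_eqa_episode(agent, vlm, question, max_region_steps=30, max_total_steps=200):
    """Illustrative control loop for the Fine-EQA strategy described above."""
    region, region_steps = None, 0
    for _ in range(max_total_steps):
        obs = agent.observe()
        # The VLM judges whether the gathered information is relevant
        # and complete enough to answer.
        if vlm.can_answer(question, obs):
            return vlm.answer(question, obs)
        if region is None:
            # Coarse stage: frontier-based exploration of unmapped space.
            agent.frontier_step()
            region = vlm.detect_relevant_region(question, obs)  # may remain None
            region_steps = 0
        else:
            # Fine stage: goal-oriented exploration of the candidate region.
            agent.step_toward(region)
            region_steps += 1
            if region_steps >= max_region_steps:
                # Per-region budget exhausted: move to the next most
                # promising region, or fall back to frontier exploration.
                region = vlm.next_region(question, obs)
                region_steps = 0
    # Exploration budget spent: answer from the most recent observations.
    return vlm.answer(question, agent.observe())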

Experiments

We compare the performance of Fine-EQA with state-of-the-art models on EXPRESS-Bench. Fine-EQA outperforms other models in both reasoning and exploration capabilities.

BibTeX

@article{EXPRESSBench,
  title={Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied Question Answering},
  author={Jiang, Kaixuan and Liu, Yang and Chen, Weixing and Luo, Jingzhou and Chen, Ziliang and Pan, Ling and Li, Guanbin and Lin, Liang},
  journal={arXiv preprint arXiv:2503.11117},
  year={2025}
}