Vision-and-Language Navigation in Continuous Environments (VLN-CE) is one of the most intuitive yet challenging embodied AI tasks. Agents are tasked to navigate towards a target goal by executing a set of low-level actions, following a series of natural language instructions. All VLN-CE methods in the literature assume that language instructions are exact. However, in practice, instructions given by humans can contain errors when describing a spatial environment, due to inaccurate memory or confusion. Current VLN-CE benchmarks do not address this scenario, making the state-of-the-art methods in VLN-CE fragile in the presence of erroneous instructions from human users. For the first time, we propose a novel benchmark dataset that introduces various types of instruction errors reflecting potential human causes. This benchmark provides valuable insight into the robustness of VLN systems in continuous environments. We observe a noticeable performance drop (up to -25%) in Success Rate when evaluating state-of-the-art VLN-CE methods on our benchmark. Moreover, we formally define the task of Instruction Error Detection and Localization, and establish an evaluation protocol on top of our benchmark dataset. We also propose an effective method, based on a cross-modal transformer architecture, that achieves the best performance in error detection and localization compared to baselines. Surprisingly, our proposed method has also revealed errors in the validation sets of the two commonly used VLN-CE datasets, i.e., R2R-CE and RxR-CE, demonstrating the utility of our technique in other tasks.
An agent must follow instructions expressed in natural language to reach a target goal. For example: “Exit the bathroom and go left (✓right), then turn left at the big clock and go into the bedroom and wait next to the bed.” In this case, simply changing “right” to “left” causes the agent to stop its exploration in the wrong location, even though along its path it never sees the “big clock” (yellow star).
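To make this kind of perturbation concrete, below is a minimal, hypothetical sketch of how a single direction-swap error could be injected into an instruction string. The function name and the regex-based approach are illustrative assumptions, not the benchmark's actual error-generation pipeline.

```python
import re

# Hypothetical sketch (not the benchmark's actual generation code):
# inject a single direction-swap error into an instruction string.

SWAPS = {"left": "right", "right": "left"}

def swap_first_direction(instruction: str) -> str:
    """Replace the first 'left'/'right' in the instruction with its opposite."""
    pattern = re.compile(r"\b(left|right)\b", flags=re.IGNORECASE)

    def _swap(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        # Keep the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped

    # count=1 perturbs only the first direction word, as in the example above.
    return pattern.sub(_swap, instruction, count=1)

correct = ("Exit the bathroom and go right, then turn left at the big clock "
           "and go into the bedroom and wait next to the bed.")
print(swap_first_direction(correct))
# -> "Exit the bathroom and go left, then turn left at the big clock ..."
```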
We show the full episode for the example reported in the Scenario section above (also Fig. 1 in the paper). The videos show the agent following the perturbed and the correct instructions, respectively.

We answer the following question: what if we apply IEDL to the R2R-CE dataset?
Our method helped detect 8 episodes from the R2R-CE Val Unseen split that present different issues. Here we show the BEVBert trajectories for these episodes, together with the detected issues.
"Walk up to the photo on the wall directly in front of you."
"Turn around and go through the archway. Turn left and take the extreme left. Stop near the mantle."
"Walk past beige rug. Walk past butler's pantry. Make left after eye chart. Wait at gold polka dot picture frame."
"Go through middle room and to the right into the room with the sink, go straight through until you come to an archway on your left make a left through it and go through the doorway and wait."
"Turn right to face an old pew and chairs. Walk up to the old chairs and turn right. Walk down the row until you reach the 5th chair."
"Go through the dooorway on the right, continue straihtacross the hallway and into the room ahead. Stop near the shell."
"Go to the ottoman. Go to the bed. Go to the wardrobe. Go to the glass jug."
"Turn right and go through the door. Go down the hall way and go up the stairway to the left. Continue up the stairway to the left. Wait by the statue."
We also report the episodes to be removed from the RxR-CE validation split. Because RxR-CE episodes are longer than those in R2R-CE (resulting in heavier videos) and contain more instruction tokens (about 110 per episode on average), we display only two videos on the main page to keep it loading quickly.
The remaining videos are accessible here.
"Came out the wash room beside the glass window and curtain also. Go straight in front of mirrors and beside the big watch.Again go straight and turn wright come to another room"
"Too long, refer to the video caption"