A grounded vision-and-language task in which an agent with visual perception is guided by language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects and therefore makes requests by specifying only high-level end goals, and (b) the agent can sense when it is lost and query an advisor, who is more qualified at the task, to obtain language subgoals that help it make progress.
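The interaction protocol implied by this description can be sketched as a simple loop: the agent pursues the requester's end goal, and whenever it judges itself to be lost (and still has help requests left) it queries the advisor for a language subgoal. The sketch below is a minimal, hypothetical illustration under those assumptions; the environment, Agent, and Advisor interfaces and their method names are invented for exposition and do not correspond to any released codebase.

```python
# Hypothetical sketch of the task's interaction loop; the interfaces below
# (Observation, Agent, Advisor, env.reset/step/target_found) are illustrative
# assumptions, not the API of any published implementation.
from dataclasses import dataclass


@dataclass
class Observation:
    rgb: object        # first-person visual frame
    instruction: str   # current language guidance (end goal or latest subgoal)


class Advisor:
    """Oracle that is better at the task; returns a language subgoal on request."""
    def give_subgoal(self, obs: Observation, goal: str) -> str:
        # e.g. "Turn left and walk through the doorway."
        raise NotImplementedError


class Agent:
    def act(self, obs: Observation) -> str:
        """Return a navigation action, e.g. 'forward', 'turn_left', 'stop'."""
        raise NotImplementedError

    def is_lost(self, obs: Observation) -> bool:
        """Estimate whether it is worth spending a help request now."""
        raise NotImplementedError


def run_episode(env, agent: Agent, advisor: Advisor,
                end_goal: str, max_steps: int = 100, help_budget: int = 3) -> bool:
    """Navigate toward the requester's high-level end goal, querying the
    advisor for language subgoals when lost (up to help_budget times)."""
    obs = env.reset(end_goal)
    for _ in range(max_steps):
        if help_budget > 0 and agent.is_lost(obs):
            subgoal = advisor.give_subgoal(obs, end_goal)
            obs = Observation(rgb=obs.rgb, instruction=subgoal)
            help_budget -= 1
        action = agent.act(obs)
        if action == "stop":
            return env.target_found()   # success if the target object is visible/reached
        obs = env.step(action)
    return False
```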
No benchmarks, datasets, libraries, or subtasks are currently listed for Vision-based Navigation with Language-based Assistance. Adding a benchmark result helps the community track progress on this task.