VizWiz-VQA
Introduced in VizWiz Grand Challenge: Answering Visual Questions from Blind People (2018)
The VizWiz-VQA dataset originates from a natural visual question answering setting in which blind people each took an image and recorded a spoken question about it; each visual question is paired with 10 crowdsourced answers. The proposed challenge addresses two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered.
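Since each visual question comes with 10 crowdsourced answers, a prediction is typically scored against all of them rather than a single ground truth. A minimal sketch, assuming the standard VQA-style accuracy metric (min(#matching answers / 3, 1)) is used, as in the original VQA challenge; the function name and example answers are illustrative, not from the dataset:

```python
def vqa_accuracy(pred, gt_answers):
    """Score a predicted answer against crowdsourced answers using the
    VQA-style metric: an answer counts as fully correct if at least 3
    of the 10 annotators gave it. (Assumed metric, for illustration.)"""
    pred = pred.strip().lower()
    matches = sum(a.strip().lower() == pred for a in gt_answers)
    return min(matches / 3.0, 1.0)

# Hypothetical set of 10 crowdsourced answers for one visual question.
answers = ["basil", "basil", "basil", "herbs", "basil",
           "unanswerable", "basil", "basil leaves", "basil", "basil"]

print(vqa_accuracy("basil", answers))   # 7 matches -> capped at 1.0
print(vqa_accuracy("herbs", answers))   # 1 match   -> 1/3
```

Answers like "unanswerable" in the crowdsourced set are what motivate the second challenge task, predicting whether a visual question can be answered at all.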
Source: https://vizwiz.org/tasks-and-datasets/vqa/
Image Source: https://vizwiz.org/tasks-and-datasets/vqa/