CLEVR

Overview

When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
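Each question in the dataset ships with an annotation describing the reasoning it requires, expressed as a functional program over the scene. A minimal sketch of inspecting such a record is below; the field names (`question`, `answer`, `program`, `function`, `inputs`, `value_inputs`) follow the CLEVR question files, but the sample record itself is illustrative, not real data.

```python
# Sketch: extracting the chain of reasoning functions from a
# CLEVR-style question record. The record below is a hand-written
# example in the shape of the dataset's question JSON, not real data.
sample = {
    "image_filename": "CLEVR_val_000000.png",
    "question": "What color is the cube left of the sphere?",
    "answer": "red",
    "program": [
        {"function": "scene", "inputs": [], "value_inputs": []},
        {"function": "filter_shape", "inputs": [0], "value_inputs": ["sphere"]},
        {"function": "relate", "inputs": [1], "value_inputs": ["left"]},
        {"function": "filter_shape", "inputs": [2], "value_inputs": ["cube"]},
        {"function": "query_color", "inputs": [3], "value_inputs": []},
    ],
}

def reasoning_steps(record):
    """Return the ordered list of reasoning functions a question requires."""
    return [step["function"] for step in record["program"]]

print(reasoning_steps(sample))
```

Grouping questions by these step sequences is what lets the benchmark attribute a model's errors to specific reasoning skills (filtering, spatial relations, attribute queries) rather than to a single aggregate accuracy number.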

Citation

@inproceedings{johnson2017clevr,
  title={{CLEVR}: A diagnostic dataset for compositional language and elementary visual reasoning},
  author={Johnson, Justin and Hariharan, Bharath and van der Maaten, Laurens and Fei-Fei, Li and Zitnick, C. Lawrence and Girshick, Ross},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2901--2910},
  year={2017}
}
Basic Information

Application Scenarios: Not Available
Annotations: Not Available
Tasks: Not Available
License: CC BY 4.0
Updated on: 2021-01-20 03:32:04

Metadata

Data Type: Not Available
Data Volume: 0
Annotation Amount: 0
File Size: 0 B
Copyright Owner: Justin Johnson
Annotator: Unknown