Objects365

Overview

Objects365 is a brand new dataset, designed to spur object detection research with a focus on diverse objects in the Wild.

  • 365 categories
  • 2 million images
  • 30 million bounding boxes

Data collection

Data Source

To make the image sources more diverse, we collect images mainly from Flickr.

Object Categories

Based on the collected images, we first select eleven super-categories which are common and diverse enough to cover most object instances. They are: human and related accessories, living room, clothes, kitchen, instrument, transportation, bathroom, electronics, food (vegetables), office supplies, and animal. Based on these super-categories, we further propose 442 categories which widely exist in our daily lives. As some of the object categories are rarely found, we first annotate all 442 categories in the first 100K images and then select the 365 most frequent object categories as our target objects. Also, to be compatible with existing object detection benchmarks, the 365 categories include the categories defined in the PASCAL VOC and COCO benchmarks.
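
As a rough illustration of this frequency-based selection, the short Python sketch below counts category occurrences over a hypothetical list of (image_id, category_name) annotation records from the first 100K images and keeps the 365 most frequent categories; the record format and function name are assumptions, not the authors' actual tooling.

from collections import Counter

def select_target_categories(annotations, top_k=365):
    # `annotations` is assumed to be an iterable of (image_id, category_name)
    # pairs taken from the first 100K annotated images.
    counts = Counter(category for _, category in annotations)
    return [category for category, _ in counts.most_common(top_k)]

# Hypothetical usage:
# target_365 = select_target_categories(first_100k_annotations)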

Non-Iconic Images

As our Objects365 dataset focuses on object detection, we eliminate images that are only suitable for image classification, for example, an image that contains a single object instance near the image center. This filtering process was first adopted in COCO.
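
The text does not spell out an exact rule for "iconic", so the following Python snippet is only an illustrative heuristic: an image is treated as iconic when it has a single bounding box whose center lies close to the image center. The box format and the center_tol tolerance are assumptions.

def is_iconic(boxes, image_w, image_h, center_tol=0.25):
    # `boxes` is a list of (x, y, w, h) tuples in pixels; `center_tol` is an
    # assumed tolerance, expressed as a fraction of the image size.
    if len(boxes) != 1:
        return False
    x, y, w, h = boxes[0]
    cx, cy = x + w / 2.0, y + h / 2.0
    return (abs(cx - image_w / 2.0) < center_tol * image_w
            and abs(cy - image_h / 2.0) < center_tol * image_h)

# Keep only non-iconic images (hypothetical image records):
# kept = [img for img in candidates
#         if not is_iconic(img["boxes"], img["width"], img["height"])]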

Data Annotation

We design our annotation pipeline as three steps. The first step performs a two-class classification: if the image is non-iconic or contains at least one object instance from the eleven super-categories, it is passed to the next step. In the second step, the image is labeled with image-level tags drawn from the eleven super-categories; an image may receive more than one tag. In the third step, one annotator is assigned to label the object instances of one specific super-category, and every instance belonging to that super-category is labeled with a bounding box together with an object name.
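
The control flow of this pipeline can be summarized with a short Python sketch. The three step functions are hypothetical stand-ins for the human annotation work and are not part of any released tooling.

SUPER_CATEGORIES = [
    "human and related accessories", "living room", "clothes", "kitchen",
    "instrument", "transportation", "bathroom", "electronics",
    "food (vegetables)", "office supplies", "animal",
]

def passes_first_check(image):
    # Step 1 (human): the image is non-iconic or shows at least one
    # instance of the eleven super-categories.
    raise NotImplementedError("performed by a human annotator")

def tag_super_categories(image):
    # Step 2 (human): return the subset of SUPER_CATEGORIES present.
    raise NotImplementedError("performed by a human annotator")

def draw_boxes(image, super_category):
    # Step 3 (human): return (object_name, x, y, w, h) for every instance
    # of the given super-category.
    raise NotImplementedError("performed by a human annotator")

def annotate(image):
    # Control flow of the three-step pipeline described above.
    if not passes_first_check(image):      # Step 1: keep or discard
        return None
    tags = tag_super_categories(image)     # Step 2: image-level tags
    boxes = []
    for tag in tags:                       # Step 3: one annotator per tag
        boxes.extend(draw_boxes(image, tag))
    return {"tags": tags, "boxes": boxes}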

Citation

Please use the following citation when referencing the dataset:

@inproceedings{shao2019objects365,
  title={Objects365: A large-scale, high-quality dataset for object detection},
  author={Shao, Shuai and Li, Zeming and Zhang, Tianyuan and Peng, Chao and Yu, Gang and Zhang, Xiangyu and Li, Jing and Sun, Jian},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={8430--8439},
  year={2019}
}
🎉 Many thanks to Graviti Open Datasets for contributing the dataset.

Basic Information

  • Application Scenarios: Not Available
  • Annotations: Not Available
  • Tasks: Not Available
  • License: Unknown
  • Updated on: 2021-01-20 04:46:10

Metadata

  • Data Type: Not Available
  • Data Volume: 2M
  • Annotation Amount: 0
  • File Size: 0B
  • Copyright Owner: Megvii Technology Ltd.
  • Annotator: Unknown