Places 365

Overview

The Places dataset is designed following principles of human visual cognition. Our goal is to build a core of visual knowledge that can be used to train artificial systems for high-level visual understanding tasks, such as scene context, object recognition, action and event prediction, and theory-of-mind inference. The semantic categories of Places are defined by their function: the labels represent the entry-level of an environment. To illustrate, the dataset has distinct categories for different kinds of bedrooms, streets, and so on, because one does not act the same way, or make the same predictions about what can happen next, in a home bedroom, a hotel bedroom, or a nursery.

In total, Places contains more than 10 million images spanning 400+ unique scene categories. The dataset features 5,000 to 30,000 training images per class, consistent with real-world frequencies of occurrence. Using convolutional neural networks (CNNs), the Places dataset allows the learning of deep scene features for various scene recognition tasks, with the goal of establishing new state-of-the-art performance on scene-centric benchmarks. Here we provide the Places Database and the trained CNNs for academic research and education purposes.
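As a rough illustration of this workflow, the sketch below trains a 365-way scene classifier with PyTorch, using torchvision's built-in Places365 dataset loader. It is a minimal example, not the authors' training recipe: the ResNet-18 backbone, normalization statistics, and all hyperparameters are illustrative assumptions.

# Minimal sketch: training a CNN scene classifier on Places365 with PyTorch.
# Assumes a torchvision version that ships datasets.Places365; the model
# choice and hyperparameters are illustrative, not the authors' settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Standard ImageNet statistics, commonly reused for Places as well.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# small=True fetches the 256x256 variant of the images.
train_set = datasets.Places365(root="data", split="train-standard",
                               small=True, download=True,
                               transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# ResNet-18 with a 365-way head, one output per scene category.
model = models.resnet18(num_classes=365)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for images, labels in loader:  # one pass over the training split
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

For feature extraction rather than classification, the same model can be truncated before the final fully connected layer and its activations used as generic scene descriptors.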

Citation

Please use the following citation when referencing the dataset:

@article{zhou2017places,
  title={Places: A 10 million Image Database for Scene Recognition},
  author={Zhou, Bolei and Lapedriza, Agata and Khosla, Aditya and Oliva, Aude and Torralba, Antonio},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2017},
  publisher={IEEE}
}

License

Custom

Basic Information
Application Scenarios: Scene Recognition
Annotations: Classification
License: Custom
Updated on: 2021-03-24 19:48:10

Metadata
Data Type: Image
Data Volume: 10M
File Size: 0 B
Annotation Amount: 0
Copyright Owner: MIT
Annotator: Unknown
Similar Datasets
LSUN

Created by: Gravitier