Overview

SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g. the images are of small cropped digits), but it incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem: recognizing digits and numbers in natural scene images. SVHN is obtained from house numbers in Google Street View images.

  • 10 classes, one for each digit: digit '1' has label 1, '9' has label 9, and '0' has label 0.
  • 73,257 digits for training, 26,032 digits for testing, and 531,131 additional, somewhat less difficult samples to use as extra training data.
  • Comes in two formats:
    • Original images with character level bounding boxes.
    • MNIST-like 32-by-32 images centered around a single character (many of the images do contain some distractors at the sides).

Data Format

Format 1: Full Numbers

[Figure: example full-number images with character-level bounding boxes]

These are the original, variable-resolution, color house-number images with character-level bounding boxes, as shown in the example images above. (The blue bounding boxes are for illustration only; the bounding box information is stored in digitStruct.mat rather than drawn on the images in the dataset.) Each tar.gz file contains the original images in PNG format together with a digitStruct.mat file, which can be loaded using Matlab. digitStruct.mat contains a struct array called digitStruct with the same length as the number of original images. Each element of digitStruct has the following fields: name, a string containing the filename of the corresponding image, and bbox, a struct array containing the position, size, and label of each digit bounding box in the image. For example, digitStruct(300).bbox(2).height gives the height of the 2nd digit bounding box in the 300th image.
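
The annotations can also be read outside of Matlab. The Python sketch below is illustrative only: it assumes digitStruct.mat is stored in MATLAB v7.3 (HDF5) format and can therefore be opened with h5py; the file path and helper names are placeholders.

import h5py

# Sketch: read image names and bounding boxes from digitStruct.mat,
# assuming the file is in MATLAB v7.3 (HDF5) format (readable via h5py).
with h5py.File("digitStruct.mat", "r") as f:
    names = f["digitStruct"]["name"]   # (N, 1) references to filename character arrays
    bboxes = f["digitStruct"]["bbox"]  # (N, 1) references to per-image bbox groups

    def image_name(i):
        # Dereference and decode the filename stored as character codes.
        return "".join(chr(c[0]) for c in f[names[i][0]])

    def image_bboxes(i):
        # Return lists of height/width/top/left/label, one entry per digit.
        group = f[bboxes[i][0]]
        out = {}
        for field in ("height", "width", "top", "left", "label"):
            attr = group[field]
            if attr.shape[0] == 1:
                # Single digit: the value is stored directly.
                out[field] = [float(attr[0][0])]
            else:
                # Several digits: each entry is a reference to the value.
                out[field] = [float(f[attr[j][0]][0][0]) for j in range(attr.shape[0])]
        return out

    print(image_name(0), image_bboxes(0))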

Format 2: Cropped Digits

[Figure: example 32-by-32 cropped digit images]

Character-level ground truth in an MNIST-like format. All digits have been resized to a fixed resolution of 32-by-32 pixels. The original character bounding boxes are extended in the appropriate dimension to become square windows, so that resizing them to 32-by-32 pixels does not introduce aspect-ratio distortions. Nevertheless, this preprocessing introduces some distracting digits at the sides of the digit of interest. Loading the .mat files creates two variables: X, a 4-D matrix containing the images, and y, a vector of class labels. To access the images, X(:,:,:,i) gives the i-th 32-by-32 RGB image, with class label y(i).
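
The same access pattern in Python, as a minimal sketch assuming the cropped-digit file (here called train_32x32.mat; the name is illustrative) can be loaded with scipy.io.loadmat:

import scipy.io as sio

# Sketch: load the MNIST-like cropped digits and index a single image.
data = sio.loadmat("train_32x32.mat")   # file name is an assumption
X = data["X"]                           # shape (32, 32, 3, N): 32x32 RGB images
y = data["y"]                           # shape (N, 1): class labels

i = 0                                   # Python is 0-based, unlike Matlab's X(:,:,:,i)
image = X[:, :, :, i]                   # the i-th 32-by-32 RGB image
label = int(y[i])                       # its class label
print(image.shape, label)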

Citation

Please use the following citation when referencing the dataset:

@article{netzer2011reading,
  title={Reading digits in natural images with unsupervised feature learning},
  author={Netzer, Yuval and Wang, Tao and Coates, Adam and Bissacco, Alessandro and Wu, Bo and Ng, Andrew Y},
  year={2011}
}
Basic Information

  • Application Scenarios: Not Available
  • Annotations: Box2D
  • Tasks: OCR/Text Detection
  • License: Unknown
  • Updated on: 2022-02-10 07:35:14

Metadata

  • Data Type: Image
  • Data Volume: 248,823
  • Annotation Amount: 248,823
  • File Size: 2.34 GB
  • Copyright Owner: Stanford University
  • Annotator: Unknown