Info hash | 96db21675f464480780637f1416477ac14a81107
Last mirror activity | 3:14 ago
Size | 1.35GB (1,345,332,224 bytes)
Added | 2015-12-01 17:38:56
Views | 2014
Hits | 3029
ID | 3073
Type | multi
Downloaded | 187 time(s)
Uploaded by | joecohen
Folder | voc2010
Num files | 2 files
File list | voc2010 (2 files)
VOCdevkit_08-May-2010.tar | 291.33kB
VOCtrainval_03-May-2010.tar | 1.35GB
Mirrors | 8 complete, 0 downloading = 8 mirror(s) total
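After downloading, both tar files unpack into a single VOCdevkit/ tree. A minimal extraction sketch in Python, assuming the archives sit in the current directory; the VOCdevkit/VOC2010 subfolder layout is the standard devkit convention for this release, so verify it after extraction:

```python
import tarfile
from pathlib import Path

# File names taken from the torrent's file list above; the target
# directory "." is an assumption -- adjust to taste.
ARCHIVES = ["VOCdevkit_08-May-2010.tar", "VOCtrainval_03-May-2010.tar"]

for name in ARCHIVES:
    with tarfile.open(name) as tar:
        tar.extractall(path=".")  # both archives unpack under VOCdevkit/

# Sanity check: the standard devkit layout places data under VOCdevkit/VOC2010/.
root = Path("VOCdevkit") / "VOC2010"
for sub in ["JPEGImages", "Annotations", "ImageSets"]:
    print(sub, "present:", (root / sub).is_dir())
```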
Type: Dataset
Tags:
Bibtex:
@article{,
  title    = {PASCAL Visual Object Classes Challenge 2010 (VOC2010) Complete Dataset},
  journal  = {},
  author   = {Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.},
  year     = {},
  url      = {},
  abstract = {Introduction

    The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are:

    Person: person
    Animal: bird, cat, cow, dog, horse, sheep
    Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
    Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

    Data

    To download the training/validation data, see the development kit. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online.

    A subset of images is also annotated with a pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online. Annotation was performed according to a set of guidelines distributed to all annotators.

    The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2008/VOC2009 challenges, no ground truth for the test data will be released.

    The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 21,738 images. Further statistics are online.

    Best Practice

    The VOC challenge encourages two types of participation: (i) methods which are trained using only the provided "trainval" (training + validation) data; (ii) methods built or trained using any data except the provided test data, for example commercial systems. In both cases the test data must be used strictly for reporting of results alone; it must not be used in any way to train or tune systems, for example by running multiple parameter choices and reporting the best results obtained.

    If using the training data we provide as part of the challenge development kit, all development, e.g. feature selection and parameter tuning, must use the "trainval" (training + validation) set alone. One way is to divide the set into training and validation sets (as suggested in the development kit). Other schemes, e.g. n-fold cross-validation, are equally valid. The tuned algorithms should then be run only once on the test data.

    In VOC2007 we made all annotations available (i.e. for training, validation and test data), but since then we have not made the test annotations available. Instead, results on the test data are submitted to an evaluation server. Since algorithms should only be run once on the test data, we strongly discourage multiple submissions to the server (and indeed the number of submissions for the same algorithm is strictly controlled), as the evaluation server should not be used for parameter tuning. We encourage you to always publish test results on the latest release of the challenge, using the output of the evaluation server. If you wish to compare methods or design choices, e.g. subsets of features, then there are two options: (i) use the entire VOC2007 data, where all annotations are available; (ii) report cross-validation results using the latest "trainval" set alone.},
  keywords = {},
  terms    = {The VOC2010 data includes images obtained from the "flickr" website. Use of these images must respect the corresponding terms of use (the "flickr" terms of use). For the purposes of the challenge, the identity of the images in the database, e.g. source and name of owner, has been obscured. Details of the contributor of each image can be found in the annotation to be included in the final release of the data, after completion of the challenge. Any queries about the use or ownership of the data should be addressed to the organizers.}
}
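The abstract notes that each training image carries an annotation file giving a bounding box and class label per object. In the extracted devkit these are per-image XML files under Annotations/. A minimal parsing sketch, assuming the standard VOC XML schema (an object element containing name and a bndbox with xmin/ymin/xmax/ymax); treat it as illustrative rather than the devkit's official tooling, which is MATLAB:

```python
import xml.etree.ElementTree as ET

# The 20 VOC classes, grouped as in the abstract above. Spellings follow
# the annotation files (e.g. "diningtable"), which differ slightly from
# the display names in the abstract.
VOC_CLASSES = [
    "person",
    "bird", "cat", "cow", "dog", "horse", "sheep",
    "aeroplane", "bicycle", "boat", "bus", "car", "motorbike", "train",
    "bottle", "chair", "diningtable", "pottedplant", "sofa", "tvmonitor",
]

def parse_annotation(xml_path):
    """Return (class_name, (xmin, ymin, xmax, ymax)) for every object in
    one VOC annotation file. Multiple objects from multiple classes may
    appear in the same image, so a list is returned."""
    objects = []
    root = ET.parse(xml_path).getroot()
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        coords = tuple(int(float(box.findtext(k)))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, coords))
    return objects

# Hypothetical usage -- the path assumes the extraction sketch above and
# an illustrative image identifier:
# for name, box in parse_annotation("VOCdevkit/VOC2010/Annotations/2010_000002.xml"):
#     print(name, box)
```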
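The best-practice section asks that all development use the "trainval" set alone, for example by dividing it into training and validation subsets. The devkit ships these splits as plain-text image lists under ImageSets/Main/. A sketch of loading them, assuming the conventional file names (train.txt, val.txt, trainval.txt), which this listing does not itself confirm:

```python
from pathlib import Path

SETS_DIR = Path("VOCdevkit/VOC2010/ImageSets/Main")  # standard devkit location

def load_split(name):
    """Read one image-set list (one image identifier per line)."""
    return (SETS_DIR / f"{name}.txt").read_text().split()

train_ids = load_split("train")
val_ids = load_split("val")
trainval_ids = load_split("trainval")

# Sanity check: train and val are expected to partition trainval in the
# standard VOC splits. Per the best-practice guidance, tune on train/val
# (or n-fold CV over trainval) and run on the test set exactly once.
assert set(train_ids) | set(val_ids) == set(trainval_ids)
print(len(train_ids), "train +", len(val_ids), "val =", len(trainval_ids), "trainval")
```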