This dataset contains cropped pedestrian images from the videos in the original DukeMTMC dataset. The pedestrian images are extracted every 120 frames, yielding 36,411 bounding boxes with IDs in total. There are 1,404 identities appearing in more than two cameras and 408 identities (distractor IDs) who appear in only one camera. 702 IDs are randomly selected as the training set and the remaining 702 IDs are assigned to the testing set. In the testing set, the authors picked one query image for each ID in each camera and put the remaining images in the gallery. As a result, the dataset contains 16,522 training images of 702 identities, 2,228 query images of the other 702 identities, and 17,661 gallery images (702 IDs + 408 distractor IDs).
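The cross-camera vs. distractor distinction above can be computed directly from filenames. The sketch below assumes the common DukeMTMC-reID naming convention `personID_cameraID_frame.jpg` (e.g. `0001_c2_f0046182.jpg`); verify the pattern against your copy of the dataset before relying on it.

```python
import re
from collections import defaultdict

# Assumed filename pattern: "0001_c2_f0046182.jpg"
# (zero-padded person ID, camera index, frame index).
PATTERN = re.compile(r"(\d+)_c(\d+)_f\d+\.jpg")

def split_ids_by_camera_count(filenames):
    """Return (cross_camera_ids, distractor_ids).

    Cross-camera IDs appear in at least two distinct cameras;
    distractor IDs appear in exactly one camera.
    """
    cameras_per_id = defaultdict(set)
    for name in filenames:
        m = PATTERN.match(name)
        if m:
            pid, cam = m.group(1), int(m.group(2))
            cameras_per_id[pid].add(cam)
    cross = {pid for pid, cams in cameras_per_id.items() if len(cams) >= 2}
    distractor = {pid for pid, cams in cameras_per_id.items() if len(cams) == 1}
    return cross, distractor

# Toy example: ID 0001 is seen by two cameras, ID 0002 by only one.
files = ["0001_c1_f0000120.jpg", "0001_c2_f0046182.jpg",
         "0002_c3_f0000240.jpg"]
cross, distractor = split_ids_by_camera_count(files)
```

Run over the full image list, the same grouping reproduces the 1,404 / 408 identity split described above.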
- Zhedong Zheng, Liang Zheng, Yi Yang, "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro", IEEE International Conference on Computer Vision (ICCV), 2017.
- Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara, Carlo Tomasi, "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking", European Conference on Computer Vision (ECCV) workshop on Benchmarking Multi-Target Tracking, 2016.