The capability of recognizing pedestrian attributes, such as gender and clothing style, at a far distance is of practical interest in far-view surveillance scenarios where face and body close-shots are hardly available. We make two contributions in this paper. First, we release a new pedestrian attribute dataset, which is by far the largest and most diverse of its kind. We show that this large-scale dataset facilitates the learning of robust attribute detectors with good generalization performance. Second, we present benchmark performance with an SVM-based method and propose an alternative approach that exploits the context of neighboring pedestrian images for improved attribute inference.
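As a rough illustration of the SVM-based baseline described above (a sketch, not the paper's implementation): attribute recognition is typically cast as a set of independent binary classification problems, one linear SVM per attribute, applied to a feature vector extracted from the pedestrian image. Here random vectors stand in for real image descriptors, the attribute names are hypothetical, and the SVMs are trained with a simple Pegasos-style subgradient method on the hinge loss.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style SGD on the hinge loss; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)       # standard Pegasos step size
            if y[i] * X[i].dot(w) < 1:  # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                       # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w

rng = np.random.default_rng(42)
n, d = 300, 32
attributes = ["male", "long_hair", "backpack"]  # hypothetical attribute names

# Synthetic stand-in for image descriptors; labels are made linearly
# related to the features so training can succeed on this toy data.
X = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, len(attributes)))
Y = np.sign(X @ true_w)                         # labels in {-1, +1}

# One independent binary SVM per attribute (one-vs-rest style).
models = {a: train_linear_svm(X, Y[:, j]) for j, a in enumerate(attributes)}

# Inference for one pedestrian descriptor: independent binary decisions.
x_new = rng.normal(size=d)
preds = {a: int(np.sign(x_new.dot(w))) for a, w in models.items()}
print(preds)
```

Treating each attribute independently is exactly what the paper's second contribution improves on: a context-aware model can share evidence across neighboring pedestrian images instead of deciding each attribute in isolation.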
Yubin Deng