Research Output
How Well Do Computational Features Perceptually Rank Textures? A Comparative Evaluation
  Inspired by studies [4, 23, 40] that compared rankings produced by search engines with those produced by human observers, in this paper we compare texture rankings derived from 51 computational feature sets against perceptual texture rankings obtained from a free-grouping experiment with 30 human observers, using a unified evaluation framework. Experimental results show that the MRSAR [37], VZNEIGHBORHOOD [62], LBPHF [2] and LBPBASIC [3] feature sets outperform the others. However, none of these feature sets is ideal: the best average G and M measures (measures of ranking accuracy ranging from 0 to 1) [15, 5] obtained are only 0.36 and 0.25 respectively. We suggest that this poor performance may stem from the small local neighborhoods used to calculate higher-order features, which cannot capture the long-range interactions that humans have been shown to exploit [14, 16, 49, 56].
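
  To make the evaluation setup concrete, the sketch below ranks a gallery of textures by feature-space distance to a query and scores how well that ordering agrees with a human ordering. It is a minimal illustration only: the feature vectors are random placeholders, and a rescaled Kendall's tau stands in for the paper's actual G and M measures, which are defined in its references [15, 5].

    import numpy as np
    from scipy.stats import kendalltau

    def rank_by_features(query_feat, gallery_feats):
        """Order gallery textures by Euclidean distance to the query
        in feature space (nearest first)."""
        dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
        return np.argsort(dists)

    def rank_agreement(computational_rank, perceptual_rank):
        """Kendall's tau rescaled to [0, 1]: 1 = identical orderings,
        0 = exactly reversed. NOTE: an illustrative stand-in, not the
        G and M measures used in the paper [15, 5]."""
        # Convert orderings (item indices, best first) into the rank
        # position of each item, so the two lists are comparable.
        n = len(computational_rank)
        pos_c = np.empty(n, dtype=int)
        pos_c[computational_rank] = np.arange(n)
        pos_p = np.empty(n, dtype=int)
        pos_p[perceptual_rank] = np.arange(n)
        tau, _ = kendalltau(pos_c, pos_p)
        return (tau + 1.0) / 2.0

    # Toy example: one query texture and five gallery textures with
    # hypothetical 8-dimensional feature vectors.
    rng = np.random.default_rng(0)
    query = rng.random(8)
    gallery = rng.random((5, 8))
    comp_rank = rank_by_features(query, gallery)
    percep_rank = np.array([2, 0, 1, 4, 3])  # hypothetical human ordering
    print(f"agreement = {rank_agreement(comp_rank, percep_rank):.2f}")

  In the paper itself, the feature vectors would come from one of the 51 evaluated feature sets (e.g. MRSAR or LBPBASIC) and the reference ordering from the free-grouping experiment; only the comparison step differs here.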

  • Date:

    01 April 2014

  • Publication Status:

    Published

  • Publisher:

    ACM Press

  • DOI:

    10.1145/2578726.2578762

  • Library of Congress:

    QA76 Computer software

  • Dewey Decimal Classification:

    004 Data processing & computer science

  • Funders:

    Heriot-Watt University

Citation

Dong, X., Methven, T. S., & Chantler, M. J. (2014). How Well Do Computational Features Perceptually Rank Textures? A Comparative Evaluation. In Proceedings of the International Conference on Multimedia Retrieval (ICMR '14), 815-824. doi:10.1145/2578726.2578762

Authors

Dong, X., Methven, T. S., & Chantler, M. J.

Keywords

Computational features, Evaluation, Perceptual texture ranking, Texture ranking, Texture retrieval, Texture similarity
