Research Output
Unsupervised Rotation Factorization in Restricted Boltzmann Machines
  Finding image representations suited to the task at hand is critical in computer vision. Several extensions of the original Restricted Boltzmann Machine (RBM) have recently been proposed to offer rotation-invariant feature learning. In this paper, we present a novel extended RBM that learns rotation-invariant features by explicitly factorizing out the rotation nuisance in 2D image inputs within an unsupervised framework. While the goal is to learn invariant features, our model also infers an orientation for each input image during training, using information derived from the reconstruction error. The training process is regularised by a Kullback-Leibler divergence term, which provides stability and consistency. Using the γ-score, a measure that quantifies the amount of invariance, we demonstrate both mathematically and experimentally that our approach indeed learns rotation-invariant features. We show that our method outperforms current state-of-the-art RBM approaches to rotation-invariant feature learning on three benchmark datasets, measuring performance as the test accuracy of an SVM classifier. Our implementation is available at https://bitbucket.org/tuttoweb/rotinvrbm.

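  The abstract's core mechanism, inferring an orientation per input image from the reconstruction error, can be illustrated with a minimal sketch. This is not the authors' code (that is available at the Bitbucket link above): the toy binary RBM parameters W, b_vis, b_hid, the grid of candidate angles, and the sigmoid up-down reconstruction pass are all assumptions made for demonstration.

    # Illustrative sketch only -- NOT the paper's implementation (see the
    # Bitbucket repository above). RBM parameters, the angle grid, and the
    # deterministic reconstruction pass are assumptions for demonstration.
    import numpy as np
    from scipy.ndimage import rotate

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def reconstruction_error(v, W, b_vis, b_hid):
        """One deterministic up-down pass of a binary RBM; squared error."""
        h = sigmoid(v @ W + b_hid)        # hidden activations
        v_rec = sigmoid(h @ W.T + b_vis)  # reconstructed visible layer
        return np.sum((v - v_rec) ** 2)

    def infer_orientation(image, W, b_vis, b_hid, angles=range(0, 360, 10)):
        """Pick the candidate angle whose rotated input is best reconstructed."""
        errors = [
            reconstruction_error(
                rotate(image, angle=a, reshape=False, order=1).reshape(-1),
                W, b_vis, b_hid)
            for a in angles
        ]
        return list(angles)[int(np.argmin(errors))]

    # Toy usage with random (untrained) parameters, for illustration only
    rng = np.random.default_rng(0)
    img = rng.random((28, 28))
    W = rng.normal(scale=0.01, size=(img.size, 64))
    angle = infer_orientation(img, W, np.zeros(img.size), np.zeros(64))
    print("Inferred orientation (degrees):", angle)

  In the paper this orientation inference is folded into unsupervised RBM training and regularised with the Kullback-Leibler term mentioned above; the sketch omits training entirely.
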
  • Type:

    Article

  • Date:

    15 October 2019

  • Publication Status:

    Published

  • Publisher:

    Institute of Electrical and Electronics Engineers (IEEE)

  • DOI:

    10.1109/TIP.2019.2946455

  • ISSN:

    1057-7149

  • Funders:

    The University of Edinburgh

Citation

Giuffrida, M. V., & Tsaftaris, S. A. (2020). Unsupervised Rotation Factorization in Restricted Boltzmann Machines. IEEE Transactions on Image Processing, 29(1), 2166-2175. https://doi.org/10.1109/TIP.2019.2946455

Authors

Mario Valerio Giuffrida, Sotirios A. Tsaftaris

Keywords

Machine learning, neural networks, rotation-invariant features, Restricted Boltzmann Machines
