Convolutional Neural Network Feature Extraction Using Covariance Tensor Decomposition

Journal
IEEE Access
ISSN
2169-3536
Date Issued
2021-01-01
Author(s)
Fonseca, Ricardo
Departamento de Electrónica
Guarnizo, Oscar
Suntaxi, Diego
Cadiz, Alfonso
Creixell, Werner
Departamento de Electrónica
DOI
10.1109/ACCESS.2021.3076033
Abstract
This work describes a new method to extract image features by modeling data with a tensor decomposition. Given a set of sample images, we extract patches from the images, compute the covariance tensor over all patches, decompose it with the Tucker model, and obtain the most important features from the tensor core. To extract features, we factorize the covariance tensor (CovTen) into its core and propose a new interpretation of the resulting tensor structure, which holds the relevant features in a block-wise arrangement (also called filters, weights, or kernels). This tensorial representation preserves the spatial structure, learns multichannel filters, and establishes linear dependence between dimensions, reducing dimensional complexity (the curse of dimensionality). The proposed method therefore generates filters in a single feed-forward step using as few as one sample per class, and kernel generation requires no labels. The obtained features were extensively tested with a convolutional neural network for classification; all tests followed the VGG architecture conventions. The experiments identified the proposed method's advantages over traditional convolutional neural networks in inference capacity and kernel initialization. We also ran experiments to select hyperparameters (nonlinearity, max pooling, number of samples, filter size) according to their performance. The inference-capacity results showed classification accuracy of around 67% on CIFAR-10, 64% on CIFAR-100, and 98% on MNIST, using 10, 100, and 1000 samples with a single feed-forward training pass. The initialization experiments compared the method's feature-extraction capability against common initializers (He random, He uniform, Glorot, random), confirming the usefulness of linear tensor constraints for generating features. Using the method as a kernel initializer yields results comparable with the state of the art: around 91% on CIFAR-10, 72% on CIFAR-100, and 99% on MNIST.
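The patch-extraction and Tucker-decomposition pipeline described in the abstract can be sketched in NumPy. This is a simplified illustration, not the authors' exact algorithm: it applies HOSVD (a truncated Tucker decomposition) to the centered patch tensor directly rather than forming the full covariance tensor, and it builds rank-1 multichannel filters from the leading factor columns as a stand-in for reading filters off the core. All function names are illustrative.

```python
import numpy as np
from itertools import product

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def extract_patches(images, size, n_patches, rng):
    # Randomly crop square patches from a batch of (N, H, W, C) images.
    N, H, W, C = images.shape
    out = np.empty((n_patches, size, size, C))
    for i in range(n_patches):
        n = rng.integers(N)
        r = rng.integers(H - size + 1)
        c = rng.integers(W - size + 1)
        out[i] = images[n, r:r + size, c:c + size]
    return out

def tucker_filters(patches, n_filters):
    # HOSVD of the centered patch tensor: each mode's factor matrix is
    # the left singular basis of that mode's unfolding. Rank-1 filters
    # are then formed from leading factor columns (a simplified stand-in
    # for the paper's block-wise reading of the Tucker core).
    X = patches - patches.mean(axis=0)
    uh, uw, uc = (np.linalg.svd(unfold(X, m), full_matrices=False)[0]
                  for m in range(1, X.ndim))  # skip the sample mode
    filters = []
    for i, j, k in product(range(uh.shape[1]),
                           range(uw.shape[1]),
                           range(uc.shape[1])):
        filters.append(np.einsum('h,w,c->hwc', uh[:, i], uw[:, j], uc[:, k]))
        if len(filters) == n_filters:
            break
    return np.stack(filters)

# Toy unlabeled "dataset": no labels are needed for kernel generation.
rng = np.random.default_rng(0)
images = rng.standard_normal((4, 16, 16, 3))
patches = extract_patches(images, size=5, n_patches=200, rng=rng)
filters = tucker_filters(patches, n_filters=8)
print(filters.shape)  # -> (8, 5, 5, 3): eight 5x5 multichannel kernels
```

The resulting bank of kernels could then seed the first layer of a VGG-style convolutional network, which is how the abstract's initializer comparison is framed.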
Subjects

Convolutional neural ...

PCA

Tucker

kernel initializer
