Browsing by Author "Salas, Rodrigo"
Now showing 1 - 6 of 6
Publication: Multimodal algorithm for iris recognition with local topological descriptors (2009-12-01)
Campos, Sergio; Salas, Rodrigo; Castro, Carlos
This work presents a new method for feature extraction of iris images to improve the identification process. The valuable information of the iris is intrinsically located in its natural texture, so preserving and extracting the most relevant features is of paramount importance. The technique consists of several steps, from acquisition up to person identification. Our contribution is a multimodal algorithm in which the normalized iris image is fragmented and, afterwards, regional statistical descriptors are extracted with Self-Organizing Maps. By means of a biometric fusion of the resulting descriptors, the features of the iris are compared and classified. Results on the iris data set obtained from the Bath University repository show excellent accuracy, reaching up to 99.867%.
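The fragmentation step described in the abstract can be sketched as follows. This is a minimal illustration only: the 4×8 grid size and the choice of mean/standard-deviation statistics are assumptions, and the paper's Self-Organizing-Map-based topological descriptors and biometric fusion are omitted.

```python
import numpy as np

def regional_descriptors(img, rows=4, cols=8):
    """Split a normalized iris image into a rows x cols grid and compute
    simple per-region statistics (mean, std) as a feature vector.
    Illustrative sketch of the fragmentation step only."""
    h, w = img.shape
    feats = []
    for r in np.array_split(np.arange(h), rows):
        for c in np.array_split(np.arange(w), cols):
            patch = img[np.ix_(r, c)]  # one rectangular region of the image
            feats.extend([patch.mean(), patch.std()])
    return np.array(feats)
```

Each region contributes two numbers, so a 4×8 fragmentation yields a 64-dimensional descriptor per iris image.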
Publication: Robust self-organizing maps (2004-01-01)
Moreno, Sebastian; Rogel, Cristian; Salas, Rodrigo
The Self-Organizing Map (SOM) is an unsupervised learning neural network that has been successfully applied as a data mining tool. The advantages of SOMs are that they preserve the topology of the data space, project high-dimensional data to a lower-dimensional representation, and are able to find similarities in the data. However, as we show in this paper, the learning algorithm of the SOM is sensitive to the presence of noise and outliers. Due to the influence of outliers on the learning process, some neurons (prototypes) of the ordered map end up located far from the majority of the data, so the network does not effectively represent the topological structure of the data under study. In this paper, we propose a variant of the learning algorithm that is robust under the presence of outliers in the data by being resistant to these deviations. We call this algorithm the Robust SOM (RSOM). We illustrate our technique on synthetic and real data sets.
Scopus© Citations 9
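The bounded-influence idea behind a robust SOM can be sketched as a single training step in which an outlying sample's pull on the prototypes is capped. This is an illustrative sketch, not the exact RSOM update from the paper: the Huber-style weight, its constant `c`, and the Gaussian neighborhood width are assumptions.

```python
import numpy as np

def robust_som_step(weights, grid, x, lr=0.1, sigma=1.0, c=1.345):
    """One SOM training step with a Huber-style bounded update.
    weights: (n_units, dim) prototypes; grid: (n_units, 2) map positions.
    The Huber weight caps the pull an outlying sample exerts, so a far-away
    x cannot drag prototypes arbitrarily (illustrative sketch only)."""
    d = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(d)                       # best-matching unit
    # Gaussian neighborhood on the map lattice around the BMU
    g = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
    r = d[bmu]
    w_rob = 1.0 if r <= c else c / r         # Huber weight: bounds influence
    weights += lr * w_rob * g[:, None] * (x - weights)
    return weights, bmu
```

With the standard update an outlier at distance 10 would move the winning prototype by `lr * 10 = 1.0` along the residual; the Huber weight caps the move at `lr * c ≈ 0.13`.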
Publication: Robustness analysis of the neural gas learning algorithm (2006-01-01)
Moreno, Sebastián; Salas, Rodrigo
The Neural Gas (NG) is a vector quantization technique in which a set of prototypes self-organizes to represent the topological structure of the data. The learning algorithm of the Neural Gas estimates the prototype locations in feature space by stochastic gradient descent on an energy function. In this paper we show that when deviations from idealized distribution assumptions occur, the behavior of the Neural Gas model can be drastically affected, and it will not preserve the topology of the feature space as desired. In particular, we show that the learning algorithm of the NG is sensitive to the presence of outliers due to their influence on the adaptation step. We incorporate a robust strategy into the learning algorithm based on M-estimators, in which the influence of outlying observations is bounded. Finally, we present a comparative study of several estimators, showing the superior performance of our proposed method over the original NG in static data clustering tasks on both synthetic and real data sets.
Scopus© Citations 2
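Unlike the SOM, the Neural Gas uses a rank-based rather than lattice-based neighborhood; bounding the adaptation step with an M-estimator weight can be sketched as below. The per-prototype Huber weight and its constant `c` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def robust_ng_step(prototypes, x, lr=0.1, lam=1.0, c=1.345):
    """One Neural Gas adaptation step with a Huber-type M-estimator weight.
    Prototypes are ranked by distance to x; the neighborhood decays with
    rank, and the Huber weight bounds the influence of outliers on the
    adaptation (illustrative sketch under simplified assumptions)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    ranks = np.argsort(np.argsort(d))        # rank 0 = closest prototype
    h = np.exp(-ranks / lam)                 # rank-based neighborhood
    # Huber weight per prototype: full update for small residuals, capped otherwise
    w_rob = np.where(d <= c, 1.0, c / np.maximum(d, 1e-12))
    prototypes += lr * (h * w_rob)[:, None] * (x - prototypes)
    return prototypes
```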
Publication: Self-improving generative artificial neural network for pseudorehearsal incremental class learning (2019-01-01)
Mellado, Diego; Chabert, Steren; Torres, Romina; Salas, Rodrigo
Deep learning models are part of the family of artificial neural networks and, as such, suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, while an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge, with gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes hidden in the data with a median accuracy of 43% and therefore proceed with incremental class learning.
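The pseudorehearsal idea — mixing real samples of the new classes with generator-produced samples of the old ones — can be sketched as a batch-construction step. The function name `generate_old` and its interface stand in for the adversarial autoencoder's decoder and are assumptions for illustration, not the paper's API.

```python
import numpy as np

def pseudorehearsal_batch(new_x, new_y, generate_old, n_old, rng=None):
    """Build a training batch that mixes real samples of newly arriving
    classes with pseudo-samples of previously learned classes, so that
    training on new classes also rehearses the old ones.
    generate_old(n) is a hypothetical stand-in for the generator module."""
    if rng is None:
        rng = np.random.default_rng(0)
    old_x, old_y = generate_old(n_old)       # pseudo-samples of old classes
    x = np.concatenate([new_x, old_x])
    y = np.concatenate([new_y, old_y])
    idx = rng.permutation(len(x))            # shuffle real and pseudo samples
    return x[idx], y[idx]
```

Training the classifier on such mixed batches is what lets the network revisit earlier classes without storing their original data.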
Publication: Self-organizing neuro-fuzzy inference system (2008-11-10)
Veloz, Alejandro; Salas, Rodrigo; Chabert, Steren
The architectural design of neuro-fuzzy models is a major concern in many important applications. In this work we propose an extension of Jang's ANFIS model that provides it with a self-organizing mechanism. The main purpose of this mechanism is to adapt the architecture during the training process by identifying the optimal number of premises and consequents needed to satisfy a user's performance criterion. On both synthetic and real data, our proposal yields remarkable results compared to the classical ANFIS.
Scopus© Citations 12
