Solar, Mauricio
Name
Solar, Mauricio
Department
Campus
Campus Santiago San Joaquín
Email
ORCID
Scopus Author ID
55411267600
7 results
- Publication: Development of a virtual model of fibro-bronchoscopy (2011-09-01)
  Ducoing, Eugenio
  A virtual model of fibro-bronchoscopy is reported. The model represents the trachea and the bronchi in 3D, creating a virtual world of the bronchial tree. The bronchoscope is modeled to travel through the bronchial tree, imitating the displacement and rotation of the real bronchoscope. The parameters of the virtual model were gradually adjusted according to expert opinion, allowing specialists to train with a virtual bronchoscope of great realism. The virtual bronchial tree provides realistic cues for the movement of the bronchoscope, creating the illusion that the virtual instrument behaves like the real one, with all the cost benefits this implies.
  Scopus© Citations: 1
- Publication: A model to assess open government data in public agencies (2012-09-05)
  Concha, Gastón; Meijueiro, Luis
  This article proposes a maturity model, named OD-MM (Open Data Maturity Model), to assess the commitment and capabilities of public agencies in pursuing the principles and practices of open data. The OD-MM model has a three-level hierarchical structure of domains, sub-domains, and critical variables. Four capacity levels are defined for each of the 33 critical variables, distributed across nine sub-domains, in order to determine the organization's maturity level. The model is a very valuable diagnostic tool for public services, since it exposes all weaknesses and provides a roadmap for progressing in the implementation of open data.
  Scopus© Citations: 48
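The hierarchical aggregation the OD-MM abstract describes (critical variables scored at capacity levels, rolled up through sub-domains to an overall maturity level) can be sketched as follows. The use of `min()` as the aggregation rule, and the toy variable names, are illustrative assumptions, not taken from the OD-MM specification.

```python
# Hypothetical sketch of OD-MM-style maturity aggregation.
# The model defines domains -> sub-domains -> critical variables,
# each variable scored at a capacity level 1-4. Aggregating with
# min() (weakest link) is an assumption for illustration.

def subdomain_level(variable_levels):
    """Maturity of a sub-domain: its weakest critical variable."""
    return min(variable_levels)

def organization_level(domains):
    """Overall maturity: weakest sub-domain across all domains."""
    return min(
        subdomain_level(levels)
        for subdomains in domains.values()
        for levels in subdomains.values()
    )

# Toy assessment with 2 domains instead of the model's full
# 9 sub-domains / 33 critical variables (names are hypothetical).
assessment = {
    "legal": {"licensing": [3, 4], "privacy": [2, 3]},
    "technical": {"formats": [4, 4], "catalog": [1, 2]},
}
print(organization_level(assessment))  # weakest variable -> 1
```

A weakest-link rule makes the diagnosis actionable: the variables holding the score down are exactly the roadmap items to fix first.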
- Publication: Automatic generation of roadmap for e-government implementation (2010-11-05)
  Pantoja, Daniel; Valdés, Gonzalo
  E-government research deals with ‘wicked’ problems that require multidisciplinary approaches to gain a full understanding. One of the main challenges of e-government is to induce change in the structure of public organizations to realize its full potential. This paper investigates e-government-induced change using two complementary theoretical lenses applied to an e-government case study. We use organization theories to explore aspects of organizational structure that may change when implementing e-government, and structuration theory to investigate how these aspects are affected by human action within its social structure. This combination allows us to investigate the discrepancy between the ambitions of e-government-induced change and the actual changes accomplished in practice. Our analysis shows that using these two frames gives us better insight into the thorny subject of e-government than using a single theory. Further research should look into how these theories can be used to deepen our knowledge of e-government.
- Publication: A Data Ingestion Procedure towards a Medical Images Repository (2024-08-01)
  Castañeda, Victor
  This article presents an ingestion procedure towards an interoperable repository called ALPACS (Anonymized Local Picture Archiving and Communication System). ALPACS provides services to clinical and hospital users, who can access the repository data through an Artificial Intelligence (AI) application called PROXIMITY. The article describes the automated procedure for data ingestion from the medical imaging provider to the ALPACS repository. The procedure was successfully applied by the data provider (Hospital Clínico de la Universidad de Chile, HCUCH) using a pseudo-anonymization algorithm at the source, thereby ensuring that the privacy of patients' sensitive data is respected. Data transfer was carried out using international communication standards for health systems, which allows other institutions that provide medical images to replicate the procedure.
  Objectives: This article aims to create a repository of 33,000 medical CT images and 33,000 diagnostic reports following international standards (HL7 HAPI FHIR, DICOM, SNOMED). This goal requires devising a data ingestion procedure that can be replicated by other provider institutions, guaranteeing data privacy by implementing a pseudo-anonymization algorithm at the source, and generating labels from annotations via NLP.
  Methodology: Our approach involves a hybrid on-premise/cloud deployment of PACS and FHIR services, including transfer services for anonymized data to populate the repository through a structured ingestion procedure. We applied NLP to the diagnostic reports to generate annotations, which were then used to train ML algorithms for content-based retrieval of similar exams.
  Outcomes: We successfully implemented ALPACS and PROXIMITY 2.0, ingesting almost 19,000 thorax CT exams to date along with their corresponding reports.
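The pseudo-anonymization step at the source can be illustrated with a keyed hash over patient identifiers: the same patient always maps to the same pseudonym (so studies remain linkable), but the mapping cannot be reversed without the key held by the provider. The field names, the HMAC-SHA-256 construction, and the key handling below are assumptions for illustration, not the algorithm HCUCH actually deployed.

```python
import hashlib
import hmac

# Hypothetical sketch of pseudo-anonymization at the source.
# Direct identifiers are replaced with a keyed hash; the secret
# key is assumed to remain with the data provider, so the
# repository side cannot re-identify patients.
SECRET_KEY = b"provider-held-secret"  # assumption: stays at the source

def pseudonymize(patient_id: str) -> str:
    """Deterministic, non-reversible pseudonym for a patient ID."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Strip direct identifiers, keep a stable pseudonym."""
    cleaned = {k: v for k, v in record.items()
               if k not in ("PatientName", "PatientID")}
    cleaned["PseudoID"] = pseudonymize(record["PatientID"])
    return cleaned

# Toy record with DICOM-like attribute names (illustrative only).
record = {"PatientID": "12345", "PatientName": "Doe^Jane",
          "Modality": "CT", "StudyDate": "20240101"}
print(anonymize_record(record))
```

Keeping the pseudonym deterministic is what lets the 19,000 exams and their reports stay joined per patient after ingestion, without the repository ever seeing a real identifier.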
- Publication: Deep learning techniques to process 3D chest CT (2024-01-01)
  The idea of using X-rays and Computed Tomography (CT) images as a diagnostic method has been explored in several studies. Most of these studies work with 2D slices of the CT image, requiring less computational capacity and less processing time than 3D. Processing volumetric data (the complete CT image in 3D) adds an extra dimension of information; however, the data are considerably larger than 2D slices, so extra computational processing is required. This study proposes a model that classifies a 3D input representing the volume of the CT scan as COVID-19 or Non-COVID-19 while reducing the resources used to perform the classification. The proposed model is ResNet-50 with an added source of information: a simple autoencoder. The autoencoder is trained on the same dataset, and the vector representation it generates for each exam is fed to the ResNet-50 together with the exam itself. To validate the proposal, the model is compared with and without the autoencoder module that provides the additional information. The proposed model obtains better metrics than the same model without the autoencoder, confirming that extracting relevant features from the dataset helps improve the model's performance.
  Scopus© Citations: 2
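The fusion step described above amounts to combining the classifier's features for a 3D exam with the autoencoder's compact code before the final decision. A minimal sketch, assuming plain concatenation and toy dimensions (the paper does not specify either here):

```python
# Hypothetical sketch of feature fusion: the backbone's pooled
# features for an exam are concatenated with the autoencoder's
# latent vector before a final classification layer. Plain
# concatenation, the toy dimensions, and the linear head are
# illustrative assumptions.

def fuse(cnn_features: list[float], ae_latent: list[float]) -> list[float]:
    """Concatenate backbone features with the autoencoder code."""
    return cnn_features + ae_latent

def classify(fused: list[float], weights: list[float], bias: float) -> int:
    """Toy linear head: 1 = COVID-19, 0 = Non-COVID-19."""
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1 if score > 0 else 0

features = [0.2, -0.5, 0.7]   # stand-in for pooled ResNet-50 features
latent = [0.1, 0.9]           # stand-in for the autoencoder code
fused = fuse(features, latent)
print(classify(fused, weights=[1.0, 0.5, 0.2, 0.3, 0.8], bias=-0.4))
```

The design intuition from the abstract: the autoencoder's code is a cheap, globally trained summary of the whole volume, so adding it gives the classifier context without the cost of a larger 3D backbone.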
- Publication: A survey on the dynamic scheduling problem in astronomical observations (2010-09-30)
  Mora, Matias
  Task execution scheduling is a common problem in computer science. The typical problem, as in industrial or computer-processing applications, has restrictions that do not apply in certain settings. For example, all available tasks have to be executed at some point, and ambient factors do not affect the execution order. In astronomical observation, projects are scheduled as observation blocks, and their execution depends on parameters like science-goal priority and target visibility, but is also restricted by external factors: atmospheric conditions, equipment failure, etc. A telescope scheduler is mainly in charge of handling projects, commanding the telescope's high-level movement to targets, and starting data acquisition. With the growth of observatories' capacities and maintenance costs, it is now mandatory to optimize the allocation of observation time. Professional observatories still depend strongly on human intervention, with no fully automatic solution so far. This paper describes the dynamic scheduling problem in astronomical observations and surveys existing solutions, opening new application opportunities for computer science.
  Scopus© Citations: 10
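The decision the abstract describes — pick the next observation block given current visibility and atmospheric conditions, re-evaluating as conditions change rather than fixing a plan in advance — can be sketched as a greedy selection. The scoring rule and the seeing-based weather constraint are illustrative assumptions, not a method from the surveyed systems.

```python
# Hypothetical sketch of dynamic observation scheduling: at each
# decision point, pick the highest-priority block whose target is
# currently visible and whose weather requirement is met.
# Re-evaluating at every step is what makes the problem dynamic;
# the greedy rule itself is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    priority: int        # higher = more important science goal
    visible: bool        # target above horizon right now
    max_seeing: float    # worst seeing (arcsec) the block tolerates

def next_block(blocks, current_seeing):
    candidates = [b for b in blocks
                  if b.visible and current_seeing <= b.max_seeing]
    if not candidates:
        return None      # idle: nothing observable under conditions
    return max(candidates, key=lambda b: b.priority)

queue = [
    Block("deep-field", priority=9, visible=True, max_seeing=0.8),
    Block("survey-42", priority=5, visible=True, max_seeing=1.5),
    Block("transit-7", priority=8, visible=False, max_seeing=1.2),
]
print(next_block(queue, current_seeing=1.0).name)  # -> survey-42
```

Note how the answer flips with conditions: under 0.5 arcsec seeing the high-priority deep-field block wins, while under 1.0 arcsec only the tolerant survey block qualifies — exactly the external-factor dependence that breaks classical static schedulers.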
- Publication: Neural recognition of minerals (2008-07-23)
  Perez, Patricio; Watkins, Francisco
  The design of a neural network is presented for recognizing six kinds of minerals (chalcopyrite, chalcocite, covellite, bornite, pyrite, and enargite) and determining the percentage of these minerals from a digitized image of a rock sample. The input to the neural network is the histogram of the region of interest that the user selects from the image to be recognized; the network processes it and identifies one of the six learned minerals. The network was trained with 160 regions of interest selected from digitized photographs of mineral samples. Recognition of the different types of minerals was tested with 240 photographs that were not used in training. The results showed that 97% of the images used to train the network were recognized correctly in percentage mode; of the new images, the network correctly recognized 91% of the samples.
  Scopus© Citations: 3
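The input stage described above — reducing the user-selected region of interest to an intensity histogram that becomes the network's input vector — can be sketched as follows. The bin count and the normalization are illustrative assumptions; the paper does not state them here.

```python
# Hypothetical sketch of the input representation: a region of
# interest is reduced to a normalized intensity histogram, which
# is then fed to the classifier. The 16-bin size and the
# normalization to unit sum are illustrative assumptions.

def roi_histogram(pixels, bins=16, max_value=256):
    """Normalized intensity histogram of a region of interest."""
    counts = [0] * bins
    width = max_value / bins
    for p in pixels:
        counts[min(int(p / width), bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

# Toy 8-pixel region of interest (grayscale values 0-255).
roi = [0, 10, 15, 130, 140, 250, 255, 255]
hist = roi_histogram(roi)
print(len(hist), sum(hist))  # 16 bins summing to 1.0
```

A histogram input makes the classifier insensitive to the ROI's shape and size, which fits the paper's setup where the user draws an arbitrary region on the rock photograph.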