FABIO JOSE AYRES
Search Results: showing 1–3 of 3
Journal Article — Detection of the Optic Nerve Head in Fundus Images of the Retina with Gabor Filters and Phase Portrait Analysis (2010)
Authors: Rangayyan, Rangaraj M.; Zhu, Xiaolu; FABIO JOSE AYRES; Ells, Anna L.
Abstract: We propose a method using Gabor filters and phase portraits to automatically locate the optic nerve head (ONH) in fundus images of the retina. Because the center of the ONH is at or near the focal point of convergence of the retinal vessels, the method includes detection of the vessels using Gabor filters, detection of peaks in the node map obtained via phase portrait analysis, and an intensity-based condition. The method was tested on 40 images from the Digital Retinal Images for Vessel Extraction (DRIVE) database and 81 images from the Structured Analysis of the Retina (STARE) database. An ophthalmologist independently marked the center of the ONH for evaluation of the results. The evaluation includes free-response receiver operating characteristics (FROC) and a measure of distance between the manually marked and detected centers. With the DRIVE database, the centers of the ONH were detected with an average distance of 0.36 mm (18 pixels) to the corresponding centers marked by the ophthalmologist; FROC analysis indicated a sensitivity of 100% at 2.7 false positives per image. With the STARE database, FROC analysis indicated a sensitivity of 88.9% at 4.6 false positives per image.

Conference Paper — Unsupervised Improvement of Audio-Text Cross-Modal Representations (2023)
Authors: Wang, Zhepei; Subakan, Cem; Subramani, Krishna; Wu, Junkai; TIAGO FERNANDES TAVARES; FABIO JOSE AYRES; Smaragdis, Paris
Abstract: Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we are able to obtain significant improvement in terms of zero-shot classification performance on downstream sound event classification or acoustic scene classification tasks.

Journal Article — Effect of Pixel Resolution on Texture Features of Breast Masses in Mammograms (2010)
Authors: Rangayyan, Rangaraj M.; Nguyen, Thanh M.; FABIO JOSE AYRES; Nandi, Asoke K.
Abstract: The effect of pixel resolution on texture features computed using the gray-level co-occurrence matrix (GLCM) was analyzed in the task of discriminating mammographic breast lesions as benign masses or malignant tumors. Regions in mammograms related to 111 breast masses, including 65 benign masses and 46 malignant tumors, were analyzed at pixel sizes of 50, 100, 200, 400, 600, 800, and 1,000 μm. Classification experiments using each texture feature individually provided accuracy, in terms of the area under the receiver operating characteristics curve (AUC), of up to 0.72. Using the Bayesian classifier and the leave-one-out method, the AUC obtained was in the range 0.73 to 0.75 for the pixel resolutions of 200 to 800 μm, with 14 GLCM-based texture features using adaptive ribbons of pixels around the boundaries of the masses. Texture features computed using the ribbons resulted in higher classification accuracy than the same features computed using the corresponding regions within the mass boundaries. The t-test was applied to AUC values obtained using 100 repetitions of random splitting of the texture features from the ribbons of masses into training and testing sets. The texture features computed with the pixel size of 200 μm provided the highest average AUC, with statistically highly significant differences as compared to all of the other pixel sizes tested, except 100 μm.
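The GLCM-based texture analysis described in the last abstract can be illustrated in a few lines. The sketch below is a minimal, hypothetical example with a toy 8-level image and two classic Haralick-style features (contrast and energy); it is not the authors' 14-feature pipeline, and the block-averaging step is only a stand-in for the paper's controlled pixel sizes.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single pixel offset,
    normalized to a joint probability distribution p(i, j)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1.0
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def energy(p):
    """Haralick energy (angular second moment): sum of p(i, j)^2."""
    return float(np.sum(p * p))

# Toy 8-level image: a horizontal gradient, so every horizontal
# neighbor pair differs by exactly one gray level.
img = np.tile(np.arange(8), (8, 1))
p = glcm(img)
print(contrast(p))  # 1.0 (all co-occurrence mass sits at |i - j| = 1)
print(round(energy(p), 4))

# A coarser pixel size (the variable studied in the paper) can be
# mimicked by block-averaging before recomputing the GLCM:
coarse = img.reshape(4, 2, 4, 2).mean(axis=(1, 3)).astype(int)
```

In the paper such features are extracted from adaptive ribbons of pixels around the mass boundary rather than from a rectangular region, and the resulting feature vectors feed a Bayesian classifier evaluated with leave-one-out.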