Title: Unsupervised Improvement of Audio-Text Cross-Modal Representations
Authors: Wang, Zhepei; Subakan, Cem; Subramani, Krishna; Wu, Junkai; Tavares, Tiago Fernandes; Ayres, Fabio Jose; Smaragdis, Paris
Date accessioned: 2025-01-08
Date available: 2025-01-08
Date issued: 2023
URI: https://repositorio.insper.edu.br/handle/11224/7245
Abstract: Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we obtain a significant improvement in zero-shot classification performance on downstream sound event classification and acoustic scene classification tasks.
Format: Digital, 5 p.
Language: English
Keywords: Audio-text representation learning; Data augmentation; Contrastive learning; Sound event classification; Acoustic scene classification
Type: Conference paper
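
The abstract refers to training audio-text representations with a soft-labeled contrastive loss. As a rough illustration only (not the authors' implementation), the sketch below shows one common way such a loss can be written: standard one-hot contrastive targets are smoothed with within-modality similarities so that near-duplicate captions or clips are not treated as pure negatives. The function name soft_contrastive_loss, the temperature tau, and the mixing weight alpha are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(audio_emb, text_emb, tau=0.07, alpha=0.1):
    """Hypothetical soft-labeled contrastive loss for paired audio/text embeddings.

    audio_emb, text_emb: tensors of shape (batch, dim), one audio clip per caption.
    """
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / tau                      # cross-modal similarity matrix

    n = logits.size(0)
    hard = torch.eye(n, device=logits.device)   # standard one-hot targets
    with torch.no_grad():
        sim_t = F.softmax(t @ t.T / tau, dim=-1)  # caption-caption similarities
        sim_a = F.softmax(a @ a.T / tau, dim=-1)  # clip-clip similarities

    # Soft labels: mix the one-hot targets with within-modality similarities.
    targets_a2t = (1 - alpha) * hard + alpha * sim_t
    targets_t2a = (1 - alpha) * hard + alpha * sim_a

    loss_a2t = F.cross_entropy(logits, targets_a2t)      # audio -> text
    loss_t2a = F.cross_entropy(logits.T, targets_t2a)    # text -> audio
    return 0.5 * (loss_a2t + loss_t2a)

# Usage with random placeholder embeddings:
audio = torch.randn(8, 512)
text = torch.randn(8, 512)
print(soft_contrastive_loss(audio, text).item())
```

With alpha set to 0 this reduces to the usual symmetric InfoNCE-style contrastive objective; the soft labels only change how strongly semantically similar in-batch items are penalized as negatives.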