Conference Paper

Permanent URI for this community: https://repositorio.insper.edu.br/handle/11224/3235


Search Results

Now showing 1 - 4 of 4
  • Conference Paper
    Continuous Parameter Control Using an On/Off Sensor in the Augmented Handheld Triangle
    (2021) Ferreira, Marcio Albano H.; TIAGO FERNANDES TAVARES
    In this work, we present Triaume, an augmented percussion instrument based on the triangle. The augmentation is based on a capacitive thumb sensor that allows controlling digital musical devices while preserving the instrument's idiomatics within the context in which it is inserted. Triaume's interaction proposals were built upon the idiomatics of Brazilian music genres such as Forró, Xote, and Baião. The instrument's invasiveness is further reduced through the use of an external device (an application running on a smartphone) for emulating faders related to sound parameter configuration. At first, we used the sensor as an on/off button able to trigger pre-programmed percussion samples, which can be synchronized with the triangle's acoustic sounds. This mechanism can be adjusted to trigger on pressing or on releasing the sensor. Next, we converted the digital signals acquired by the sensor to continuous values by filtering the on/off signal. This rectification and filtering system allows gradual changes of continuous sound synthesis parameters, compensating for the on/off signal's low robustness and turning it into a more controlled signal. Triaume can be inserted in the contexts of both traditional and avant-garde music, also motivating further studies on applying its mechanism to other percussion instruments.
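
The rectify-and-filter mechanism described in the abstract above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a one-pole low-pass (exponential moving average) filter and an arbitrary smoothing factor to map a binary capacitive-sensor stream onto a continuous 0..1 synthesis parameter.

```python
def smooth_on_off(samples, alpha=0.05):
    """Map a binary on/off stream to a continuous 0..1 parameter.

    alpha is an assumed smoothing factor: smaller values give slower,
    smoother parameter changes.
    """
    value = 0.0
    out = []
    for s in samples:                  # s is 0 or 1 from the sensor
        value += alpha * (s - value)   # one-pole low-pass (EMA) step
        out.append(value)
    return out

# Example: sustained pressing raises the parameter gradually,
# releasing lets it decay, and rapid tapping holds it mid-range.
stream = [1] * 20 + [0] * 20 + [1, 0] * 30
control = smooth_on_off(stream)
print(round(control[-1], 3))
```
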
  • Conference Paper
    InFracta: the Body as an Augmented Instrument in a Collaborative, Multi-Modal Piece
    (2021) Pessanha, Thales Roel P.; Roque, Thiago Rossi; Zanchetta, Guilherme; Pereira, Lucas B.; Oliveira, Gabrielly Lima de; Pinheiro, Bruna C. M.; Paulino, Renata F. P. P.; TIAGO FERNANDES TAVARES
    This paper discusses the creative process of the piece “InFracta: Dialogue Processes in a Multi-modal Environment”. The discussion concerns the dialogues between the dance, music, image, and technology knowledge domains, which were all present in the construction of the piece's poetics. The interaction between these domains fostered a resignification of the dancers' gestures, so that their bodies interacted with a sound environment as if they were augmented instruments. This discussion adds to previous work on technology-mediated multi-modal art, especially concerning the contribution and the emergence of meaning related to each of the knowledge domains involved in the piece.
  • Conference Paper
    Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance
    (2021) Santos, Geise; Wang, Johnty; Brum, Carolina; Wanderley, Marcelo M.; TIAGO FERNANDES TAVARES; Rocha, Anderson
    Wireless sensor-based technologies are becoming increasingly accessible and widely explored in interactive musical performance due to their ubiquity and low cost, which brings the need to understand the capabilities and limitations of these sensors. This is usually approached by using a reference system, such as an optical motion capture system, to assess the signals' properties. However, this process raises the issue of synchronizing the signal and the reference data streams, as each sensor is subject to different latencies, time drift, reference clocks, and initialization timings. This paper presents an empirical quantification of the latency of the communication stages in a setup consisting of a Qualisys optical motion capture (mocap) system and a wireless microcontroller-based sensor device. We performed event-to-end tests on the critical components of the hybrid setup to determine its suitability for synchronization. Overall, further synchronization is viable because the mocap system and the wireless sensor interface presented similar individual average latencies of around 25 ms.
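
As a companion to the synchronization problem described above, here is a minimal sketch, not the paper's measurement pipeline, of estimating the constant time offset between two recordings of the same physical event by cross-correlation. The sampling rate and impulse-like signals are illustrative assumptions.

```python
import numpy as np

def estimate_offset(ref, test, fs):
    """Return the lag (in seconds) of `test` relative to `ref`,
    assuming both streams are resampled to a common rate fs."""
    ref = (ref - ref.mean()) / ref.std()
    test = (test - test.mean()) / test.std()
    corr = np.correlate(test, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)   # delay in samples
    return lag / fs

# Example: a synthetic impulse delayed by 25 samples at fs = 1 kHz,
# mimicking a ~25 ms latency difference between two sensor streams.
fs = 1000
ref = np.zeros(1000); ref[100] = 1.0
test = np.zeros(1000); test[125] = 1.0
print(estimate_offset(ref, test, fs))        # ~0.025 s
```
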
  • Conference Paper
    Unsupervised Improvement of Audio-Text Cross-Modal Representations
    (2023) Wang, Zhepei; Subakan, Cem; Subramani, Krishna; Wu, Junkai; TIAGO FERNANDES TAVARES; FABIO JOSE AYRES; Smaragdis, Paris
    Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we are able to obtain significant improvement in terms of zero-shot classification performance on downstream sound event classification or acoustic scene classification tasks.
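
For readers unfamiliar with the soft-labeled contrastive loss the abstract mentions, the sketch below shows one common formulation in PyTorch: a cross-entropy against a soft target matrix instead of the one-hot identity of standard contrastive training. The temperature value and the shape of the soft targets are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(audio_emb, text_emb, soft_targets, tau=0.07):
    """audio_emb, text_emb: (N, D) L2-normalized embeddings.
    soft_targets: (N, N) matrix with rows summing to 1, encoding graded
    audio-text similarity rather than hard positive/negative pairs."""
    logits = audio_emb @ text_emb.t() / tau   # (N, N) similarity matrix
    loss_a = -(soft_targets * F.log_softmax(logits, dim=1)).sum(1).mean()
    loss_t = -(soft_targets.t() * F.log_softmax(logits.t(), dim=1)).sum(1).mean()
    return 0.5 * (loss_a + loss_t)            # symmetric audio/text loss

# Example with random embeddings and a softened identity as targets.
N, D = 4, 16
a = F.normalize(torch.randn(N, D), dim=1)
t = F.normalize(torch.randn(N, D), dim=1)
targets = F.softmax(torch.eye(N) * 5.0, dim=1)
print(soft_contrastive_loss(a, t, targets).item())
```
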