Conference Papers

Permanent URI for this community: https://repositorio.insper.edu.br/handle/11224/3235

Browse

Search Results

Now showing 1 - 6 of 6
  • Conference Paper
    Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations
    (2024) José Luiz Nunes; GUILHERME DA FRANCA COUTO FERNANDES DE ALMEIDA; Araujo, Marcelo de; Barbosa, Simone D. J.
    Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence. Yet there remains a gap in how to assess LLMs’ conformity to important human values. In this paper, we investigate whether state-of-the-art LLMs, GPT-4 and Claude 2.1 (Gemini Pro and LLAMA 2 did not generate valid results), are moral hypocrites. We employ two research instruments based on the Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation. We characterise conflicts in values between these different abstractions of moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but they displayed contradictory and hypocritical behaviour when we compared the abstract values present in the MFQ to the evaluation of concrete moral violations of the MFV.
  • Conference Paper
    Continuous Parameter Control Using an On/Off Sensor in the Augmented Handheld Triangle
    (2021) Ferreira, Marcio Albano H.; TIAGO FERNANDES TAVARES
    In this work, we present Triaume, an augmented musical percussion instrument based on the triangle. The augmentation proposal for this instrument is based on a capacitive thumb sensor that allows controlling digital musical devices while preserving the original instrument’s idiom within the context in which it is inserted. Triaume’s interaction proposals were built upon the idiomatics of Brazilian music genres such as Forró, Xote, and Baião. The instrument’s invasiveness is further reduced through the use of an external device (an application running on a smartphone) for emulating faders related to sound parameter configuration. At first, we used the sensor as an on/off button able to trigger pre-programmed percussion samples, which can be synchronized with the triangle’s acoustic sounds. This mechanism can be adjusted for triggering on pressing or releasing the sensor. Next, we convert the digital signals acquired by the sensor to continuous values by filtering the on/off signal. This signal rectification and filtering system allows gradual change of continuous sound synthesis parameters, dealing with the on/off signal’s low robustness and turning it into a more controlled signal. Triaume can be inserted in the contexts of traditional and avant-garde music, also motivating further studies for applying its mechanism to other percussion instruments.
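The gradual-change mechanism described in this abstract can be sketched as a one-pole low-pass filter (an exponential moving average) over the binary sensor stream. This is an illustrative reconstruction, not the authors' implementation, and `alpha` is an assumed smoothing coefficient:

```python
def smooth_onoff(samples, alpha=0.05):
    """Low-pass filter a binary (0/1) sensor stream into gradual 0..1 values.

    alpha is an assumed smoothing coefficient: smaller values yield
    slower, more gradual parameter changes.
    """
    level = 0.0
    out = []
    for s in samples:
        # One-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        level += alpha * (s - level)
        out.append(level)
    return out

# Holding the sensor ramps the control value up; releasing lets it decay,
# so an on/off press can drive a continuous synthesis parameter.
held = smooth_onoff([1] * 40)
released = smooth_onoff([1] * 40 + [0] * 40)
```

Mapping the filtered value to a synthesis parameter (for example, a filter cutoff or sample gain) gives the gradual control the abstract describes.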
  • Conference Paper
    InFracta: the Body as an Augmented Instrument in a Collaborative, Multi-Modal Piece
    (2021) Pessanha, Thales Roel P.; Roque, Thiago Rossi; Zanchetta, Guilherme; Pereira, Lucas B.; Oliveira, Gabrielly Lima de; Pinheiro, Bruna C. M.; Paulino, Renata F. P. P.; TIAGO FERNANDES TAVARES
    This paper discusses the creative process of the piece “InFracta: Dialogue Processes in a Multi-modal Environment”. The discussion concerns the dialogues between the dance, music, image, and technology knowledge domains, which were all present in the construction of the piece’s poetics. The interaction between these domains fostered resignifying the dancers’ gestures, so that their bodies interacted with a sound environment as if they were augmented instruments. This discussion adds to previous work on technology-mediated multi-modal art, especially concerning the contribution and the emergence of meaning related to each of the knowledge domains involved in the piece.
  • Conference Paper
    Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance
    (2021) Santos, Geise; Wang, Johnt; Brum, Carolina; Wanderley, Marcelo M.; TIAGO FERNANDES TAVARES; Rocha, Anderson
    Wireless sensor-based technologies are becoming increasingly accessible and widely explored in interactive musical performance due to their ubiquity and low cost, which brings the necessity of understanding the capabilities and limitations of these sensors. This is usually approached by using a reference system, such as an optical motion capture system, to assess the signals’ properties. However, this process raises the issue of synchronizing the signal and the reference data streams, as each sensor is subject to different latencies, time drift, reference clocks, and initialization timings. This paper presents an empirical quantification of the latency of the communication stages in a setup consisting of a Qualisys optical motion capture (mocap) system and a wireless microcontroller-based sensor device. We performed event-to-end tests on the critical components of the hybrid setup to determine the suitability for synchronization. Overall, further synchronization is viable because of the similar average latencies of around 25 ms for both the mocap system and the wireless sensor interface.
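The event-to-end measurement described above reduces to comparing ground-truth event times with each system's timestamps. The sketch below uses invented numbers (not the paper's measurements) to show how a per-system average latency and the constant offset needed to align the two streams can be computed:

```python
def mean_latency(event_times, observed_times):
    """Average delay between true event times and a system's timestamps."""
    return sum(o - e for e, o in zip(event_times, observed_times)) / len(event_times)

# Invented example timestamps, in seconds (not the paper's data).
events = [0.0, 1.0, 2.0, 3.0]            # ground-truth event times
mocap = [0.026, 1.024, 2.025, 3.025]     # optical mocap observations
sensor = [0.027, 1.029, 2.026, 3.026]    # wireless sensor observations

mocap_lat = mean_latency(events, mocap)    # average mocap latency
sensor_lat = mean_latency(events, sensor)  # average sensor latency
offset = sensor_lat - mocap_lat            # constant shift to align streams
```

Subtracting `offset` from one stream's timestamps aligns the two data streams up to residual jitter and drift, which is why similar average latencies make further synchronization viable.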
  • Conference Paper
    EyeSwipe: Dwell-free Text Entry Using Gaze Paths
    (2016) ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Feng, Wenxin; Joshi, Ajjen; Morimoto, Carlos; Betke, Margrit
    Text entry using gaze-based interaction is a vital communication tool for people with motor impairments. Most solutions require the user to fixate on a key for a given dwell time to select it, thus limiting the typing speed. In this paper we introduce EyeSwipe, a dwell-time-free gaze-typing method. With EyeSwipe, the user gaze-types the first and last characters of a word using the novel selection mechanism “reverse crossing.” To gaze-type the characters in the middle of the word, the user only needs to glance at the vicinity of the respective keys. We compared the performance of EyeSwipe with that of a dwell-time-based virtual keyboard. In experiments with ten participants, EyeSwipe afforded statistically significantly higher typing rates and more comfortable interaction; participants reached 11.7 words per minute (wpm) after 30 minutes of typing with EyeSwipe.
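Rates like the 11.7 wpm reported above follow the standard text-entry convention that one "word" is five characters. A minimal sketch, using illustrative numbers rather than the study's raw data:

```python
def words_per_minute(transcribed_chars, seconds):
    """Standard text-entry rate: one 'word' is conventionally 5 characters."""
    return (transcribed_chars / 5) / (seconds / 60)

# Illustrative: 234 characters transcribed in 4 minutes -> 11.7 wpm.
rate = words_per_minute(234, 240)
```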
  • Conference Paper
    HGaze Typing: Head-Gesture Assisted Gaze Typing
    (2021) Feng, Wenxin; Zou, Jiangnan; ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Morimoto, Carlos; Betke, Margrit
    This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study finds HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections.