Collection of Works Presented at Events

Permanent URI for this collection: https://repositorio.insper.edu.br/handle/11224/3236

Search Results

Now showing 1 - 3 of 3
  • Conference Paper
    Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations
    (2021) Nunes, José Luiz; Almeida, Guilherme da Franca Couto Fernandes de; Araujo, Marcelo de; Barbosa, Simone D. J.
    Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence. Yet there remains a gap in how to assess LLMs' conformity to important human values. In this paper, we investigate whether state-of-the-art LLMs, GPT-4 and Claude 2.1 (Gemini Pro and LLAMA 2 did not generate valid results), are moral hypocrites. We employ two research instruments based on the Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation. We characterise conflicts in values between these different abstractions of moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but they displayed contradictory and hypocritical behaviour when we compared the abstract values present in the MFQ to the evaluation of concrete moral violations of the MFV.
  • Conference Paper
    EyeSwipe: Dwell-free Text Entry Using Gaze Paths
    (2016) Kurauchi, Andrew Toshiaki Nakayama; Feng, Wenxin; Joshi, Ajjen; Morimoto, Carlos; Betke, Margrit
    Text entry using gaze-based interaction is a vital communication tool for people with motor impairments. Most solutions require the user to fixate on a key for a given dwell time to select it, thus limiting the typing speed. In this paper we introduce EyeSwipe, a dwell-time-free gaze-typing method. With EyeSwipe, the user gaze-types the first and last characters of a word using the novel selection mechanism “reverse crossing.” To gaze-type the characters in the middle of the word, the user only needs to glance at the vicinity of the respective keys. We compared the performance of EyeSwipe with that of a dwell-time-based virtual keyboard. EyeSwipe afforded statistically significantly higher typing rates and more comfortable interaction in experiments with ten participants who reached 11.7 words per minute (wpm) after 30 min typing with EyeSwipe.
  • Conference Paper
    HGaze Typing: Head-Gesture Assisted Gaze Typing
    (2021) Feng, Wenxin; Zou, Jiangnan; Kurauchi, Andrew Toshiaki Nakayama; Morimoto, Carlos; Betke, Margrit
    This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study finds HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections.