GUILHERME DA FRANCA COUTO FERNANDES DE ALMEIDA
Search Results (showing 1-2 of 2)
Scientific Article
Apply the Laws, if They are Good: Moral Evaluations Linearly Predict Whether Judges Should Enforce the Law (2024)
Engelmann, Neele; Almeida, Guilherme da Franca Couto Fernandes de; Sousa, Felipe Oliveira de; Prochownik, Karolina; Hannikainen, Ivar R.; Struchiner, Noel; Magen, Stefan

What should judges do when faced with immoral laws? Should they apply them without exception, since “the law is the law”? Or can exceptions be made for grossly immoral laws, such as, historically, Nazi law? Surveying laypeople (N = 167) and people with some legal training (N = 141) on these matters, we find a surprisingly strong, monotonic relationship between people’s subjective moral evaluation of laws and their judgments that these laws should be applied in concrete cases. This tendency is most pronounced among individuals who endorse natural law (i.e., the legal-philosophical view that immoral laws are not valid laws at all), and is attenuated when disagreement about the moral status of a law is considered reasonable. The relationship is equally strong for laypeople and for those with legal training. We situate our findings within the broader context of morality’s influence on legal reasoning that experimental jurisprudence has uncovered in recent years, and consider normative implications.

Scientific Article
Exploring the psychology of LLMs’ moral and legal reasoning (2024)
Almeida, Guilherme da Franca Couto Fernandes de; Nunes, José Luiz; Engelmann, Neele; Wiegmann, Alex; Araújo, Marcelo de

Large language models (LLMs) exhibit expert-level performance in tasks across a wide range of domains. The ethical issues raised by LLMs and the need to align future versions make it important to know how state-of-the-art models reason about moral and legal issues. In this paper, we employ the methods of experimental psychology to probe this question. We replicate eight studies from the experimental literature with instances of Google's Gemini Pro, Anthropic's Claude 2.1, OpenAI's GPT-4, and Meta's Llama 2 Chat 70b. We find that alignment with human responses shifts from one experiment to another, and that models differ from one another in their overall alignment, with GPT-4 taking a clear lead over all other models we tested. Nonetheless, even when LLM-generated responses are highly correlated with human responses, there are still systematic differences, with a tendency for models to exaggerate effects that are present among humans, in part by reducing variance. This warrants caution regarding proposals to replace human participants with current state-of-the-art LLMs in psychological research, and highlights the need for further research into the distinctive aspects of machine psychology.
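The first abstract's headline pattern, a linear relationship between moral evaluation and application judgments whose slope is steepest among natural-law endorsers, corresponds to a moderated linear regression. The sketch below is illustrative only, not the authors' analysis code; all variable names and the simulated data are hypothetical.

```python
# Illustrative sketch (not the authors' analysis): a moderated linear
# regression of application judgments on moral evaluations, with
# natural-law endorsement as the moderator. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
moral_eval = rng.uniform(1, 7, n)    # how moral the law seems (1-7)
natural_law = rng.uniform(1, 7, n)   # endorsement of natural law (1-7)
# Simulate the reported pattern: a positive slope on moral evaluation
# that grows with natural-law endorsement, plus noise.
apply_judgment = 1 + (0.3 + 0.08 * natural_law) * moral_eval + rng.normal(0, 1, n)

df = pd.DataFrame({"apply": apply_judgment,
                   "moral_eval": moral_eval,
                   "natural_law": natural_law})

# The interaction term tests whether the moral_eval -> apply slope is
# steeper for participants who endorse natural law more strongly.
model = smf.ols("apply ~ moral_eval * natural_law", data=df).fit()
print(model.params)
```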
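For the second abstract, the reported pattern (high human-model correlation alongside exaggerated effects and reduced variance) can be illustrated with a simple correlation-and-slope check on per-condition means. This is a minimal sketch with simulated placeholder numbers, not the paper's actual data or pipeline.

```python
# Illustrative sketch (not the paper's pipeline): compare per-condition
# mean ratings from humans and from an LLM. High correlation with a
# slope above 1 around the scale midpoint is the "exaggerated effects
# with reduced variance" signature the abstract describes.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-condition mean ratings on a 1-7 scale.
human = rng.uniform(2.0, 6.0, size=16)
# Simulated LLM means: tightly tracking human means, but with a steeper
# slope (exaggerated effects) and little noise (reduced variance).
llm = 4.0 + 1.5 * (human - 4.0) + rng.normal(0.0, 0.2, size=16)

r, _ = pearsonr(human, llm)
slope = np.polyfit(human, llm, deg=1)[0]
print(f"human-LLM alignment: r = {r:.2f}")
print(f"slope of LLM on human means: {slope:.2f} (>1 suggests exaggerated effects)")
```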