ANDREW TOSHIAKI NAKAYAMA KURAUCHI
Search Results
Now showing 1 - 8 of 8
Capstone Project: Development of a dashboard for real-time student performance data visualization (2024)
Carvalho, Arthur Ferreira; Carreras, Natália Queiroz Menezes; Mahfuz, Pedro Osborn
This project provides instructors who use the PrairieLearn platform with visual tools and insights into student performance through an external dashboard. The data used to build the dashboard is pulled from PrairieLearn's API. The objective is to help instructors shape their courses to maximize academic engagement and performance. The PrairieLearn platform offers a dynamic and interactive environment for students to engage with course material, enabling instructors to create customizable quizzes, assignments, and assessments tailored to the evolving educational landscape. The platform's built-in analytics are limited to a statistics table of average scores and completion times; this project expands the analytical tools available to educators with an external tool. By integrating features such as performance metric analysis, question score histograms, and assessment completion percentage tracking, the dashboard equips instructors with a detailed view of student performance. This allows for a deeper understanding of assessment effectiveness, helping educators identify learning gaps, adjust teaching strategies, and customize content to meet individual student needs. The anticipated outcome is a user-friendly dashboard that provides insights into students' learning patterns, allowing instructors to make informed decisions and improve educational outcomes.
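As an illustration of the data-pull step this abstract describes, the minimal sketch below queries PrairieLearn's v1 REST API for assessment results and plots a per-assessment score histogram. The host, token, and course instance ID are placeholders, and field names such as score_perc and assessment_id should be verified against the current API documentation.

```python
# Hedged sketch: pull per-assessment scores from PrairieLearn's v1 REST API
# and plot score histograms. Token, host, and course instance ID are
# placeholders; verify endpoint paths and field names against the API docs.
import requests
import matplotlib.pyplot as plt

BASE = "https://us.prairielearn.com/pl/api/v1"
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # generated in PrairieLearn user settings
COURSE_INSTANCE_ID = 12345            # hypothetical course instance

def get(path):
    resp = requests.get(f"{BASE}{path}", headers={"Private-Token": TOKEN})
    resp.raise_for_status()
    return resp.json()

assessments = get(f"/course_instances/{COURSE_INSTANCE_ID}/assessments")
for assessment in assessments:
    instances = get(
        f"/course_instances/{COURSE_INSTANCE_ID}"
        f"/assessments/{assessment['assessment_id']}/assessment_instances"
    )
    scores = [i["score_perc"] for i in instances if i["score_perc"] is not None]
    if scores:
        plt.hist(scores, bins=10, range=(0, 100))
        plt.title(f"Score distribution: {assessment['title']}")
        plt.xlabel("Score (%)")
        plt.ylabel("Students")
        plt.show()
```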
Capstone Project: Development of an Administrator Web App for subscriptions' management and a Mobile App for unlocking E-bikes (2024)
Ades, Cesar Ezra; Hadba, Lila Takahashi; Kawahara, Thiago Shiguero
This project aims to develop a Web Application (App) for the Administrator (E-Moving personnel) and a user-facing Mobile App for E-Moving, a rental electric bike (E-Bike) company focused on improving urban mobility. It builds upon a prior Capstone Project (PFE), which developed a control board for electric bicycles in 2023. The current initiative seeks to improve E-Moving's profitability and control by enabling the company to remotely block users' e-bikes, mitigating the risks of theft and payment default. The project, a collaboration between students from Insper (São Paulo, Brazil) and Texas A&M (Texas, United States), involves Insper students developing the two Apps in accordance with the requirements established by the previous project and by the Texas A&M students; the hardware component is being developed by the Texas A&M students. The project is structured into five primary segments: (i) Web App and Mobile App screen flowchart, (ii) Web App and Mobile App screen design and front-end implementation, (iii) Bluetooth connection with the E-bike, (iv) Database integration with both Web App and Mobile App front-ends, and (v) Integration of Bluetooth, Web App, Mobile App, and E-bike. The result is a practical application that enables administrators (E-Moving personnel) to remotely monitor clients' E-bikes and lets each client manage their own E-bike.

Conference Paper: Active learning approaches applied in teaching agile methodologies (2023)
GRAZIELA SIMONE TONIN; FABIO ROBERTO DE MIRANDA; ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Montagner, Igor; Agena, Barbara; Barth, Fabrício J.
We need to modernize education to form adaptable leaders who can tackle evolving challenges in our dynamic world. Insper's computer science program is designed to reflect this need with an innovative infrastructure, curriculum, and industry partnerships. We use active learning methodologies to teach agile methodologies and develop the soft skills needed to solve real-world problems. Our focus is on non-violent communication, feedback techniques, and teamwork, along with constant interaction with industry professionals who share their experiences with students. Our goal is to provide students with a well-rounded education that equips them for success in the digital age. This work-in-progress research project describes our approach to teaching and our objective of preparing students for the future in the context of an innovative first-semester experience in a CS program.

Conference Paper: EyeSwipe: Dwell-free Text Entry Using Gaze Paths (2016)
ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Feng, Wenxin; Joshi, Ajjen; Morimoto, Carlos; Betke, Margrit
Text entry using gaze-based interaction is a vital communication tool for people with motor impairments. Most solutions require the user to fixate on a key for a given dwell time to select it, thus limiting the typing speed. In this paper we introduce EyeSwipe, a dwell-time-free gaze-typing method. With EyeSwipe, the user gaze-types the first and last characters of a word using the novel selection mechanism "reverse crossing." To gaze-type the characters in the middle of the word, the user only needs to glance at the vicinity of the respective keys. We compared the performance of EyeSwipe with that of a dwell-time-based virtual keyboard. EyeSwipe afforded statistically significantly higher typing rates and more comfortable interaction in experiments with ten participants, who reached 11.7 words per minute (wpm) after 30 minutes of typing with EyeSwipe.
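A minimal sketch of segment (iii) of the E-Moving capstone above, the Bluetooth link to the e-bike: it connects to the bike's controller over BLE (here with the Python bleak library) and writes a lock/unlock command. The device address, characteristic UUID, and one-byte command protocol are all hypothetical; the real protocol is defined by the hardware developed by the Texas A&M team.

```python
# Hedged sketch: write a lock/unlock command to an e-bike controller over BLE.
# Address, characteristic UUID, and command bytes are hypothetical.
import asyncio
from bleak import BleakClient

BIKE_ADDRESS = "AA:BB:CC:DD:EE:FF"                       # hypothetical device MAC
LOCK_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic

async def set_lock_state(unlock: bool) -> None:
    async with BleakClient(BIKE_ADDRESS) as client:
        command = b"\x01" if unlock else b"\x00"  # hypothetical: 0x01 unlock, 0x00 lock
        await client.write_gatt_char(LOCK_CHAR_UUID, command, response=True)

asyncio.run(set_lock_state(unlock=True))
```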
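The gaze-path idea behind EyeSwipe can be sketched as a shape-matching problem, as in gesture keyboards: resample the recorded gaze path and rank lexicon words by their distance to each word's ideal path through key centers. This only illustrates the principle; EyeSwipe's actual candidate ranking and its reverse-crossing selection are described in the paper, and the key coordinates below are made up.

```python
# Hedged sketch: rank lexicon words by distance between a gaze path and each
# word's ideal path through key centers. Layout coordinates are illustrative.
import numpy as np

KEY_CENTERS = {  # illustrative normalized key positions
    "h": (0.55, 0.5), "e": (0.25, 0.2), "l": (0.85, 0.5), "o": (0.80, 0.2),
}

def resample(path, n=32):
    """Resample a polyline to n equally spaced points."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    return np.column_stack([
        np.interp(targets, cum, path[:, 0]),
        np.interp(targets, cum, path[:, 1]),
    ])

def word_path(word):
    # drop repeated letters ("hello" -> "helo"): a swipe visits the key once
    chars = [c for i, c in enumerate(word) if i == 0 or c != word[i - 1]]
    return [KEY_CENTERS[c] for c in chars]

def score(gaze_path, word):
    """Mean point-to-point distance between resampled paths (lower is better)."""
    return np.mean(np.linalg.norm(resample(gaze_path) - resample(word_path(word)), axis=1))

gaze = [(0.54, 0.52), (0.30, 0.25), (0.82, 0.48), (0.81, 0.22)]  # noisy path
lexicon = ["hello", "hole", "heel"]
print(sorted(lexicon, key=lambda w: score(gaze, w)))  # best candidate first
```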
Scientific Article: An investigation of the distribution of gaze estimation errors in head mounted gaze trackers using polynomial functions (2018)
Mardanbegi, Diako; ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Morimoto, Carlos H.
Second order polynomials are commonly used for estimating the point-of-gaze in head-mounted eye trackers. Studies in remote (desktop) eye trackers show that although some non-standard 3rd order polynomial models can provide better accuracy, higher-order polynomials do not necessarily provide better results. Unlike remote setups, where gaze is estimated over a relatively narrow field-of-view surface (e.g. less than 30x20 degrees on typical computer displays), head-mounted gaze trackers (HMGT) often need to cover a relatively wide field-of-view to make sure that gaze is detected in the scene image even for extreme eye angles. In this paper we investigate the behavior of the gaze estimation error distribution throughout the image of the scene camera when using polynomial functions. Using simulated scenarios, we describe the effects of four different sources of error: interpolation, extrapolation, parallax, and radial distortion. We show that the use of third order polynomials results in more accurate gaze estimates in HMGT, and that the use of wide angle lenses might be beneficial in terms of error reduction.

HGaze Typing: Head-Gesture Assisted Gaze Typing (2021)
Feng, Wenxin; Zou, Jiangnan; ANDREW TOSHIAKI NAKAYAMA KURAUCHI; Morimoto, Carlos; Betke, Margrit
This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study found that HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections.
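The calibration step analyzed in the gaze-estimation article above can be sketched as a bivariate least-squares fit: a second-order polynomial maps pupil coordinates to scene-image coordinates, one polynomial per axis. The data below is synthetic; a real HMGT calibration would use detected pupil centers and known target positions.

```python
# Hedged sketch: fit the standard second-order polynomial gaze mapping by
# least squares over calibration points. Synthetic data stands in for real
# pupil detections and calibration targets.
import numpy as np

def design_matrix(x, y):
    """Second-order polynomial terms: 1, x, y, xy, x^2, y^2."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# synthetic calibration data: pupil positions and corresponding target points
rng = np.random.default_rng(0)
px, py = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
gx = 320 + 280 * px + 15 * px**2 + rng.normal(0, 1.0, 50)   # synthetic truth
gy = 240 + 200 * py + 10 * px * py + rng.normal(0, 1.0, 50)

A = design_matrix(px, py)
coeff_x, *_ = np.linalg.lstsq(A, gx, rcond=None)  # one polynomial per axis
coeff_y, *_ = np.linalg.lstsq(A, gy, rcond=None)

# estimate gaze in scene-image coordinates for a new pupil sample
sample = design_matrix(np.array([0.1]), np.array([-0.3]))
print(sample @ coeff_x, sample @ coeff_y)
```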
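One plausible way to detect the head gestures HGaze Typing uses for commands is to classify a short window of head pitch and yaw angles by oscillation amplitude and zero crossings: a pitch oscillation reads as a nod, a yaw oscillation as a shake. The thresholds below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: classify a window of head pose angles as nod / shake / none.
# Thresholds are illustrative, not taken from the HGaze Typing paper.
import numpy as np

def oscillation(signal, min_amp_deg=4.0, min_crossings=2):
    """True if the signal oscillates around its mean with enough amplitude."""
    centered = np.asarray(signal, dtype=float) - np.mean(signal)
    crossings = np.count_nonzero(np.diff(np.sign(centered)) != 0)
    return np.ptp(centered) >= min_amp_deg and crossings >= min_crossings

def classify_gesture(pitch_deg, yaw_deg):
    nod, shake = oscillation(pitch_deg), oscillation(yaw_deg)
    if nod and not shake:
        return "nod"      # e.g. confirm a word candidate
    if shake and not nod:
        return "shake"    # e.g. delete / cancel
    return "none"

t = np.linspace(0, 1, 30)  # one second of head pose samples
print(classify_gesture(5 * np.sin(2 * np.pi * 3 * t), np.zeros_like(t)))  # "nod"
```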
Capstone Project: Dynamic Adaptation of Graphical User Interfaces in Augmented Reality Based on Environmental Factors: Enhancing User Experience and Safety (2024)
Santos, André Corrêa; Barão, Pedro Bittar; Lima, Rafael Melhado Araujo
Augmented Reality (AR) has made great strides in further blending real and virtual environments by bringing additional information and virtual controls into real-world scenarios. Adequate and responsive positioning and integration of virtual elements are therefore fundamental in AR to provide users with a seamless experience. This project aims to develop an AR solution that makes graphical user interfaces (GUI), such as interactive panels, respond dynamically to their environment. The solution automatically adjusts the positioning and appearance of graphical interfaces based on environmental conditions, such as changes in lighting, the presence of important objects (such as warning signs), the presence of people, and possible safety risks to the user. To achieve this, multiple computer vision techniques are used to detect and classify objects, while a Generative Artificial Intelligence (AI) model interprets more nuanced contextual data, such as user text input. This allows interfaces to reposition themselves so as not to block the user's view of points of interest, and to adjust colors and contrast for legibility and visual comfort. The development of the solution has been guided by user testing to ensure effectiveness and an intuitive experience. As of this report, a prototype has been developed that can adjust the positioning of the GUI to avoid occluding specific object classes based on a text input. Additionally, it modifies the color of GUI elements to complement the dominant colors behind them in the camera image.

Capstone Project: Use of security cameras for firearm detection (2024)
Possato, Eric Andrei Lima; Jesus, Matheus Aguiar de; Pinto, Pedro Altobelli Teixeira; Silva, Pedro Antonio
In this project we developed a "middleware" that works alongside the Defense IA software from Intelbras, a Brazilian company specialized in products and solutions for security, communication, networking, and energy, with a strong presence in initiatives such as smart cities. Defense IA is an advanced surveillance system that uses artificial intelligence for monitoring and access control, offering features such as facial recognition, vehicle license plate reading, and detection of suspicious behavior. The middleware we developed adds a new capability to Defense IA: it captures, analyzes, and processes video streams from RTSP (Real Time Streaming Protocol) cameras, focusing on the identification of firearms in the images. The goal was to create an intermediary service that operates autonomously, analyzing the images in real time and sending alerts after a triage process. This allows for efficient integration with the Intelbras software, adding a new function to Defense IA.
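The color-adaptation step in the AR capstone above can be sketched with standard OpenCV building blocks: estimate the dominant color behind the panel region of a camera frame with k-means, then choose the panel text color by luminance contrast. The region coordinates and file name are placeholders; the project's full pipeline also repositions the panel away from detected objects.

```python
# Hedged sketch: dominant color behind a GUI region via k-means, then a
# black/white text choice by luminance. Region and frame are placeholders.
import cv2
import numpy as np

def dominant_color(frame_bgr, region, k=3):
    x, y, w, h = region
    pixels = frame_bgr[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return centers[np.argmax(counts)]  # BGR center of the largest cluster

def text_color_for(bgr):
    b, g, r = bgr
    luminance = 0.114 * b + 0.587 * g + 0.299 * r  # Rec. 601 luma
    return (0, 0, 0) if luminance > 128 else (255, 255, 255)

frame = cv2.imread("camera_frame.jpg")  # placeholder camera frame
background = dominant_color(frame, region=(100, 100, 200, 120))
print(text_color_for(background))
```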
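A minimal sketch of the middleware loop described in the last abstract: read an RTSP stream with OpenCV, run a firearm detector on each frame, and only post an alert after a simple temporal triage (several consecutive positive frames) to reduce false positives. The detector stub and alert endpoint are placeholders; the real service integrates with Intelbras Defense IA.

```python
# Hedged sketch: RTSP capture, per-frame detection, and alert after triage.
# The stream URL, alert endpoint, and detector are placeholders.
import cv2
import requests

RTSP_URL = "rtsp://user:pass@camera-ip:554/stream"   # placeholder camera
ALERT_URL = "https://defense-ia.example/api/alerts"  # hypothetical endpoint

def detect_gun(frame) -> bool:
    """Placeholder for the trained firearm detector (e.g. a YOLO-family model)."""
    return False

cap = cv2.VideoCapture(RTSP_URL)
consecutive, THRESHOLD = 0, 5  # triage: require 5 consecutive detections
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    consecutive = consecutive + 1 if detect_gun(frame) else 0
    if consecutive == THRESHOLD:
        requests.post(ALERT_URL, json={"camera": RTSP_URL, "event": "firearm"})
cap.release()
```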