Cable SCARA Robot Controlled by a Neural Network Using Reinforcement Learning
Authors
Okabe, Eduardo
Paiva, Victor
Silva-Teixeira, Luis H.
Izuka, Jaime
Advisor
Co-advisors
Citations in Scopus
Document type
Date
2023
Abstract
In this work, three reinforcement learning algorithms (Proximal Policy Optimization, Soft Actor-Critic, and Twin Delayed Deep Deterministic Policy Gradient) are employed to control a two-link selective compliance articulated robot arm (SCARA). This robot has three cables attached to its end-effector, which create a triangular-shaped workspace. Positioning the end-effector inside the workspace is a relatively simple kinematic problem, but moving outside this region, although possible, requires a nonlinear dynamic model and a state-of-the-art controller. To solve this problem in a simple manner, the reinforcement learning algorithms are used to find possible trajectories to three targets outside the workspace. Additionally, the SCARA mechanism offers two possible configurations for each end-effector position. The algorithms are compared in terms of displacement error, velocity, and standard deviation among ten trajectories produced by the trained network. The results indicate that Proximal Policy Optimization is the most consistent in the analyzed situations, while Soft Actor-Critic presented better solutions and Twin Delayed Deep Deterministic Policy Gradient provided interesting and more unusual trajectories.
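The two configurations per end-effector position mentioned in the abstract correspond to the elbow-down and elbow-up branches of the planar two-link inverse kinematics. A minimal sketch of how both branches are computed (link lengths and function names are illustrative assumptions, not taken from the paper):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics of a planar two-link arm.

    Returns the two joint-angle solutions (elbow-down and elbow-up)
    that place the end-effector at (x, y). Link lengths l1 and l2
    are illustrative defaults, not values from the paper.
    """
    r2 = x * x + y * y
    # Law of cosines for the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the reachable workspace")
    s2 = math.sqrt(1.0 - c2 * c2)
    solutions = []
    for sign in (+1.0, -1.0):  # elbow-down / elbow-up branches
        theta2 = math.atan2(sign * s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(
            l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions
```

Inside the triangular workspace either branch is a valid pose, which is one reason positioning there is a simple kinematic problem; the learned controllers are only needed for targets beyond this region.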
Keywords
Algorithms; Cables; Reinforcement learning; Robots; Artificial neural networks
Journal title
Journal of Computational and Nonlinear Dynamics
DOI
Book title
URL in Scopus
Language
en
Notes
Committee members
CNPq Knowledge Area
OTHER