Kar, Reshma and Ghosh, Lidia and Konar, Amit and Chakraborty, Aruna and Nagar, Atulya K. (2021) EEG-Induced Autonomous Game-Teaching to a Robot Arm by Human Trainers Using Reinforcement Learning. IEEE Transactions on Games. ISSN 2475-1502
Text: EEG-Induced Autonomous Game-Teaching to a Robot Arm by Human Trainers Using Reinforcement Learning.pdf - Accepted Version (531kB)
Abstract
This paper deals with a simple indoor game in which the player has to pass a ball through a ring fixed on a variable pan-tilt platform. The motivation of the research is for a robot arm to learn the gaming actions of an experienced player and subsequently train younger children (trainees) in the game. The robot learns the player's actions at different game states, determined by the pan-tilt orientation of the ring and its radial distance from the player. The actions of the experienced player/expert are defined by six parameters: three junction coordinates in the player's right arm and the 3-dimensional speed of the ball in a given throw. Reinforcement learning is employed to adapt the state-action probability matrix of a probabilistic learning automaton based on the reward (or penalty) scores the player receives for success (or failure) in passing the ball through the ring. A hybrid brain-computer interface (BCI) detects failures in the player's gaming actions through the natural arousal of the Error-related Potential (ErrP) signal following motor execution, which is itself indicated by motor imagery. In the absence (presence) of an ErrP after a motor imagination, the system records a success (failure) in the player's trial and adapts the probabilities in the learning automaton accordingly for each game instance. After the state-action probability matrix converges, it is used for planning: the action with the highest probability at a given state of the automaton is selected for execution. The robot can then autonomously teach the game to children using the learning automaton with converged probability scores. Experiments confirm that the success rate of the robot arm in the motor execution phase is very high (above 90%) when the ring is placed at a moderate distance of 4 feet from the robot.
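The learning scheme summarised in the abstract (a probabilistic learning automaton whose state-action probabilities are rewarded or penalised according to ErrP-based success/failure detection, followed by planning with the most probable action per state) can be illustrated with a short sketch. The sketch below assumes a standard linear reward-penalty style update and invented state/action counts and learning rates (`N_STATES`, `N_ACTIONS`, `ALPHA`, `BETA`); it is not the authors' exact formulation.

```python
# Minimal sketch of the state-action probability adaptation described in the
# abstract, assuming a linear reward-penalty learning-automaton update; the
# paper's exact update rule, state discretisation, and learning rates are not
# given here and are assumptions for illustration only.
import numpy as np

N_STATES = 12    # hypothetical number of pan-tilt/distance game states
N_ACTIONS = 8    # hypothetical number of discretised throwing actions
ALPHA = 0.1      # reward learning rate (assumed)
BETA = 0.05      # penalty learning rate (assumed)

# State-action probability matrix, initialised uniformly.
P = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)

def update(state: int, action: int, success: bool) -> None:
    """Adapt the probability row for `state` after one game instance.

    In the paper's setting, `success` would be derived from the hybrid BCI:
    absence of an ErrP following the motor imagery is treated as success,
    presence as failure.
    """
    p = P[state]
    if success:
        # Reward: shift probability mass towards the chosen action.
        p[:] = (1 - ALPHA) * p
        p[action] += ALPHA
    else:
        # Penalty: reduce the chosen action's probability and renormalise.
        p[action] *= (1 - BETA)
        p[:] = p / p.sum()

def plan(state: int) -> int:
    """After convergence, pick the highest-probability action for a state."""
    return int(np.argmax(P[state]))
```

Here `plan` corresponds to the post-convergence execution phase described in the abstract, where the robot selects the action with the highest converged probability for the current ring orientation and distance.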
Item Type: | Article |
---|---|
Additional Information and Comments: | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available from: https://ieeexplore.ieee.org/document/9599488 |
Keywords: | Brain-Computer Interfaces, Reinforcement Learning, Gaming, Event-Related Potentials, Event-Related Desynchronization/Synchronization. |
Faculty / Department: | Faculty of Human and Digital Sciences > Mathematics and Computer Science |
Depositing User: | Atulya Nagar |
Date Deposited: | 23 Nov 2021 11:55 |
Last Modified: | 23 Nov 2021 11:55 |
URI: | https://hira.hope.ac.uk/id/eprint/3434 |