Self-adaptive context aware audio localization for robots using parallel cerebellar models

Barrett-Baxendale, Mark and Pearson, M.J. and Nibouche, M. and Secco, Emanuele Lindo and Pipe, A.G. (2017) Self-adaptive context aware audio localization for robots using parallel cerebellar models. In: Unspecified. (Accepted for Publication)


Abstract

An audio sensor system is presented that uses multiple cerebellar models to determine the acoustic environment in which a robot is operating, allowing the robot to select the appropriate model to calibrate its audio-motor map for the detected environment. The use of the adaptive filter model of the cerebellum in a variety of robotics applications has demonstrated the utility of the so-called cerebellar chip. This paper combines the notion of cerebellar calibration of a distorted audio-motor map with the use of multiple parallel models to predict the context (acoustic environment) within which the robot is operating. The system identified the correct context, from among seven acoustic environments, in almost 70% of the cases tested.
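The paper itself is not reproduced here, but the core idea the abstract describes — a bank of adaptive-filter "cerebellar" models run in parallel, with the best-predicting model indicating the current acoustic context — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the contexts are modelled as hypothetical FIR responses, and a plain LMS filter stands in for the cerebellar adaptive filter; the actual paper's models, features, and parameters are not specified here.

```python
import random

class LMSFilter:
    """Least-mean-squares adaptive FIR filter — an illustrative stand-in
    for the cerebellar adaptive-filter model mentioned in the abstract."""
    def __init__(self, taps, mu):
        self.w = [0.0] * taps  # filter weights
        self.mu = mu           # learning rate
    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))
    def step(self, x, d):
        # x: most-recent-first input window, d: desired (observed) output
        e = d - self.predict(x)
        self.w = [wi + self.mu * e * xi for wi, xi in zip(self.w, x)]
        return e

def simulate(h, n, rng):
    """Yield (input window, output) pairs from a hypothetical acoustic
    context modelled as an FIR impulse response h."""
    buf = [0.0] * len(h)
    for _ in range(n):
        buf = [rng.uniform(-1.0, 1.0)] + buf[:-1]
        yield list(buf), sum(hi * xi for hi, xi in zip(h, buf))

# Hypothetical impulse responses for three acoustic environments.
contexts = {
    "anechoic":   [1.0, 0.0, 0.0, 0.0],
    "small_room": [0.6, 0.3, 0.1, 0.0],
    "hall":       [0.4, 0.3, 0.2, 0.1],
}

# Train one adaptive filter per known context.
rng = random.Random(0)
bank = {}
for name, h in contexts.items():
    f = LMSFilter(4, 0.1)
    for x, d in simulate(h, 3000, rng):
        f.step(x, d)
    bank[name] = f

def predict_context(h_true, rng):
    """Run all models in parallel on the same audio stream; the model
    with the lowest accumulated prediction error names the context."""
    errs = {name: 0.0 for name in bank}
    for x, d in simulate(h_true, 500, rng):
        for name, f in bank.items():
            errs[name] += (d - f.predict(x)) ** 2
    return min(errs, key=errs.get)

print(predict_context(contexts["hall"], random.Random(1)))  # expect "hall"
```

The selection rule (lowest prediction error wins) is the key mechanism: each model is only a good predictor of the environment it was trained in, so comparative prediction error doubles as a context detector, which can then gate the corresponding audio-motor calibration.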

Item Type: Conference or Workshop Item (Paper)
Additional Information and Comments: The final publication is available at link.springer.com
Faculty / Department: Faculty of Human and Digital Sciences > Mathematics and Computer Science
Depositing User: Emanuele Secco
Date Deposited: 27 Mar 2017 15:15
Last Modified: 25 May 2017 13:42
URI: https://hira.hope.ac.uk/id/eprint/1907
