Learning Hierarchical Acquisition Functions for Bayesian Optimization

Type of publication: Article in conference proceedings
Publication status: Accepted
Venue: International Conference on Intelligent Robots and Systems
Year: 2020
Published in: Proceedings of International Conference on Intelligent Robots and Systems (IROS)
Location: Las Vegas, USA
URL: https://rob.ai-lab.science/wp/...
Abstract: Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates, or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. To reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian process implements the modeling and sampling of the acquisition function. This enables rapid learning with large learning rates, while a mental-replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and a physiological humanoid postural balancing task. The method outperforms standard optimization techniques, such as Bayesian optimization, in the number of interactions needed to solve the task, in its computational demands, and in the frequency of observed failures. Further, we show that our method performs similarly to humans when learning the postural balancing task by comparing our simulation results with real human data.
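The loop the abstract describes (a Gaussian-process surrogate, maximization of an acquisition function, and inhibition of policy regions that led to failures) can be sketched as below. This is a minimal illustrative toy, not the authors' implementation: the UCB acquisition, the penalty radius, the 1-D objective, and the "unsafe region" near x = 0.7 are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at test points Xs (zero prior mean)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)  # prior variance is 1
    return mu, np.maximum(var, 1e-12)

def penalized_ucb(mu, var, Xs, failures, beta=2.0, radius=0.1, penalty=10.0):
    """UCB acquisition with memorized failure regions inhibited (assumed form)."""
    acq = mu + beta * np.sqrt(var)
    for f in failures:
        acq -= penalty * (np.linalg.norm(Xs - f, axis=1) < radius)
    return acq

def objective(x):
    """Toy reward on [0, 1] (assumed, not from the paper)."""
    return np.sin(3 * np.pi * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 1))          # initial policy-parameter samples
y = objective(X.ravel())
failures = []                          # memorized failure locations
grid = np.linspace(0, 1, 200)[:, None]

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    acq = penalized_ucb(mu, var, grid, failures)
    x_next = grid[np.argmax(acq)]
    r = float(objective(x_next[0]))
    if abs(x_next[0] - 0.7) < 0.05:    # pretend this region causes a failure
        failures.append(x_next)        # memorize it so sampling avoids it
    X = np.vstack([X, x_next])
    y = np.append(y, r)

best = float(X[np.argmax(y), 0])       # best policy parameter found
```

Replacing the hand-rolled GP with a library implementation (e.g. scikit-learn's `GaussianProcessRegressor`) would work the same way; the key point is only that the acquisition is suppressed near memorized failures before it is maximized.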
Keywords: Reinforcement Learning
Authors: Rottmann, Nils
Kunavar, Tjasa
Babic, Jan
Peters, Jan
Rueckert, Elmar
Attachments
     [Bib|RIS]