Prof. Dr. Elmar Rückert


Professor

Ratzeburger Allee 160
23562 Lübeck
Building 64, Room 96 (ground floor)

Phone: +49 451 31015209
Fax: +49 451 31015204
Email: Elmar Rückert

Short Bio

Elmar Rueckert is Professor of Robotics and Autonomous Systems at the University of Luebeck and head of the research group "Neural Learning Methods for Robotics". In 2014, he received his PhD in computer science from Graz University of Technology. His dissertation on "biologically inspired motor learning methods for robots using probabilistic inference" was awarded summa cum laude. Thereafter he worked as a postdoctoral scientist at the Institute for Intelligent Autonomous Systems at the Technical University of Darmstadt. In 2016 he became leader of the research group "Neuronal Learning Methods for Robotics" and co-supervisor of two PhD students. At the same time, he became coordinator of an associated EU project on cognitive learning methods in robotics.

At the beginning of 2018, Elmar Rueckert was appointed to the University of Luebeck, where his research interests include learning autonomous systems. He gives Bachelor's and Master's lectures on humanoid robotics, probabilistic robotics, and machine learning. His research has contributed significantly to the understanding of stochastic and neural control and learning methods in robotics. His work on model predictive control with neural networks has been crucial to recent breakthroughs in real-time control strategies for humanoid robots with event-based neural networks.

Research Interests

Medical Robotics: Real-time Tumour Tracking, Probabilistic Motion Compensation Models, Prosthesis Control, Movement Decoding and Understanding, Brain-Computer Interfaces, Real-time Control, Interactive Learning from Human Feedback

Machine and Deep Learning: Deep Networks, Graphical Models, Probabilistic Inference, Variational Inference, Gaussian Processes, Transfer Learning, Message Passing, Clustering, Bayesian Optimization, Genetic Programming, LSTMs

Simulations and Computational Models: Robot Manipulation, Human Postural Control, Locomotion, Autonomous Driving, Long Short-Term Memory Models, Probabilistic Time-Series Models, Muscle Synergies, Hippocampal Models for Planning

Autonomous Systems: Movement Primitives, Reinforcement Learning, Imitation Learning, Morphological Computation, Quadruped Locomotion, Balancing Control with Humanoids, Deep Reinforcement Learning, Optimal Feedback Control

[SS2018] During the summer semester I am teaching the course Humanoid Robotics (RO5300). In this course I will discuss the key components of one of the most complex classes of autonomous systems. The topics are:

  1. Kinematics, Dynamics & Locomotion
  2. Representations of Skills & Imitation Learning
  3. Feedback Control, Priorities & Torque Control
  4. Reinforcement Learning & Policy Search
  5. Sensor Integration & Fusion
  6. Cognitive Reasoning & Planning

This course provides a unique overview of central topics in robotics. A particular focus is placed on the dependencies and interactions among the components in the control loop. These interactions are discussed in the context of state-of-the-art methods, including dynamical systems movement primitives, gradient-based policy search methods, and probabilistic inference for planning algorithms.

The students will also experiment with state-of-the-art machine learning methods and robot simulation tools in accompanying exercises. Hands-on tutorials on programming with Matlab, the robot middleware ROS, and the simulation tool V-REP complement the course content.
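To give a flavor of the methods covered, the following is a minimal, illustrative sketch of a one-dimensional dynamical systems movement primitive (DMP) in Python with NumPy. It is not taken from the course materials; all parameter names and values are assumptions chosen for clarity.

# Minimal sketch of a discrete dynamical movement primitive (DMP).
# Parameter names and gains (alpha_z, beta_z, alpha_s) are illustrative assumptions.
import numpy as np

def rollout_dmp(x0, g, weights, centers, widths,
                tau=1.0, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
    """Integrate a 1-D DMP from start x0 to goal g and return the position trajectory."""
    n_steps = int(tau / dt)
    x, v, s = x0, 0.0, 1.0              # position, velocity, canonical phase
    traj = np.empty(n_steps)
    for t in range(n_steps):
        # Gaussian basis functions evaluated on the phase variable s
        psi = np.exp(-widths * (s - centers) ** 2)
        # Forcing term shapes the trajectory and vanishes as the phase decays to zero
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
        # Spring-damper transformation system pulls x toward the goal g
        v_dot = (alpha_z * (beta_z * (g - x) - v) + f) / tau
        x_dot = v / tau
        s_dot = -alpha_s * s / tau       # canonical system (phase decay)
        v += v_dot * dt
        x += x_dot * dt
        s += s_dot * dt
        traj[t] = x
    return traj

# Example: with zero weights the primitive reduces to a plain goal-directed
# spring-damper motion from x0 = 0 to g = 1.
centers = np.linspace(1.0, 0.0, 10)
widths = np.full(10, 25.0)
trajectory = rollout_dmp(x0=0.0, g=1.0, weights=np.zeros(10),
                         centers=centers, widths=widths)

In the exercises, the weights of the forcing term would be fitted to demonstrations (imitation learning) or adapted by policy search, which connects this building block to the later course topics.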

For more information, please visit https://rob.ai-lab.science/teaching/

[WS2018/19] In the winter semester, I will teach a course on Probabilistic Learning for Robotics, which covers advanced topics including graphical models, factor graphs, probabilistic inference for decision making and planning, and computational models for inference in neuroscience. The content is not yet fixed and may change.