Prof. Dr. Elmar Rückert


Professor

Ratzeburger Allee 160
23562 Lübeck
Building 64, Room 96 (ground floor)

Phone: +49 451 31015209
Fax: +49 451 31015204
Email: Elmar Rückert

Short-bio

Since February 2018, Elmar Rückert has held a tenure-track junior professorship (W1) at the Institute for Robotics and Autonomous Systems at the University of Lübeck. Prior to that, he was the research group leader of the neurorobotics division at the Intelligent Autonomous Systems (IAS) lab at Technische Universität Darmstadt.

He has strong expertise in recurrent neural networks, learning of movement primitives, probabilistic planning, and motor control in tasks with contacts. He was the team leader of the European project GOAL-Robots and was responsible for the learning approaches in the highly successful project CoDyCo.

Before joining IAS, he was with the Institute for Theoretical Computer Science at Graz University of Technology, where he received his Ph.D. with honors under the supervision of Wolfgang Maass.

Elmar Rückert remains associated with the Intelligent Autonomous Systems lab at Technische Universität Darmstadt as an adjunct scientist and visiting professor.

Research Interests

Medical Robotics: Real-time Tumour Tracking, Probabilistic Motion Compensation Models, Prosthesis Control, Movement Decoding and Understanding, Brain-Computer Interfaces, Real-time Control, Interactive Learning from Human Feedback

Machine and Deep Learning: Deep Networks, Graphical Models, Probabilistic Inference, Variational Inference, Gaussian Processes, Transfer Learning, Message Passing, Clustering, Bayesian Optimization, Genetic Programming, LSTMs

Simulations and Computational Models: Robot Manipulation, Human Postural Control, Locomotion, Autonomous Driving, Long Short-Term Memory Models, Probabilistic Time-Series Models, Muscle Synergies, Hippocampal Models for Planning

Autonomous Systems: Movement Primitives, Reinforcement Learning, Imitation Learning, Morphological Computation, Quadruped Locomotion, Balancing Control with Humanoids, Deep Reinforcement Learning, Optimal Feedback Control

[SS2018] During the summer semester, I am teaching the course Humanoid Robotics (RO5300). In this course I will discuss the key components of one of the most complex classes of autonomous systems. The topics are:

  1. Kinematics, Dynamics & Locomotion
  2. Representations of Skills & Imitation Learning
  3. Feedback Control, Priorities & Torque Control
  4. Reinforcement Learning & Policy Search
  5. Sensor Integration & Fusion
  6. Cognitive Reasoning & Planning

This course provides a unique overview of central topics in robotics. A particular focus is put on the dependencies and interactions among the components in the control loop. These interactions are discussed in the context of state-of-the-art methods, including dynamical systems movement primitives, gradient-based policy search methods, and probabilistic inference for planning algorithms.

The students also experiment with state-of-the-art machine learning methods and robotic simulation tools in accompanying exercises. Hands-on tutorials on programming with Matlab, the robot middleware ROS, and the simulation tool V-REP complement the course content.
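As a rough illustration of one of the methods mentioned above, the following minimal Python sketch integrates a one-dimensional discrete dynamic movement primitive in the style of Ijspeert et al. It is not course material; the gains, basis parameters, and weights are placeholder values chosen only for demonstration.

import numpy as np

# Minimal sketch of a one-dimensional discrete dynamic movement primitive (DMP).
# All constants below are illustrative placeholders, not tuned values.
alpha_z, beta_z, alpha_x, tau = 25.0, 6.25, 3.0, 1.0
n_basis = 10
centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers along the phase
widths = n_basis / centers                                   # simple heuristic for basis widths
weights = np.random.default_rng(0).normal(0.0, 50.0, n_basis)  # placeholder shape parameters

def forcing(x, g, y0):
    # Weighted Gaussian basis functions, gated by the phase variable x.
    psi = np.exp(-widths * (x - centers) ** 2)
    return (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)

y, v, x = 0.0, 0.0, 1.0        # position, velocity, phase
y0, g, dt = 0.0, 1.0, 0.001    # start, goal, integration step
trajectory = []
for _ in range(int(1.0 / dt)):
    f = forcing(x, g, y0)
    dv = (alpha_z * (beta_z * (g - y) - v) + f) / tau  # transformation system
    dy = v / tau
    dx = -alpha_x * x / tau                            # canonical system: phase decays to 0
    v, y, x = v + dv * dt, y + dy * dt, x + dx * dt
    trajectory.append(y)
# trajectory now contains a smooth movement from y0 towards the goal g.

The transformation system pulls the state towards the goal like a spring-damper, while the phase-gated forcing term shapes the trajectory; learning a primitive from demonstrations amounts to fitting the basis weights.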

For more information, please visit https://rob.ai-lab.science/teaching/

 

[WS2018/19] In the winter semester, I will teach a course on Probabilistic Learning for Robotics, which covers advanced topics including graphical models, factor graphs, probabilistic inference for decision making and planning, and computational models of inference in neuroscience. The content is not yet fixed and may change.