
Autonomous robots often need to operate with and around humans: to add value to the collaborative team, to earn and prove trustworthiness, and to make use of support from human teammates. The presence of humans brings additional challenges for robot decision-making, but also opportunities to improve decision-making capabilities with human help. Two major computational paradigms for sequential decision-making are planning and learning.

Planning in multi-stage robotic problems is not trivial, mainly because of computational cost (the curse of dimensionality) and the need for accurate models. To plan more efficiently, researchers use hierarchical abstractions (e.g., Task and Motion Planning, TAMP). Framing a problem as TAMP makes it possible to incorporate declarative knowledge and to achieve predictable, interpretable behavior. However, creating declarative models requires significant engineering effort, and it is practically impossible to account in advance for every case a robot might face during long-term operation in the wild. Life-long learning therefore presents itself as a necessity, and human help as a dependable source of knowledge!

Learning methods have achieved impressive capabilities purely by improving performance from experience (e.g., trial and error, human demonstrations, corrections). However, they generally struggle with the long-term consequences of actions and with problems that have combinatorial structure. They can produce solutions that contradict common sense, ignore causal effects, and forget previously learned skills (catastrophic forgetting). These issues are particularly prominent in life-long learning. Some of them might be avoided by using deliberate long-horizon reasoning (e.g., planning methods) and explicit human help.

Recently, there has been considerable research interest in combined approaches that exploit the synergies of planning and learning (e.g., neuro-symbolic AI). Still, a principled integration of human input into these combined approaches is missing. Human input can play an important role in bridging planning and learning, enabling reliable and trustworthy life-long learning with human help. It can be used for grounding learned models, providing common-sense knowledge, teaching skills, setting goals, and more.

In this workshop, we aim to bring together researchers from the fields of robot learning, symbolic AI (planning), and human-robot interaction to discuss emerging trends and to define common challenges and new opportunities for cross-fertilization among these fields. The workshop schedule includes invited talks, spotlight presentations, and interdisciplinary panel discussions.

We wish to provide some answers to these questions:


We focus on enabling life-long learning with human help through:


Andy Zeng
Google AI, USA
Dorsa Sadigh
Stanford University, USA
George Konidaris
Brown University, USA
Karinne Ramirez-Amaro
Chalmers University of Technology, Sweden
Meng Guo
Peking University, China
Nick Hawes
University of Oxford, UK
Peter Stone
The University of Texas at Austin, USA
Cédric Colas
MIT and INRIA, USA and France

Important dates


Zlatan Ajanović
TU Delft
Jens Kober
TU Delft
Jana Tumova
KTH Royal Institute of Technology
Christian Pek
TU Delft
Selma Musić
Stanford University

Supporting IEEE RAS technical committees

Technical Committee on Robot Learning
Technical Committee on Human-Robot Interaction and Communication
Technical Committee on Cognitive Robotics


This workshop is partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation; the European Research Council Starting Grant TERI "Teaching Robots Interactively", grant agreement no. 804907; and the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie Actions grant agreement no. 101025273.