Autonomous robots often need to operate with and around humans: to add value to the collaborative team, to earn and demonstrate trustworthiness, and to make use of support from human teammates. The presence of humans brings additional challenges for robot decision-making, but also opportunities to improve decision-making capabilities with human help. Two major computational paradigms for sequential decision-making are planning and learning.
Planning in multi-stage robotic problems is not trivial, mainly due to limited computational efficiency (the curse of dimensionality) and the need for accurate models. To plan more efficiently, researchers use hierarchical abstractions such as Task and Motion Planning (TAMP). Formulating a problem as TAMP makes it possible to incorporate declarative knowledge and to achieve predictable, interpretable behavior. However, creating declarative models requires significant engineering effort, and it is practically impossible to account in advance for all the cases robots might face during long-term operation in the wild. Life-long learning is therefore a necessity, and human help a dependable source of knowledge!
Learning methods have achieved impressive capabilities solely by improving performance from experience (e.g., trial and error, human demonstrations, corrections). However, they generally struggle with the long-term consequences of actions and with problems that have combinatorial structure. They can produce solutions that contradict common sense, ignore causal effects, and forget previously learned skills (catastrophic forgetting). These issues are particularly prominent in life-long learning. Some of them might be avoided through deliberate long-horizon reasoning (e.g., planning methods) and explicit human help.
Recently, combined approaches that exploit the synergies of planning and learning (e.g., neuro-symbolic AI) have attracted considerable research interest. Still, a principled integration of human input into these combined approaches is missing. Human input can play an important role in bridging planning and learning, enabling reliable and trustworthy life-long learning with human help: it can ground learned models, provide common-sense knowledge, teach skills, set goals, and more.
In this workshop, we aim to bring together researchers from the fields of robot learning, symbolic AI (planning), and human-robot interaction to discuss emerging trends and to define common challenges and new opportunities for cross-fertilization between these fields. The workshop schedule includes invited talks, spotlight presentations, and interdisciplinary panel discussions.
We aim to provide some answers to these questions:
- How can we design long-term autonomous robots that operate side-by-side with humans, capable of intuitively communicating about and learning common sense and human goals?
- How can we utilize human input to improve robot generalization capabilities and skill transfer across tasks?
- How can we utilize learned models and human input for deliberate long-horizon reasoning?
- How can we ground learned models and plans so that they are interpretable by humans?
Topics
We focus on enabling life-long learning with human help through:
- Interactive imitation learning
- Learning from demonstration
- Task and motion planning
- Hierarchical reinforcement learning
- Safe reinforcement learning
- Skill learning, cross-domain skill transfer and generalization
- Lifelong learning and curriculum learning
- Learning general robotic task specifications
- Modeling, symbolic model acquisition, representation learning
- Neuro-symbolic AI for robotics
Speakers
- Andy Zeng, Google AI, USA
- Dorsa Sadigh, Stanford University, USA
- George Konidaris, Brown University, USA
- Karinne Ramirez-Amaro, Chalmers University of Technology, Sweden
- Meng Guo, Peking University, China
- Nick Hawes, University of Oxford, UK
- Peter Stone, The University of Texas at Austin, USA
- Cédric Colas, MIT, USA and INRIA, France
Important dates
- Workshop paper submission deadline: 28th April 2023 (AoE)
- Author notification: 12th May 2023
- Camera-ready: 24th May 2023
- Finalized workshop program: 25th May 2023
- Workshop date: 29th May 2023
Organizers
- Zlatan Ajanović, TU Delft
- Jens Kober, TU Delft
- Jana Tumova, KTH Royal Institute of Technology
- Christian Pek, TU Delft
- Selma Musić, Stanford University
Supporting IEEE RAS technical committees
- Technical Committee on Robot Learning
- Technical Committee on Human-Robot Interaction and Communication
- Technical Committee on Cognitive Robotics
Acknowledgements
This workshop is partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation; the European Research Council Starting Grant TERI “Teaching Robots Interactively”, grant agreement no. 804907; and the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie Actions grant agreement no. 101025273.