The social navigation and prediction community has reached a crucial point. Despite the large volume of publications, we lack a “common language.” Broadly speaking, we have yet to reach consensus about:
- The most meaningful skills, capabilities, and behaviors that a social navigation system should have.
- Validation standards. Practitioners adopt different scenarios, experimental setups, robot platforms, baselines, and metrics.
- The abstractions best suited for the challenges of social navigation.
- What constitutes a good social navigation dataset. Relatedly, it is unclear how to best use a given dataset (e.g., for training or validation).
- Statistical rigor. Social robot navigation and prediction evaluation studies often lack the statistical rigor seen in other research communities (e.g., experimental psychology).
Motivated by these observations, we invite a diverse, multidisciplinary audience to participate in an interactive, discussion-oriented workshop. We want to hear from roboticists, social scientists, and designers about how to develop best practices for social robot navigation research.
The workshop will be organized around focus topics, which are narrower “common language” challenges. After our invited speakers briefly present their perspectives on a focus topic, workshop participants will break off into small, interdisciplinary brainstorming workgroups composed of the invited speakers, the organizers, and the workshop attendees. Ultimately, we want the community to drive the conversation around the focus topics.
Focus topic 1: What are the most important variables that influence social robot navigation performance? In other words, what levers should we be exploring?
- What robot behaviors will enable untrained bystanders/pedestrians to navigate effectively around robots? What behavior models and interaction metaphors (human-like, dog-like, or otherwise) are most effective for enabling robot navigation among people?
- What social robot navigation performance metrics should we optimize for? How should we measure them? What are appropriate benchmarks? What are some good baselines? How do we ensure that these metrics are accessible to the community?
- How do we design social robot navigation studies that account for, and move beyond, novelty effects in our data?
Focus topic 2: What are the best abstractions for social robot navigation from the perspective of perception, planning, and control?
- What are the appropriate representations to capture important properties of collective crowd behavior?
- What is the right level of detail to include in context models? How does this change depend on the experimental setting?
- How do we encode behavior specifications into objective or reward functions?
- How do we verify that a robot navigation framework is correct and safe? How do we manage a safety-critical system without safety guarantees?
Focus topic 3: What properties should a social navigation dataset capture? How do we deal with uncontrollable variables in experimental settings?
- What aspects of human behavior can be simulated? What aspects of human behavior are most important for social navigation?
- How do simulation limitations inform reinforcement learning? For example, if we train on social-forces agents, can we quantify performance bounds when deploying in real-world settings?
- What is the correct reward function for reinforcement-learning-based social navigation?
- What role does supervised (e.g., deep) learning play in social navigation? How do we verify that we have not learned dataset-specific artifacts? How do we account for “distribution shift” in social navigation settings?
- Workshop paper submission deadline: April 9th (AOE)
- Notification of acceptance: Sunday July 12th (full day)
Oregon State University, Corvallis OR, USA.