There are two ways of spreading light: to be the candle or the mirror that reflects it.
— Edith Wharton


Mirror Eyes: Explainable Human-Robot Interaction at a Glance

Matti Krüger, Daniel Tanneberg, Chao Wang, Stephan Hasler, Michael Gienger

Abstract

The gaze of a person tends to reflect their interest. This work explores what happens when this statement is taken literally and applied to robots. We present a robot system that employs a moving robot head with a screen-based eye model, which can direct the robot's gaze to points in physical space and present a reflection-like mirror image of the attended region on top of each eye. We conducted a user study with 33 participants, who were asked to instruct the robot to perform pick-and-place tasks, monitor the robot's task execution, and interrupt it in case of erroneous actions. Despite a deliberate lack of instructions about the role of the eyes and only a very brief exposure to the system, participants felt more aware of the robot's information processing, detected erroneous actions earlier, and rated the user experience higher when eye-based mirroring was enabled than with non-reflective eyes. These results suggest that the introduced method can be used beneficially and intuitively in cooperative human-robot interaction.

Cooperative Eyes

Effective cooperation on complex tasks often depends on clear communication between agents. However, verbal communication can be cumbersome and ambiguous, particularly in tasks involving spatial references and physical manipulation. Non-verbal cues, such as pointing, offer a practical alternative or supplement to spoken language. In human-robot interaction, robotic eyes show particular promise as pointing devices, building on the natural communicative role that eye gaze already plays in human-human interaction. Screen-based eyes add expressive flexibility that can extend this communication medium even beyond human limits. One such extension is a feature we call Mirror Eyes. In essence, Mirror Eyes add a reflection-like mirror image of the attended region on top of each eye. In this study, we investigated how using Mirror Eyes on a movable robot head affects a cooperative human-robot interaction setting.
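As a rough illustration of the mechanism, the following Python sketch shows the two ingredients described above: orienting a screen-based eye toward a 3D point, and compositing a crop of the attended image region onto the iris as a mock reflection. This is not the authors' implementation; all function names, coordinate conventions, and parameters are illustrative assumptions.

import numpy as np

def gaze_angles(eye_pos, target_pos):
    """Yaw/pitch (radians) that orient an eye toward a 3D target.
    Assumed robot frame: x forward, y left, z up."""
    d = np.asarray(target_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    yaw = np.arctan2(d[1], d[0])                    # rotation about the z axis
    pitch = np.arctan2(d[2], np.hypot(d[0], d[1]))  # elevation above the x-y plane
    return yaw, pitch

def composite_mirror(eye_img, frame, attended_box, iris_center, iris_radius):
    """Paste a crop of the attended camera region onto the iris area of the
    rendered eye image, producing a reflection-like overlay."""
    x0, y0, x1, y1 = attended_box
    crop = frame[y0:y1, x0:x1]
    size = int(2 * iris_radius)
    # Nearest-neighbor resize of the crop to the iris diameter.
    ys = np.linspace(0, crop.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, size).astype(int)
    patch = crop[ys][:, xs]
    cx, cy = iris_center
    out = eye_img.copy()
    out[cy - iris_radius:cy + iris_radius,
        cx - iris_radius:cx + iris_radius] = patch
    return out

# Example: eye mounted at the robot head, attended object ~1 m ahead and left.
yaw, pitch = gaze_angles(eye_pos=(0.0, 0.05, 1.2), target_pos=(1.0, 0.2, 0.8))
eye_img = np.full((200, 200, 3), 255, dtype=np.uint8)  # blank "eye" screen
frame = np.zeros((480, 640, 3), dtype=np.uint8)        # stand-in camera frame
shown = composite_mirror(eye_img, frame, (300, 200, 380, 260), (100, 100), 30)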

Experiment

Participants were asked to verbally instruct a robot to carry out specific pick-and-place tasks, monitor the robot's execution of the request for correctness, and interrupt it verbally in case of erroneous actions. The task was carried out in two conditions: one with the Mirror Eyes feature enabled and one with the eyes present but without the mirroring effect (Eyes-Only). In a subset of trials, the robot was programmed to either pick up the wrong object (step 1 error) or place it on the wrong target (step 2 error).
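To make the design concrete, here is a hypothetical Python encoding of the trial structure. The condition and error-type labels follow the text above; the trial counts and helper names are illustrative assumptions, since the exact trial schedule is not given here.

from dataclasses import dataclass
from enum import Enum
import itertools, random

class Condition(Enum):
    MIRROR_EYES = "mirror eyes enabled"
    EYES_ONLY = "eyes without mirroring"

class ErrorType(Enum):
    NONE = "correct execution"
    STEP_1 = "picks up the wrong object"
    STEP_2 = "places the object on the wrong target"

@dataclass
class Trial:
    condition: Condition
    error: ErrorType

def make_block(condition, n_correct=4, n_errors_each=2, seed=0):
    """One within-participant block: mostly correct trials plus a subset of
    scripted step-1 and step-2 errors, presented in random order."""
    trials = ([Trial(condition, ErrorType.NONE)] * n_correct
              + [Trial(condition, ErrorType.STEP_1)] * n_errors_each
              + [Trial(condition, ErrorType.STEP_2)] * n_errors_each)
    random.Random(seed).shuffle(trials)
    return trials

# One possible session: an Eyes-Only block followed by a Mirror Eyes block.
session = list(itertools.chain(make_block(Condition.EYES_ONLY),
                               make_block(Condition.MIRROR_EYES, seed=1)))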

Example trials (videos): pick-up (step 1) error in the Eyes-Only condition; pick-up (step 1) error with Mirror Eyes enabled; correct task execution with Mirror Eyes enabled; placement (step 2) error with Mirror Eyes enabled. In the error trials, the participant interrupts the robot upon detecting the erroneous action.

Results

Participants felt more aware of the robot's information processing, detected erroneous actions earlier, and rated the user experience higher when mirroring was enabled than in the Eyes-Only condition.

Conclusion

These results suggest that movable robot heads benefit from Mirror Eyes in human-robot interaction scenarios. Because the benefits appeared after very brief exposure to the system and without any explanation of the role of the eyes in the interaction, we take them as evidence that the feature is understood quickly and intuitively.

BibTeX

Krüger, M., Tanneberg, D., Wang, C., Hasler, S., & Gienger, M. (in press). Mirror Eyes: Explainable Human-Robot Interaction at a Glance. 2025 34th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).

@inproceedings{krueger2025mirroreyes,
  author={Matti Kr\"{u}ger and Daniel Tanneberg and Chao Wang and Stephan Hasler and Michael Gienger},
  title={Mirror Eyes: Explainable Human-Robot Interaction at a Glance},
  booktitle={2025 34th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
  year={2025},
  note={in press},
  organization={IEEE}
}
Krüger, M., Oshima, Y., & Fang, Y. (2024). Virtual Reflections on a Dynamic 2D Eye Model Improve Spatial Reference Identification. arXiv preprint arXiv:2412.07344.

@misc{krueger2024virtualreflections,
  title={Virtual Reflections on a Dynamic 2D Eye Model Improve Spatial Reference Identification},
  author={Matti Kr\"{u}ger and Yutaka Oshima and Yu Fang},
  year={2024},
  eprint={2412.07344},
  archivePrefix={arXiv},
  primaryClass={cs.HC},
  url={https://arxiv.org/abs/2412.07344},
}
