Supportive robots for everyday environments must be understandable, operable, and teachable by non-experts, which requires intuitive and sustainable Human-Robot Interaction. Yet little work has explored continual task learning in repeated, unscripted settings. We present a robotic system that incrementally learns from user interaction in Augmented Reality (AR), generalizes acquired skills, and plans task execution accordingly. In an exploratory study, participants freely taught the robot simple tasks in a virtual kitchen using AR glasses. A holographic robot provided feedback, asked clarifying questions, and generalized demonstrated actions to new objects. Results show that users found the system engaging, understandable, and trustworthy, though ratings of the latter two varied considerably. Participants who explored more expanded the robot's knowledge more effectively, and perceived understanding was linked to trust. While no significant changes were observed across sessions, the low return rate and user comments reveal important challenges for designing interactive learning systems.
Left: Design elements for online feedback to the tutor about recognized actions and objects. Right: after the demonstration, the robot asks questions to generalize the demonstrated skill (here, after observing milk being heated in the microwave, it asks the tutor, “Can microwave be used to change the temperature of cola from warm to hot?”), accompanied by virtual graphical overlays.
Graphical elements explaining to the user that the robot has learned that food items in the “bread” category can be heated in the toaster (left). The same message is also conveyed via text (right) and speech.
The robot avatar executing the plan it generated at the tutor's request.
Participants felt more aware of the robot's information processing, detected erroneous actions earlier, and rated the user experience higher when mirroring was enabled.