Introduction
The Prototype section presented the socially intelligent human-robot dialogue for the use case "UC01.0: Music Bingo", together with a corresponding robot that shows less intelligent dialogue for comparison (i.e., the control condition). Both dialogues were video-recorded in the robot lab by a staff member, with the camera pointed at the robot (so only the robot is visible).
In this test, these videos (i.e., the recorded dialogues and robot expressions) will be assessed by participants in an online evaluation to test whether the robot is perceived as intended. The hypotheses are that the participants recognize more of the intended dialogue characteristics in the intelligent robot than in the less intelligent robot, and assess the robots differently on aspects such as understandability, trustworthiness, and likeability. These measures will be acquired via an online questionnaire administered immediately after each video.
The participants will be students from the other groups taking the course (about 45 students in total). The data will be anonymized. The evaluation is intended as a within-subjects design in which the two conditions are counterbalanced.
Method
The prototype was evaluated in an in-person experiment with multiple participants.
Participants
Participants were randomly selected people with dementia (PwD) from the care centers.
Experimental design
For the experiment, we used a within-subjects design. All participants interacted with both versions of the robot: half interacted with version 1 first and then version 2, and the other half in the reverse order. This counterbalancing was done to control for carryover effects.
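As an illustration only (not part of the study materials), the hypothetical Python sketch below shows one way such a counterbalanced order assignment could be generated; the participant IDs, version labels, and fixed random seed are assumptions made for the example.

```python
import random

def counterbalanced_orders(participant_ids, seed=42):
    """Assign each participant one of the two condition orders,
    keeping both orders equally represented across the sample."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)  # randomize who gets which order
    orders = {}
    for i, pid in enumerate(ids):
        # Alternate: even positions start with version 1, odd positions with version 2.
        if i % 2 == 0:
            orders[pid] = ("version 1", "version 2")
        else:
            orders[pid] = ("version 2", "version 1")
    return orders

# Example: assign orders for six hypothetical participants P01..P06.
print(counterbalanced_orders(["P%02d" % n for n in range(1, 7)]))
```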
Tasks
Each participant interacted with the robot, which was programmed to run the Music Bingo play session. Two versions were implemented: the first version (simple interaction) only explains the game procedure, without further interaction; the second version (advanced interaction) is our original implementation, with more human-like interactions such as small talk.
Measures
We measured the effectiveness of the Music Bingo play session. The quantitative measure was whether the participant performed better in the game with additional help from the robot; the qualitative measure was the emotions the PwD experienced before, during, and after the interaction. The qualitative measures were recorded with a simple questionnaire.
Procedure
The procedure was conducted as follows:
- Welcome participants and explain what they are going to be doing.
- Have them sign the consent form.
- Complete questionnaire 1 regarding their emotional state.
- Play the Music Bingo Game with the robot.
- Have an interaction with the first assigned version of the robot (version 1 or 2, depending on the counterbalancing order).
- Complete questionnaire 2 (extended version).
- Have a short interview during downtime (prepared questions).
- Have an interaction with the second assigned version of the robot.
- Complete questionnaire 3 (extended version).
- Have a short interview during downtime (prepared questions).
We used the "Wizard of Oz" method for differentiating agreement and disagreement, to make sure that the whole process did not depend on voice recognition being good enough, and to have an overall smoother interaction. In practice, this meant that someone was pressing "y" and "n" on the keyboard according to the participants' answers, in a place the participant did not see, such as behind them. The only issue encountered was some connectivity delays at times, which only slightly affected a few of the interactions.
Material
- Consent form. To protect the privacy of participants and to ensure that the evaluation process went smoothly, we asked participants to sign a consent form indicating that they were willing to take part in the evaluation and that the data gathered from the experiment would be analyzed by researchers.
- Pepper robot. The robot is programmed using Choregraphe and shows the same behaviour for every participant; however, the input data are entered by the Activity Coordinator (a minimal, hypothetical sketch of triggering a scripted behaviour is given after this list).
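As a rough illustration of how a scripted Pepper behaviour can be triggered, the sketch below uses the NAOqi Python SDK directly rather than Choregraphe; the IP address, port, and spoken text are placeholders, and the real behaviour built in Choregraphe is considerably more elaborate.

```python
# Minimal sketch using the NAOqi Python SDK (as shipped for Pepper).
# All concrete values below are placeholders, not the actual lab configuration.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder: Pepper's address on the lab network
PORT = 9559                # default NAOqi port

def explain_bingo_round():
    """Have Pepper speak a fixed Music Bingo prompt via text-to-speech."""
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    tts.say("Welcome to Music Bingo. I will play a song, and you can mark it "
            "on your card if you recognise it.")

if __name__ == "__main__":
    explain_bingo_round()
```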