= Introduction =

Section [[Prototype>>doc:3\. Evaluation.a\. Prototype.WebHome]] presented the //socially intelligent// human-robot dialogue for the use case "[[UC01.0: Music Bingo>>doc:2\. Specification.b\. Use Cases.UC01\.0\: Music Bingo.WebHome]]", together with a corresponding robot that shows a less intelligent dialogue for comparison (i.e., the control condition). Both dialogues were video-recorded in the robot lab by a staff member, with the camera pointed at the robot (so only the robot is visible).

In this test, these videos (i.e., the recorded dialogues and robot expressions) are assessed by participants in an online evaluation to test whether the robot is perceived as intended. The hypotheses are that participants recognize more of the intended dialogue characteristics in the intelligent robot than in the less intelligent robot, and that they assess the two robots differently on aspects such as understandability, trustworthiness, and likeability. These measures are acquired via an online questionnaire, administered immediately after each video.

The participants will be students from the other groups taking the course (about 45 students in total). The data will be anonymized. The study is intended as a within-subjects design in which the two conditions are counterbalanced.
= Method =

The prototype was evaluated in an in-person experiment with multiple participants.

== Participants ==

Randomly selected people with dementia (PwD) from the care centers.

== Experimental design ==

For the experiment, we used a within-subjects design. All participants interacted with both versions of the robot: half of the participants interacted with version 1 first and then version 2, and the other half in the reverse order. This was done to counterbalance carryover effects.
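
As a minimal sketch of how such a counterbalanced order assignment could be generated (the participant IDs and the fixed random seed below are illustrative assumptions, not part of the actual study setup):

{{code language="python"}}
import random

def assign_orders(participant_ids, seed=42):
    """Randomly split participants into two counterbalanced groups:
    half interact with version 1 first, the other half with version 2 first."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    orders = {}
    for pid in ids[:half]:
        orders[pid] = ("version 1", "version 2")
    for pid in ids[half:]:
        orders[pid] = ("version 2", "version 1")
    return orders

# Example with ten hypothetical participant IDs
print(assign_orders(range(1, 11)))
{{/code}}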

== Tasks ==

The participant interacted with the robot, which was programmed to run the Music Bingo game. Two versions were implemented: the first version (simple interaction) only explains the game procedure, without further interaction. The second version (advanced interaction) is our original implementation, with more human-like interactions such as small talk.

== Measures ==

We measured the effectiveness of the Music Bingo game. Our quantitative measure was whether the person performed better in the game with additional help from the robot, and the qualitative measure was the emotions that the PwD experienced before, during, and after the interaction. The qualitative measures were recorded with a simple questionnaire.
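
As an illustration of how these measures could later be processed, a small sketch is given below; the record fields and the example values are hypothetical and do not correspond to actual data.

{{code language="python"}}
# Hypothetical per-participant record: game scores with and without help
# from the robot, and coded mood scores before and after the session.
records = [
    {"id": 1, "score_no_help": 3, "score_with_help": 5, "mood_pre": 2, "mood_post": 4},
    {"id": 2, "score_no_help": 4, "score_with_help": 4, "mood_pre": 3, "mood_post": 3},
]

for r in records:
    score_gain = r["score_with_help"] - r["score_no_help"]  # quantitative measure
    mood_change = r["mood_post"] - r["mood_pre"]             # coded qualitative measure
    print("Participant %d: score gain with help = %d, mood change = %d"
          % (r["id"], score_gain, mood_change))
{{/code}}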

== Procedure ==

The procedure was conducted as follows:

1. Welcome the participants and explain what they are going to do.
1. Have them sign the consent form.
1. Complete questionnaire 1 regarding their emotional state.
1. Play the Music Bingo game with the robot.
1. Have the participant interact with version A of the robot.
1. Complete questionnaire 2 (extended version).
1. Have a short interview during downtime (prepared questions).
1. Have the participant interact with version B of the robot.
1. Complete questionnaire 3 (extended version).
1. Have a short interview during downtime (prepared questions).

We used the "Wizard of Oz" method to register agreement and disagreement, so that the interaction did not depend on the quality of the voice recognition and ran more smoothly overall. In practice, this meant that an experimenter, positioned out of the participant's sight (for example behind them), pressed "y" or "n" on the keyboard according to the participant's answers. The only issue encountered was occasional connectivity delays, which slightly affected a few of the interactions.
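
A minimal sketch of what this wizard-side key handling could look like is given below; the handle_answer callback and the plain input() loop are illustrative assumptions rather than the exact implementation used.

{{code language="python"}}
def handle_answer(is_yes):
    """Illustrative callback: forward the wizard's judgement to the dialogue logic."""
    print("Participant agreed" if is_yes else "Participant disagreed")

def wizard_loop():
    """Map the wizard's 'y'/'n' keypresses to agreement/disagreement ('q' quits)."""
    while True:
        key = input("Participant's answer [y/n, q to quit]: ").strip().lower()
        if key == "y":
            handle_answer(True)
        elif key == "n":
            handle_answer(False)
        elif key == "q":
            break

if __name__ == "__main__":
    wizard_loop()
{{/code}}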

== Material ==

1. Consent form. To protect the privacy of the participants and to ensure the evaluation goes smoothly, we will ask participants to sign a consent form, indicating that they are willing to take part in the evaluation and that the data gathered from the experiment will be analyzed by the researchers.
1. Pepper robot. Our robot is programmed using Choregraphe. The robot will have the same behaviour for every participant; however, the input data will be entered by the Activity Coordinator. A sketch of what one behaviour step could look like is given after this list.
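
A minimal sketch of a Choregraphe Python box for one such behaviour step is shown below; the spoken prompt is a placeholder, and we assume the standard Choregraphe box skeleton with the ALTextToSpeech proxy available on the robot.

{{code language="python"}}
# Choregraphe Python box skeleton (runs inside Choregraphe, Python 2).
# GeneratedClass and ALProxy are provided by the Choregraphe environment.
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        # Text-to-speech proxy on the robot.
        self.tts = ALProxy("ALTextToSpeech")

    def onUnload(self):
        pass

    def onInput_onStart(self):
        # Placeholder prompt for one Music Bingo step.
        self.tts.say("Let's listen to the next song. Do you recognize it?")
        self.onStopped()  # trigger the box output to move on to the next step
{{/code}}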

= Results =

Since each PwD has their own stage of dementia and personal circumstances, it is very difficult to obtain uniform results, especially since they are collected orally. Fully robust and reliable results are therefore not a realistic goal; instead, we concentrate on the main trends we are interested in. Thus, the results will mainly focus on:

* How much autonomy did the PwD gain?
** What did the caregivers, relatives, and PwD report?
** How well did the PwD perform in the group game?
** Did the relatives feel that they are cared for?
* Did their emotional state improve?
** Feelings reported by the PwD themselves
** Reports from relatives and caregivers

These results will most likely never be clear yes-or-no answers, but rather clues or hints as to whether certain things worked or not; this will be the focus of our discussion.

An example result analysis is given below:

**Accomplishment and Autonomy Assessment**
[[image:E5AE14BC-6F6D-40AD-B774-9EC749FCF37E.png||alt="group2.svg"]]
Figure 3: Graphical representation of results for the accomplishment and autonomy subset of the system assessment, with results shown for people who like vs. dislike gardening, along with the average of the sample.

The second group, the accomplishment and autonomy subset, contains questions concerning the sense of control and accomplishment felt by the participants during the task. On average, the participants responded between "slightly agree" and "agree" that completing the task was a good accomplishment and that they felt in control while doing it, and somewhat lower for the statement "I feel like I have accomplished it myself", suggesting that participants may feel that Pepper is at least partially responsible for the accomplishment of the task.
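
As a small sketch of how such Likert responses could be coded and averaged per statement (the 7-point labels and the example answers are assumptions made for illustration, not actual data):

{{code language="python"}}
from statistics import mean

# Map 7-point Likert labels to numeric codes for averaging.
LIKERT = {
    "strongly disagree": 1, "disagree": 2, "slightly disagree": 3,
    "neutral": 4, "slightly agree": 5, "agree": 6, "strongly agree": 7,
}

# Hypothetical answers to one statement of the accomplishment/autonomy subset.
answers = ["agree", "slightly agree", "agree", "neutral", "slightly agree"]

codes = [LIKERT[a] for a in answers]
print("Mean response code: %.2f" % mean(codes))  # 5.20, between 'slightly agree' (5) and 'agree' (6)
{{/code}}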

====== //H0//: The distribution of answers from people who like gardening and people who do not like gardening is the same. ======

|=//Wilcoxon rank-sum results//|=I feel like completing the task was a good accomplishment.|=I feel like I accomplished it myself.|=I felt in control of what I had to do.
|=p-value|0.0982|0.220|0.581
|=test statistic|-1.653|-1.224|0.551

Table 4: Results of the Wilcoxon rank-sum test on the accomplishment and autonomy subset of the system assessment, for people who like vs. dislike gardening.
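
Values like the ones above can be obtained with a Wilcoxon rank-sum test; a minimal sketch with SciPy is shown below, where the two response vectors are made-up placeholders rather than the actual questionnaire data.

{{code language="python"}}
from scipy.stats import ranksums

# Hypothetical coded Likert responses to one statement,
# split by whether the participant likes gardening.
likes_gardening = [6, 5, 6, 7, 5, 6]
dislikes_gardening = [5, 4, 5, 6, 4]

stat, p_value = ranksums(likes_gardening, dislikes_gardening)
print("Wilcoxon rank-sum statistic = %.3f, p-value = %.3f" % (stat, p_value))
{{/code}}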

The sense of accomplishment is slightly higher for people who like gardening than for those who do not; overall it lies around "slightly agree". An interesting observation is that participants who do not like gardening felt more in control of what they had to do.

**Negative Experiences Assessment**
[[image:CCD77DB3-91F9-440A-8725-7B90CF92FF9F.png]]
Figure 4: Graphical representation of results for the negative experiences subset of the system assessment, with results shown for people who like vs. dislike gardening, along with the average of the sample.

The third group, the negative experiences subset, groups together questions that measure negative feelings experienced with Pepper. On average, the participants answered between "slightly disagree" and "disagree". This suggests that Pepper was frustrating for only a small fraction of the participants.

====== //H0//: The distribution of answers from people who like gardening and people who do not like gardening is the same. ======

|=//Wilcoxon rank-sum results//|=I felt annoyed by Pepper.|=I felt frustrated by the task.|=I felt pressured by Pepper.
|=p-value|0.951|0.358|0.926
|=test statistic|0.0612|0.918|-0.0918

Table 5: Results of the Wilcoxon rank-sum test on the negative experiences subset of the system assessment, for people who like vs. dislike gardening.

Overall, the participants disagree that the presence of Pepper annoyed, frustrated, or pressured them. Those who like gardening actually reported slightly more negative feelings regarding the presence of Pepper than those who dislike gardening.

**Social Assessment**
[[image:CBF52B0D-2992-4C0F-917D-6CAA0AA0EFB4.png]]
Figure 5: Graphical representation of results for the social subset of the system assessment, with results shown for people who like vs. dislike gardening, along with the average of the sample.

The fourth and final group, the social subset, is used to assess Pepper's social presence and trustworthiness as perceived by the participants. The two statements used are "Pepper cared about helping me" and "I would trust Pepper with more important activities". The responses were on average slightly above the neutral level.

====== //H0//: The distribution of answers from people who like gardening and people who do not like gardening is the same. ======

|=//Wilcoxon rank-sum results//|=Pepper cared about helping me.|=I would trust Pepper with more important activities.
|=p-value|0.854|0.0297
|=test statistic|0.183|-2.173

Table 6: Results of the Wilcoxon rank-sum test on the social subset of the system assessment, for people who like vs. dislike gardening.

The graph and the test result (p = 0.0297 for the trust statement) show that trust in Pepper depended strongly on whether the participants enjoyed the activity.

= Discussion =

* Reliability: The evaluation is reliable; the exact same experiment could be replicated with other participants.
* Validity: This evaluation is not fully valid. Our feasible evaluation does not involve the intended target group and has a much smaller scope than our ideal evaluation, so we cannot test all our claims.
* Biases: The evaluation has large biases. These are discussed in more detail in the limitations, where the different bias factors are explained.
* Scope: The evaluation can be generalized to a larger scope, although with great care, since the evaluation is not fully valid.
* Ecological validity: The evaluation is partially valid in terms of influence from the environment. The affect assessment questionnaire is the same before and after the activity and is administered in the same environment, so the environment is technically not involved there. However, the system assessment questionnaire does rely on some elements of the environment.

= Conclusions =

The results from the mood questionnaire seem to support our claim CL001: PwD

Although there are many potential biases, there seems to be a general trend: the mood of the participants slightly improved thanks to the activity.

All participants, except one who asked to leave the experiment early, finished the whole activity we had prepared for them during the session. This means the participants were able to perform the activity steps given by Pepper, which supports our claim CL03: the PwD performs an activity step.

No participant failed to notice Pepper or to hear what she was saying after the experiment had started. This supports our claim CL01: the PwD becomes aware of Pepper's presence.

From the system assessment questionnaire, the participants generally agree that completing the task was a good accomplishment for them. This supports our claim CL08: the PwD feels accomplished.

We did not have any question explicitly aimed at targeting our claim CL08. However, frustration, annoyance, and pressure are often linked to a lack of understanding from the other party. We can combine these measures with the question about whether Pepper cared about helping the participants, and with our observations during the experiment. Taken together, it seems that, generally speaking, the participants felt understood. This supports our claim CL08: the PwD feels understood.