Wiki source code of Test

Version 71.1 by Veikko Saikkonen on 2022/04/01 15:29

= Problem statement and research questions =

People with dementia (PwD) often forget to eat and drink, leading to dehydration, malnutrition and decreased wellbeing in general. Our prototype, built on the Nao robot platform, engages in discourse to remind PwD to have lunch and drink water. The discourse aims to remind the PwD without causing the anxiety or embarrassment that a traditional "alarm" system could cause, and to keep them company throughout these activities.

RQ1: "Does the robot cause PwD to eat more regularly?"*
RQ2: "Does the robot remind the PwD of their hunger?"
RQ3: "Does the music make the eating more enjoyable for the PwD?"
RQ4: "Does the PwD experience fewer negative emotions, such as agitation, sadness and embarrassment, after the interaction with the 'intelligent' robot?"

'*' This research question is difficult to answer due to the practical limitations of the experimental setup, and is therefore given lesser importance.

= Method =

The prototype is evaluated in an in-person experiment with multiple participants. In the experiment, the participants are asked to pretend to be PwD and to act accordingly, both with and without the prototype.

== Participants ==

As there are practical difficulties with conducting the experiment with actual people with dementia, due to both time constraints and COVID, our participant group will consist of peers from other groups and friends, who will act as if they were older people with dementia. We plan to gather around 20 people for our experiments.

== Experimental design ==

We will use a within-subject design. All participants will interact with both versions of the robot: half of the participants will interact with version 1 first and then version 2, and the other half in the reverse order, to counterbalance carryover effects. Snacks will be made available for the participants, in case they are prompted and hungry. The participants will not be told in advance that eating snacks is part of the experiment, to prevent disturbing the interaction with the robot. Otherwise the subjects could be primed for eating, which would bias the results and hide the effect of the robotic interaction.

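The alternating order assignment described above can be sketched in a few lines. This is only an illustration; the function name and version labels are ours, not part of the prototype code:

```python
def assign_orders(n_participants):
    """Assign each participant a counterbalanced interaction order.

    Even-indexed participants interact with version 1 (simple) first,
    odd-indexed participants with version 2 (advanced) first, so the two
    orders stay as balanced as possible even for an odd participant count.
    """
    orders = []
    for i in range(n_participants):
        if i % 2 == 0:
            orders.append(("version 1", "version 2"))
        else:
            orders.append(("version 2", "version 1"))
    return orders
```

With 19 participants this scheme yields ten version-1-first and nine version-2-first orderings.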
== Tasks ==

The participant interacts with the robot, which is programmed to engage in a lunch discourse. Two versions are implemented: the first version asks basic questions about mealtime, mostly acting as a reminder for the PwD to have lunch (an alarm clock). The second is our original implementation, with the more sophisticated discourse and music.

== Measures ==

We measure the effectiveness of the discourse both physically and emotionally. Our quantitative measure is whether the person ate the lunch they were supposed to eat, and the qualitative measure is the emotions that the PwD experienced before, during and after the interaction. The qualitative measures are recorded with a simple questionnaire. Depending on the time of the experiments, some people might not be hungry enough to be prompted to eat, which might disturb the results. We do, however, plan to measure whether the robot reminds someone of their hunger and makes them eat.

== Procedure ==

* Welcome participants and explain what they are going to be doing.
* Have them sign the permission form.
* Participants complete a questionnaire (A) regarding their emotional state (control).
* Have version A of the interaction with the robot.
* Complete the questionnaire (extended version).
* Have a short interview during downtime (prepared questions).
* Have version B of the interaction with the robot.
* Complete the questionnaire (extended version).
* Have a short interview during downtime (prepared questions).

== Material ==

For the experiments, we will use the NAO robot platform, as well as a laptop for the participants to complete the questionnaires on. The questionnaire is a combination of questions regarding the emotional state of the participants, their interaction with the robot, and the music included in the interaction. Food will be made available to see and measure how much people eat.

Questionnaires:

* Consent form and disclaimers
* 8 questions from the [[EVEA>>https://www.ucm.es/data/cont/docs/39-2013-04-19-EVEA%20-%20Datasheet.pdf]] questionnaire
* 4 questions from the [[Godspeed>>https://www.bartneck.de/2008/03/11/the-godspeed-questionnaire-series/]] questionnaire
* 3 food-related questions of our own (5-point Likert scale)
* 2 music-related questions of our own (5-point Likert scale)

== Practicalities ==

Beforehand:

* Do a practice round by ourselves
** Film this
* Contact other groups and decide on a time slot
** It might be better to reserve 10-minute slots, so that people do not have to wait so long
** If possible, this could be done in parallel with another group's testing
* Reserve the lab
* Buy snacks

During:

1. Give the starting questionnaire to fill in while people are waiting for the previous participant
2. Guide the participant to the testing spot
3. Inform the participant where the snacks are
4. Run the first version
5. Give the mid-questionnaire
6. Run the other test
7. Conduct the interview with the participant
8. Give the participant the end-questionnaire

Other practicalities during:

* We will use the "Wizard of Oz" method for recognizing agreement and disagreement, so that the whole process does not depend on the voice recognition being good enough
** Someone will press e.g. "y" or "n" on the keyboard according to the participant's answers
* We will alternate the order of the smart and basic versions between participants
** This way, if someone does not show up, the group sizes do not become skewed
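The keypress mapping for the Wizard-of-Oz setup can be sketched as below. The function and intent labels are illustrative only; the actual prototype may wire the keys to the robot differently:

```python
def interpret_key(key):
    """Map the wizard's keypress to the participant intent sent to the robot.

    Only "y" (agreement) and "n" (disagreement) are meaningful;
    any other key returns None and is ignored.
    """
    mapping = {"y": "agree", "n": "disagree"}
    return mapping.get(key.lower())
```

The mapping is deliberately tiny so the wizard can react quickly without the process ever depending on speech recognition quality.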

After:

* Analyze the results

= Results =

The results were gathered from 19 participants, all of whom interacted first with one version of the robot and then with the other. Ten of the participants interacted with the simple version first, and nine had their first interaction with the advanced version.

== Eating ==

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/EatingComp.png?rev=1.1" alt="Results on the eating of the participants" style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 1: Results on the eating of the participants during the experiment

Simple robot:

* 16% ate
* 33% of those would not have eaten without the robot

Advanced robot:

* 32% ate
* 67% of those would not have eaten without the robot

== Music ==

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MusicEnjoyable.png?rev=1.1" alt="Effects of music on the participants" style="display:block;margin-left:auto;margin-right:auto" width=1250/>
{{/html}}

(% style="text-align:center" %)
Figure 2: Answers of the participants regarding the music

== EVEA (Mood) ==

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MoodChangeDumb.png?rev=1.1" alt="Measured moods and changes for the simple robot" style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 3: Median measured moods for the simple version of the robot

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MoodChangeSmart.png?rev=1.1" alt="Measured moods and changes for the advanced version of the robot" style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 4: Median measured moods for the advanced version of the robot

(% style="text-align:center" %)
Table 1: Wilcoxon signed-rank test results for the hypothesis that the mood changed during the interaction with the simple robot

|=Mood|=Happiness|=Anxiety|=Sadness|=Anger
|Statistic|37|5|4|14
|P-value|0.54|0.01|0.01|0.45

(% style="text-align:center" %)
Table 2: Wilcoxon signed-rank test results for the hypothesis that the mood changed during the interaction with the advanced robot

|=Mood|=Happiness|=Anxiety|=Sadness|=Anger
|Statistic|32|11|2|17
|P-value|0.18|0.01|0.01|0.45

(% style="text-align:center" %)
Table 3: Wilcoxon signed-rank test results for the hypothesis that the mood decreased during the interaction with the simple robot

|=Mood|=Anxiety|=Sadness|=Anger
|Statistic|81|53|29
|P-value|0.01|0.00|0.23

(% style="text-align:center" %)
Table 4: Wilcoxon signed-rank test results for the hypothesis that the mood decreased during the interaction with the advanced robot

|=Mood|=Anxiety|=Sadness|=Anger
|Statistic|32|149|52
|P-value|0.00|0.01|0.07

(% style="text-align:center" %)
Table 5: Wilcoxon signed-rank test results for the hypothesis that the mood increased during the interaction with the simple robot

|=Mood|=Happiness
|Statistic|37
|P-value|0.27

(% style="text-align:center" %)
Table 6: Wilcoxon signed-rank test results for the hypothesis that the mood increased during the interaction with the advanced robot

|=Mood|=Happiness
|Statistic|32
|P-value|0.09

(% style="text-align:center" %)
Table 7: Wilcoxon signed-rank test results for the hypothesis that the mood changes during the interaction differ between the simple and advanced robots

|=Mood|=Happiness|=Anxiety|=Sadness|=Anger
|Statistic|92|49|85|69
|P-value|0.92|0.07|0.71|0.31

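For reference, the statistics in the tables above follow the standard Wilcoxon signed-rank procedure on the paired before/after questionnaire scores. The following plain-Python sketch shows how the statistic is computed (zero differences are dropped, tied absolute differences share their average rank, and the reported statistic is the smaller of the positive- and negative-rank sums, as in common implementations); the raw scores themselves are not reproduced here:

```python
def wilcoxon_statistic(before, after):
    """Wilcoxon signed-rank statistic for paired samples."""
    # Paired differences; zero differences are dropped, as in the
    # standard signed-rank procedure.
    diffs = [a - b for b, a in zip(before, after)]
    diffs = [d for d in diffs if d != 0]
    # Rank the absolute differences, giving tied values their average rank.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    # Sum the ranks of positive and negative differences separately;
    # report the smaller sum as the test statistic.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

The p-values in the tables would additionally require the null distribution of this statistic (e.g. from `scipy.stats.wilcoxon`), which is omitted from this sketch.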
== Godspeed ==

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/friendly-hist.png?rev=1.1" alt="Answers to the statement 'I thought the robot was friendly'." style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 5: Answers to the statement 'I thought the robot was friendly'

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/pleasant-hist.png?rev=1.1" alt="Answers to the statement 'I thought the robot was pleasant'." style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 6: Answers to the statement 'I thought the robot was pleasant'

{{html}}
<img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/godspeed-barchart.png?rev=1.1" alt="Godspeed questionnaire median comparison." style="display:block;margin-left:auto;margin-right:auto" width=750/>
{{/html}}

(% style="text-align:center" %)
Figure 7: Median measured Godspeed questionnaire dimensions

(% style="text-align:center" %)
Table 8: Wilcoxon signed-rank test results for the hypothesis that the advanced robot scored higher in the perceived dimensions

|=Dimension|=Likeability|=Intelligence
|Statistic|36|70
|P-value|0.01|0.17

= Conclusions =

The results show that the more advanced robot has advantages over the simple version in several categories. Hints of better performance in other categories can be seen, but no conclusions should be drawn from those that lack statistical significance.

Regarding eating, both robots appear to have limited success in causing people to eat, as seen in Figure 1, but they could cause the patients to eat more regularly if triggered by timers or other suitable systems. It also seems that the advanced robot is slightly better at reminding. However, the long-term effects of the reminders should be researched further to conclude whether the demonstrated robot platform, or a similar one, would cause the patients to eat more regularly. It is also unclear how the test setup and the limited choice of food affected the eating.

Based on the participants' answers regarding the music, seen in Figure 2, most of them were either indifferent to or liked the music. Also, as the participants found the advanced robot more likeable at the 5% significance level (Table 8), and the advanced version was the only version with music, it seems likely that the music makes the interaction more pleasant. However, some of the likeability might be due to the other advanced features of the robot, and thus more research is needed to isolate the effect of the music.

The EVEA and partial Godspeed results can be seen in Figures 3-7 and Tables 1-8. The results show, with reasonable confidence (5% significance level), that both versions of the robot decreased sadness and anxiety in the participants. There are hints (10% significance level) that the advanced robot also decreases feelings of anger and increases happiness, while the simple robot fails to show similar results. However, as Table 7 shows, the differences between the mood changes during the interactions with the two versions are not statistically significant.

The Wilcoxon signed-rank test for the partial Godspeed questionnaire (Table 8) shows with high confidence (1% significance level) that the intelligent robot is more likeable than the simple robot. With these results, it is likely that the more advanced robot is slightly preferable and that the participants experience fewer negative emotions after the interaction with the robots, but it remains unclear whether the effect is stronger with the advanced robot.


= Discussion =


= Appendix =

== Experiment introduction for participants ==

Hi, we are <NAME> and <NAME> from TU Delft Socio-Cognitive Engineering course Group 1. Thank you for participating in our prototype evaluation experiment. The experiment is conducted as a part of the TU Delft course on Socio-Cognitive Engineering and aims to evaluate the prototype designed during the course. The evaluated prototype is based on the Nao robot platform and is intended to improve the wellbeing of people suffering from dementia.

Consuming food and/or water can be a consequence of the interaction between you and the robot. Therefore, we would like to ask whether you have any allergies. If you have a form of diabetes, please let us know before we start the first part of the experiment. You are strongly encouraged to share with us any other health conditions that could be relevant when doing an experiment involving robots and food.

The link between the stimuli of the Nao robot and the triggering of epileptic seizures is not yet known. If you have ever experienced epileptic seizures, please let us know, so that we can see whether any special precautions are needed.

The experiment lasts approximately 15-20 minutes and consists of two interaction sections with the Nao robot, as well as questionnaires before, between and after the sections. We kindly ask you to act naturally during the experiment and to fill in the questionnaires truthfully and intuitively. Remember that we are evaluating the prototype's performance, not yours. You can stop the experiment at any time.

We will be collecting data from the questionnaires and recording some experiments; do you agree to your experiment being recorded? All data excluding the recordings will be anonymised before analysis and storage. The recordings will not be shared with third parties. After the experiment you have the right to ask for information about the collected data and to revoke the right to use it. We kindly ask you not to share any information about the experiment with other participants.

Do you have any questions?

== Post-experiment interview ==

Setup:
The test subject has finished both parts of the experiment. Before they leave, the test conductor(s) sit down with them and ask the following questions in a discussion about the experiment. The discussion can flow freely, but the following topics should be covered.

Topics:

* Emotions before / during / after the interaction with the robot
* Agitation due to the robot suggesting eating
* Effect of the music on the general feeling of the situation
* Feeling of company during eating
* Effectiveness of eating/drinking suggestions

Questions:

* Did you eat or drink anything during the experiment?
* Were you feeling hungry/thirsty beforehand, and did the discourse change this?
* On a scale of 1-10, how likely would you have been to eat/drink without the robot suggesting it?
* What did the interaction with the robot feel like?
** With the more intelligent version?
** With the less intelligent version?
* What did you feel when the robot suggested you should eat/drink?