
Version 116.1 by Sofia Kostakonti on 2022/04/04 19:58

1 = Problem statement and research questions =
2
3 People with dementia (PwD) often forget to eat and drink, leading to dehydration, malnutrition, and decreased overall wellbeing. Our prototype, built on the Nao robot platform, engages in discourse to remind PwD to have lunch and drink water. The discourse aims to remind the PwD without causing the anxiety or embarrassment that a traditional "alarm" system could, and to keep them company throughout these activities.
4
5 The four research questions studied in this evaluation are:
6
7 {{html}}
8
9 <ol>
10 <li>Does the robot remind the PwD of their hunger?</li>
11 <li>Does the music make the eating more enjoyable for the PwD?</li>
12 <li>Does the PwD experience fewer negative emotions, such as agitation, sadness, or embarrassment, after the interaction with the 'intelligent' robot?</li>
13 <li>* Does the robot cause PwD to eat more regularly?</li>
14 </ol>
15
16 * This research question is difficult to answer due to practical limitations in designing the experimental setup, and as such it is given lesser importance.
17
18 {{/html}}
19
20 = Method =
21
22 The prototype was evaluated with an in-person experiment with multiple participants.
23
24 == Participants ==
25
26 As there were practical difficulties with conducting the experiment with actual people with dementia, due to both time constraints and COVID, our participant group consisted of peers from other groups and friends. In total, 19 people took part in our experiment.
27
28 == Experimental design ==
29
30 For the experiment, we used a within-subject design. All participants interacted with both versions of the robot: half interacted with version 1 first and then version 2, and the other half in the reverse order, to counterbalance carryover effects. Snacks were made available for the participants, in case they were prompted and were hungry. The participants were fully aware of the snacks, and some of the questionnaire prompts might have given them an idea of what our experiment was about (or at least that it was related to food), which might have skewed our results.
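The alternating order assignment described above can be sketched as follows (participant IDs and the version labels are hypothetical, not taken from our actual scheduling):

```python
# Illustrative counterbalancing of the interaction order: even-indexed
# participants start with version 1, odd-indexed ones with version 2.
def assign_orders(participants):
    orders = {}
    for i, p in enumerate(participants):
        orders[p] = ("v1", "v2") if i % 2 == 0 else ("v2", "v1")
    return orders

# With 19 participants, 10 start with version 1 and 9 with version 2.
assignments = assign_orders([f"P{n}" for n in range(1, 20)])
```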
31
32 == Tasks ==
33
34 The participant interacted with the robot, which was programmed to engage in a lunch discourse. Two versions were implemented: the first version (simple interaction) asks basic questions about mealtime and mostly acts as a reminder for the PwD to have lunch (essentially an alarm clock). The second version (advanced interaction) is our full implementation, with more sophisticated discourse and music.
35
36 == Measures ==
37
38 We measured the effectiveness of the discourse, both physically and emotionally. Our quantitative measure was whether the person ate the lunch they were supposed to have eaten; the qualitative measure was the emotions that the PwD experienced before, during, and after the interaction, recorded with a simple questionnaire. Some people were not hungry enough to be prompted to eat, which disturbed the results. However, we did measure whether the robot reminded someone of their hunger and whether they ate.
39
40 == Procedure ==
41
42 The procedure was conducted as follows:
43
44 1. Welcome participants and explain what they are going to be doing.
45 1. Have them sign the permission form.
46 1. Complete questionnaire 1 regarding their emotional state and hunger scale (control).
47 1. Have interaction with version A of the robot.
48 1. Complete questionnaire 2 (extended version).
49 1. Have a short interview during downtime (prepared questions).
50 1. Have interaction with version B of the robot.
51 1. Complete questionnaire 3 (extended version).
52 1. Have a short interview during downtime (prepared questions).
53
54 We used the "Wizard of Oz" method for differentiating agreement and disagreement, to make sure the whole process did not depend on voice recognition being good enough, and to achieve an overall smoother interaction. In practice, this meant that someone out of the participant's sight, such as behind them, pressed "y" or "n" on the keyboard according to the participant's answers. The robot's responses were hardcoded, with a few different branches available to account for the variety of answers the participants could give. The only issue encountered was occasional connectivity delays, which slightly affected a few of the interactions.
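A minimal sketch of such keyboard-driven branching (the dialogue lines, branch names, and key handling are illustrative, not the actual NAO control code):

```python
# Illustrative Wizard-of-Oz dialogue loop: the robot's utterances are
# hardcoded, and a hidden operator presses "y"/"n" to pick the branch
# that matches what the participant actually said.
DIALOGUE = {
    "start":  ("It is almost lunchtime. Are you feeling hungry?",
               {"y": "offer", "n": "remind"}),
    "offer":  ("Great! There are snacks on the table. Shall I play some music?",
               {"y": "music", "n": "end"}),
    "remind": ("That is alright. Perhaps a glass of water then?",
               {"y": "end", "n": "end"}),
    "music":  ("Enjoy your meal, I will stay with you.", {}),
    "end":    ("Alright, I will check on you later.", {}),
}

def run_dialogue(get_key, say=print):
    """Walk the dialogue tree; get_key supplies the operator's key presses."""
    node = "start"
    while True:
        line, branches = DIALOGUE[node]
        say(line)
        if not branches:           # leaf node: the discourse ends here
            return node
        node = branches[get_key()]

# Scripted demo: the operator presses "y" twice.
keys = iter("yy")
final = run_dialogue(lambda: next(keys), say=lambda s: None)  # -> "music"
```

On the real robot, `say` would forward each line to the text-to-speech engine, and `get_key` would read the hidden operator's keyboard.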
55
56 == Material ==
57
58 For the experiments, we used the NAO robot platform and a laptop to control it. The participants completed the questionnaires on their phones by scanning a QR code. The questionnaires combine questions regarding the participants' emotional state, their hunger levels, their interaction with the robot, and the music included in the interaction. Stroopwafels and water in a clean cup were made available to observe and measure how much people ate.
59
60 During the experiments, four different types of questions were given to the participants, in addition to the Consent Form and Disclaimers they had to sign at the beginning. The four sections were:
61
62 1. 8 questions from the [[EVEA>>https://www.ucm.es/data/cont/docs/39-2013-04-19-EVEA%20-%20Datasheet.pdf]] questionnaire for mood assessment
63 1. 4 questions from the [[Godspeed>>https://www.bartneck.de/2008/03/11/the-godspeed-questionnaire-series/]] questionnaire to assess the pleasantness and intelligence of the robot
64 1. 3 hunger- and food-related questions of our own, to assess whether they ate before or during the interaction (5-point Likert scale)
65 1. 2 music-related questions of our own, to measure how much they enjoyed the music and what effect it had (5-point Likert scale)
66
67 Before the first interaction, the participants were asked to respond to sections 1 and 3, while right after each interaction they were asked to respond to all four sections, with the music section only present after the advanced interaction.
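Since the mood items are answered both before and after an interaction, each participant yields a signed change score per mood, which can then be summarised with a median across participants. A minimal sketch of such scoring (all values hypothetical):

```python
import statistics

# Illustrative scoring of the repeated mood items: subtract the "before"
# answer from the "after" answer to get one signed change score per mood.
def mood_changes(before, after):
    return {mood: after[mood] - before[mood] for mood in before}

before = {"happiness": 3, "anxiety": 2, "sadness": 1, "anger": 1}
after  = {"happiness": 4, "anxiety": 1, "sadness": 1, "anger": 1}
changes = mood_changes(before, after)   # happiness +1, anxiety -1

# Hypothetical happiness change scores across several participants,
# summarised with the median (more robust than the mean for ordinal data).
median_change = statistics.median([1, 0, 2, 0, 1])
```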
68
69 == Practicalities ==
70
71 Before the experiment we:
72
73 * did a practice round by ourselves
74 ** This was filmed to provide a controlled example performance of the experiment, if needed
75 * contacted other groups and decided on scheduling
76 ** Each participant was booked a 20 min slot
77 * reserved the lab
78 * bought the stroopwafels
79
80 = Results =
81
82 The results were gathered from 19 participants, all of whom interacted first with one version of the robot and then with the other. Ten of the participants interacted with the simple version first, and nine had their first interaction with the advanced version.
83
84 == Eating ==
85
86 {{html}}
87 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/EatingComp.png?rev=1.1" alt="Results on the eating of the test personas" style="display:block;margin-left:auto;margin-right:auto" width=750/>
88 {{/html}}
89
90 (% style="text-align:center" %)
91 Figure 1: Eating results of the participants during the experiment
92
93 Simple robot:
94
95 * 16% ate
96 * 33% of those would not have eaten without the robot
97
98 Advanced robot:
99
100 * 32% ate
101 * 67% of those would not have eaten without the robot
102
103 == Music ==
104
105 {{html}}
106 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MusicEnjoyable.png?rev=1.1" alt="Effects of music on the test personnel" style="display:block;margin-left:auto;margin-right:auto" width=1250/>
107 {{/html}}
108
109 (% style="text-align:center" %)
110 Figure 2: Answers of the participants regarding the music
111
112 == EVEA (Mood) ==
113
114 {{html}}
115 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MoodChangeDumb.png?rev=1.1" alt="Measured moods and changes for the simple robot" style="display:block;margin-left:auto;margin-right:auto" width=750/>
116 {{/html}}
117
118 (% style="text-align:center" %)
119 Figure 3: Median measured moods for the simple version of the robot
120
121 {{html}}
122 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/MoodChangeSmart.png?rev=1.1" alt="Measured moods and changes for the advanced version of the robot" style="display:block;margin-left:auto;margin-right:auto" width=750/>
123 {{/html}}
124
125 (% style="text-align:center" %)
126 Figure 4: Median measured moods for the advanced version of the robot
127
128 (% style="text-align:center" %)
129 Table 1: Wilcoxon signed rank test results for the null hypothesis that the mood did not change during the interaction with the simple robot
130
131 |=Mood|=Happiness|=Anxiety|=Sadness|=Anger
132 |Statistic|37|5|4|14
133 |P-value|0.54|0.01|0.01|0.45
134
135 (% style="text-align:center" %)
136 Table 2: Wilcoxon signed rank test results for the null hypothesis that the mood did not change during the interaction with the advanced robot
137
138 |=Mood|=Happiness|=Anxiety|=Sadness|=Anger
139 |Statistic|32|11|2|17
140 |P-value|0.18|0.01|0.01|0.45
141
142 (% style="text-align:center" %)
143 Table 3: One-sided Wilcoxon signed rank test results for the alternative hypothesis that the mood decreased during the interaction with the simple robot
144
145 |=Mood|=Anxiety|=Sadness|=Anger
146 |Statistic|81|53|29
147 |P-value|0.01|0.00|0.23
148
149 (% style="text-align:center" %)
150 Table 4: One-sided Wilcoxon signed rank test results for the alternative hypothesis that the mood decreased during the interaction with the advanced robot
151
152 |=Mood|=Anxiety|=Sadness|=Anger
153 |Statistic|32|149|52
154 |P-value|0.00|0.01|0.07
155
156 (% style="text-align:center" %)
157 Table 5: One-sided Wilcoxon signed rank test results for the alternative hypothesis that the mood increased during the interaction with the simple robot
158
159 |=Mood|=Happiness
160 |Statistic|37
161 |P-value|0.27
162
163 (% style="text-align:center" %)
164 Table 6: One-sided Wilcoxon signed rank test results for the alternative hypothesis that the mood increased during the interaction with the advanced robot
165
166 |=Mood|=Happiness
167 |Statistic|32
168 |P-value|0.09
169
170 (% style="text-align:center" %)
171 Table 7: Wilcoxon signed rank test results for the null hypothesis that the mood changes during the interactions with the simple and advanced robots do not differ
172
173 |=Mood|=Happiness|=Anxiety|=Sadness|=Anger
174 |Statistic|92|49|85|69
175 |P-value|0.92|0.07|0.71|0.31
176
177 == Godspeed ==
178
179 {{html}}
180 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/friendly-hist.png?rev=1.1" alt="Answers to the statement 'I thought the robot was friendly'" style="display:block;margin-left:auto;margin-right:auto" width=750/>
181 {{/html}}
182
183 (% style="text-align:center" %)
184 Figure 5: Answers to the statement 'I thought the robot was friendly'
185
186 {{html}}
187 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/pleasant-hist.png?rev=1.1" alt="Answers to the statement 'I thought the robot was pleasant'." style="display:block;margin-left:auto;margin-right:auto" width=750/>
188 {{/html}}
189
190 (% style="text-align:center" %)
191 Figure 6: Answers to the statement 'I thought the robot was pleasant'
192
193 {{html}}
194 <img src="https://xwiki.ewi.tudelft.nl/xwiki/wiki/sce2022group01/download/Test/WebHome/godspeed-barchart.png?rev=1.1" alt="Godspeed questionnaire median comparison" style="display:block;margin-left:auto;margin-right:auto" width=750/>
195 {{/html}}
196
197 (% style="text-align:center" %)
198 Figure 7: Median measured Godspeed questionnaire dimensions
199
200
201 (% style="text-align:center" %)
202 Table 8: One-sided Wilcoxon signed rank test results for the alternative hypothesis that the advanced robot scored higher in the perceived dimensions
203
204 |=Dimension|=Likeability|=Intelligence
205 |Statistic|36|70
206 |P-value|0.01|0.17
207
208 == Qualitative Results: Quotes and observations ==
209
210 As described, the interaction between the participants and the robot was observed during the experiment. This section elaborates on findings from those observations and on quotes from the participants.
211
212 After each interaction section, the participant was asked how the interaction with the robot felt. From the interaction with the less intelligent version of the robot, some interesting quotes were:
213
214 * “The robot was bit direct.”
215 * “Efficient interaction, but less friendly than the other interaction.”
216 * “Strange, I did not catch the questions.”
217 * “It felt short.”
218
219 Some of these quotes stress that the less intelligent prototype's interaction was rather short and direct. It should be noted that the sequence of the interactions seemed to have some impact on how the participants experienced them. Some participants who experienced the less intelligent prototype first were smiling and positively surprised during this interaction, while those who experienced the intelligent prototype first smiled less overall while interacting with the less intelligent robot.
220
221 From the interaction with the intelligent version of the robot, some interesting quotes were:
222
223 * “I think it’s perfect, the robot is very friendly. I liked that the robot sat down with me after a while.”
224 * “The interaction felt quite natural.”
225 * “Nao answered pretty quickly, you don’t have to wait for an answer. It is quite a happy robot.”
226 * “Suggestion to eat was still a bit on the side, a little subtle if I would have dementia.”
227 * “Very nice, calming, I could have stayed longer with the music.”
228 * “It was good, natural, understands what I’m saying.”
229
230 Some participants clearly expressed how friendly they found the intelligent version of the robot. The sequence of the interactions seemed to impact their feelings about this version less than it did for the less intelligent version.
231 Some reported that the interaction felt natural and intuitive.
232 As for the music, some participants told us that the music was a useful and pleasant addition to the interaction with the robot.
233 As for the suggestions to eat and drink, one participant reported that they were perhaps too friendly and too subtle.
234 From our observations, it seemed as if participants were either smiling more during the interaction with the intelligent version of the robot or concentrating on the interaction more carefully compared to the interaction with the less intelligent version of the robot.
235
236 = Discussion =
237
238 From the results, we can see that the more advanced robot shows advantages over the simple version in multiple categories. Hints of better performance can be seen in other categories, but no conclusions should be drawn from results that lack statistical significance.
239
240 As for the eating, Figure 1 shows that both robots had limited success in causing people to eat; still, they could cause the patients to eat more regularly if triggered by timers or other suitable systems. The advanced robot also seems slightly better at reminding. However, the long-term effects of reminding should be researched further before concluding whether the demonstrated robot platform, or a similar one, would cause the patients to eat more regularly. It is also unclear how the test setup and the limited choice of food affected the eating.
241
242 Based on the participants' answers regarding music, seen in Figure 2, most of them were either indifferent to or liked the music. Also, as the participants found the advanced robot more likeable at the 5% significance level (Table 8), and the advanced version was the only version with music, it seems likely that the music does make the interaction more pleasant. However, some of the likeability might be due to the other advanced features of the robot, and thus more research is needed to isolate the effect of the music.
243
244 The EVEA and partial Godspeed results can be seen in Figures 3-7 and Tables 1-8. The results show that, at the 5% significance level, both versions of the robot decreased sadness and anxiety in the participants. There are hints (at the 10% significance level) that the advanced robot also decreases feelings of anger and increases happiness, while the simple robot fails to show similar results. However, Table 7 shows that the differences between the mood changes during the interactions with the two versions are not statistically significant.
245
246 A Wilcoxon signed rank test for the partial Godspeed questionnaire (Table 8) shows that, at the 1% significance level, the intelligent robot is more likeable than the simple robot. With these results, it is likely that the more advanced robot is slightly preferable, and the participants might experience fewer negative emotions after the interaction with the robots, but it remains unclear whether the effect is stronger with the advanced robot.
247
248 The observations and interviews with the participants clearly demonstrated that, for now, a more friendly and intelligent robot does make the interaction more pleasant. The observations also support the questionnaire data in terms of the likeability difference between the two robot versions.
249
250 Analysis of the results surfaced some minor issues in the experiment, such as the lack of a comparison between two robots with similar features, with and without music. Practical limitations in the setup, such as the lack of different food options and some participants being aware of the design goals of the prototype, could also have interfered with the natural flow of the interaction. Despite these limitations, the research method was successful in extracting differences between the robot versions and brought up additional directions for future research.
251
252 The most interesting direction for future research would be a longer-term study of the effect of mealtime reminders on the health of the test subjects. Such a study would uncover the effect on eating frequency and on the development of the relationship with the robot: for example, would test subjects who were initially excited about the novel interaction develop negative feelings about the robot's supervision of their personal life?
253
254 Furthermore, an aspect that was not compared in this study is how many stroopwafels the participants ate while interacting with the robot. For now, the focus was on evaluating whether the robot causes the PwD (in this experiment, the participants) to eat at all. For future research, the amount of food consumed by the participants could also be taken into consideration.
255
256 Lastly, another topic to study is the difference between interactions with and without music. The effects of music could be studied with the music tailored to personal taste, and with every version of the robot tested both with and without music playback. This would make it possible to pinpoint the effects of the music without the other features causing variance.
257
258 = Conclusions =
259
260 From the results, it seems that in short-term interactions both robots do remind the persons of their hunger, although the test setup might have caused many people not to eat or not to be hungry when arriving. It would also seem that the music makes the discourse more enjoyable, as people did enjoy it, but it is unclear whether the observed mood improvements with the advanced robot, compared to the simple version, are due to the music, to other features of the advanced version, or simply to variance. The advanced robot seems slightly more enjoyable given the observed change in anxiety, but overall the results are inconclusive.
261
262 The long-term effects remain unclear and require further study. The short-term experiment shows promising results that motivate both developing such solutions further and conducting experiments on their long-term effects. In a longer experiment, the development of the human-robot interaction and the effect of constant mealtime reminders would likely begin to show, which could change the presented short-term results, for example by the robot becoming more enjoyable as it becomes familiar.
263
264 = Appendix =
265
266 == Experiment introduction for participants ==
267
268
269
270 Hi, we are <NAME> and <NAME> from Group 1 of the TU Delft Socio-Cognitive Engineering course; thank you for participating in our prototype evaluation experiment. The experiment is conducted as part of the TU Delft course on Socio-Cognitive Engineering and aims to evaluate the prototype designed during the course. The evaluated prototype is based on the Nao robot platform and is intended to improve the wellbeing of people suffering from dementia.
271
272 Consuming food and/or water can be part of the interaction between you and the robot. Therefore, we would like to ask whether you have any allergies. If you have a form of diabetes, please let us know before we start the first part of the experiment. You are also strongly encouraged to share any other health conditions that could be relevant to an experiment involving robots and food.
273
274 The link between the stimuli of the Nao robot and the triggering of epileptic seizures is not yet known. If you have ever experienced epileptic seizures, please let us know, so that we can see whether any special precautions are needed.
275
276 The experiment will last approximately 15-20 minutes and consists of two interaction sections with the Nao robot, as well as questionnaires before, between, and after the sections. We kindly ask you to act naturally during the experiment and to fill in the questionnaires truthfully and intuitively. Remember that we are evaluating the prototype's performance, not yours. You can stop the experiment at any time.
277
278 We will be collecting data from the questionnaires and recording some experiments; do you agree to your experiment being recorded? All data excluding the recordings will be anonymised before analysis and storage. The recordings will not be shared with third parties. After the experiment, you have the right to ask for information about the collected data and to revoke the right to use it. We kindly ask you not to share any information about the experiment with other participants.
279 Do you have any questions?
280
281 == After research interview ==
282
283 Setup:
284 The test subject has finished both parts of the experiment. Before leaving, the test conductor(s) sit down with them and ask the following questions in a discussion about the experiment. The discussion can flow freely, but the following topics should be covered.
285
286 Topics:
287 - Emotions before / during / after the interaction with the robot
288 - Agitation due to the robot suggesting eating
289 - Effect of music on the general feeling of the situation
290 - Feeling of company during eating
291 - Effectiveness of eating/drinking suggestions
292
293 Questions:
294 - Did you eat or drink anything during the experiment?
295 - Were you feeling hungry/thirsty beforehand and did the discourse change this?
296 - On a scale of 1-10, how likely would you have eaten/drank without the robot suggesting it?
297 - What did the interaction with the robot feel like?
298 - With the more intelligent version?
299 - With the less intelligent version?
300 - What did you feel like when the robot suggested you should eat/drink?