Evaluation is an iterative process in which the initial iterations focus on examining whether the proposed idea works as intended. We therefore first want to understand how realistic and convincing the provided dialogues and suggested activities are, and whether they can prevent people from wandering. To examine this, we conduct a small pilot study with students who role-play having dementia. We then observe their interaction with Pepper to examine how effective our dialogue flow is in preventing people from wandering.


= Problem statement and research questions =

**Goal**: How effective are music and dialogue in preventing people with dementia from wandering?

**Research Questions (RQ):**

1. What percentage of people are prevented from going out unsupervised? (Quantitative) (CL01, CL05)
1. How does the interaction change the participant's mood? (CL02)
1. Can the robot respond appropriately to the participant's intention? (CL03)
1. How do the participants react to the music? (CL04)
1. Does the activity that the robot suggests prevent people from wandering/leaving? (CL06)
1. Can Pepper identify and catch the attention of the PwD?

//Future research questions//

1. Does the interaction with Pepper bring PwD back to reality? (CL08)
1. Does the interaction with Pepper make PwD feel they are losing their freedom? (CL09)
1. Does preventing the participant from going out alone make them feel dependent? (CL10)

= Method =

We conduct a between-subjects study with students who play the role of having dementia. Data is collected with a questionnaire that participants fill out before and after interacting with Pepper. The questionnaire captures different aspects of the conversation as well as the participant's mood before and after the interaction.

The independent variable is whether Pepper tries to distract the user by suggesting different activities together with corresponding music. The dependent variable is whether people are prevented from leaving the care home, which lets us measure the effectiveness of music and activities. We therefore developed two prototype designs:

Design X - the full interaction flow in which Pepper suggests activities and uses music to distract people from leaving.
Design Y - the control condition in which Pepper simply tries to stop people from leaving by physically keeping its hand on the door.
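To make the contrast between the two conditions concrete, below is a minimal sketch of both flows in Python. It is illustrative only: the real prototypes run as dialogue flows on Pepper, and the activity/music pairs shown here are our own assumptions rather than the exact content used in the study.

{{code language="python"}}
# Illustrative sketch only: the real prototypes are Pepper dialogue flows.
# The activity/music pairs below are assumptions, not the exact study content.

ACTIVITIES = {
    # reason to leave -> (suggested activity, music played while suggesting it)
    "supermarket": ("write the grocery list together", "a kitchen-themed song"),
    "office": ("do a puzzle in the living room", "calm instrumental music"),
    "walk": ("have a coffee first", "Escape (The Pina Colada Song)"),
}

def design_x(reason, stays_after_suggestion):
    """Design X: suggest an activity with matching music, then record the outcome."""
    activity, music = ACTIVITIES.get(reason, ("have a chat", "a familiar song"))
    print(f"Pepper: I see you want to go to the {reason}.")
    print(f"Pepper: How about we {activity} instead? [playing: {music}]")
    return stays_after_suggestion  # True = participant was convinced to stay inside

def design_y(stays_after_blocking):
    """Design Y (control): no dialogue, Pepper only blocks the door with its hand."""
    print("Pepper: (raises its arm and keeps a hand on the door)")
    return stays_after_blocking

# Example run, one simulated participant per condition
print("stayed inside:", design_x("supermarket", stays_after_suggestion=True))
print("stayed inside:", design_y(stays_after_blocking=False))
{{/code}}

In Design X the branch taken depends on the participant's stated reason to leave, whereas Design Y never adapts to the participant.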

== Participants ==

Seventeen students who play the role of having dementia take part. They are divided into two groups: one group (11 participants) interacts with the intelligent robot (group 1), while the other group (6 participants) interacts with the unintelligent robot (group 2).
It is assumed that all participants live at the same care center.
Before they start, they can choose how stubborn they want to be and where they want to go.

== Experimental design ==

All questions collect quantitative data, using a 5-point Likert scale wherever applicable.

1. Observe the participant's mood and see how the conversation goes. Observe the level of aggression (tone, volume, pace).
1. Observe whether the mood improves and whether the decision to leave has changed.
1. Observe how natural the conversation is (whether the conversation makes sense).
1. Participants fill out questionnaires.

== Tasks ==

Because our participants only play the role of having dementia, we give them a level of stubbornness/willpower with which they try to leave. We try to detect this level with the robot (a sketch of how this could be estimated is given at the end of this subsection).
Participants from group 1 (using the intelligent robot) are also given one of the reasons to leave listed below:

1. going to the supermarket
1. going to the office
1. going for a walk

After this preparation, the participant is told to (try to) leave the building. The participant and robot then have an interaction in which the robot tries to convince the participant to stay inside.
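One way the robot could estimate the assigned stubbornness level is to count how often the participant rejects a suggestion before either agreeing to stay or walking out. The sketch below is purely an assumption on our side; the three levels and their thresholds are not part of the current prototype.

{{code language="python"}}
# Illustrative only: a coarse stubbornness estimate based on refused suggestions.
# The three levels and their thresholds are assumptions for this pilot.

def estimate_stubbornness(refusals: int) -> str:
    """Map the number of refused suggestions to a coarse stubbornness level."""
    if refusals == 0:
        return "low"      # accepted Pepper's first suggestion
    if refusals <= 2:
        return "medium"   # needed some convincing
    return "high"         # kept insisting on leaving

# Example: a participant who rejected two suggestions before agreeing to stay
print(estimate_stubbornness(2))  # -> "medium"
{{/code}}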

== Measures ==

We measure the outcome both physically and emotionally.
Physically: whether the participant was stopped from leaving the building or not.
Emotionally: we evaluate the participant's responses to the robot and observe their mood before and after the interaction.
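As a minimal sketch (not the actual analysis script), the two measures could be derived per participant from the pre- and post-interaction questionnaires as follows; the field names and the 5-point mood scale are our own assumptions.

{{code language="python"}}
# Illustrative sketch of deriving both measures per participant.
# Field names and the 1-5 mood scale are assumptions, not the real questionnaire.

from dataclasses import dataclass

@dataclass
class Session:
    stayed_inside: bool  # physical measure: was the participant stopped from leaving?
    mood_before: int     # 1 (very unhappy) .. 5 (very happy), before the interaction
    mood_after: int      # same scale, after the interaction

def measures(s: Session) -> dict:
    return {
        "prevented_from_leaving": s.stayed_inside,    # physical measure
        "mood_change": s.mood_after - s.mood_before,  # emotional measure
    }

print(measures(Session(stayed_inside=True, mood_before=3, mood_after=4)))
# -> {'prevented_from_leaving': True, 'mood_change': 1}
{{/code}}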

== Procedure ==

{{html}}
<table width='100%'>
<tr>
<th width='50%'>Group 1</th>
<th width='50%'>Group 2</th>
</tr>
<tr>
<td>intelligent robot</td>
<td>unintelligent robot</td>
</tr>
<tr>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Tell them their level of stubbornness and reason to leave<br>
4. Let them fill out a question about their current mood (in their role)<br>
5. Let the user interact with the robot<br>
6. While the user is interacting, we observe the conversation with the robot<br>
7. Let the user fill out the questionnaire about their experience after the interaction
</td>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Let them fill out a question about their current mood (in their role)<br>
4. Let the user interact with the robot<br>
5. Let the user fill out the questionnaire about their experience after the interaction<br>
</td>
</tr>
</table>
{{/html}}

== Material ==

Pepper, laptop, door, and music.

= Results =

{{html}}
<!--=== Comparison between intelligent (cond. 1) and less intelligent (cond. 2) prototype ===

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Stay_inside.svg" width="500" height="270" />
{{/html}}

None of the participants who interacted with the less intelligent robot were prevented from leaving. Still, 3 people assigned to condition 1 weren't convinced to stay inside. A failure rate of 27.3% is too high for this application, since people could be in danger if the system fails.

**Mood evolution**
[[image:mood_only.png||height="150"]]

{{html}}<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_before.svg" width="500" height="270" /><br>
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_after.svg" width="500" height="270" /><br>{{/html}}

Regarding the changes in mood, 4 out of 11 participants assigned to condition 1 had an increase in mood throughout the interaction. Only one participant felt less happy afterward; the rest stayed at the same level of happiness. The overall mood shifted towards happier in general (as can be seen in the graphic above), even though only small improvements in mood were detected (<= 2 steps on the scale).
The participants from condition 2 mostly stayed at the same mood level: 2 were less happy and one participant was happier afterward. Comparing both conditions, it becomes clear that condition 1 had a more positive impact on the participants' mood.

It is also interesting that none of the participants was in a really bad mood at the beginning or the end.

==== Condition 1 - intelligent prototype: ====

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_reco.svg" width="500" height="270" /> <br>
{{/html}}

8 out of 11 participants answered that they did not know the music that was played. When we told them the title of the song afterward, most participants did know the song. Why didn't they recognize it during the interaction?
This can have two reasons: the part of the song we picked was too short to be recognized, or it was not the most significant part of the song. For example, the beginning of "Escape (The Pina Colada Song)" is not as well known as its chorus. Another reason could be that the participant was distracted or confused by the robot and therefore couldn't listen carefully to the music.

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_fit.svg" width="500" height="270" /> <br>
{{/html}}

Only 4 out of 11 people agreed that the music fit the situation. One of our claims, to use music that fits the situation or place, is therefore not fulfilled, and the music didn't have the intended effect. Even though we carefully chose the music and discussed our choice at length, it was hard to find music that different people connect with a certain place or activity. An approach to improve this could be to use an individual playlist for each participant.


==== Condition 2 - less intelligent prototype: ====

Participants assigned to condition 2 weren't convinced to stay inside. We saw that most of them tried to continue talking to Pepper when it raised its arm to block the door, even though it didn't listen. They were surprised by Pepper's reaction and asked for a reason why they were not allowed to leave. In order to have a natural conversation flow, the robot should provide an explanation for each scenario that tells why the person is not allowed to leave. This suggests that our approach of giving reasons to stay inside might be helpful to convince PwD to stay inside.

=== Problems that occurred during the evaluation ===

1. Lots of difficulties with speech recognition:
1.1. Even though the participant said one of the expected words, Pepper understood it incorrectly and continued along a wrong path.
1.2. If the participant started to talk before Pepper was listening (eyes turning blue), it missed a "yes" or "no" at the beginning of the sentence, which caused misunderstandings.
1. Problems with face detection:
2.1. Due to bad light, the face was not recognized.
2.2. If the participant passed Pepper from the side, the face was not recognized. Therefore, we told people to walk towards Pepper from the front. In most cases that helped detect the face.
2.3. Face detection doesn't work with face masks. This could lead to huge problems for the usage of Pepper in care homes.

One of the most frequent and noticeable reactions from participants was **confusion**. This feeling was caused by two main factors:
misunderstandings from speech recognition, which lead to unsuitable answers from Pepper, as well as the unsuitable environment and setting of our evaluation.
The reasons for failure in speech recognition are listed above. An unsuitable answer can, for example, be an argument to stay inside that doesn't fit the participant's reason to leave. Also, some people explained in a long sentence that they don't like the provided activity and still want to leave. If the speech recognition fails in this case and Pepper understands that they would like to do the activity, it seems as if it encourages them to leave instead of doing the activity. This leads to the total opposite of our intention.
Furthermore, we found out that our prototype doesn't fit the environment of the lab. We encourage the participants to do activities that they can't do in the lab environment (go to the living room, have a coffee, or do a puzzle). If the robot asks whether they want to do the activity, most people don't know how to react and are insecure about how to answer. Participants "freeze" in front of the robot or just leave the room. -->
{{/html}}

=== RQ1: Are people convinced not to go out unsupervised? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ1.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ2: How does the interaction change the participant's mood? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ2.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ3: Can the robot respond appropriately to the participant's intention? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ3.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ4: How do the participants react to the music? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ4.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ5: Does the activity that the robot suggests prevent people from wandering/leaving? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ5.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ6: Can Pepper identify and catch the attention of the PwD? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ6.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== Reliability Scores ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RelScores.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

= Limitations =

* **Lab Environment**: The lab environment is different from a care home, so the participants found it difficult to act on the suggestions made by Pepper. For example, when Pepper asked someone to visit the living room, the participants were confused about what their next action should be.

* **Role-Playing**: The participants in the experiment are not actual patients suffering from dementia. Hence it is naturally difficult for them to enact the situations and replicate the mental state of a person who actually suffers from dementia.

* **Speech Recognition**: The speech recognition module inside Pepper is not perfect. In certain cases, Pepper misinterpreted words spoken by the participants and triggered an erroneous dialogue flow. The problems commonly occurred with words that sound similar, such as "work" and "walk". Moreover, some hardware limitations hampered the efficiency of the speech recognition system. One prominent issue is that the microphone inside Pepper is only active when the speaker is turned off; a blue light in Pepper's eyes indicates when the microphone is listening. Since most participants are not used to interacting with Pepper, they found it difficult to keep this limitation in mind while trying to have a natural conversation. A possible mitigation is sketched after this list.

* **Face Detection**: The face recognition module within Pepper is also rudimentary. It cannot detect partially visible faces or participants who approach from the side. Adding to the problem, the lighting conditions in the lab were not sufficient for the reliable functioning of the face recognition module. Hence Pepper failed to notice the participant in some cases and did not start the dialogue flow.
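As a rough illustration of a possible mitigation for the speech recognition issues (an assumption on our side, not part of the current prototype), the recognized word could be checked against the small vocabulary expected at the current dialogue step and accepted only when the recognition is confident and close enough; otherwise Pepper would re-prompt instead of branching on a guess.

{{code language="python"}}
# Illustrative sketch only: filter raw recognition results before branching.
# The vocabulary, confidence values, and thresholds below are assumptions.

import difflib

EXPECTED = ["yes", "no", "work", "walk", "supermarket"]

def accept(heard: str, confidence: float, threshold: float = 0.5):
    """Return the matched expected word, or None to re-prompt the participant."""
    if confidence < threshold:
        return None  # too unsure: ask again instead of guessing a dialogue path
    match = difflib.get_close_matches(heard.lower(), EXPECTED, n=1, cutoff=0.8)
    return match[0] if match else None

print(accept("walk", 0.9))  # -> 'walk'
print(accept("wok", 0.3))   # -> None (re-prompt)
{{/code}}

On the robot this would roughly correspond to keeping the active vocabulary small at each dialogue step and ignoring low-confidence recognitions.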

= Conclusions =

* People who liked the suggested activity tended to stay in
* People who knew the music found it more fitting
* People were more often convinced to stay in with the intelligent prototype
* We cannot conclude whether moods were improved
* We need to experiment with the actual target user group to draw concrete conclusions
* Experiment with personalization

= Future Work =

* **Personalisation**: Personalize music and activity preferences according to the person interacting with Pepper.
* **Robot Collaboration**: Collaborate with other robots, such as Miro, to assist a person with dementia on a walk instead of the caretaker doing so.
* **Recognise Person**: For a personalised experience, it is essential that Pepper is able to identify each person based on an internal database.
* **Fine-Tune Speech Recognition**: Improvements to the speech recognition module are necessary before the actual deployment of the project in a care home. Additionally, support for multiple languages can be considered to engage with non-English-speaking people.