
Version 60.1 by Vishruty Mittal on 2022/04/02 11:53

Evaluation is an iterative process in which the initial iterations focus on examining whether the proposed idea works as intended. We therefore first want to understand how realistic and convincing the provided dialogues and suggested activities are, and whether they can prevent people from wandering. To examine this, we conducted a small pilot study with students who role-played having dementia. We then observed their interaction with Pepper to examine how effective our dialogue flow is in preventing people from wandering.


= Problem statement and research questions =

**Goal**: How effective are music and dialogue in preventing people with dementia (PwD) from wandering?

**Research Questions (RQ):**


1. What percentage of people are prevented from going out unsupervised? (Quantitative) (CL01, CL05)
1. How does the interaction change the participant's mood? (CL02)
1. Can the robot respond appropriately to the participant's intention? (CL03)
1. How do the participants react to the music? (CL04)
1. Does the activity that the robot suggests prevent people from wandering/leaving? (CL06)
1. Can Pepper identify and catch the attention of the PwD?

//Future research questions//

1. Does the interaction with Pepper make PwD come back to reality? (CL08)
1. Does the interaction with Pepper make PwD feel they are losing their freedom? (CL09)
1. Does preventing the participant from going out alone make them feel dependent? (CL10)

= Method =

We conducted a between-subjects study with students who played the role of a person with dementia. Data was collected with a questionnaire that participants filled out before and after interacting with Pepper. The questionnaire captures different aspects of the conversation, along with the participant's mood before and after the interaction.

== Participants ==

17 students played the role of having dementia. They were divided into two groups: one group of 11 participants interacted with the intelligent robot (group 1), while the other group of 6 students interacted with the unintelligent robot (group 2).
It is assumed that all participants live at the same care center.
Before they start, participants can choose how stubborn they want to be and where they want to go.

== Experimental design ==

All questions collect quantitative data, using a 5-point Likert scale wherever applicable.

1. Observe the participant's mood and how the conversation goes; note the level of aggression (tone, volume, pace).
1. Observe whether the mood improves and whether the decision to leave is changed.
1. Observe how natural the conversation is (i.e. whether the conversation makes sense).
1. Participants fill out questionnaires.
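The observation steps above amount to a small coding sheet per participant; the sketch below shows what such a record could look like. The field names and 1-5 scales are our own illustrative assumptions, not the study's actual instrument:

```python
from dataclasses import dataclass

@dataclass
class ObservationRecord:
    """One observer coding-sheet row per participant (illustrative fields)."""
    participant_id: int
    aggression_tone: int       # 1 (calm) .. 5 (aggressive)
    aggression_volume: int     # 1 (quiet) .. 5 (loud)
    aggression_pace: int       # 1 (slow) .. 5 (rushed)
    mood_improved: bool        # did the mood visibly improve?
    decision_changed: bool     # did the participant decide to stay inside?
    conversation_natural: int  # 1 (confusing) .. 5 (natural)

# example row for one participant
row = ObservationRecord(participant_id=1, aggression_tone=2, aggression_volume=1,
                        aggression_pace=2, mood_improved=True,
                        decision_changed=True, conversation_natural=4)
print(row.decision_changed)  # True
```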

== Tasks ==

Because our participants only play the role of having dementia, we give them a level of stubbornness/willpower with which they will try to leave. We try to detect this level with the robot.
Participants from group 1 (using the intelligent robot) are also given one of the reasons to leave listed below:

1. going to the supermarket
1. going to the office
1. going for a walk

After this preparation, the participant is told to (try to) leave the building. The participant and the robot then have an interaction in which the robot tries to convince the participant to stay inside.
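The interaction above can be sketched as a small dialogue flow: the robot matches the detected reason for leaving to a counter-argument and, for more stubborn participants, adds an activity suggestion. This is only an illustrative sketch, not the exact dialogue running on Pepper; the counter-arguments and stubbornness threshold are made up:

```python
# Illustrative dialogue flow (not the actual Pepper implementation):
# map the participant's stated reason to a counter-argument, and add a
# concrete alternative activity for more stubborn participants.
COUNTER_ARGUMENTS = {
    "supermarket": "Your groceries were already delivered this morning.",
    "office": "It is Sunday today, so the office is closed.",
    "walk": "It is raining right now; shall we go for a walk later?",
}

def robot_response(reason, stubbornness):
    """Return the robot's next utterance for a detected reason and a
    stubbornness level (1 = easily convinced .. 3 = very stubborn)."""
    counter = COUNTER_ARGUMENTS.get(reason, "Please stay inside, it is safer.")
    if stubbornness <= 1:
        return counter
    return counter + " How about a coffee in the living room instead?"

print(robot_response("office", 2))
```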


== Measures ==

We measure the outcome both physically and emotionally.
Physically: whether the participant was stopped from leaving the building or not.
Emotionally: we evaluate their responses to the robot and observe their mood before and after the interaction.
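With these two measures, the analysis reduces to a prevention rate (physical) and a per-participant mood delta on the 5-point scale (emotional). A minimal sketch with made-up example numbers, not the real questionnaire data:

```python
def prevention_rate(stayed_inside):
    """Physical measure: fraction of participants stopped from leaving.
    `stayed_inside` holds 1 if the participant stayed, 0 otherwise."""
    return sum(stayed_inside) / len(stayed_inside)

def mood_deltas(before, after):
    """Emotional measure: per-participant change on the 5-point mood scale."""
    return [a - b for b, a in zip(before, after)]

# made-up example: 8 of 11 participants stayed inside
stayed = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(round(prevention_rate(stayed), 3))  # 0.727

# made-up mood ratings before/after for four participants
print(mood_deltas([3, 2, 4, 3], [4, 2, 4, 5]))  # [1, 0, 0, 2]
```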


== Procedure ==

{{html}}
<table width='100%'>
<tr>
<th width='50%'>Group 1</th>
<th width='50%'>Group 2</th>
</tr>
<tr>
<td>intelligent robot</td>
<td>unintelligent robot</td>
</tr>
<tr>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Tell them their level of stubbornness and reason to leave<br>
4. Fill out a question about their current mood (in their role)<br>
5. Let the participant interact with the robot<br>
6. While the participant is interacting, observe the conversation with the robot<br>
7. Let the participant fill out the questionnaire about their experience after the interaction
</td>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Fill out a question about their current mood (in their role)<br>
4. Let the participant interact with the robot<br>
5. Let the participant fill out the questionnaire about their experience after the interaction<br>
</td>
</tr>
</table>

{{/html}}

== Material ==

Pepper, a laptop, a door, and music.

= Results =

{{html}}
<!--=== Comparison between intelligent (cond. 1) and less intelligent (cond. 2) prototype ===

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Stay_inside.svg" width="500" height="270" />
{{/html}}

None of the participants who interacted with the less intelligent robot were prevented from leaving. Still, 3 people assigned to condition 1 weren't convinced to stay inside. A failure rate of 27.3% is too high for this application, since people could be in danger if the system fails.

**Mood evolution**
[[image:mood_only.png||height="150"]]

{{html}}<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_before.svg" width="500" height="270" /><br>
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_after.svg" width="500" height="270" /><br>{{/html}}

Regarding the changes in mood, 4 out of 11 participants assigned to condition 1 showed an increase in mood throughout the interaction. Only one participant felt less happy afterward; the rest stayed at the same level of happiness. The overall mood shifted toward happier in general (as the graphic above shows), even though only small improvements in mood were detected (<= 2 steps on the scale).
The participants from condition 2 mostly stayed at the same mood level: 2 were less happy, and one participant was happier afterward. Comparing both conditions, it becomes clear that condition 1 had a more positive impact on the participants' mood.

It is also interesting that none of the participants were in a really bad mood at the beginning or the end.

==== Condition 1 - intelligent prototype: ====

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_reco.svg" width="500" height="270" /> <br>
{{/html}}

8 out of 11 participants answered that they did not know the music that was played. When we told them the title of the song afterward, most participants did know it. Why didn't they recognize it during the interaction?
There are two possible reasons: the part of the song we picked was too short to be recognized, or it was not the most significant part of the song. For example, the beginning of "Escape (The Piña Colada Song)" is not as well known as its chorus. Another reason could be that the participant was distracted or confused by the robot and therefore couldn't listen carefully to the music.

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_fit.svg" width="500" height="270" /> <br>
{{/html}}

Only 4 out of 11 people agreed that the music fit the situation. One of our claims, to use music that fits the situation or place, was therefore not met, and the music didn't have the intended effect. Even though we chose the music carefully and discussed our choice at length, it was hard to find music that different people connect with a certain place or activity. An approach to improve this could be to use an individual playlist for each participant.


==== Condition 2 - less intelligent prototype: ====

Participants assigned to condition 2 weren't convinced to stay. We saw that most of them tried to continue talking to Pepper when it raised its arm to block the door, even though it didn't listen. They were surprised by Pepper's reaction and asked for a reason why they were not allowed to leave. In order to have a natural conversation flow, the robot should provide an explanation for each scenario that tells why the person is not allowed to leave. This confirms that our approach of giving a reason to stay inside might help convince PwD to stay inside.

=== Problems that occurred during the evaluation ===

1. lots of difficulties with speech recognition:
1.1. even though the participant said one of the expected words, Pepper misunderstood it and continued down a wrong dialogue path
1.1. if the participant started to talk before Pepper was listening (eyes turning blue), it missed a "yes" or "no" at the beginning of the sentence, which caused misunderstandings
1. problems with face detection
1.1. due to bad light, the face was not recognized
1.1. if the participant passed Pepper from the side, the face was not recognized. Therefore, we told people to walk towards Pepper from the front. In most cases that helped detect the face.
1.1. face detection doesn't work with face masks. This could lead to huge problems when using Pepper in care homes.

One of the most frequent and noticeable reactions from participants was **confusion**. This feeling was caused by two main factors:
misunderstandings in speech recognition, which led to unsuitable answers from Pepper, as well as the unsuitable environment and setting of our evaluation.
The reasons for failure in speech recognition are listed above. An unsuitable answer can e.g. be an argument to stay inside that doesn't fit the participant's reason to leave. Also, some people explained in a long sentence that they didn't like the provided activity and still wanted to leave. If speech recognition fails in this case and Pepper understands that the participant would like to do the activity, it seems to encourage them to leave instead of doing the activity. This leads to the total opposite of our intention.
Furthermore, we found that our prototype doesn't fit the environment of the lab. We encouraged the participants to do activities that they couldn't do in the lab environment (go to the living room, have a coffee, or do a puzzle). When the robot asks whether you want to do the activity, most people don't know how to react and are insecure about how to answer. Participants "froze" in front of the robot or simply left the room. -->
{{/html}}

=== RQ1: Are people convinced not to go out unsupervised? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ1.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ2: How does the interaction change the participant's mood? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ2.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ3: Can the robot respond appropriately to the participant's intention? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ3.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ4: How do the participants react to the music? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ4.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ5: Does the activity that the robot suggests prevent people from wandering/leaving? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ5.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ6: Can Pepper identify and catch the attention of the PwD? ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RQ6.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== Reliability Scores ===

{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/RelScores.jpg?height=250&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}
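Reliability of multi-item questionnaire scales is commonly summarised with Cronbach's alpha. We do not reproduce our actual analysis script here, but a minimal sketch of the standard formula (rows = participants, columns = items of one scale) looks like this:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of participant rows, one column per
    questionnaire item (population variances, standard formula)."""
    k = len(scores[0])  # number of items in the scale

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    item_variances = [variance([row[i] for row in scores]) for i in range(k)]
    total_variance = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)

# perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```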

= Limitation =

* **Lab Environment**: The lab environment differs from a care home, so the participants found it difficult to act on the suggestions made by Pepper. For example, when Pepper asked someone to visit the living room, it created confusion among the participants regarding their next action.

* **Role-Playing**: The participants in the experiment are not actual patients suffering from dementia. Hence, it is naturally difficult for them to enact the situations and replicate the mental state of an actual person with dementia.

* **Speech Recognition**: The speech recognition module inside Pepper is not perfect. Therefore, in certain cases, Pepper misinterpreted words spoken by the participants and triggered an erroneous dialogue flow. These problems commonly occurred with words that sound similar, such as "work" and "walk". Moreover, some additional hardware limitations hampered the efficiency of the speech recognition system. One prominent issue is that the microphone within Pepper is only active when the speaker is turned off; a blue light in Pepper's eyes indicates when the microphone is listening. Since most of the participants were not used to interacting with Pepper, they found it difficult to keep this limitation in mind while trying to have a natural conversation.

* **Face Detection**: The face recognition module within Pepper is also rudimentary. It cannot detect partial faces, e.g. when participants approach from the side. Adding to the problem, the lighting conditions in the lab were not sufficient for the reliable functioning of the face recognition module. Hence, Pepper failed to notice the participant in some cases and did not start the dialogue flow.
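The "work"/"walk" confusion suggests a simple pre-deployment check: scan the recognition vocabulary for easily confusable pairs. A rough sketch using string similarity as a crude stand-in for acoustic similarity (a real check would use phonetic distance, which we do not model here; the 0.5 threshold is an arbitrary illustrative choice):

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a, b):
    """Crude proxy for confusability: character-level similarity ratio."""
    return SequenceMatcher(None, a, b).ratio()

def risky_pairs(vocabulary, threshold=0.5):
    """Flag vocabulary pairs the recogniser might mix up."""
    return [(a, b) for a, b in combinations(vocabulary, 2)
            if similarity(a, b) >= threshold]

vocab = ["work", "walk", "supermarket", "coffee"]
print(risky_pairs(vocab))  # [('work', 'walk')]
```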

= Conclusions =

* People who liked the suggested activity tended to stay in
* People who knew the music found it more fitting
* People are more likely to be convinced to stay in by the intelligent prototype
* We cannot conclude whether moods were improved
* We need to experiment with the actual target user group to draw concrete conclusions
* Experiment with personalization

= Future Work =

* **Personalisation**: Personalise music and activity preferences according to the person interacting with Pepper.
* **Robot Collaboration**: Collaborate with other robots such as Miro to assist a person with dementia while going for a walk, instead of the caretaker.
* **Recognise Person**: For a personalised experience, it is essential that Pepper can identify each person based on an internal database.
* **Fine-Tune Speech Recognition**: Improvements to the speech recognition module are necessary before actual deployment in a care home. Additionally, support for multiple languages can be considered to engage non-English-speaking people.