The best way to test our prototype would be a study with persons with dementia. However, testing the robot in a real environment would be very time-consuming, because it is not predictable if and when people with dementia start to wander. Such a study is therefore out of the scope of this project.
Nevertheless, we wanted to get a first impression of how realistic and convincing the provided dialogues and suggested activities are. In a small study with students who play the role of a person with dementia, we observe the interaction with the robot in order to find out how effective it is in preventing people from wandering.

= Problem statement and research questions =

**Goal**: Determine how effective music and dialogue are in preventing people with dementia from wandering.

**Research Questions (RQ):**

1. What percentage of people are prevented from going out unsupervised? (Quantitative) (CL01, CL05)
1. How does the interaction change the participant's mood? (CL02)
1. Can the robot respond appropriately to the participant's intention? (CL03)
1. How do the participants react to the music? (CL04)
1. Does the activity that the robot suggests prevent people from wandering/leaving? (CL06)
1. Can Pepper identify and catch the attention of the PwD?

//Future research questions//

1. Does the interaction with Pepper bring the PwD back to reality? (CL08)
1. Does the interaction with Pepper make the PwD feel that they are losing their freedom? (CL09)
1. Does preventing the participant from going out alone make them feel dependent? (CL10)

= Method =

We conduct a between-subject study with students who play the role of a person with dementia. Data will be collected with a questionnaire (filled out before and after participation) and by observing the participants' body language and the way they respond to Pepper.

== Participants ==

18 students who play the role of a person with dementia take part in the study. They are divided into two groups: group 1 (11 participants) interacts with the intelligent robot, while group 2 (7 participants) interacts with the unintelligent robot.
It is assumed that all participants live at the same care center.
Before they start, they can choose how stubborn they want to be and where they want to go.

== Experimental design ==

All questions collect quantitative data, using a 5-point Likert scale wherever applicable.

1. Observe the participant's mood and how the conversation goes; note the level of aggression (tone, volume, pace).
1. Observe whether the mood improves and whether the decision to leave has changed.
1. Observe how natural the conversation is (i.e. whether the conversation makes sense).
1. Participants fill out questionnaires.

== Tasks ==

Because our participants only play the role of having dementia, we will give them a level of stubbornness/willpower with which they try to leave. We try to detect this level with the robot.
Participants from group 1 (using the intelligent robot) will also be given one of the reasons to leave listed below:

1. going to the supermarket
1. going to the office
1. going for a walk

After this preparation, the participant is told to (try to) leave the building. The participant and the robot then have an interaction in which the robot tries to convince the participant to stay inside.

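As a small illustration of how this between-subject assignment could be prepared, the sketch below distributes the three reasons to leave over the participants of group 1 at random. The participant IDs, the random seed, and the script itself are illustrative and not part of the actual study material.

{{code language="python"}}
# Sketch: randomly distribute the three "reason to leave" scenarios over group 1.
import random

REASONS = ["going to the supermarket", "going to the office", "going for a walk"]
group1 = [f"P{i:02d}" for i in range(1, 12)]   # 11 hypothetical participant IDs

random.seed(42)                                # fixed seed keeps the assignment reproducible
shuffled = random.sample(group1, len(group1))  # random order without repetition
assignment = {p: REASONS[i % len(REASONS)] for i, p in enumerate(shuffled)}

for participant in group1:
    print(f"{participant}: {assignment[participant]}")
{{/code}}
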
== Measures ==

We will measure the effect both physically and emotionally.
Physically: whether the participant was stopped from leaving the building or not.
Emotionally: we evaluate the participants' responses to the robot and observe their mood before and after the interaction.

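The following is a minimal sketch of how these two measures could be summarised per condition after the sessions. The CSV file name and column names are hypothetical; the actual questionnaire export may look different.

{{code language="python"}}
# Sketch: summarise the physical and emotional measure per condition.
# Assumed (hypothetical) CSV columns: participant, condition, stopped (0/1),
# mood_before (1-5 Likert), mood_after (1-5 Likert).
import csv
from collections import defaultdict

stopped = defaultdict(list)      # physical measure per condition
mood_change = defaultdict(list)  # emotional measure per condition

with open("evaluation_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        cond = row["condition"]
        stopped[cond].append(int(row["stopped"]))
        mood_change[cond].append(int(row["mood_after"]) - int(row["mood_before"]))

for cond in sorted(stopped):
    n = len(stopped[cond])
    pct = 100 * sum(stopped[cond]) / n
    avg_delta = sum(mood_change[cond]) / n
    print(f"Condition {cond}: {pct:.0f}% stopped from leaving, "
          f"average mood change {avg_delta:+.1f} Likert steps (n={n})")
{{/code}}
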
== Procedure ==

{{html}}
<table width='100%'>
<tr>
<th width='50%'>Group 1</th>
<th width='50%'>Group 2</th>
</tr>
<tr>
<td>intelligent robot</td>
<td>unintelligent robot</td>
</tr>
<tr>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Tell them their level of stubbornness and reason to leave<br>
4. Let them fill out the question about their current mood (in their role)<br>
5. Let the participant interact with the robot<br>
6. While the participant is interacting, observe the conversation with the robot<br>
7. Let the participant fill out the questionnaire about their experience after the interaction
</td>
<td>
1. Start with a short briefing on what we expect from the participant<br>
2. Let them fill out the informed consent form<br>
3. Let them fill out the question about their current mood (in their role)<br>
4. Let the participant interact with the robot<br>
5. Let the participant fill out the questionnaire about their experience after the interaction
</td>
</tr>
</table>

{{/html}}

== Material ==

Pepper, laptop, door, and music.

= Results =
{{html}}
<!--=== Comparison between intelligent (cond. 1) and less intelligent (cond. 2) prototype ===

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Stay_inside.svg" width="500" height="270" />
{{/html}}

None of the participants who interacted with the less intelligent robot was prevented from leaving. Still, 3 people assigned to condition 1 weren't convinced to stay inside. A failure rate of 27.3% is too high for this application, since people could be in danger if the system fails.

**Mood evolution**
[[image:mood_only.png||height="150"]]

{{html}}<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_before.svg" width="500" height="270" /><br>
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/mood_after.svg" width="500" height="270" /><br>{{/html}}

Regarding the changes in mood, 4 out of 11 participants assigned to condition 1 had an increase in mood throughout the interaction. Only one participant felt less happy afterward; the rest stayed at the same level of happiness. The overall mood shifted towards happier (as you can see in the graphic above), even though only small improvements in mood were detected (<= 2 steps on the scale).
The participants from condition 2 mostly stayed at the same mood level; 2 were less happy, and one participant was happier afterward. Comparing both conditions, it becomes clear that condition 1 had a more positive impact on the participants' mood.

It is also interesting that none of the participants was in a really bad mood at the beginning or the end.

==== Condition 1 - intelligent prototype ====

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_reco.svg" width="500" height="270" /> <br>
{{/html}}

8 out of 11 participants answered that they did not know the music that was played. When we told them the title of the song afterward, most participants did know it. Why didn't they recognize it during the interaction?
This can have two reasons: the part of the song we picked was too short to be recognized, or it was not the most significant part of the song. For example, the beginning of "Escape (The Piña Colada Song)" is not as well known as its chorus. Another reason could be that the participant was distracted or confused by the robot and therefore couldn't listen carefully to the music.

{{html}}
<img src="/xwiki/wiki/sce2022group05/download/Test/WebHome/Music_fit.svg" width="500" height="270" /> <br>
{{/html}}

Only 4 out of 11 people agreed that the music fits the situation. One of our claims, to use music that fits the situation or place, is therefore not fulfilled, and the music didn't have the intended effect. Even though we chose the music carefully and discussed our choice a lot, it was hard to find music that different people connect with a certain place or activity. An approach to improve this could be to use an individual playlist for each participant.


==== Condition 2 - less intelligent prototype ====

Participants assigned to condition 2 weren't convinced to stay inside. We saw that most of them tried to continue talking to Pepper when it raised its arm to block the door, even though it didn't listen. They were surprised by Pepper's reaction and asked for a reason why they were not allowed to leave. In order to have a natural conversation flow, the robot should provide an explanation for each scenario that tells why the person is not allowed to leave. This confirms that our approach of giving a reason to stay inside might be helpful to convince PwD to stay inside.

=== Problems that occurred during the evaluation ===

1. Lots of difficulties with speech recognition:
1.1. Even though the participant said one of the expected words, Pepper understood it wrong and continued with a wrong dialogue path.
1.2. If the participant started to talk before Pepper was listening (eyes turning blue), it missed a "yes" or "no" at the beginning of the sentence, which caused misunderstandings.
1. Problems with face detection:
2.1. Due to bad light, the face was not recognized.
2.2. If the participant passed Pepper from the side, the face was not recognized. Therefore, we told people to walk towards Pepper from the front. In most cases that helped to detect the face.
2.3. Face detection doesn't work with face masks. This could lead to huge problems when using Pepper in care homes.

One of the most frequent and noticeable reactions from participants was **confusion**. This feeling was caused by two main factors:
misunderstandings in speech recognition, which led to unsuitable answers from Pepper, as well as the unsuitable environment and setting of our evaluation.
The reasons for failures in speech recognition are listed above. An unsuitable answer can, for example, be an argument to stay inside that doesn't fit the participant's reason to leave. Also, some people explained in a long sentence that they don't like the provided activity and still want to leave. If the speech recognition fails in this case and Pepper understands that they would like to do the activity, it seems as if Pepper encourages them to leave instead of doing the activity. This leads to the total opposite of our intention.
Furthermore, we found out that our prototype doesn't fit the environment of the lab. We encourage the participants to do activities that they can't do in the lab environment (go to the living room, have a coffee, or do a puzzle). When the robot asks whether they want to do the activity, most people don't know how to react and are insecure about how to answer. Participants "froze" in front of the robot or just left the room. -->
{{/html}}

=== RQ1: Are people convinced not to go out unsupervised? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}
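Once the counts of participants who stayed inside versus left are fixed for both groups, the difference between the two prototypes could be checked with Fisher's exact test, which suits the small sample sizes of this study. The sketch below uses the counts observed in our evaluation (8 of 11 participants stayed inside with the intelligent prototype, 0 of 7 with the unintelligent one) purely as an illustration of the analysis, not as a final result.

{{code language="python"}}
# Sketch: compare how many participants stayed inside per condition.
from scipy.stats import fisher_exact

#               stayed  left
intelligent   = [8,     3]   # condition 1 (n = 11)
unintelligent = [0,     7]   # condition 2 (n = 7)

odds_ratio, p_value = fisher_exact([intelligent, unintelligent], alternative="two-sided")
print(f"Fisher's exact test: p = {p_value:.3f}")
{{/code}}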

=== RQ2: How does the interaction change the participant's mood? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}
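For the mood question, the before/after ratings within a condition are paired ordinal data, so a Wilcoxon signed-rank test would be one way to check whether the change is systematic. The ratings below are illustrative placeholders, not the collected data.

{{code language="python"}}
# Sketch: paired comparison of mood before vs. after the interaction (one condition).
from scipy.stats import wilcoxon

mood_before = [3, 3, 4, 2, 3, 4, 3, 2, 4, 3, 3]  # illustrative 5-point Likert ratings
mood_after  = [4, 3, 5, 3, 4, 4, 3, 3, 4, 3, 5]

statistic, p_value = wilcoxon(mood_before, mood_after)
print(f"Wilcoxon signed-rank test: p = {p_value:.3f}")
{{/code}}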

=== RQ3: Can the robot respond appropriately to the participant's intention? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ4: How do the participants react to the music? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ5: Does the activity that the robot suggests prevent people from wandering/leaving? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== RQ6: Can Pepper identify and catch the attention of the PwD? ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}

=== Reliability Scores ===
{{html}}
<table style="width: 100%">
<tr>
<td style="width: 50%">
<img src="/xwiki/wiki/sce2022group05/download/Foundation/Operational%20Demands/Personas/WebHome/Storyboard_1.png?height=750&rev=1.1" />
</td>
<td>
Comment on the graph
</td>
</tr>
</table>
{{/html}}
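We assume the reliability scores refer to the internal consistency of the Likert-scale questionnaire items; under that assumption, Cronbach's alpha is one common way to compute them. The sketch below shows the calculation on a purely illustrative response matrix.

{{code language="python"}}
# Sketch: Cronbach's alpha for a set of Likert items (illustrative data only).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = participants, columns = questionnaire items (Likert scores)."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (participants x items).
responses = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
{{/code}}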

= Limitations =

* **Lab Environment**: The lab environment differs from a care home, so the participants found it difficult to act on the suggestions made by Pepper. For example, when Pepper asked someone to visit the living room, it created confusion regarding their next action.

* **Role-Playing**: The participants in the experiment are not actual patients suffering from dementia. Hence it is naturally difficult for them to enact the situations and replicate the mental state of a person who actually suffers from dementia.

* **Speech Recognition**: The speech recognition module inside Pepper is not perfect. Therefore, in certain cases, Pepper misinterpreted words spoken by the participants and triggered an erroneous dialogue flow. These problems commonly occurred with words that sound similar, such as "work" and "walk". Moreover, some hardware limitations hampered the efficiency of the speech recognition system: one prominent issue is that the microphone within Pepper is only active while the speaker is turned off, and a blue light in Pepper's eyes indicates when the microphone is listening. Since most participants were not used to interacting with Pepper, they found it difficult to keep this limitation in mind while trying to have a natural conversation (see the configuration sketch after this list).

* **Face Detection**: The face recognition module within Pepper is also rudimentary. It cannot detect half faces or faces of participants approaching from the side. Adding to the problem, the lighting conditions in the lab were not sufficient for the face recognition module to work reliably. Hence Pepper failed to notice the participant in some cases and did not start the dialogue flow.

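Both problem areas relate to how Pepper's built-in NAOqi services are configured. The sketch below is not our actual dialogue implementation; it only illustrates, under stated assumptions, how word spotting in ##ALSpeechRecognition## lets Pepper pick keywords such as "yes" or "no" out of a longer sentence, and how ##ALFaceDetection## events can be monitored to see where the lighting and side-view failures surface. The robot address, vocabulary, and subscriber names are illustrative.

{{code language="python"}}
# Hedged sketch: configure Pepper's speech recognition with word spotting and
# subscribe to face detection events, the two modules that caused most problems.
import time
import qi

PEPPER_URL = "tcp://192.168.1.2:9559"  # illustrative robot address

session = qi.Session()
session.connect(PEPPER_URL)

# Speech recognition: enable word spotting so a keyword inside a longer
# sentence can still be recognised.
asr = session.service("ALSpeechRecognition")
asr.pause(True)                                          # pause before changing the vocabulary
asr.setLanguage("English")
asr.setVocabulary(["yes", "no", "walk", "work"], True)   # True = enable word spotting
asr.pause(False)
asr.subscribe("WanderingTest")                           # arbitrary subscriber name

memory = session.service("ALMemory")
word_sub = memory.subscriber("WordRecognized")
word_sub.signal.connect(lambda value: print("Recognised:", value))

# Face detection: the FaceDetected event stays empty under bad lighting or
# side views, which is exactly the failure mode we observed in the lab.
face = session.service("ALFaceDetection")
face.subscribe("WanderingTest")
face_sub = memory.subscriber("FaceDetected")
face_sub.signal.connect(lambda value: print("Face detected:", bool(value)))

time.sleep(60)  # keep the script alive long enough to observe a few events
{{/code}}
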
= Conclusions =
* People who liked the suggested activity tended to stay inside.
* People who knew the music found it more fitting.
* People were more often convinced to stay inside with the intelligent prototype.
* We cannot conclude whether moods were improved.
* An experiment with the actual target user group is needed to draw concrete conclusions.
* Personalisation needs to be experimented with.

= Future Work =
* **Personalisation**: Personalise music and activity preferences according to the person interacting with Pepper.
* **Robot Collaboration**: Collaborate with other robots, such as Miro, so that a person with dementia can be accompanied on a walk by a robot instead of a caretaker.
* **Recognise Person**: For a personalised experience, it is essential that Pepper is able to identify each person based on an internal database.
* **Fine-Tune Speech Recognition**: Improvements to the speech recognition module are necessary before the actual deployment of the project in a care home. Additionally, support for multiple languages can be considered to engage with non-English-speaking people.