Changes for page Test

Last modified by Clara Stiller on 2022/04/05 13:44

From version 70.1
edited by Vishruty Mittal
on 2022/04/02 13:02
Change comment: There is no comment for this version
To version 56.4
edited by Vishruty Mittal
on 2022/04/02 11:45
Change comment: Added comment

Summary

Details

Page properties
Content
... ... @@ -1,4 +1,4 @@
1 -Evaluation is an iterative process where the initial iterations focus on examining if the proposed idea is working as intended. Therefore, we want to first understand how realistic and convincing the provided dialogues and suggested activities are, and would they be able to prevent people from wandering. To examine this, we conduct a small pilot study with students, who role-play having dementia. We then observe their interaction with Pepper to examine the effectiveness of our dialog flow in preventing people from wandering.
1 +Evaluation is an iterative process where the initial iterations focus on examining if the proposed idea is working as intended. Therefore, we wanted to first understand how realistic and convincing the provided dialogues and suggested activities are, and whether they would be able to prevent people from wandering. To examine this, we conducted a small pilot study with students, who role-played having dementia. We then observed their interaction with Pepper to examine the effectiveness of our dialog flow in preventing people from wandering.
2 2  
3 3  
4 4  = Problem statement and research questions =
... ... @@ -23,42 +23,83 @@
23 23  
24 24  = Method =
25 25  
26 -A between-subject study with students who play the role of having dementia. Data will be collected with a questionnaire that participants fill out before and after interacting with Pepper. The questionnaire captures different aspects of the conversation along with their mood before and after the interaction with Pepper.
26 +A between-subject study with students who play the role of having dementia. Data will be collected with a questionnaire (filled out before and after participation) and by observing the participant's body language and the way they respond to Pepper.
27 27  
28 -For our between-subject study, our independent variable is Pepper trying to distract the users by mentioning different activities along with the corresponding music. Through this, we want to measure the effectiveness of music and activities in preventing people from leaving the care home, which is thereby our dependent variable. So we developed 2 different prototype designs-
29 -
30 -Design X - It is the full interaction flow where Pepper suggests activities and uses music to distract people from leaving.
31 -Design Y - It is the control condition where pepper simply tries to stop people from leaving by physically keeping its hand on the door.
32 -
33 33  == Participants ==
34 34  
35 -17 students who play the role of having dementia. They will be divided into two groups. One group (11 participants) will be interacting with design X (group 1) robot while the other group (6 students) will interact with the design Y (group 2).
30 +18 students who play the role of having dementia. They will be divided into two groups. One group (11 participants) will interact with the intelligent robot (group 1) while the other group (7 students) will interact with the unintelligent robot (group 2).
36 36  It is assumed that all participants are living at the same care center.
32 +Before they start, they can choose how stubborn they want to be and where they want to go.
37 37  
38 38  == Experimental design ==
39 39  
40 -**Before Experiment:**
41 -We will explain to the participants the goal of this experiment and what do they need to do to prevent ambiguity. Therefore, as our participants are students and only playing the role of having dementia, we will give them a level of stubbornness/ willpower with which they are trying to leave the care home.
42 -Participants will also be given a reason to leave, from the below list:
36 +All questions collect quantitative data, using a 5-point Likert scale wherever applicable (a scoring sketch follows the list below).
43 43  
44 -* going to the supermarket
45 -* going to the office
46 -* going for a walk
38 +1. Observe the participant's mood and how the conversation goes. Observe the level of aggression (tone, volume, pace).
39 +1. Observe whether the mood improves and whether the decision changes.
40 +1. Observe how natural the conversation is (i.e. whether the conversation makes sense).
41 +1. Participants fill out questionnaires.
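To make the questionnaire data concrete, here is a minimal scoring sketch, assuming each answer is coded 1-5 on the Likert scale and grouped per aspect; the item names and groupings are hypothetical placeholders, not the actual questionnaire.

{{code language="python"}}
# Minimal scoring sketch for the 5-point Likert questionnaire.
# Item names and groupings are hypothetical placeholders.
from statistics import mean

# One participant's answers before and after the interaction
# (1 = strongly disagree ... 5 = strongly agree)
before = {"mood": 2, "willing_to_stay": 1}
after = {"mood": 4, "willing_to_stay": 4, "conversation_natural": 3, "music_fitting": 5}

def scale_score(answers, items):
    """Average the Likert answers that belong to one scale."""
    return mean(answers[i] for i in items if i in answers)

mood_change = after["mood"] - before["mood"]            # positive = mood improved
stay_score = scale_score(after, ["willing_to_stay"])    # e.g. convinced to stay?

print(f"mood change: {mood_change:+d}, stay score: {stay_score:.1f}")
{{/code}}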
47 47  
48 -After this preparation, the participant fills a part of the questionnaire.
43 +== Tasks ==
49 49  
50 -**Experiment:**
51 -The participant begins interacting with Pepper who is standing near the exit door. The participant and robot have an interaction where the robot is trying to convince him/her to stay inside.
45 +Because our participants only play the role of having dementia, we will give them a level of stubbornness/willpower with which they are trying to leave. We try to detect this level with the robot.
46 +Participants from group 1 (using the intelligent robot) will also be given one of the reasons to leave listed below:
52 52  
53 -**After Experiment:**
54 -After the participant finishes interacting with Pepper, he/she will be asked to fill out the remaining questionnaire. Almost all the questions in the questionnaire collect quantitative data, using a 5 point Likert scale. The questionnaire also used images from Self Assessment Manikin (SAM) so that user can self attest to their mood before and after their interaction with Pepper.
48 +1. going to the supermarket
49 +1. going to the office
50 +1. going for a walk
55 55  
52 +After this preparation, the participant is told to (try to) leave the building. The participant and robot have an interaction where the robot is trying to convince the participant to stay inside.
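To give an impression of what the intelligent condition does during this interaction, below is a minimal sketch of the dialogue flow (greet, ask for the reason, suggest an indoor alternative, play matching music). It is plain Python with placeholder callbacks (say, listen, play_music) standing in for the real Pepper interfaces, not the actual implementation.

{{code language="python"}}
# Simplified sketch of the intelligent prototype's dialogue flow.
# say(), listen() and play_music() are placeholders for the real Pepper interfaces.

ACTIVITIES = {
    "supermarket": "There is a small shop in the living room, shall we have a look together?",
    "office": "Your desk in the common room is ready, would you like to work there?",
    "walk": "We could take a walk through the garden inside, shall we?",
}

def intelligent_flow(say, listen, play_music, max_attempts=3):
    say("Hello! Where are you going?")
    for _ in range(max_attempts):
        reason = listen().lower()                 # e.g. "I need to go to the office"
        activity = next((a for a in ACTIVITIES if a in reason), None)
        if activity:
            say(ACTIVITIES[activity])             # suggest an indoor alternative
            play_music(activity)                  # distract with matching music
        else:
            say("It is not safe to go outside alone. Shall we do something together inside?")
        if "yes" in listen().lower():
            say("Great, let's go!")
            return True                           # participant agrees to stay inside
    return False                                  # participant still wants to leave

# Quick console test: intelligent_flow(print, input, lambda activity: None)
{{/code}}

In the control (unintelligent) condition this flow is not used.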
53 +
54 +
55 +== Measures ==
56 +
57 +We will measure the outcome both physically and emotionally.
58 +Physically: whether the participant was stopped from leaving the building or not.
59 +Emotionally: evaluate their responses to the robot and observe their mood before and after the interaction.
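Since the physical measure is a binary outcome (stayed inside or left) and the groups are small (11 vs. 7), an exact test is a natural fit for comparing the two conditions. The sketch below shows one way this comparison could be run; the counts in the table are made-up placeholders, not study results.

{{code language="python"}}
# Sketch: comparing the physical outcome (stayed inside vs. left) between the two groups.
# The counts below are placeholders, not actual results of this study.
from scipy.stats import fisher_exact  # exact test suits the small sample (11 vs. 7)

# rows: group 1 (intelligent), group 2 (unintelligent); columns: stayed, left
table = [[8, 3],
         [2, 5]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
{{/code}}

The emotional measure can be compared in the same spirit, e.g. by looking at the per-participant mood change (after minus before) in each group.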
60 +
61 +
62 +== Procedure ==
63 +
64 +{{html}}
66 +<table width='100%'>
67 +<tr>
68 +<th width='50%'>Group 1</th>
69 +<th width='50%'>Group 2</th>
70 +</tr>
71 +<tr>
72 +<td>intelligent robot</td>
73 +<td>unintelligent robot</td>
74 +</tr>
75 +<tr>
76 +<td>
77 +1. Start with a short briefing on what we expect from the participant<br>
78 +2. Let them fill out the informed consent form<br>
79 +3. Tell them their level of stubbornness and reason to leave<br>
80 +4. Fill out a question about their current mood (in their role)<br>
81 +5. Let the user interact with the robot<br>
82 +6. While the user is interacting, we will be observing the conversation with the robot<br>
83 +7. Let the user fill out the questionnaire about their experience after the interaction
84 +</td>
85 +<td>
86 +1. Start with a short briefing on what we expect from the participant<br>
87 +2. Let them fill out the informed consent form<br>
88 +3. Fill out a question about their current mood (in their role)<br>
89 +4. Let the user interact with the robot<br>
90 +5. Let the user fill out the questionnaire about their experience after the interaction<br>
91 +</td>
92 +</tr>
93 +</table>
94 +
95 +{{/html}}
96 +
56 56  == Material ==
57 57  
58 58  Pepper, laptop, door, and music.
59 59  
60 -= Results =
61 61  
102 += Results =
62 62  {{html}}
63 63  <!--=== Comparison between intelligent (cond. 1) and less intelligent (cond. 2) prototype ===
64 64  
... ... @@ -116,7 +116,6 @@
116 116  {{/html}}
117 117  
118 118  === RQ1: Are people convinced not to go out unsupervised? ===
119 -
120 120  {{html}}
121 121  <table style="width: 100%">
122 122  <tr>
... ... @@ -131,7 +131,6 @@
131 131  {{/html}}
132 132  
133 133  === RQ2: How does the interaction change the participant's mood? ===
134 -
135 135  {{html}}
136 136  <table style="width: 100%">
137 137  <tr>
... ... @@ -146,7 +146,6 @@
146 146  {{/html}}
147 147  
148 148  === RQ3: Can the robot respond appropriately to the participant's intention? ===
149 -
150 150  {{html}}
151 151  <table style="width: 100%">
152 152  <tr>
... ... @@ -161,7 +161,6 @@
161 161  {{/html}}
162 162  
163 163  === RQ4: How do the participants react to the music? ===
164 -
165 165  {{html}}
166 166  <table style="width: 100%">
167 167  <tr>
... ... @@ -176,7 +176,6 @@
176 176  {{/html}}
177 177  
178 178  === RQ5: Does the activity that the robot suggests prevent people from wandering/ leaving? ===
179 -
180 180  {{html}}
181 181  <table style="width: 100%">
182 182  <tr>
... ... @@ -191,7 +191,6 @@
191 191  {{/html}}
192 192  
193 193  === RQ6: Can Pepper identify and catch the attention of the PwD? ===
194 -
195 195  {{html}}
196 196  <table style="width: 100%">
197 197  <tr>
... ... @@ -206,7 +206,6 @@
206 206  {{/html}}
207 207  
208 208  === Reliability Scores ===
209 -
210 210  {{html}}
211 211  <table style="width: 100%">
212 212  <tr>
... ... @@ -220,7 +220,7 @@
220 220  </table>
221 221  {{/html}}
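Assuming the reliability scores above refer to the internal consistency of the Likert items that make up each scale (an assumption on our part), one common way to compute such scores is Cronbach's alpha, sketched below; the response matrix is a placeholder.

{{code language="python"}}
# Sketch: Cronbach's alpha for one questionnaire scale (internal consistency).
# Rows = participants, columns = Likert items of that scale; the values are placeholders.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return n_items / (n_items - 1) * (1 - item_variances / total_variance)

example = [[4, 5, 4], [2, 2, 3], [5, 4, 4], [3, 3, 2]]
print(f"alpha = {cronbach_alpha(example):.2f}")
{{/code}}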
222 222  
223 -= Limitation =
257 += Limitation =
224 224  
225 225  * **Lab Environment**: The lab environment is different from a care home, so the participants found it difficult to process the suggestions made by Pepper. For example, when Pepper asked someone to visit the living room, it created confusion among the participants regarding their next action.
226 226  
... ... @@ -231,7 +231,6 @@
231 231  * **Face Detection**: The face recognition module within Pepper is also rudimentary in nature. It cannot detect half faces or participants approaching from the side. Adding to the problem, the lighting conditions in the lab were not sufficient for the reliable functioning of the face recognition module. Hence Pepper failed to notice the participant in some cases and did not start the dialogue flow.
232 232  
233 233  = Conclusions =
234 -
235 235  * People who liked the activity tended to stay in
236 236  * People who knew the music found it more fitting
237 237  * People were more convinced to stay in with the intelligent prototype
... ... @@ -240,8 +240,8 @@
240 240  * Experiment with personalization
241 241  
242 242  = Future Work =
243 -
244 244  * **Personalisation**: Personalize music, and activity preferences according to the person interacting with Pepper.
245 245  * **Robot Collaboration**: Collaborate with other robots such as Miro to assist a person with dementia while going for a walk instead of the caretaker.
246 246  * **Recognise Person**: For a personalised experience, it is essential that Pepper is able to identify each person based on an internal database.
247 247  * **Fine Tune Speech Recognition**: Improvements are necessary for the speech recognition module before the actual deployment of the project in a care home. Additionally, support for multiple languages can be considered to engage with non-English speaking people.
280 +