Changes for page b. Test

Last modified by Demi Breen on 2023/04/09 15:10

From version 32.1
edited by Hugo van Dijk
on 2023/03/30 22:11
Change comment: There is no comment for this version
To version 34.1
edited by Liza Wensink
on 2023/04/01 10:46
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.hjpvandijk
1 +XWiki.lwensink
Content
... ... @@ -110,10 +110,14 @@
110 110  
111 111  = 3. Results =
112 112  
113 -On average, participants only rejected the robot's persuasion attempts 0.5 times. The participants rated the robot a 2/5 in terms of being scary. They gave a 4/5 for it making life more interesting and it being good to make use of the robot. Questions related to the participant's enjoyment and fascination with the system and the robot were met with ratings between 3.8 and 4.1. The question "I think the staff would like me using the robot" was rated a 4/5 on average.
113 +=== Noteworthy answers ===
114 114  
115 -Firstly, the Jarque-Bera test [2] was used to check for normality. When the answers for a question weren't normally distributed, the Mann-Whitney U-Test [3] was used. For normally distributed answers, the T-Test [4] was used. These tests used the null hypothesis that there is no significant difference between the two groups. When the calculated probability value (p-value) is less than 0.05, we can reject the null hypothesis and conclude that there is a significant difference between the two groups for the answers to that question.
115 +On average, participants rejected the robot's persuasion attempts only 0.5 times. The participants rated the robot a 2/5 in terms of being scary. They gave a 4/5 for it making life more interesting and for it being good to make use of the robot. Questions related to the participants' enjoyment of and fascination with the system and the robot were met with ratings between 3.8 and 4.1. The question "I think the staff would like me using the robot" was rated a 4/5 on average. Finally, the question of whether they would not have gone for a walk if the robot hadn't asked them to received an average answer of 3.8/5. All these answers had a standard deviation of less than 1.
116 116  
117 +=== Statistical tests ===
118 +
119 +Firstly, the Jarque-Bera test [2] was used to check for normality. When the answers to a question weren't normally distributed, the Mann-Whitney U-Test [3] was used. For normally distributed answers, the T-Test [4] was used. These tests used the null hypothesis that there is no significant difference between the two groups. When the calculated probability value (p-value) is less than 0.05, we can reject the null hypothesis and conclude that there is a significant difference between the two groups for the answers to that question.
120 +
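To make the test selection concrete, here is a minimal sketch in Python using SciPy; the function name and the example answer data are hypothetical illustrations, not taken from the study:

{{code language="python"}}
# Minimal sketch of the test-selection procedure described above (hypothetical data).
from scipy import stats

def compare_groups(emotion_based, goal_based, alpha=0.05):
    # Jarque-Bera [2]: null hypothesis is normality, so p >= alpha means
    # we cannot reject that the answers are normally distributed.
    normal = all(
        stats.jarque_bera(sample).pvalue >= alpha
        for sample in (emotion_based, goal_based)
    )
    if normal:
        # Independent-samples t-test [4] for normally distributed answers.
        result = stats.ttest_ind(emotion_based, goal_based)
    else:
        # Mann-Whitney U-test [3] otherwise.
        result = stats.mannwhitneyu(emotion_based, goal_based)
    # p < alpha: reject the null hypothesis of no difference between the groups.
    return result.pvalue, result.pvalue < alpha

# Hypothetical Likert-style answers (1-5) to one question, per group.
p, significant = compare_groups([4, 4, 5, 3, 4, 4, 5, 4],
                                [3, 4, 3, 4, 5, 3, 4, 4])
print(f"p = {p:.3f}, significant: {significant}")
{{/code}}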
117 117  Even though the average number of rejections was higher for emotion-based (0.875) than for goal-based (0.125), this difference was not significant.
118 118  
119 119  Furthermore, there was no significant difference in any of the questionnaire answers between the two groups.
... ... @@ -120,23 +120,48 @@
120 120  
121 121  [[This table>>doc:.p-values.WebHome]] shows the p-value per measure.
122 122  
127 +=== Observations ===
123 123  
129 +General remarks made by participants evaluating the emotion-based system concerned only the walking aspect of the robot: the walking distance should be increased, and the change in direction was quite sharp. Participants doing the goal-based evaluation commented on the poorly performing speech recognition and suggested that it might be useful to start by asking how the participant feels.
130 +
124 124  When asked what convinced them to join the robot on a walk, two of the six participants who eventually said yes in the emotion-based system recited one of the persuasion subjects. For the goal-based system, this was three out of eight.
125 125  
126 -When participants were standing too close to the robot, it wouldn't walk. This happened in numerous times, resulting in conversation without walking.
133 +When participants were standing too close to the robot, it wouldn't walk. This happened numerous times, resulting in conversation without walking.
127 127  
128 -General remarks made by participants evaluating the emotion-based system were only about the walking aspect of the robot, stating that the walking distance should be increased and the change in direction was quite sharp. Participants doing the goal-based evaluation commented on the badly performing speech recognition system and stated that it might be useful to start by asking how the participant feels.
129 -
130 130  Even though it was specified at the start of every session that the participant could say either yes or no to the robot's persuasion attempts, we noticed that some participants did not seem to grasp that they could say no. At the end of their session, one participant stated that they were not persuaded by the robot at all, even though they had said yes on the robot's first persuasion attempt.
131 131  \\Another participant, who said no to all persuasion attempts, stated afterwards that they "Just wanted to see what would happen if I said no all the time". This indicated that some participants had already decided how many times they would reject the robot before starting, and did not really listen to the persuasion attempts.
132 132  
133 133  Because the robot's speech recognition could only understand single words due to its implementation, participants were on numerous occasions not understood and had to repeat themselves. It also occurred that the robot understood 'yes' when 'no' was said.
134 134  
140 +Only one participant fully went into the character of Bob; he mentioned that the "no" answers he gave were more attention-seeking than genuine refusals.
135 135  
142 +The robot also sometimes cut participants off when they spoke more slowly or elaborated on their answer.
136 136  
144 +
137 137  = 4. Discussion =
138 138  
148 +- In terms of the research question, no significant differences were found. This could be true in general, but it is very likely that the result was influenced by the circumstances surrounding the design and the evaluation.
139 139  
150 +- The design is rather limited in its capabilities due to time constraints. Speech recognition didn't always work properly and was not as flexible as desired, which made the interactions less realistic for the participant.
151 +
152 +- There are also other constraints on the interaction, which had to be given as instructions to the participant before testing, such as what distance to keep from the robot, when to join the robot's side, and how long to wait before speaking after a certain prompt. This further made the interaction unnatural, but was necessary for the system to perform properly.
153 +
154 +- Since participants were also prompted to give shorter answers and to keep to things like "yes" and "no", this greatly influenced the way participants interacted with the robot.
155 +
156 +- Further, it was (obviously) not possible to test the design with PwD (people with dementia). We attempted to compensate by providing a persona description for participants to keep in mind during the testing; only one participant ended up embodying this character.
157 +
158 +- Results may have been different if participants outside of the course had been used, since we are all very familiar with these robots and systems. On one hand this could be positive: having researched dementia and gained a lot of knowledge about it, we may be better at simulating appropriate behavior with the robot and at testing the systems in a reasonable way. On the other hand, since participants also have an idea of how the robot works, some mistakes or issues may have gone undetected that would have appeared with individuals who are not familiar with the robot. Of course, knowing about dementia is not the same as actually having the diagnosis, so many aspects have most likely gone undetected there.
159 +
160 +- Results could also be influenced by the small number of participants, which ended at 8 per group (8 for the goal-oriented approach and 8 for the emotion-based approach). Perhaps with more participants the results would differ to a greater extent between the two approaches. Due to time constraints it was not possible to include more participants.
161 +
162 +- Further, participants who started the interaction with a predisposed idea of what they wanted to do, like the participant mentioned in the results section above, definitely influenced the outcome, since the interaction was no longer about listening to the prompts the robot was giving.
163 +
164 +- It is also interesting to consider whether participants are inclined to be positive, or feel that they need to be, in such a project evaluation, and whether this ended up affecting the outcome.
165 +
166 +- In future studies a larger number of participants should be considered, as well as testing the design with PwD. Further improvements to the speech recognition are needed, as well as to the smoothness of the walking, the distances travelled, and the handling of the participant's distance to the robot. If the less realistic aspects discussed above are minimized, a robot that feels more realistic might lead participants to listen to the actual prompts given, rather than going into the experiment with a predisposed idea of what they are going to do or answer, and might also deter participants from tending to reply positively.
167 +
168 +
140 140  = 5. Conclusions =
141 141  
142 142  Both systems were deemed enjoyable and fascinating, and few rejections were made to both types of persuasion. No significant difference was found in any of the measures between the two groups.