Changes for page b. Test
Last modified by Demi Breen on 2023/04/09 15:10
From version 35.1
edited by Maya Elasmar
on 2023/04/01 12:26
Change comment:
There is no comment for this version
To version 32.1
edited by Hugo van Dijk
on 2023/03/30 22:11
Change comment:
There is no comment for this version
Summary
- Page properties (2 modified, 0 added, 0 removed)
Details
Page properties
- Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.MayaElasmar
1 +XWiki.hjpvandijk
- Content
... ... @@ -110,14 +110,10 @@
110 110 
111 111 = 3. Results =
112 112 
113 -=== Noteworthy answers ===
113 +On average, participants only rejected the robot's persuasion attempts 0.5 times. The participants rated the robot a 2/5 in terms of being scary. They gave a 4/5 for it making life more interesting and for it being good to make use of the robot. Questions related to the participants' enjoyment of and fascination with the system and the robot were met with ratings between 3.8 and 4.1. The question "I think the staff would like me using the robot" was rated a 4/5 on average.
114 114 
115 -On average, participants only rejected the robot's persuasion attempts 0.5 times. The participants rated the robot a 2/5 in terms of being scary. They gave a 4/5 for it making life more interesting and for it being good to make use of the robot. Questions related to the participants' enjoyment of and fascination with the system and the robot were met with ratings between 3.8 and 4.1. The question "I think the staff would like me using the robot" was rated a 4/5 on average. Finally, to the question of whether they would not have gone for a walk if the robot didn't ask them to, the average answer was 3.8/5. All these answers had a standard deviation of less than 1.
115 +Firstly, the Jarque-Bera test [2] was used to check for normality. When the answers for a question weren't normally distributed, the Mann-Whitney U-Test [3] was used. For normally distributed answers, the T-Test [4] was used. These tests used the null hypothesis that there is no significant difference between the two groups. When the calculated probability value (p-value) is less than 0.05, we can reject the null hypothesis and conclude that there is a significant difference between the two groups for the answers to that question.
116 116 
117 -=== ANOVA ===
118 -
119 -Firstly, the Jarque-Bera test [2] was used to check for normality. When the answers to a question weren't normally distributed, the Mann-Whitney U-Test [3] was used. For normally distributed answers, the T-Test [4] was used. These tests used the null hypothesis that there is no significant difference between the two groups. When the calculated probability value (p-value) is less than 0.05, we can reject the null hypothesis and conclude that there is a significant difference between the two groups for the answers to that question.
120 -
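The per-question test-selection procedure described above is straightforward to reproduce. Below is a minimal sketch using SciPy; the compare_groups helper and the questionnaire ratings are invented for illustration and are not the study's data.

{{code language="python"}}
# Sketch of the procedure described above: check both groups for normality
# with the Jarque-Bera test, then compare them with an independent t-test
# (if normal) or a Mann-Whitney U-test (if not) at the 0.05 level.
from scipy import stats

ALPHA = 0.05  # significance threshold used in the report

def compare_groups(emotion_based, goal_based):
    # Jarque-Bera's null hypothesis is that a sample is normally
    # distributed; a p-value below ALPHA rejects normality.
    normal = all(stats.jarque_bera(g).pvalue >= ALPHA
                 for g in (emotion_based, goal_based))
    if normal:
        result = stats.ttest_ind(emotion_based, goal_based)
    else:
        result = stats.mannwhitneyu(emotion_based, goal_based,
                                    alternative="two-sided")
    # Null hypothesis: no difference between the groups; reject it
    # (i.e. report a significant difference) when p < 0.05.
    return result.pvalue, result.pvalue < ALPHA

# Invented 1-5 questionnaire ratings, 8 participants per group.
p_value, significant = compare_groups([4, 5, 4, 3, 4, 5, 4, 4],
                                      [4, 4, 3, 4, 5, 4, 4, 3])
print(f"p = {p_value:.3f}, significant: {significant}")
{{/code}}

Note that with only 8 participants per group such tests have little statistical power, which is consistent with the sample-size caveat raised in the Discussion below.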
121 121 Even though the average rejections were higher for the emotion-based system (0.875) than for the goal-based system (0.125), this difference was not significant.
122 122 
123 123 Furthermore, there was no significant difference in any of the questionnaire answers between the two groups.
... ... @@ -124,50 +124,23 @@
124 124 
125 125 [[This table>>doc:.p-values.WebHome]] shows the p-value per measure.
126 126 
127 -=== Observations ===
128 128 
129 -General remarks made by participants evaluating the emotion-based system were only about the walking aspect of the robot, stating that the walking distance should be increased and that the change in direction was quite sharp. Participants doing the goal-based evaluation commented on the poorly performing speech recognition system and stated that it might be useful to start by asking how the participant feels.
130 -
131 131 When asked what reason convinced them to join the robot on a walk, two of the six participants who eventually said yes in the emotion-based system recited one of the persuasion subjects. For the goal-based system, this was three out of eight.
132 132 
133 -When participants were standing too close to the robot, it wouldn't walk.
This happened numerous times, resulting in conversation without walking.
126 +When participants were standing too close to the robot, it wouldn't walk. This happened numerous times, resulting in conversation without walking.
134 134 
128 +General remarks made by participants evaluating the emotion-based system were only about the walking aspect of the robot, stating that the walking distance should be increased and that the change in direction was quite sharp. Participants doing the goal-based evaluation commented on the poorly performing speech recognition system and stated that it might be useful to start by asking how the participant feels.
129 +
135 135 Even though it was specified at the start of every session that the participant could say either yes or no to the robot's persuasion attempts, we noticed that some participants did not seem to grasp the fact that they could say no. At the end of their session, one participant stated that he was not persuaded by the robot at all, even though they had said yes to the robot's first persuasion attempt.
136 136 \\Another participant, who said no to all persuasion attempts, stated afterwards that they "just wanted to see what would happen if I said no all the time". This indicated that some participants already had a plan for how many times they would reject the robot before starting, and did not really listen to the persuasions made.
137 137 
138 138 As the robot's speech recognition could only understand single words due to its implementation, there were numerous occasions where a participant was not understood and had to repeat themselves. It also occurred that the robot understood 'yes' when 'no' was said.
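The recognizer's implementation is not shown on this page; the following is purely a hypothetical sketch of single-word keyword matching, illustrating why answers outside a small vocabulary were not understood and why anything said around the keyword was ignored. The word lists and the interpret helper are assumptions, not the robot's actual code.

{{code language="python"}}
# Hypothetical single-word matcher: only the first recognised keyword
# counts, so any answer outside the small vocabulary is "not understood"
# and everything said around the keyword is discarded.
YES_WORDS = {"yes", "yeah", "sure", "okay"}
NO_WORDS = {"no", "nope"}

def interpret(transcript: str) -> str:
    for word in transcript.lower().split():
        word = word.strip(".,!?")
        if word in YES_WORDS:
            return "yes"
        if word in NO_WORDS:
            return "no"
    return "not understood"  # the participant has to repeat themselves

print(interpret("Well yes, I suppose so."))     # -> yes
print(interpret("I would rather stay inside"))  # -> not understood
{{/code}}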
139 139 
140 -- Mention something about only one participant going into Bob's character fully? And that he mentioned that the "no" he was giving was more attention-seeking than a real no.
141 141 
142 -- Add that sometimes the robot cut participants off if they were speaking slowly or elaborating on their answer.
143 143 
144 -
145 -
146 146 = 4. Discussion =
147 147 
148 -- In terms of the research question, no significant differences were found. It could be that this is true in general, but it is very likely that this result was influenced by the circumstances surrounding the design and the evaluation.
149 149 
150 -- The design is rather limited in its capabilities due to time constraints. Speech recognition didn't always work properly and was not as flexible as desired, which made the interactions less realistic for the participant.
151 -
152 -- There are also other constraints on the interaction, which have to be given as instructions to the participant before testing, such as what distance to keep from the robot, when to join the robot's side, and how long to wait before speaking after a prompt. This made the interaction even less natural, but was necessary for the system to perform properly.
153 -
154 -- Since participants were also prompted to give short answers and to keep to things like "yes" and "no", this greatly influenced the way they interacted with the robot.
155 -
156 -- Further, it was (obviously) not possible to test the design with PwD. We attempted to compensate by providing a persona description for participants to keep in mind during testing. Only one participant ended up embodying this character.
157 -
158 -- Results may have been different if participants outside of the course had been used, since we are all very familiar with these robots and systems. On the one hand this could be positive: since we have all researched dementia and gained a lot of knowledge, we might be better at simulating appropriate behavior with the robot or at testing the systems in a reasonable way. On the other hand, since participants also have an idea of how the robot works, some mistakes or issues may have gone undetected that would have surfaced with individuals unfamiliar with the robot. Of course, knowing about dementia is not the same as actually suffering from the diagnosis, so many aspects have most likely gone undetected there.
159 -
160 -- Results could also be influenced by the small number of participants, which ended up at 8 per group (8 for the goal-oriented approach and 8 for the emotional approach). Perhaps with more participants the results would differ more between the two approaches. Due to time constraints it was not possible to include more participants.
161 -
162 -- Furthermore, participants who started the interaction with a predisposed idea of what they wanted to do, like the participant mentioned in the results section above, definitely influenced the outcome, since their interaction was no longer about listening to the prompts the robot was giving.
163 -
164 -- It is also interesting to consider whether participants are inclined to be positive, or feel that they need to be, in such a project evaluation, and whether this ended up affecting the outcome.
165 -
166 -- Normally, the robot would really take a walk outside. It should have been tested how the robot does in an actual garden, a completely different surface from the room where we ran the experiment. Unfortunately, we could not do that, because we were not allowed to move the robot from the room.
167 -
168 -- In future studies the number of participants should be reconsidered, as should testing the design with PwD. Further improvements are needed to the speech recognition, the smoothness of the walking, the distances travelled, and the handling of the participant's distance to the robot. If the less realistic aspects discussed above are minimized, a robot that feels more realistic might lead participants to listen to the actual prompts given, rather than going into the experiment with a predisposed idea of what they are going to answer, and might also deter them from tending to reply positively.
169 -
170 -
171 171 = 5. Conclusions =
172 172 
173 173 Both systems were deemed enjoyable and fascinating, and few rejections were made to both types of persuasion. No significant difference was found in any of the measures between the two groups.