Changes for page Test

Last modified by Andrei Stefan on 2022/04/04 13:38

From version 99.1
edited by Andrei Stefan
on 2022/04/04 12:08
Change comment: There is no comment for this version
To version 95.1
edited by Xinqi Li
on 2022/04/02 01:41
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.AndreiStefan
1 +XWiki.mona98
Content
... ... @@ -1,7 +1,5 @@
1 1  Our robot aims to help delay the progression of dementia or slow down the deterioration of memory. Ideally, we would test the robot with real PwD over a relatively long time period to see if it really works, which is impossible for our project. Our evaluation therefore uses a control-group design: participants are divided into two groups, group A with the intelligent robot and group B with the dumb one.
2 2  
3 -The difference between the dumb and the intelligent robot is small: the latter tells users the right answer when they get it wrong, while the former only tells them that they have made a mistake.
4 -
5 5  = Problem statement and research questions =
6 6  
7 7  The main use cases that the evaluation focuses on are UC001: Daily todo list and UC005: Quiz. Based on the claims corresponding to those use cases, we derive the following research questions:
... ... @@ -29,8 +29,7 @@
29 29  == Experimental design ==
30 30  
31 31  The experiment will be conducted to simulate the reinforcement learning process of musical memory related to daily activities and to investigate whether the quiz is indeed able to help with the learning.
32 -All participants would sign a consent form informing them of how the collected data would be used and of the goal of our evaluation. In our prototype, users can personalize the association between music and activities based on their existing knowledge. However, due to limited time and the need for comparable results between the groups, we fixed the same 6 music-activity pairs for everyone in the evaluation.
33 -Participants listened to the music and were asked to remember the associated activities. To this end, they were given a list with the 6 activities. To make remembering harder (so they needed to pretend less to have dementia), the music was played in a different order than the activities appeared on the list. Furthermore, the pieces of music were quite similar (all instrumental). The users then had 3 minutes to practice with the NAO.
30 +All participants would sign a consent form informing them of how the collected data would be used and of the goal of our evaluation. In our prototype, users can personalize the association between music and activities based on their existing knowledge. However, due to limited time and the need for comparable results between the groups, we fixed the same 6 music-activity pairs for everyone in the evaluation. Participants listened to the music and were asked to remember the associated activities.
34 34  In the end, the participants would take a quiz to see how much they remembered; a simplified sketch of the quiz loop is given after the questionnaire item below. They were also asked to fill in a questionnaire covering their impressions of the robot and any other feedback.
35 35  
36 36  1. How many questions did you answer correctly? (Points from 0-6)
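
The following is a minimal, editorial sketch of how such a quiz loop could look on the NAO, using the NAOqi Python SDK. The robot address, audio file paths, activity names, and the ask_user() helper are all hypothetical stand-ins (the prototype's actual prompts and speech-recognition handling are not shown on this page); only the intelligent-versus-dumb feedback difference follows the description above.

{{code language="python"}}
# Minimal sketch of the quiz loop, assuming a NAOqi-based setup.
# ROBOT_IP, the audio paths, and ask_user() are hypothetical stand-ins.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # assumed robot address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)
player = ALProxy("ALAudioPlayer", ROBOT_IP, 9559)

# Six fixed music-activity pairs, as in the evaluation (names invented here).
PAIRS = [
    ("/home/nao/music/clip1.wav", "breakfast"),
    ("/home/nao/music/clip2.wav", "medication"),
    ("/home/nao/music/clip3.wav", "walk"),
    ("/home/nao/music/clip4.wav", "lunch"),
    ("/home/nao/music/clip5.wav", "nap"),
    ("/home/nao/music/clip6.wav", "dinner"),
]

def ask_user(prompt):
    # Stand-in for the robot's speech recognition: read the answer from
    # the terminal instead (NAOqi runs Python 2, hence raw_input).
    return raw_input(prompt + " ").strip().lower()

def run_quiz(intelligent):
    score = 0
    for clip, activity in PAIRS:
        player.playFile(clip)  # play the musical cue
        answer = ask_user("Which activity goes with this music?")
        if answer == activity:
            tts.say("Well done, that is correct.")
            score += 1
        elif intelligent:
            # The intelligent robot reveals the right answer on a mistake...
            tts.say("Not quite. The right activity is " + activity + ".")
        else:
            # ...while the dumb robot only says that a mistake was made.
            tts.say("That is not correct.")
    return score  # 0-6, matching questionnaire item 1 above
{{/code}}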
... ... @@ -110,7 +110,7 @@
110 110  
111 111  Also, we collected some feedback from the participants. Most of them liked the appearance of the robot, which is consistent with our reasons for choosing the NAO: people are more engaged and more willing to interact with a humanoid robot. Some of them complained about the robot's speech recognition.
112 112  
113 -= Discussion & Conclusion =
110 += Discussion =
114 114  
115 115  We assumed that our intelligent robot can help people strengthen the association between music and activities. The average number of correct answers did not confirm this, for several reasons. First, our participants were not real PwD, and their memory abilities varied. Our group size (about 10 per group) was not large enough, and participants were only given a limited time. The short duration of the quiz and the lack of personalised music also contributed to this biased result. However, the overall usability scores of the two groups and some of the quantitative results above suggest that our claims that PwD are more willing to play with our intelligent robot and that PwD are happy to use the robot could still hold.
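
As an illustration of how the comparison above could be checked, here is a sketch that scores the usability questionnaire with the standard SUS formula (Bangor et al., 2008) and tests the difference in quiz scores between the two groups with Welch's t-test. All numbers are placeholders rather than our actual data, and scipy is assumed to be available; with only about 10 participants per group, a non-significant result would be unsurprising even if a real effect exists.

{{code language="python"}}
# Sketch of the group comparison; the scores below are placeholders,
# not the actual evaluation data (which is not reproduced on this page).
from scipy import stats

def sus_score(responses):
    # Standard SUS scoring (Bangor et al., 2008): ten 1-5 Likert items;
    # odd items contribute (score - 1), even items (5 - score), and the
    # sum is scaled by 2.5 to give a 0-100 usability score.
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Quiz scores (0-6), ~10 participants per group (made-up numbers).
group_a = [4, 5, 3, 6, 4, 5, 2, 5, 4, 3]  # with the intelligent robot
group_b = [4, 3, 5, 4, 2, 5, 4, 3, 4, 5]  # with the dumb robot

# Welch's t-test does not assume equal variances between the groups.
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print("mean A = %.2f, mean B = %.2f, t = %.2f, p = %.3f" % (
    sum(group_a) / float(len(group_a)),
    sum(group_b) / float(len(group_b)), t, p))

print("example SUS score: %.1f" % sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))
{{/code}}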
116 116  
... ... @@ -120,11 +120,16 @@
120 120  * As mentioned before, the small sample size makes the accuracy of the results questionable. A larger and more diverse sample group would allow us to predict real-world usage more accurately.
121 121  * The accuracy of the NAO's speech recognition system and the limited availability of test subjects and robots also constrained the evaluation.
122 122  
123 -Based on our evaluation, we found that participants with our intelligent robot were more willing to play the quiz and felt that the robot helped them remember the tasks better than the control group did. Our robot still needs further improvement based on the discussion above. In the future, we could improve the following aspects:
120 +In the future, we could improve the following aspects:
124 124  
125 125  * Test a full implementation of the system in a real setting with PwD.
126 126  * Research should also be done to determine whether the robot is actually necessary, or whether the advantages of the system could be achieved with a cheaper alternative, such as a virtual robot on a tablet. (This was also inspired by the feedback we got: one participant asked why we didn't create an app.)
127 127  
125 +
126 += Conclusion =
127 +
128 +Based on our evaluation, we found that participants with our intelligent robot were more willing to play the quiz and felt that the robot helped them remember the tasks better than the control group did. Our robot still needs further improvement based on the discussion above.
129 +
128 128  = Reference =
129 129  
130 -[1] Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the system usability scale. Intl. Journal of Human–Computer Interaction, 24(6), 574-594.
132 +Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the system usability scale. Intl. Journal of Human–Computer Interaction, 24(6), 574-594.