Changes for page Test
Last modified by Andrei Stefan on 2022/04/04 13:38
...

In addition, our group decided to use a mixed-method approach for the evaluation.

* Quantitative data will be derived during the experiment, such as the number of mistakes the participant makes during the quiz. The participants were also asked to provide a score based on the given system usability scale^^1^^.
* Qualitative data, such as to what extent participants are satisfied with using the robot, is expected to be gathered through questionnaires and is also used in the evaluation.

By measuring these two types of data, we can assess whether our claims are achieved and the research questions are answered.

...

== Experimental design ==

The experiment will be conducted to simulate the reinforcement learning process of musical memory related to daily activities and to investigate whether the quiz indeed helps with the learning.

All participants would first sign a consent form informing them of how the collected data will be used and of the goal of the evaluation. In our prototype, users can personalize the association between music and activities based on their existing intrinsic knowledge; however, due to the limited time and the need for comparable results between groups, the evaluation used a fixed set of 6 music-activity pairs. Participants listened to the music and were asked to remember the associated activities. Participants in two groups would then play with the robot for 3 minutes: group A with the intelligent robot and group B with the dumb robot.

In the end, the participants would take a quiz to see how much they remembered. They are also asked to fill in a questionnaire covering their impression of the robot and possible feedback.

1. How many questions did you answer correctly? (Points from 0-6)
1. You feel the robot can help you remember the task. (Agree, Neutral, Disagree)
1. You feel the robot is annoying. (Agree, Neutral, Disagree)
1. Based on the given system usability scale, please give our robot a score. (0-100)

Besides the previous questions, we also collect feedback from the participants:

...
1. What did you dislike most about the robot?
1. Do you have any further suggestions? (*optional)

== Tasks ==

The participants are asked to memorize the association between the given music and activities as well as they can while playing with the robot. The robot would play a piece of music and ask the participant to answer with the correct activity; a minimal sketch of this quiz loop is given after this section. In the end, the participant would take the final test, and we count the number of correct answers.
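
As a concrete illustration of this quiz loop, a minimal sketch follows, assuming the NAOqi Python SDK (ALTextToSpeech, ALAudioPlayer, ALSpeechRecognition). The robot address, music file paths, and the activity vocabulary are hypothetical placeholders, not the exact values used in our experiment.

{{code language="python"}}
# Hypothetical sketch of one run of the quiz, assuming the NAOqi Python SDK.
# The IP address, file paths, and vocabulary below are illustrative placeholders.
import time
from naoqi import ALProxy

NAO_IP, NAO_PORT = "192.168.1.10", 9559  # placeholder robot address

tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
player = ALProxy("ALAudioPlayer", NAO_IP, NAO_PORT)
asr = ALProxy("ALSpeechRecognition", NAO_IP, NAO_PORT)
memory = ALProxy("ALMemory", NAO_IP, NAO_PORT)

# The six fixed music-activity pairs (illustrative activity names).
ACTIVITIES = ["cooking", "reading", "walking", "cleaning", "eating", "sleeping"]
asr.setLanguage("English")
asr.setVocabulary(ACTIVITIES, False)  # restrict recognition to the activities

def quiz_round(music_file, correct_activity):
    """Play one piece of music, ask for the activity, and score the answer."""
    player.playFile(music_file)        # blocks until the clip finishes
    tts.say("Which activity goes with this music?")
    asr.subscribe("MusicQuiz")         # start listening
    time.sleep(5.0)                    # give the participant time to answer
    word = memory.getData("WordRecognized")[0]  # best recognized word
    asr.unsubscribe("MusicQuiz")
    if word == correct_activity:
        tts.say("Correct!")
        return True
    tts.say("The right answer was " + correct_activity)
    return False

score = sum(quiz_round("/home/nao/music/song%d.wav" % i, act)
            for i, act in enumerate(ACTIVITIES, start=1))
tts.say("You answered %d of 6 questions correctly." % score)
{{/code}}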

== Measures ==

Count the correct answers in the final test.
After the experiment, ask the user to fill in the system usability scale^^1^^ and the questionnaire regarding mood and satisfaction.
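
For reference, the score sheet handed to participants is the standard System Usability Scale^^1^^: ten 5-point items, where the odd items contribute (response - 1), the even items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal scoring sketch, with made-up example responses:

{{code language="python"}}
# Standard SUS scoring (Bangor, Kortum & Miller, 2008).
def sus_score(responses):
    """responses: ten integers in 1..5, one per SUS item, in order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # Items 1, 3, 5, ... sit at even indices and contribute (r - 1);
    # items 2, 4, 6, ... sit at odd indices and contribute (5 - r).
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses))
    return 2.5 * total  # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
{{/code}}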

== Procedure ==

**Event: Quiz**

{{html}}
...
</table>
{{/html}}

== Material ==

NAO robot with the preset music, consent form, laptop

= Results =

[[image:result2.png||height="400px"]]

From the left figure, we can see the distribution of the number of correct answers. The average score across all participants is 3.6 out of 6 questions; for group A the average is 3.3, and for group B it is 3.8. This difference can be explained by our group size, which is not large enough to even out variation in memory ability. However, all participants in group A learned something, since none of them scored 0, while several participants in group B did score 0. To this extent, we can show that our robot does help with memory.

From the middle figure, we find that people in group A tend to think our robot can help with the memory task, and only a few of them thought the robot was annoying, as shown in the right figure.

[[image:result4.png||height="400px"]]

As shown in the figure above, group A with our intelligent robot gave an average score of 66.7, and group B with the dumb robot gave 58.2. On this scale, we can see that participants were more willing to play with our intelligent robot.

We also collected free-form feedback from the participants. Most of them liked the appearance of the robot, which is consistent with our reason for choosing the NAO: people are more engaged and willing to interact with a humanoid robot. Some of them complained about the robot's speech recognition.

= Discussion =

= Conclusion =

= Reference =

1. Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human–Computer Interaction, 24(6), 574-594.