Changes for page b. Test

Last modified by Demi Tao on 2023/04/10 10:13

From version 22.1
edited by Demi Tao
on 2023/04/10 09:24
Change comment: There is no comment for this version
To version 18.1
edited by Demi Tao
on 2023/04/08 01:17
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -95,9 +95,8 @@
95 95  
96 96  === (% style="color:inherit; font-family:inherit" %)Results of the survey:(%%) ===
97 97  
98 -[[Figure: //Percentage of user satisfaction and SUS score//>>image:attach:chart.png]]
98 +[[image:attach:chart.png||alt="Survey Results"]]
99 99  
100 -
101 101  |(% style="width:215px" %)**Attributes**|(% style="width:211px" %)**Mean (control group)**|(% style="width:229px" %)**Mean (experimental group)**|(% style="width:197px" %)**P-value**
102 102  |(% style="width:215px" %)Accessibility|(% style="width:211px" %)2.33|(% style="width:229px" %)3.25|(% style="width:197px" %)0.0644
103 103  |(% style="width:215px" %)Trustability|(% style="width:211px" %)3.83|(% style="width:229px" %)4.125|(% style="width:197px" %)0.3165
... ... @@ -106,21 +106,15 @@
106 106  |(% style="width:215px" %)Empowerment|(% style="width:211px" %)3.33|(% style="width:229px" %)4|(% style="width:197px" %)0.0895
107 107  |(% style="width:215px" %)Usability|(% style="width:211px" %)54.17|(% style="width:229px" %)71.875|(% style="width:197px" %)0.0903
108 108  
109 -(% class="wikigeneratedid" %)
110 -//Table: User evaluation score//
108 +=== Observation: ===
111 111  
112 -=== Observation: (Total percentage sums up to 100) ===
113 -
114 114  |=(% style="width: 199px;" %)Tasks|=(% style="width: 147px;" %)Succeeded by Themselves|=(% style="width: 146px;" %)Succeeded with Some Guidance|=(% style="width: 185px;" %)Succeeded with Detailed Explicit Instructions|=(% style="width: 175px;" %)Average Time to Complete Task (s)
115 115  |(% style="width:199px" %)Add a reminder|(% style="width:147px" %)14.29%|(% style="width:146px" %)28.57%|(% style="width:185px" %)57.14%|(% style="width:175px" %)97
116 -|(% style="width:199px" %)Check weekly reminders on the Calendar page|(% style="width:147px" %)100%|(% style="width:146px" %)NA|(% style="width:185px" %)NA|(% style="width:175px" %)36
112 +|(% style="width:199px" %)Check weekly reminders on the Calendar page|(% style="width:147px" %)100%|(% style="width:146px" %)0%|(% style="width:185px" %)0%|(% style="width:175px" %)36
117 117  |(% style="width:199px" %)Create a personal profile|(% style="width:147px" %)7.14%|(% style="width:146px" %)50%|(% style="width:185px" %)42.86%|(% style="width:175px" %)69
118 -|(% style="width:199px" %)Verify current profiles|(% style="width:147px" %)85.71%|(% style="width:146px" %)14.29%|(% style="width:185px" %)NA|(% style="width:175px" %)32
114 +|(% style="width:199px" %)Verify current profiles|(% style="width:147px" %)85.71%|(% style="width:146px" %)14.29%|(% style="width:185px" %)0%|(% style="width:175px" %)32
119 119  |(% style="width:199px" %)Play memory game|(% style="width:147px" %)0%|(% style="width:146px" %)42.86%|(% style="width:185px" %)57.14%|(% style="width:175px" %)208
120 120  
121 -//Table: Results of user performance of tasks //
122 -
123 -
124 124  |(% style="width:330px" %)**Tasks**|(% style="width:523px" %)**Parts where people struggled**
125 125  |(% style="width:330px" %)Add a reminder|(% style="width:523px" %)(((
126 126  * Don't know where to start
... ... @@ -135,19 +135,8 @@
135 135  * The beta version gives no right-or-wrong prompts, which differs from the instructions and confused people
136 136  )))
137 137  
138 -//Table:  Difficulties that users struggled with when solving tasks//
139 -
140 140  = 4. Discussion =
141 141  
142 -(% class="wikigeneratedid" %)
143 -As mentioned earlier, the user evaluation incorporated two quantitative measures. The first measure evaluated the various attributes of the system, including accessibility, trustworthiness, perceivability, understandability, and empowerment. The second measure employed was the System Usability Scale (SUS).
144 -
145 -(% class="wikigeneratedid" %)
146 -The attributes-related evaluation was analyzed as follows: a respondent with a total score of at least 60% (15 out of 25) was considered satisfied with the application. Eleven of the 14 users (78.57%) achieved a score of 15 or higher, and the average score was 18. According to the standard operating protocol (Quintana et al., 2020), the feasibility test was considered successfully completed if at least 75% of users were satisfied with the application. By this criterion, the feasibility test was successfully completed.
147 -
148 -The System Usability Scale (SUS) was interpreted in terms of percentile ranking.
149 -
150 -
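A minimal sketch (not part of the original page) of the satisfaction-threshold and feasibility check described in the removed paragraphs above. The thresholds come from the text; the per-user totals are hypothetical placeholders chosen only to reproduce the reported figures (11 of 14 users at or above 15 points, mean score 18).

{{code language="python"}}
# Sketch of the feasibility check described above (illustrative only).
# Thresholds come from the text; the per-user totals are hypothetical.

SATISFACTION_THRESHOLD = 15   # 60% of the 25-point attribute questionnaire
FEASIBILITY_CRITERION = 0.75  # at least 75% of users must be satisfied

def feasibility_check(attribute_totals):
    """Return (satisfaction rate, True if the feasibility criterion is met)."""
    satisfied = sum(1 for total in attribute_totals if total >= SATISFACTION_THRESHOLD)
    rate = satisfied / len(attribute_totals)
    return rate, rate >= FEASIBILITY_CRITERION

# Hypothetical totals for 14 users: 11 reach the 15-point threshold, mean = 18.
totals = [20, 21, 19, 18, 22, 17, 23, 19, 16, 15, 20, 14, 14, 14]
rate, passed = feasibility_check(totals)
print(f"Satisfied: {rate:.2%} of users; feasibility criterion met: {passed}")
# Output: Satisfied: 78.57% of users; feasibility criterion met: True
{{/code}}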
151 151  === Limitations: ===
152 152  
153 153  We encountered several problems while developing the application and running the experiment:
chart.png
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.DemiTao
1 +XWiki.KarthikPrakash
Size
... ... @@ -1,1 +1,1 @@
1 -18.8 KB
1 +15.4 KB
Content
XWiki.XWikiComments[8]
Author
... ... @@ -1,1 +1,0 @@
1 -XWiki.KarthikPrakash
Comment
... ... @@ -1,6 +1,0 @@
1 -If our results are not statistically significant, explain why.
2 -
3 -Some of the reasons could be:
4 -Participants are young, familiar with technology, and not cognitively impaired.
5 -
6 -Rephrase the conclusion and explain why and how we arrived at it.
Date
... ... @@ -1,1 +1,0 @@
1 -2023-04-08 10:33:45.41
XWiki.XWikiComments[1]
Author
... ... @@ -1,0 +1,1 @@
1 +Anonymous
Comment
... ... @@ -1,0 +1,40 @@
1 +Quintana et al. (2020) designed questions based on the most important quality attributes to evaluate how the application works for people with mild cognitive impairment.
2 +
3 +(potential questions tailored to our case)
4 +
5 +* (((
6 +How satisfied are you with the robot’s ability to support you?
7 +)))
8 +* (((
9 +How well does the robot fulfill your expectations?
10 +)))
11 +* (((
12 +Imagine a perfect robot for this task. How far away from it is the robot you are using today?
13 +)))
14 +* (((
15 +I find the robot easily accessible for people with dementia.
16 +)))
17 +* (((
18 +I feel that I can trust the robot and that it is safe to use.
19 +)))
20 +* (((
21 +I find it easy to understand how to operate the robot.
22 +)))
23 +* (((
24 +I am able to understand all the information presented by the robot.
25 +)))
26 +* (((
27 +I feel that the robot gave me better control over my daily situation.
28 +)))
29 +
30 +The response options for all of the above items were: 1 = Strongly disagree; 2 = Disagree; 3 = Neither agree nor disagree; 4 = Agree; 5 = Strongly agree.
31 +
32 +
33 +The System Usability Scale (SUS) is a reliable instrument for measuring usability. It comprises 10 questions, each with five response options ranging from strongly disagree to strongly agree (Brooke, 1996).
34 +
35 +(see the attachment for the 10 questions)
36 +
37 +
38 +1. Quintana M, Anderberg P, Sanmartin Berglund J, Frögren J, Cano N, Cellek S, Zhang J, Garolera M. Feasibility-Usability Study of a Tablet App Adapted Specifically for Persons with Cognitive Impairment—SMART4MD (Support Monitoring and Reminder Technology for Mild Dementia). //International Journal of Environmental Research and Public Health//. 2020; 17(18):6816. [[https:~~/~~/doi.org/10.3390/ijerph17186816>>https://doi.org/10.3390/ijerph17186816]]
39 +
40 +2. Brooke, J. SUS: A quick and dirty usability scale. //Usability Evaluation in Industry//. 1996; 189:4–7.
Date
... ... @@ -1,0 +1,1 @@
1 +2023-03-24 11:18:20.296
XWiki.XWikiComments[6]
Author
... ... @@ -1,0 +1,1 @@
1 +XWiki.DemiTao
Comment
... ... @@ -1,0 +1,16 @@
1 +**Interpretation for user evaluation **
2 +
3 +A respondent with a total score of at least 60% (15 out of 25 on the matrix question) was considered satisfied with the application.
4 +
5 +**Scoring SUS**
6 +
7 +* For odd-numbered items: subtract 1 from the user's response.
8 +* For even-numbered items: subtract the user's response from 5.
9 +* This scales all values from 0 to 4 (with 4 being the most positive response).
10 +* Add up the converted responses for each user and multiply that total by 2.5. This converts the range of possible scores from 0–40 to 0–100 (a short sketch of this computation follows below).
11 +
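A minimal sketch (not part of the original comment) of the SUS scoring steps listed above; the example responses are hypothetical and not data from the study.

{{code language="python"}}
# Sketch of the SUS scoring procedure listed above (illustrative only).

def sus_score(responses):
    """Convert ten raw SUS responses (each 1-5) into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, answer in enumerate(responses, start=1):
        if item % 2 == 1:       # odd-numbered items: subtract 1 from the response
            total += answer - 1
        else:                   # even-numbered items: subtract the response from 5
            total += 5 - answer
    return total * 2.5          # rescale the 0-40 total to 0-100

# Hypothetical respondent who is fairly positive about the system.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
{{/code}}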
12 +**Interpreting Scores for SUS [[*>>https://measuringu.com/sus/]]**
13 +
14 +Interpreting the scoring can be complex. The participant’s scores for each question are converted to a new number, added together, and then multiplied by 2.5 to convert the original 0-40 range to a 0-100 score. Although the scores run from 0 to 100, they are not percentages and should be considered only in terms of their percentile ranking.
15 +
16 +Based on research, a SUS score above 68 is considered above average and anything below 68 is below average; however, the best way to interpret results is to “normalize” the scores to produce a percentile ranking.
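As a small illustration of this interpretation (again not part of the original comment), the group means from the Usability row of the results table, which appear to be SUS scores although the page does not state this explicitly, can be set against the 68 benchmark:

{{code language="python"}}
# Compare group mean SUS scores with the commonly cited benchmark of 68.
# The means below are taken from the Usability row of the results table,
# assuming that row reports SUS scores (the page does not say so explicitly).

SUS_BENCHMARK = 68.0
group_means = {"control": 54.17, "experimental": 71.875}

for group, mean in group_means.items():
    verdict = "above" if mean > SUS_BENCHMARK else "below"
    print(f"{group}: mean SUS {mean} is {verdict} the 68 benchmark")
{{/code}}

A full percentile ranking would additionally require normative SUS data, which is outside the scope of this sketch.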
Date
... ... @@ -1,0 +1,1 @@
1 +2023-04-07 17:01:06.25