Changes for page 3. Evaluation Methods

Last modified by William OGrady on 2024/04/08 22:22

From version 45.1
edited by William OGrady
on 2024/04/04 15:01
Change comment: There is no comment for this version
To version 46.1
edited by William OGrady
on 2024/04/06 11:27
Change comment: There is no comment for this version

Details

Page properties
Content
... ... @@ -10,11 +10,11 @@
10 10  
11 11  
12 12  1. **Relatedness.** Does the design positively affect the PwD's //affective// state?
13 -11. **Affect**. How do participants feel about using the robot in this state?
14 -11. **Attitude towards Technology**. What do people think about using technology? Are they biased towards the robot before the study?
13 +11. **Affect**. How do participants feel about using the NAO in this state?
14 +11. **Attitude towards Technology**. What do people think about using technology? Are they biased towards the NAO before the study?
15 15  1. **Competence.** Is the design //competent//, i.e. capable enough for the PwD to rely on it?
16 16  11. **Memory self-efficacy** (pre-study). How confident are participants in their own ability to remember information?
17 -11. **Memory recall** (post-study). Can the participant accurately retrieve information through the robot?
17 +11. **Memory recall** (post-study). Can the participant accurately retrieve information through the NAO?
18 18  
19 19  For a sample size as small as 20 participants, a within-subjects design is the most suitable choice, since it requires fewer participants than a between-subjects design [1]. In such a design, every PwD experiences all of the examined conditions. However, a within-subjects design can be prone to confounds such as pre-existing notions about the environment, which is why the attitude towards robots and the pre-study sense of affect and autonomy should be measured beforehand and taken into account in the evaluation.
20 20
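
A minimal sketch of how such a counterbalanced within-subjects setup and a paired pre/post memory-recall comparison could look; the condition labels, score values, and the use of scipy.stats.ttest_rel are illustrative assumptions, not part of the study materials.

{{code language="python"}}
# Illustrative sketch only: counterbalanced condition orders and a paired
# pre/post comparison. Condition labels and scores are hypothetical.
import random
from scipy.stats import ttest_rel  # paired-samples t-test

CONDITIONS = ["reminder_via_nao", "reminder_via_caregiver"]  # hypothetical labels
N_PARTICIPANTS = 20

# Counterbalance: alternate the condition order across participants so each
# order occurs equally often, reducing order effects in the within-subjects design.
orders = [CONDITIONS[i % 2:] + CONDITIONS[:i % 2] for i in range(N_PARTICIPANTS)]
print(f"participant 1 order: {orders[0]}, participant 2 order: {orders[1]}")

# Hypothetical memory-recall scores (0-10) before and after interacting with the NAO.
random.seed(1)
pre_recall = [random.uniform(3.0, 7.0) for _ in range(N_PARTICIPANTS)]
post_recall = [score + random.uniform(0.0, 2.0) for score in pre_recall]

# Paired test: every participant contributes both a pre and a post score,
# matching the within-subjects structure of the study.
result = ttest_rel(post_recall, pre_recall)
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
{{/code}}

Since attitude towards technology and the pre-study affect measures are collected before any condition is run, they can later serve as a check for the pre-existing biases mentioned above.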