Changes for page 3. Evaluation Methods

Last modified by William OGrady on 2024/04/08 22:22

From version 14.1
edited by Jean-Paul Smit
on 2024/03/01 11:36
Change comment: There is no comment for this version
To version 13.1
edited by Jean-Paul Smit
on 2024/03/01 11:36
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -4,11 +4,11 @@
4 4  
5 5  The study will investigate the claims behind the following questions:
6 6  
7 -~1. **Autonomy. **Does the design increase the sense of //autonomy //in PwD?
7 +~1. Does the design increase the sense of //autonomy //in PwD?
8 8  
9 -2. **Relatedness. **Does the design positively affect PwD's //affective //state? Do subjects //like// the system?
9 +2. Does the design positively affect PwD's //affective //state? Do subjects //like// the system?
10 10  
11 -3. **Security. **Is the design //dependable//; is the design accessible enough for PwD to rely on it? Does it feel natural?
11 +3. Is the design //dependable//; is the design accessible enough for PwD to rely on it? Does it feel natural?
12 12  
13 13  
14 14  For a sample size as small as 20 participants, a within-subjects design is most appropriate, since such designs require fewer participants [1]. In this approach, every subject experiences all of the conditions examined. A within-subjects design can be prone to confounds such as pre-existing notions brought into the study, so the attitude towards robots and the pre-study sense of affect and autonomy should be measured and accounted for. Another confounding variable to consider is the study location and environment. The evaluation method will be self-assessment, which can only be included in the study once it has been validated.
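
As an illustration of how the within-subjects comparison described above might be analysed, the sketch below applies a paired, non-parametric test (Wilcoxon signed-rank) to per-participant self-assessment scores. This is a minimal sketch, not part of the study protocol; all variable names and score values are illustrative assumptions.

{{code language="python"}}
# Minimal sketch of a within-subjects (paired) comparison, assuming each of the
# ~20 participants gives one self-assessment score per condition.
# All names and values below are illustrative assumptions, not study data.
from scipy.stats import wilcoxon

# Hypothetical ordinal self-assessment scores (1-5) for the same 20 subjects.
pre_study_scores   = [3, 4, 2, 5, 4, 3, 4, 2, 3, 5, 4, 3, 2, 4, 5, 3, 4, 3, 2, 4]
with_design_scores = [4, 5, 3, 5, 5, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 5, 3, 3, 5]

# Wilcoxon signed-rank test: a paired, non-parametric test suited to small
# samples and ordinal rating data (zero differences are dropped by default).
statistic, p_value = wilcoxon(with_design_scores, pre_study_scores)
print(f"W = {statistic}, p = {p_value:.3f}")
{{/code}}

A non-parametric test is sketched here because self-assessment scales are typically ordinal; a paired t-test could be substituted if the scores can reasonably be treated as interval data.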