Changes for page 3. Evaluation Methods
Last modified by William OGrady on 2024/04/08 22:22
From version 24.1
edited by Rixt Hellinga
on 2024/03/19 14:51
Change comment:
There is no comment for this version
To version 26.1
edited by Jean-Paul Smit
on 2024/03/21 16:57
Change comment:
There is no comment for this version
Summary
- Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties

- Author

@@ -1,1 +1,1 @@
-xwiki:XWiki.RixtHellinga
+xwiki:XWiki.jeanpaulsmit

- Content
@@ -10,16 +10,18 @@

 2. **Relatedness.** Does the design positively affect the PwD's //affective// state? Do PwDs //like// the system?

-3. **Competence.** Is the design //dependable//; is the design accessible enough for the PwD to rely on it? Does it feel natural?
+3. **Competence.** Is the design //dependable//; is the design accessible enough for the PwD to rely on it? Does it feel natural? Can the participant accurately retrieve information through the robot?

+4. Recall.
+
+5. Memory self-efficacy.

 For a sample size as small as 20 participants, a within-subjects design is the most suitable choice, since it requires fewer participants [1]. This means that every PwD experiences all of the conditions examined. A within-subjects design can, however, be prone to confounds such as pre-existing notions; the attitude towards robots and the pre-study sense of affect and autonomy should therefore be measured beforehand and taken into account. Another confounding variable to consider is the study location and environment.

 [[image:Socio-Cognitive Engineering - Frame 1.jpg]]

-/

 == References ==

 [1] Bethel, C.L., Henkel, Z., Baugus, K. (2020). Conducting Studies in Human-Robot Interaction. In: Jost, C., //et al.// Human-Robot Interaction. Springer Series on Bio- and Neurosystems, vol 12. Springer, Cham. https:~/~/doi.org/10.1007/978-3-030-42307-0_4
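The within-subjects setup described in the content above can be illustrated with a short sketch. The snippet below is a minimal, illustrative example and not part of either page version: the condition labels, the recall scores, and the use of SciPy are assumptions made only for the illustration. It counterbalances the order in which the 20 PwDs experience the conditions and then compares the paired scores with a Wilcoxon signed-rank test (a paired t-test would be the parametric alternative).

{{code language="python"}}
# Illustrative sketch only (not taken from the page): counterbalancing and a
# paired comparison for a within-subjects design with ~20 PwDs.
# Condition labels, score values, and the use of SciPy are assumptions.
import random
from itertools import cycle

from scipy import stats

CONDITIONS = ["robot_support", "baseline"]  # hypothetical condition labels


def counterbalance(participant_ids):
    """Alternate the condition order across participants to reduce order effects."""
    orders = cycle([list(CONDITIONS), list(reversed(CONDITIONS))])
    return {pid: order for pid, order in zip(participant_ids, orders)}


def paired_comparison(scores_a, scores_b):
    """Compare the same participants under both conditions.

    A Wilcoxon signed-rank test is used because n = 20 is too small to rely on
    normality; stats.ttest_rel would be the parametric alternative.
    """
    return stats.wilcoxon(scores_a, scores_b)


if __name__ == "__main__":
    random.seed(1)
    participants = [f"PwD_{i:02d}" for i in range(1, 21)]

    # Hypothetical recall scores for the same participants in both conditions.
    recall_robot = [random.gauss(7.0, 1.5) for _ in participants]
    recall_baseline = [random.gauss(6.0, 1.5) for _ in participants]

    print(counterbalance(participants))
    print(paired_comparison(recall_robot, recall_baseline))
{{/code}}

In a fuller analysis, the pre-study measures mentioned above (attitude towards robots, baseline affect and autonomy) could additionally be entered as covariates to address the listed confounds.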