Changes for page 3. Evaluation Methods
Last modified by William OGrady on 2024/04/08 22:22
From version 8.1
edited by Jean-Paul Smit
on 2024/03/01 02:55
Change comment:
There is no comment for this version
To version 10.1
edited by Jean-Paul Smit
on 2024/03/01 11:19
Change comment:
There is no comment for this version
Summary
- Page properties (1 modified, 0 added, 0 removed)

Details
- Page properties
- Content
@@ -8,7 +8,7 @@
 
 2. Does the design positively affect PwD's //affective// state?
 
-3. Is the design //dependable//; do PwD sense that they can rely on it?
+3. Is the design //dependable//; is the design accessible enough for PwD to rely on it? Does it feel natural?
 
 
 For a sample size as small as 20 participants, a within-subjects design is most appropriate, since it requires fewer participants [1]. In this approach, every subject experiences all of the conditions examined (see the counterbalancing sketch below). A within-subjects design can be prone to confounds such as pre-existing notions, so the attitude towards robots and the pre-study sense of affect and autonomy should be measured beforehand and accounted for. Another confounding variable to consider is the study location and environment. The evaluation method will be self-assessment, which can only be included in the study once it has been validated.
@@ -16,7 +16,9 @@
 
 [[image:Socio-Cognitive Engineering - Frame 1.jpg]]
 
+/
 
+
 == References ==
 
 (1) Bethel, C.L., Henkel, Z., Baugus, K. (2020). Conducting Studies in Human-Robot Interaction. In: Jost, C., //et al.// (eds) Human-Robot Interaction. Springer Series on Bio- and Neurosystems, vol 12. Springer, Cham. https:~/~/doi.org/10.1007/978-3-030-42307-0_4
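A note on the within-subjects paragraph in the diff above: because every participant experiences all conditions, the order in which the conditions are presented should be counterbalanced so that order effects do not become yet another confound. The sketch below shows one minimal way such an assignment could look for 20 participants; the condition labels ("baseline", "prototype") and the two-condition setup are illustrative assumptions, not taken from the page.

{{code language="python"}}
from itertools import permutations

# Hypothetical condition labels; the page does not name the study conditions.
CONDITIONS = ["baseline", "prototype"]
N_PARTICIPANTS = 20

def counterbalanced_orders(conditions, n_participants):
    """Cycle through every ordering of the conditions so that each order
    is assigned to (roughly) the same number of participants."""
    orders = list(permutations(conditions))
    return [orders[i % len(orders)] for i in range(n_participants)]

for participant, order in enumerate(
        counterbalanced_orders(CONDITIONS, N_PARTICIPANTS), start=1):
    print(f"P{participant:02d}: {' -> '.join(order)}")
{{/code}}

With two conditions there are only two possible orders, so each order is assigned to 10 of the 20 participants; with three or more conditions, a balanced Latin square would keep the number of orders manageable while still controlling first-order carry-over effects.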