Wiki source code of Measuring Instruments

Version 4.6 by Sofia Kostakonti on 2022/04/03 15:25

For the evaluation of a prototype, there are several frameworks that can be followed, starting with DECIDE[1]. DECIDE stands for:

**D**etermine the goals
**E**xplore the questions
**C**hoose evaluation approach and methods
**I**dentify practical issues
**D**ecide about ethical issues
**E**valuate, analyze, interpret, present data

First, we determine the high-level goals of the study and the motivation behind them, since these influence how we approach it. Then we choose the evaluation approach and methods, decide whether they rely on quantitative or qualitative data, and plan the data collection, analysis, and presentation. At the same time, any practical issues, such as participants, budget, or schedule, are identified, and a pilot study is performed if needed. It is important to adhere to any ethical procedures that are in place, to ensure that participants know their rights and are protected. Finally, the evaluation of the data takes place, where it is determined whether the results are reliable, valid, free of bias, not specific to the evaluation environment, and able to generalize well.
Another framework used is IMPACT[2]:

There are several elements that need to be considered when trying to establish evaluation objectives. First, we should present the objectives and the claims of the study. Furthermore, the specific measures and metrics that will be used need to be determined, followed by the participants and the activities they will perform based on a specific use case. We should also define the context, meaning the social, ethical, physical, or environmental setting, as well as the technologies (hardware and software) involved.
== IMPACT ==

**I**ntention: Present the objectives and claims
**M**easures and metrics: "What, how, and why"
**P**eople: Define the participants
**A**ctivities: Use cases in activities
**C**ontext: Social, ethical, physical, etc. environment definition
**T**echnologies: Hardware and software


= Evaluation methods =

== Formative evaluation ==

- Open-ended evaluation of the design
- E.g. How will the users respond to the new design?


== Summative evaluation ==

- Focus on the overall effect
- Summarizes whether the objective has been reached
- E.g. Are the participants happier when working with design X than with design Y?

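A summative comparison like the design X versus design Y question above could be answered by comparing satisfaction ratings between two independent groups. A minimal sketch using Welch's t statistic, where the ratings and group sizes are hypothetical and a full analysis would also compute degrees of freedom and a p-value:

```python
import math
import statistics

# Hypothetical 7-point satisfaction ratings for two independent
# groups; the numbers are illustrative, not data from any study.
design_x = [6, 5, 7, 6, 6, 5, 7]
design_y = [4, 5, 5, 4, 6, 4, 5]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(design_x, design_y)
print(f"mean X = {statistics.mean(design_x):.2f}, "
      f"mean Y = {statistics.mean(design_y):.2f}, t = {t:.2f}")
```

A large t value suggests the difference between the group means is unlikely to be due to inter-subject variance alone.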
== Data ==

- Qualitative: Explore, discover, instruct

- Quantitative: Describe, explain, predict

- Subjective quantitative (e.g. numeric rating scales)


== Statistics ==

- Descriptive: Describe the dataset, e.g. mean time on task
- Inferential: Using a sample to infer about a population, e.g. predicted mean time on task based on user characteristics.

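The distinction can be illustrated on a hypothetical time-on-task sample: descriptive statistics summarize the sample itself, while an inferential step such as a confidence interval estimates the population mean. A minimal sketch with illustrative data:

```python
import math
import statistics

# Hypothetical task-completion times in seconds for 8 participants;
# illustrative only, not data from any study.
times = [42.1, 38.5, 47.0, 40.2, 44.8, 39.9, 43.3, 41.6]

# Descriptive: summarize the sample itself.
mean_time = statistics.mean(times)
sd_time = statistics.stdev(times)

# Inferential: estimate the population mean with a 95% confidence
# interval (normal approximation; with n = 8 a t-interval would be
# more appropriate, but this keeps the sketch dependency-free).
se = sd_time / math.sqrt(len(times))
ci = (mean_time - 1.96 * se, mean_time + 1.96 * se)

print(f"mean = {mean_time:.1f} s, sd = {sd_time:.1f} s, "
      f"95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```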
== Experiment Design: Conditions ==

=== Within Subjects (Repeated Measures) ===
Each participant is subjected to all the test conditions.
Fewer subjects are needed and inter-subject variance is reduced, but the setup can be difficult because subjects fatigue, learn the setup, or simply do not have enough time.

=== Between Subjects (Between Groups) ===
Each subject undergoes only one test condition. Simple to execute, but results in significant variance due to inter-subject differences in characteristics.


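One common way to mitigate the ordering and learning effects mentioned for within-subjects designs is to counterbalance the order of conditions, for example with a simple Latin square. A minimal sketch, where the condition names are hypothetical:

```python
# Counterbalancing condition order in a within-subjects design with a
# simple cyclic Latin square: every condition appears exactly once in
# every position across the participant groups.
conditions = ["design_X", "design_Y", "design_Z"]

def latin_square_orders(conds):
    """Return one condition order per participant group, rotating the
    list so each condition occupies each position exactly once."""
    n = len(conds)
    return [[conds[(row + col) % n] for col in range(n)] for row in range(n)]

for i, order in enumerate(latin_square_orders(conditions), start=1):
    print(f"group {i}: {order}")
```

Participants are then divided evenly over the groups, so no single condition systematically benefits from being seen first or last.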
== Lenses ==
- A lens means taking a different perspective when looking at your system
- E.g. the perspective of the stakeholders, of other groups, or a technical/legal view


[1] Kurniawan, S. (2004). Interaction design: Beyond human–computer interaction by Preece, Sharp and Rogers (2001), ISBN 0471492787.