Wiki source code of b. Test
Version 2.1 by Michaël Grauwde on 2023/05/08 09:44
= 1. Introduction =

<include a short summary of the claims to be tested, i.e., the effects of the functions in a specific use case>

The goal of the conversational agent is to assist stakeholders. The claim that we will test is whether a conversational agent increases the deliberative quality of exchanges between stakeholders. While research in this field is scarce, we hypothesise that the conversational agent will improve value reflection. Kocielnik et al. (2018) and Zhang (2023), which focus on reflection by way of a conversational agent and AI machine-learning models respectively, both showed that interactions with AI systems improve participants' reflection.

We also claim that the conversational agent will help stimulate deliberation, as was found by Zhang (2023).

The goal of the evaluation is to determine whether the conversational agent can be used by the different stakeholder groups, so usability is very important to us. We want to identify which features of the conversational agent's design work and which do not, to allow for a better system in the future. We also want to compare design choices to help us make decisions, and we observe the effects of the system on users.
10 | |||
11 | = 2. Method = | ||
12 | |||
13 | |||
14 | == 2.1 Participants == | ||
15 | |||
16 | |||
17 | == 2.2 Experimental design == | ||
18 | |||
19 | |||
20 | == 2.3 Tasks == | ||
21 | |||
22 | |||
23 | == 2.4 Measures == | ||
24 | |||
25 | |||
26 | == 2.5 Procedure == | ||
27 | |||
28 | |||
29 | == 2.6 Material == | ||
30 | |||
31 | |||
32 | = 3. Results = | ||
33 | |||
34 | |||
35 | = 4. Discussion = | ||
36 | |||
37 | |||
38 | = 5. Conclusions = |