Changes for page 3. Human-System Collaboration
Last modified by Michaël Grauwde on 2023/03/26 22:42
From version 2.1
edited by Michaël Grauwde
on 2023/03/26 18:13
Change comment:
There is no comment for this version
To version 4.1
edited by Michaël Grauwde
on 2023/03/26 21:48
Change comment:
There is no comment for this version
Summary
Page properties (1 modified, 0 added, 0 removed): Content
Below is an example of the relationship between the conversational agent and the human interacting with it.

[[image:1679857955085-216.png]]

**Step 1.**

The conversational agent starts by asking the user a series of general questions about public safety. As the user interacts with the agent, the agent builds a model of the user's values. This model allows the system to place the user in a group aligned with the value that their answers correspond to.

**Step 2.**

In the next step, the conversational agent presents the user with a scenario set in the domain of public safety. After presenting this scenario, we want to see how the user reflects on it, and in particular whether the user's values have changed at all when confronted with a context-specific situation. As there is much debate about context-specific versus universal values, it is important that we test both, so that users can recognise the difference between them. Since we are creating a system for a particular domain, we focus on the context of public safety and the values that may be important to the user in this specific scenario (Liscio et al., 2022).

After this step, the user receives a value that may be the same as before or may have changed, and is told of the possible shift in their values over the course of the interaction with the agent.

**Literature on changing values based on scenarios.**

**Step 3.**

In step 3, users are placed, based on their elicited values, into groups with users whose values differ from their own.
These users can then deliberate with one another to see why others found a different value more important in the situation. Researchers have found that structured discussion can allow for better-quality discussion and produce more diverse opinions within groups (Kim, Eun, Seering, & Lee, 2021). This can also improve the perceived deliberative quality of the discussion (Kim, Eun, Seering, & Lee, 2021).

Deliberative chatbots have been found to increase participation and to surface more ideas and issues through discussion (Hadfi et al., 2021).