Last modified by Demi Breen on 2023/04/09 14:59

From version 4.1
edited by Liza Wensink
on 2023/03/19 22:08
Change comment: There is no comment for this version
To version 6.1
edited by Liza Wensink
on 2023/04/04 15:37
Change comment: There is no comment for this version

Summary

Details

Page properties
Title
... ... @@ -1,1 +1,1 @@
1 -Persuasiveness of conversational agents
1 +7. Persuasiveness of conversational agents
Content
... ... @@ -1,42 +1,23 @@
1 -**Article: **Persuasive Conversational Agent with Persuasion Tactics. [[https:~~/~~/link.springer.com/chapter/10.1007/978-3-642-13226-1_4>>https://link.springer.com/chapter/10.1007/978-3-642-13226-1_4]]
1 +Since the focus of our design is to motivate a PwD to follow along on a walk in the garden together with the robot, we will most likely need to take persuasiveness into account. Persuasiveness in human-human interactions consists of persuasion tactics and behaviors that can make a person more or less convincing. In human-robot interactions these aspects also come into play, with the added challenge that the agent cannot employ every tactic a human can. Below, we therefore dive into persuasiveness in conversational agents and what could be essential when designing a system with such an objective.
2 2  
3 -A number of studies had been done regarding the persuasiveness of conversational agents and how convincing an agent actually might be to a human person. This paper highlights that for a conversational agent to be persuasive and influence a person's behavior they need to be able to adapt to the outcomes of the conversation and the interactions it has with the human, as would a human who wants to be persuasive and convincing.
3 +Generally, for a conversational agent to be persuasive and influence a person's behavior, it needs to adapt to the outcomes of the conversation and to its interactions with the human, just as a human who wants to be persuasive would, according to Narita and Kitamura [1]. When it comes to designing the agent itself, several models can be used. The general approach is to select the response, and the rule for replying, that is most likely to lead to the desired goal [1].
4 4  
5 -This article also contains useful information when it comes to designing a persuasive conversational agent using the Wizard of Oz method. I won't describe it in detail here, but if it is needed there is some information to be taken from the article.
5 +This could be done through a conversation model represented as a state transition tree, using a goal-oriented approach. The different statements the robot can make are then represented as links that change one state into another [1]. Since these interactions form a dialogue, there are two types of states, human states and agent states, which alternate along conversation paths. A conversation path represents the flow of a conversation, beginning with an initial state and ending in either success or failure [1]. When the human's input matches a statement linking to an agent state, the agent chooses the statement that leads from that agent state to the human state with the greatest probability of success [1]. The model is updated when the human provides an input the agent is not familiar with: the conversation path branches, and the model updates its probability scores depending on whether the persuasion succeeded or failed [1]. While it is not feasible to develop a full-scale conversation model for our design in this project, this clearly illustrates the general approach to persuading with a conversational agent: a clear goal is set for the interaction, and the agent acts accordingly, in steps that bring the conversation closer to that goal.
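The state-transition idea described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the state names, the 0.5 default probability, and the update rule are all invented for the example. It only shows the two mechanisms the paper describes, choosing the link with the greatest success probability and nudging probability scores after a success or failure.

```python
class ConversationModel:
    """Sketch of a goal-oriented conversation model as a state transition tree.

    Agent states and human states alternate along conversation paths;
    each agent statement is a link between states and carries an
    estimated probability of eventually leading to persuasion success.
    """

    def __init__(self):
        # links[agent_state] -> list of [statement, next_human_state, p_success]
        self.links = {}

    def add_link(self, agent_state, statement, human_state, p_success=0.5):
        self.links.setdefault(agent_state, []).append(
            [statement, human_state, p_success])

    def choose_statement(self, agent_state):
        # Pick the statement whose link has the greatest success probability.
        return max(self.links[agent_state], key=lambda link: link[2])

    def update(self, agent_state, statement, succeeded, rate=0.1):
        # After the conversation ends, nudge the probability score of the
        # statement that was used toward 1 (success) or 0 (failure).
        for link in self.links[agent_state]:
            if link[0] == statement:
                target = 1.0 if succeeded else 0.0
                link[2] += rate * (target - link[2])

# Illustrative use in our garden-walk scenario (example states/statements):
model = ConversationModel()
model.add_link("greeted", "The garden is lovely today, shall we walk?",
               "considering", 0.6)
model.add_link("greeted", "Walking is good for your health.",
               "considering", 0.4)
best = model.choose_statement("greeted")      # picks the 0.6 link
model.update("greeted", best[0], succeeded=True)  # 0.6 -> 0.64
```

Unknown human inputs would, as in the paper, branch the path by adding new links before updating their scores.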
6 6  
7 -When it comes to designing a persuasive conversational agent, there are several models that can be used. The general approach is to select the response and rule of replying that is most likely to lead to success.
7 +The persuasion objective in the study centered on showing the participant two different cameras, A and B; the purpose of the persuasion was to make the user change their initial choice [1]. The persuasion proceeded according to Table 1 below [1]: the general tactic was to convince the user to choose the other camera by explaining why the concerns they raise might not be relevant, for example that the number of pixels or the image stabilizer does not carry much weight [1]. The model has set predictions of what a user might inquire about, with pre-written responses that might change the user's opinion [1]. In our case, we could similarly try to anticipate reasons why somebody might not want to go for a walk, and then explain why those reasons are not as important as they seem, in order to persuade the user to actually go on the walk.
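Transferred to our walking scenario, the "anticipated objection, pre-written counter-response" tactic could look like the sketch below. The objections and responses are invented placeholders, not from the study; the point is only the lookup structure.

```python
# Pre-written counter-responses for objections we expect the person to raise,
# mirroring the camera study's tactic of explaining concerns away.
COUNTERS = {
    "too tired": "A short, slow stroll can actually leave you feeling more rested.",
    "bad weather": "The garden path is sheltered, and we can turn back at any time.",
    "not now": "Right now the garden is quiet and the sun is still out.",
}

def respond_to_objection(objection: str) -> str:
    # Fall back to a gentle alternative if the objection was not anticipated;
    # in the full model such unknown input would branch the conversation path.
    return COUNTERS.get(
        objection.lower().strip(),
        "I understand. Perhaps we can just look at the garden from the door?")

print(respond_to_objection("Too tired"))
```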
8 8  
9 -**Goal-oriented conversational model. **
9 +[[image:attach:flowchart.PNG||height="702" width="416"]]
10 10  
11 -(Quotes from the article)
12 -\\- The conversation model can be represented as a state transition tree where a statement is represented as a link to change a state from one to another.
13 -- Two different types of states, agent states and user states (the human). 
14 -- They are interleaved on a conversation path.
15 -- A conversation path represents the flow of conversation between the agent and one or more users and begins with the initial state and terminates with either success or failure.
16 -- If the input matches a statement on a link to an agent state, the agent chooses a statement that links the agent state to a user state with the greatest success probability.
11 +The study does, however, continue by mentioning that the Wizard of Oz approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25 of 60 users, while the model-based conversational agent persuaded only 1 of 10 users [1]. A necessary takeaway is that designing a persuasive conversational agent consists of two important aspects, which will also be crucial in the design of our project. These are:
17 17  
18 -This might not be necessarily a structure we need to implement in its entirety, but some information could definitely be taken from it.
13 +* having the robot follow general human conversational rules
14 +* applying persuasiveness tactics [1].
19 19  
20 -**Updating conversation model. **
21 21  
22 -When the above conversation model needs to be updated, it goes according to the following:
23 23  
24 -- When input from the user does not match any statement on the stored conversation path, the conversation path is branched and the success probability scores are updated depending on persuasion success/failure. (Once again, maybe not something we will be able to implement but can try to somehow mimic the idea of).
18 +**Article: **Persuasive Conversational Agent with Persuasion Tactics. [[https:~~/~~/link.springer.com/chapter/10.1007/978-3-642-13226-1_4>>https://link.springer.com/chapter/10.1007/978-3-642-13226-1_4]]
25 25  
26 -The article does, however, continue by mentioning that the Wizard approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25/60 users, while the conversation agent based on the model only managed 1 out of 10 users. It is necessary to remember that designing a persuasive conversational agent consists of two important aspects - having the robot follow general human conversational rules, but also applying persuasiveness tactics. I will attempt to clarify these tactics a bit below.
27 27  
28 -**The persuasiveness example given in this study entails:**
29 -
30 -"We first show two digital cameras to a customer A and B as shown in Table 1. Camera A has better features about the number of pixels and image stabilizer than camera B, but the price and the weight of A are more than those of B. The purpose of this persuasion is to make the user change his/her choice from the initial one to another one."
31 -
32 -The way the persuasion was designed in this case is according to the following:
33 -"Each phase has a goal to achieve such as “Ask which camera he/she prefers?” Hence the process of persuasive conversation can be represented as a sequence of phases. The sequence of phases may change depending on the responses from the user. If the user likes a camera because of the number of pixels, the agent tries to explain that the number of pixels is not important to choose a camera. If the user likes a camera because of its image stabilizer, the agent tries to explain that the image stabilizer is useless if photos are taken only in the day time."
34 -
35 -[[image:attach:flowchart.PNG]]
36 -
37 -From this particular case it is clear that the persuasive strategy is based on the fact that there is a set of expected things the user might bring up (like, a priori assumed aspects that the user might talk about) that the robot will attempt to explain away, or explain why the user does not need to bother about that when choosing the camera. In our case maybe we could also attempt to catch some reasons somebody might not want to go walking for example, and then try to explain away those reasons (once again, just an idea) to try to persuade the user to actually go out on the walk.
38 -
39 -
40 40  **Article**: Applying Psychology of Persuasion to Conversational Agents through Reinforcement Learning: an Exploratory Study. 
41 41  [[https:~~/~~/ceur-ws.org/Vol-2481/paper27.pdf>>https://ceur-ws.org/Vol-2481/paper27.pdf]]
42 42