Changes for page 7. Persuasiveness of conversational agents
Last modified by Demi Breen on 2023/04/09 14:59
From version 5.1
edited by Hugo van Dijk
on 2023/03/23 16:36
Change comment:
There is no comment for this version
To version 9.1
edited by Demi Breen
on 2023/04/09 14:59
Change comment:
There is no comment for this version
Summary

- Page properties (2 modified, 0 added, 0 removed)

Details

- Page properties
- Author: changed from XWiki.hjpvandijk to XWiki.Demibreen1000
- Content:
Since the focus of our design is to motivate a PwD to follow along on a walk in the garden together with the robot, we will most likely need to take persuasiveness into account. In human-human interaction, persuasiveness consists of persuasion tactics and behaviours that make a person more or less convincing. These aspects also come into play in human-robot interaction, with the added challenge that the agent cannot employ every tactic a human could. Below we therefore dive into persuasiveness in conversational agents and what could be essential when designing a system with an objective like ours.

Generally, for a conversational agent to be persuasive and influence a person's behaviour, it needs to adapt to the outcomes of the conversation and its interactions with the human, just as a human who wants to be persuasive would, according to Narita and Kitamura [1]. Several models can be used when designing the agent itself. The general approach is to select the response, and the rule for replying, that is most likely to lead to the desired goal [1].
This could be done with a conversational model represented as a state transition tree, using a goal-oriented approach. The different statements the robot can make are represented as links that move the conversation from one state to another [1]. Since these interactions form a dialogue, there are two types of states, human states and agent states, which are interleaved along conversation paths. A conversation path represents the flow of a conversation, beginning with an initial state and terminating in either success or failure [1]. When the input from the human matches a link to an agent state, the agent chooses the statement that leads from that agent state to the human state with the greatest probability of success [1]. The model is updated when the human provides input the agent is not familiar with: the conversation path branches and the success probability scores are updated [1]. While it is not feasible to develop a full-scale conversational model for our design in this project, this clearly illustrates the general approach to persuading with a conversational agent: a clear goal is set for the interaction, and the agent acts accordingly, in steps that bring the conversation closer to that goal.

The persuasiveness objective in the study centred on showing the participant two different cameras, A and B. The purpose of the persuasion was to make the user change their initial choice [1].
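The goal-oriented model described above can be sketched in a few lines. This is only a minimal illustration of the idea in [1]; the states, statements, probabilities, and the simple frequency-based update rule are our own hypothetical choices:

```python
# Minimal sketch of a goal-oriented conversational model in the style of [1]:
# agent states link to human states via statements, each link carrying an
# estimated probability of eventually leading to persuasion success.
# All states, statements, and numbers here are hypothetical examples.

class Link:
    def __init__(self, statement, next_state, p_success):
        self.statement = statement    # what the agent says
        self.next_state = next_state  # human state this statement leads to
        self.p_success = p_success    # estimated success probability
        self.tries = 0
        self.successes = 0

    def update(self, succeeded):
        # A conversation on this branch ended; refresh the success estimate.
        self.tries += 1
        if succeeded:
            self.successes += 1
        self.p_success = self.successes / self.tries


class Agent:
    def __init__(self):
        self.links = {}  # agent_state -> [Link, ...]

    def add_link(self, agent_state, link):
        self.links.setdefault(agent_state, []).append(link)

    def choose(self, agent_state):
        # Pick the statement whose link has the greatest success probability.
        return max(self.links[agent_state], key=lambda l: l.p_success)


agent = Agent()
agent.add_link("handle_refusal",
               Link("A short walk will feel good in the sun.", "considering", 0.6))
agent.add_link("handle_refusal",
               Link("We can turn back whenever you like.", "considering", 0.4))

best = agent.choose("handle_refusal")
print(best.statement)        # the higher-probability statement is chosen
best.update(succeeded=True)  # observed outcome updates the estimate
```

In a real system the probabilities would be learned over many conversations; the point here is only the selection-by-success-probability step.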
The process of persuasion was done according to Table 1 below [1]: the general tactic was to convince the user to choose the other camera by explaining why the concerns they raise might not be relevant, for example that the number of pixels or the image stabilizer does not carry much weight [1]. The model has predefined expectations of what a user might inquire about, with pre-written responses that might change the user's opinion [1]. In our case we could similarly attempt to catch reasons somebody might not want to go walking, and then explain why those reasons are not important, to persuade the user to go on the walk.

[[image:attach:flowchart.PNG||height="702" width="416"]]

The study does, however, continue by mentioning that the Wizard of Oz approach, in which the robot is simply controlled by a human behind the scenes, managed to persuade 25 out of 60 users, whereas the conversational agent based on the model only managed 1 out of 10 [1]. A necessary takeaway is that designing a persuasive conversational agent consists of two important aspects, which will also be crucial in the design of our project.
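The objection-countering tactic could be applied to our walking scenario roughly as follows; the objections, responses, and fallback below are hypothetical examples of our own, not taken from [1]:

```python
# Hypothetical sketch: expected objections to going on a walk, each paired
# with a pre-written counter-response, mirroring the tactic of the camera
# study [1]. Wording is invented for illustration.
counters = {
    "too tired":   "A gentle stroll can actually give you more energy.",
    "bad weather": "The garden path is sheltered, and we can go back any time.",
    "not now":     "Just five minutes? The flowers are lovely today.",
}

def respond(objection: str) -> str:
    # Fall back to a generic encouragement for unanticipated objections.
    return counters.get(objection, "Shall we just try a few steps together?")

print(respond("too tired"))
print(respond("my knee hurts"))  # unanticipated objection -> fallback
```

A real dialogue system would need to map free-form speech onto these expected objections first; this sketch only shows the objection-to-response lookup itself.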
These are:

* Having the robot follow general human conversational rules
* Applying persuasion tactics [1]

Another example of attempted persuasion by a conversational agent comes from Di Massimo et al., where an agent tried to persuade a person to follow a healthier diet [2]. The study identifies three psychosocial antecedents of behaviour change:

* Self-Efficacy (the person's perceived ability to carry out the behaviour, here: eating healthily)
* Attitude (the person's evaluation of the pros and cons of the change)
* Intention Change (the person's willingness to stick to the diet) [2]

These aspects cannot be measured directly and are instead captured as latent variables through questionnaires [2]. While this full setup might be too extensive for our project with the Pepper robot, our objective still comes down to persuading a PwD to participate in an activity. Notably, that activity is also health-related, so we can use health-related explanations and persuasion in our design as well, which means the three aspects above could be relevant.
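A common, simple way to turn questionnaire answers into such construct scores is to average the Likert items assigned to each construct. The sketch below is only an illustration of that idea; the item names, values, and averaging scheme are invented and not the actual instrument or latent-variable model used in [2]:

```python
# Hypothetical sketch: scoring the three constructs from Likert items (1-7)
# by averaging the items assigned to each construct. Studies like [2] use
# validated questionnaires and proper latent-variable models instead.
answers = {
    "se1": 5, "se2": 6,   # self-efficacy items
    "at1": 4, "at2": 3,   # attitude items
    "ic1": 6, "ic2": 5,   # intention-change items
}

constructs = {
    "self_efficacy":    ["se1", "se2"],
    "attitude":         ["at1", "at2"],
    "intention_change": ["ic1", "ic2"],
}

scores = {name: sum(answers[item] for item in items) / len(items)
          for name, items in constructs.items()}
print(scores)  # {'self_efficacy': 5.5, 'attitude': 3.5, 'intention_change': 5.5}
```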
In the subsequent test conducted in this study, participants were randomly assigned to one of four groups, each receiving a different type of persuasive message: gain (positive actions give positive outcomes), non-gain (negative actions prevent positive outcomes), loss (negative actions give negative outcomes), or non-loss (positive actions prevent negative outcomes) [2]. Again, while this is rather elaborate, it could be worth considering these different types of persuasive messages, which could perhaps be incorporated into the motivation we want to provide to a PwD; it may even be possible to investigate which kind of persuasive message is most effective. We do, however, want to stay on the side of positive messages for motivation, and still anchor them in either a goal-oriented or emotion-based motivation approach. The study then moves on to using reinforcement learning and Bayesian networks to achieve these persuasion goals, which is not very relevant to our particular project, even if it is highly interesting to learn about.
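The four framing types could be expressed as simple message templates for our walking context. The wording below is our own hypothetical illustration, not taken from [2]:

```python
# Hypothetical message templates for the four framing types described in [2],
# applied to the walking activity. Wording is invented for illustration.
framings = {
    "gain":     "If you join the walk, you will feel fitter and brighter.",
    "non_gain": "If you skip the walk, you will miss out on feeling fitter.",
    "loss":     "If you stay inside, your joints may get stiffer.",
    "non_loss": "If you join the walk, you can keep your joints from stiffening.",
}

def message(frame: str) -> str:
    return framings[frame]

# For motivating a PwD we would favour the positively phrased frames.
for frame in ["gain", "non_loss"]:
    print(message(frame))
```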
===== References =====

[1] Narita, T., Kitamura, Y. (2010). Persuasive Conversational Agent with Persuasion Tactics. In: Ploug, T., Hasle, P., Oinas-Kukkonen, H. (eds) Persuasive Technology. PERSUASIVE 2010. Lecture Notes in Computer Science, vol 6137. Springer, Berlin, Heidelberg. DOI: [[https:~~/~~/doi.org/10.1007/978-3-642-13226-1_4>>https://doi.org/10.1007/978-3-642-13226-1_4]]

[2] Di Massimo, F., Carfora, V., Catellani, P., Piastra, M. (2019). Applying Psychology of Persuasion to Conversational Agents through Reinforcement Learning: an Exploratory Study. //Italian Conference on Computational Linguistics//. Online: [[https:~~/~~/ceur-ws.org/Vol-2481/paper27.pdf>>https://ceur-ws.org/Vol-2481/paper27.pdf]]