Changes for page 7. Persuasiveness of conversational agents
Last modified by Demi Breen on 2023/04/09 14:59
From version 8.1
edited by Liza Wensink
on 2023/04/04 16:27
Change comment:
There is no comment for this version
To version 5.1
edited by Hugo van Dijk
on 2023/03/23 16:36
Change comment:
There is no comment for this version
Summary

Page properties (2 modified, 0 added, 0 removed)

Details

Page properties

Author: changed from XWiki.lwensink to XWiki.hjpvandijk

Content
Since the focus of our design is to motivate a PwD to follow along on a walk in the garden together with the robot, we will most likely need to take persuasiveness into account. Persuasiveness in human-human interactions consists of persuasion tactics and behaviors that might make a certain person more or less convincing. When it comes to human-robot interactions these aspects also come into play, with the added challenge that the agent cannot employ all the tactics a human could. Below we therefore dive into persuasiveness in conversational agents and what could be essential when designing a system with an objective like this.

Generally, for a conversational agent to be persuasive and influence a person's behavior, it needs to be able to adapt to the outcomes of the conversation and the interactions it has with the human, just as a persuasive human would, according to Narita and Kitamura [1]. When it comes to designing the agent itself, several models can be used. The general approach is to select the response and the rule of replying that is most likely to lead to the desired goal [1].

This can be done through a conversational model represented as a state transition tree, using a goal-oriented approach. The different statements the robot can give are then represented as links that move the conversation from one state to another [1]. Since these interactions imply a dialogue, there are two types of states, human states and agent states, which are interleaved on conversation paths. A conversation path represents the flow of a conversation, beginning with an initial state and terminating with either success or failure [1]. When the input from the human matches a statement linking to an agent state, the agent chooses the statement that leads from that agent state to the human state with the greatest probability of success [1]. The model is updated when an input is provided that the agent is not familiar with: the conversation path branches and the success probability scores are updated [1]. While it is not feasible to develop a full-scale conversational model for this project, this clearly illustrates the general approach to persuading with the help of a conversational agent: a clear goal is set for the interaction, and the agent acts accordingly, in steps that bring the conversation closer to that goal.
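To make this goal-oriented model a bit more concrete for ourselves, below is a minimal Python sketch of such a state transition structure, applied to our walking scenario instead of the camera example from the paper. All states, statements and probability values are made up for illustration; the paper does not prescribe this particular data structure or update rule.

{{code language="python"}}
# Minimal sketch of a goal-oriented conversation model as a state transition tree,
# loosely following the description in [1]. The states, statements and probabilities
# below are invented examples for our walking scenario, not values from the paper.

class ConversationModel:
    def __init__(self):
        # For each user statement (user state), the agent statements that may follow it,
        # each with an estimated probability of eventually reaching the goal (a walk).
        self.links = {
            "start": {
                "Shall we take a short walk in the garden together?": 0.5,
            },
            "I am too tired to walk.": {
                "A short, slow stroll can actually help against feeling tired.": 0.4,
                "We can rest on the bench halfway whenever you like.": 0.6,
            },
            "I don't feel like going outside.": {
                "The sun is out and the flowers in the garden are blooming.": 0.5,
            },
        }

    def choose_statement(self, user_input):
        """Pick the agent statement with the greatest success probability."""
        if user_input not in self.links:
            # Unfamiliar input: branch the conversation path with a neutral follow-up,
            # mirroring how the model grows when it meets statements it does not know.
            self.links[user_input] = {"Could you tell me a bit more about that?": 0.5}
        options = self.links[user_input]
        return max(options, key=options.get)

    def update(self, user_input, statement, persuaded, lr=0.1):
        """Nudge the success probability towards the observed outcome."""
        target = 1.0 if persuaded else 0.0
        p = self.links[user_input][statement]
        self.links[user_input][statement] = p + lr * (target - p)

model = ConversationModel()
reply = model.choose_statement("I am too tired to walk.")   # -> the bench suggestion
model.update("I am too tired to walk.", reply, persuaded=True)
{{/code}}

The update step here is just a simple moving average towards the observed outcome; the paper only states that the success probability scores are updated after persuasion success or failure, so this particular rule is our own simplification.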
The persuasiveness objective in the given study centered around showing the participant two different cameras, A and B, where camera A has more pixels and an image stabilizer, but is more expensive and heavier than camera B [1]. The purpose of the persuasion was to make the user change their initial choice [1]. The process of persuasion was done according to Table 1 below [1]: the conversation proceeds as a sequence of phases, each with its own goal, and the general tactic is to convince the user to choose the other camera by explaining why the concerns they raise might not be relevant, for example that the number of pixels or the image stabilizer does not carry much weight [1]. The model thus already contains predictions of what a user might bring up and has pre-written responses that might change the user's opinion [1]. In our case we could similarly try to anticipate reasons somebody might not want to go walking, and then explain why those reasons are not that relevant or important, to persuade the user to go out on the walk.

[[image:attach:flowchart.PNG||height="702" width="416"]]

The study does, however, continue by mentioning that the Wizard of Oz approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25 out of 60 users, while the conversational agent based on the model only managed 1 out of 10 users [1]. The article also contains some detail on how that Wizard of Oz setup was designed, which could be useful if we end up using a similar setup ourselves. A necessary takeaway is that designing a persuasive conversational agent consists of two important aspects, which will also be crucial in the design of our project:

* having the robot follow general human conversational rules
* applying persuasiveness tactics [1].

Another example of attempted persuasion using a conversational agent is given by Di Massimo et al., where an agent attempts to persuade a person to follow a healthier diet [2], which is reasonably close to what we are attempting to do. Here, it is shown that there are three psychosocial antecedents of behavior change:

* Self-Efficacy (the person's perceived ability to do something, here: eat healthily)
* Attitude (the person's evaluation of the pros and cons of the change)
* Intention Change (the person's willingness to go through with sticking to the diet) [2]

The above-mentioned aspects cannot be measured directly and are instead captured as latent variables through questionnaires [2]. While this full setup might be too extensive for our project with the Pepper robot, our objective still ties into persuading PwD to participate in an activity. It is noteworthy that this activity is also health-related, so we can use health-related explanations and persuasion in our design as well, which might make the three aspects above relevant after all.

In a subsequent test conducted in this study, participants were randomly assigned to one of four groups, each receiving a different type of persuasive message. The messages were framed as either gain (positive actions give positive outcomes), non-gain (negative actions prevent positive outcomes), loss (negative actions give negative outcomes), or non-loss (positive actions prevent negative outcomes) [2]. Again, while this is rather elaborate, it could be relevant to consider these different types of persuasive messages, which could perhaps be incorporated into the motivation we want to provide to PwD, and it might be worth investigating which kind of persuasive message is effective. We do, however, want to stay on the side of positive messages for motivation and still anchor these in either a goal-oriented or emotion-based motivation approach. Further, the study goes on to use reinforcement learning and Bayesian networks to achieve these persuasion goals with the conversational agent, which is not very relevant to our particular project, even if it is highly interesting to learn about.
=====References=====

[1] Narita, T., Kitamura, Y. (2010). Persuasive Conversational Agent with Persuasion Tactics. In: Ploug, T., Hasle, P., Oinas-Kukkonen, H. (eds) Persuasive Technology. PERSUASIVE 2010. Lecture Notes in Computer Science, vol 6137. Springer, Berlin, Heidelberg. DOI: [[https:~~/~~/doi.org/10.1007/978-3-642-13226-1_4>>https://doi.org/10.1007/978-3-642-13226-1_4]]

[2] Di Massimo, F., Carfora, V., Catellani, P., Piastra, M. (2019). Applying Psychology of Persuasion to Conversational Agents through Reinforcement Learning: an Exploratory Study. //Italian Conference on Computational Linguistics//. Online: [[https:~~/~~/ceur-ws.org/Vol-2481/paper27.pdf>>https://ceur-ws.org/Vol-2481/paper27.pdf]]