Changes for page "7. Persuasiveness of conversational agents"
Last modified by Demi Breen on 2023/04/09 14:59
From version 7.1
edited by Liza Wensink
on 2023/04/04 16:25
Change comment:
There is no comment for this version
To version 1.1
edited by Liza Wensink
on 2023/03/18 22:57
Change comment:
There is no comment for this version
Summary

- Page properties (2 modified, 0 added, 0 removed)
- Attachments (0 modified, 0 added, 1 removed)
Details
- Page properties

- Title
@@ -1,1 +1,1 @@
- 7. Persuasiveness of conversational agents
+ Persuasiveness of conversational agents

- Content
@@ -1,34 +1,21 @@

Removed content (version 7.1):

Since the focus of our design is to motivate a PwD to follow along on a walk in the garden together with the robot, we will most likely need to take persuasiveness into account. Persuasiveness in human-human interactions consists of persuasion tactics and behaviors that might make a certain person more or less convincing. When it comes to human-robot interactions these aspects also come into play, with the added challenge that the agent is not able to employ all the tactics a human might be able to use. Below we therefore dive into persuasiveness in conversational agents and what could be essential when designing a system with an objective like this.

Generally, for a conversational agent to be persuasive and influence a person's behavior, it needs to be able to adapt to the outcomes of the conversation and the interactions it has with the human, as would a human who wants to be persuasive, according to Narita and Kitamura [1]. When it comes to designing the agent itself, several models can be used. The general approach is to select the response and the rule of replying that is most likely to lead to the desired goal [1].

This could be done through a conversational model which can be represented as a state transition tree, using a goal-oriented approach. The different statements that can be given by the robot are then represented as links that change one state into another [1]. Since these interactions imply a dialogue, there are two different types of states: human states and agent states, which are interconnected in conversation paths. These paths represent the flow of conversations, beginning with an initial state and ending with either success or failure [1]. When the input from the human links to an agent state, the agent chooses the statement that leads from the agent's state to the human state with the greatest probability of success [1]. The model is updated when an input is provided that the agent is not familiar with: this causes the conversation path to branch, and the model updates the probability scores [1]. While it is not feasible to develop a full-scale conversational model for the sake of our design in this project, this clearly illustrates the general approach to persuading with the help of a conversational agent. A clear goal is set for the interaction, and the agent attempts to act accordingly, in steps that bring the conversation closer to the goal.

The persuasiveness objective in the given study centered around showing the participant two different cameras, A and B. The purpose of the persuasion was to make the user change their initial choice [1]. The process of persuasion was done according to Table 1 below [1], where the general tactic was to convince the user to choose the other camera by explaining why the concerns they raise might not be relevant, for example that the pixel count or the stabilizer does not carry much weight [1]. The model already has set predictions of what a user might inquire about and has pre-written responses that might change the opinion of the user [1]. In our case we could similarly try to catch some of the reasons somebody might not want to go walking, and then explain why those reasons are not relevant or important, in order to persuade the user to go out on the walk.

[[image:attach:flowchart.PNG||height="702" width="416"]]

The study does, however, continue by mentioning that the Wizard of Oz approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25 out of 60 users, while the conversational agent based on the model only managed 1 out of 10 users [1]. A necessary takeaway here is that designing a persuasive conversational agent consists of two important aspects, which will also be crucial in the design of our project:

* having the robot follow general human conversational rules
* applying persuasion tactics [1].

Another example of attempted persuasion using a conversational agent was given by Massimo et al., where an agent attempted to persuade a person to follow a healthier diet [2]. Here, it is shown that there are three psychosocial antecedents of a behavior change:

* Self-Efficacy (the person's ability to do something, here: eat healthily)
* Attitude (the person's evaluation of the pros and cons of the change)
* Intention Change (the person's willingness to go through with sticking to the diet) [2]

The above-mentioned aspects cannot be measured directly and are instead captured as latent variables through questionnaires [2]. While this full setup might be too extensive for our project with the Pepper robot, our objective still ties into persuading PwD to participate in an activity. It is noteworthy that this also has to do with health, and that we can use health-related explanations and persuasion in our design as well, which might make the three aspects above relevant.

In the subsequent test conducted in this study, participants were randomly assigned to one of four groups, each receiving a different type of persuasive message. The persuasive messages were framed as either gain (positive actions give positive outcomes), non-gain (negative actions prevent positive outcomes), loss (negative actions give negative outcomes), or non-loss (positive actions prevent negative outcomes) [2]. Again, while this is rather elaborate, it could be relevant to consider these different types of persuasive messages, which could perhaps be incorporated into the motivation we want to provide to PwD. Perhaps it would be possible to investigate what kind of persuasive messages might be effective. We do, however, want to stay on the side of positive messages for motivation and still anchor these in either a goal-oriented or an emotion-based motivation approach. Further, the study goes on to use reinforcement learning and Bayesian networks to achieve these persuasion goals with the conversational agent, which is not very relevant to our particular project, even if it is highly interesting to learn about.

===== References =====

[1] Narita, T., Kitamura, Y. (2010). Persuasive Conversational Agent with Persuasion Tactics. In: Ploug, T., Hasle, P., Oinas-Kukkonen, H. (eds) Persuasive Technology. PERSUASIVE 2010. Lecture Notes in Computer Science, vol 6137. Springer, Berlin, Heidelberg. DOI: [[https:~~/~~/doi.org/10.1007/978-3-642-13226-1_4>>https://doi.org/10.1007/978-3-642-13226-1_4]]

[2] Massimo, D. F., Carfora, V., Catellani, P., Piastra, M. (2019). Applying Psychology of Persuasion to Conversational Agents through Reinforcement Learning: an Exploratory Study. //Italian Conference on Computational Linguistics//, online, [[https:~~/~~/ceur-ws.org/Vol-2481/paper27.pdf>>https://ceur-ws.org/Vol-2481/paper27.pdf]]

Added content (version 1.1):

**Article:** Persuasive Conversational Agent with Persuasion Tactics. [[https:~~/~~/link.springer.com/chapter/10.1007/978-3-642-13226-1_4>>https://link.springer.com/chapter/10.1007/978-3-642-13226-1_4]]

A number of studies have been done regarding the persuasiveness of conversational agents and how convincing an agent might actually be to a human. This paper highlights that for a conversational agent to be persuasive and influence a person's behavior, it needs to be able to adapt to the outcomes of the conversation and the interactions it has with the human, as would a human who wants to be persuasive and convincing.

This article also contains useful information on designing a persuasive conversational agent using the Wizard of Oz method. I won't describe it in detail here, but if it is needed there is some information to be taken from the article.

When it comes to designing a persuasive conversational agent, there are several models that can be used. The general approach is to select the response and rule of replying that is most likely to lead to success.

**Goal-oriented conversational model**

- The conversation model can be represented as a state transition tree where a statement is represented as a link that changes one state into another.
- There are two different types of states: agent states and user states (the human).
- They are interleaved on a conversation path.
- A conversation path represents the flow of conversation between the agent and one or more users; it begins with the initial state and terminates with either success or failure.
- If the input matches a statement on a link to an agent state, the agent chooses the statement that links the agent state to a user state with the greatest success probability.
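To make the goal-oriented state transition model described above a bit more concrete, below is a small sketch of how statement selection by success probability, branching on unfamiliar input, and updating of the probability scores could fit together. This is only an illustrative sketch based on the description in [1], not code from the paper or from our project; the names (ConversationModel, Link, choose_statement, observe_user_input, record_outcome) and the walking example are hypothetical.

{{code language="python"}}
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative sketch (hypothetical names): a goal-oriented conversation model
# as a state transition tree. Agent states and user states alternate on a
# conversation path; each statement is a link to the next state, and the agent
# picks the statement whose link has the greatest estimated success probability.

@dataclass
class Link:
    statement: str       # what is said on this transition
    target: str          # name of the state this statement leads to
    successes: int = 1   # simple counts used as a success-probability estimate
    attempts: int = 2

    @property
    def success_probability(self) -> float:
        return self.successes / self.attempts


@dataclass
class ConversationModel:
    # state name -> outgoing links (statements that can be uttered from that state)
    links: Dict[str, List[Link]] = field(default_factory=dict)

    def add_link(self, source: str, statement: str, target: str) -> Link:
        link = Link(statement, target)
        self.links.setdefault(source, []).append(link)
        return link

    def choose_statement(self, agent_state: str) -> Optional[Link]:
        """From an agent state, pick the statement with the greatest success probability."""
        candidates = self.links.get(agent_state, [])
        if not candidates:
            return None
        return max(candidates, key=lambda link: link.success_probability)

    def observe_user_input(self, user_state: str, utterance: str) -> str:
        """Follow a known branch if the utterance matches one; otherwise branch the tree."""
        for link in self.links.get(user_state, []):
            if link.statement == utterance:
                return link.target
        new_state = f"agent_state_{len(self.links)}"   # unfamiliar input: new branch
        self.add_link(user_state, utterance, new_state)
        return new_state

    def record_outcome(self, path: List[Link], success: bool) -> None:
        """When a conversation path ends in success or failure, update the scores."""
        for link in path:
            link.attempts += 1
            if success:
                link.successes += 1


# Hypothetical walking scenario: catch a reason for not wanting to walk and counter it.
model = ConversationModel()
model.add_link("agent_start", "Shall we take a walk in the garden together?", "user_reply")
model.add_link("user_reply", "It looks too cold outside.", "agent_counter_cold")
model.add_link("agent_counter_cold",
               "It is warmer than it looks, and we can bring your coat.", "user_decision")

taken = []
opening = model.choose_statement("agent_start")            # most promising opening statement
taken.append(opening)
state = model.observe_user_input(opening.target, "It looks too cold outside.")
counter = model.choose_statement(state)                    # pre-written counter to the concern
taken.append(counter)
model.record_outcome(taken, success=True)                  # update probabilities afterwards
{{/code}}

Given the Wizard of Oz comparison reported in [1] (25 out of 60 persuaded by a human operator versus 1 out of 10 by the model), such a hand-built model would at most complement, not replace, well-designed human-like dialogue.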
- Attachments

- flowchart.PNG (removed)

- Author
@@ -1,1 +1,0 @@
- XWiki.lwensink

- Size
@@ -1,1 +1,0 @@
- 63.3 KB

- Content