= 7. Persuasiveness of conversational agents =
Since the focus of our design is to motivate a person with dementia (PwD) to follow along on a walk in the garden together with the robot, we will most likely need to take persuasiveness into account. Persuasiveness in human-human interaction consists of persuasion tactics and behaviours that make a person more or less convincing. The same aspects come into play in human-robot interaction, with the added challenge that the agent cannot employ every tactic a human can. Below we therefore look into persuasiveness in conversational agents and what is essential when designing a system with an objective like ours.

According to Narita and Kitamura [1], for a conversational agent to be persuasive and influence a person's behaviour, it needs to adapt to the course of the conversation and its interactions with the human, just as a persuasive human would. Several models can be used when designing the agent itself; the general approach is to select the response, and the rule for replying, that is most likely to lead to the desired goal [1].

One way to do this is a goal-oriented conversational model represented as a state transition tree. The statements the robot can make are represented as links that move the conversation from one state to another [1]. Since these interactions form a dialogue, there are two types of states, human states and agent states, interconnected in conversation paths. These paths represent the flow of a conversation, beginning with an initial state and ending in either success or failure [1]. When the human's input leads to an agent state, the agent chooses the statement that leads to the human state with the greatest probability of success [1]. When the human provides input the agent is not yet familiar with, the conversation path branches and the model updates its probability scores [1]. While it is not feasible to develop a full-scale conversational model for our project, this clearly illustrates the general approach to persuading with a conversational agent: a clear goal is set for the interaction, and the agent acts in steps that bring the conversation closer to that goal. A minimal sketch of this selection-and-update loop is shown below.
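
The following Python sketch illustrates how such a model could greedily pick statements and update its success estimates after a conversation. It is our own simplified illustration, not code from [1]; the state names, statements and probabilities are assumptions for our walking scenario.

{{code language="python"}}
# Minimal sketch of a goal-oriented conversation model in the spirit of [1]:
# agent states link to human states via statements, each link carrying an
# estimated probability of eventually reaching the "success" state.
# State names, statements and probabilities are illustrative only.

class PersuasionModel:
    def __init__(self):
        # links[agent_state] = list of (statement, next_human_state, p_success)
        self.links = {
            "invite": [
                ("Shall we take a short walk in the garden together?", "asked", 0.5),
            ],
            "handle_objection": [
                ("It is dry and mild outside, and we can turn back whenever you like.", "reassured", 0.6),
                ("A short stroll might help you feel less tired later today.", "reassured", 0.4),
            ],
        }

    def choose_statement(self, agent_state):
        """Greedily pick the statement whose link has the highest success estimate."""
        options = self.links.get(agent_state, [])
        return max(options, key=lambda link: link[2])[0] if options else None

    def update(self, agent_state, statement, success):
        """After the conversation ends, nudge the estimate of the link that was used."""
        updated = []
        for text, nxt, p in self.links[agent_state]:
            if text == statement:
                p = 0.9 * p + 0.1 * (1.0 if success else 0.0)  # simple running average
            updated.append((text, nxt, p))
        self.links[agent_state] = updated


model = PersuasionModel()
print(model.choose_statement("handle_objection"))  # picks the reply with the 0.6 estimate
{{/code}}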
6 | |||
7 | The persuasiveness-objective in the given study centred around showing the participant two different cameras, A and B. The purpose of the persuasion was to make the user change their initial choice [1]. The process of persuasion was done according to Table 1 below [1] where the general tactic was to try to convince the user to choose the other camera by explaining why the concerns they raise might not be relevant, such as explaining that either the pixels or the stabilizer do not carry much weight [1]. The model already has set predictions of what a user might inquire about and has pre-written responses that might change the opinion of the user [1]. In our case maybe we could also attempt to catch some reasons somebody might not want to go walking for example, and then try to explain why those reasons are not relevant or important to try to persuade the user to go out on the walk. | ||
8 | |||
9 | [[image:attach:flowchart.PNG||height="702" width="416"]] | ||
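
In the same spirit, a hypothetical objection-response table for our walking scenario could look as follows. The objections, the simple keyword matching and the replies are our own assumptions, not taken from [1].

{{code language="python"}}
# Hypothetical objection-response table for the walking scenario, analogous to
# the camera example in [1]: predicted refusal reasons are paired with
# pre-written replies that try to show the concern carries little weight.
WALK_OBJECTIONS = {
    "tired": "A slow, short stroll can actually leave you feeling more rested afterwards.",
    "weather": "It is dry and mild right now, and we can stay close to the door.",
    "busy": "It only takes ten minutes, and we can go whenever it suits you.",
}

def respond_to_objection(utterance: str) -> str:
    """Return a pre-written counter-argument if a known objection is detected."""
    lowered = utterance.lower()
    for keyword, reply in WALK_OBJECTIONS.items():
        if keyword in lowered:
            return reply
    return "Could you tell me what is holding you back from a short walk?"

print(respond_to_objection("I'm far too tired for that"))
{{/code}}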
10 | |||
11 | The study does, however, continue by mentioning that the Wizard of Oz approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25/60 users and the conversation agent based on the model only managed 1 out of 10 users [1]. A necessary takeaway here is to remember that designing a persuasive conversational agent consists of two important aspects, which will be crucial in the design of our project also. These are: | ||
12 | |||
13 | * Having the robot follow general human conversational rules | ||
14 | * Applying persuasiveness tactics [1]. | ||
15 | |||
16 | Another example of an attempted persuasion using a conversational agent was done by Massimo et al., where an agent attempted to persuade a person to follow a healthier diet [2]. Here, it is shown that there are three psychosocial antecedents of a behaviour change, and they include: | ||
17 | |||
18 | * Self-Efficacy (so the person's ability to do something, here: eat healthily) | ||
19 | * Attitude (the person's evaluation of the pros and cons of the change | ||
20 | * Intention Change (the person's willingness to go through with sticking to this diet [2] | ||
21 | |||
22 | The above-mentioned aspects cannot be measured directly and are instead captured as latent variables through questionnaires [2]. While this full setup might be too extensive for our project with the Pepper robot our objective still ties into persuading PwD to participate in an activity. It is noteworthy that this also has to do with health and that we can use health-related explanations and persuasions in our design as well, which might mean the three aspects above could be deemed to be relevant. | ||
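
As an illustration of how such constructs are typically turned into numbers, the sketch below averages hypothetical Likert items (1-5) into one score per antecedent. The item groupings and scale are assumptions for this example, not the instrument used in [2].

{{code language="python"}}
def score_constructs(answers):
    """Collapse questionnaire items (Likert 1-5) into one score per latent construct."""
    return {name: round(sum(items) / len(items), 2) for name, items in answers.items()}

# Hypothetical item responses grouped per antecedent of behaviour change.
print(score_constructs({
    "self_efficacy": [4, 3, 5],  # e.g. "I feel able to go for a walk today"
    "attitude": [3, 4],          # perceived pros and cons of walking
    "intention": [2, 3, 3],      # willingness to actually go
}))
# -> {'self_efficacy': 4.0, 'attitude': 3.5, 'intention': 2.67}
{{/code}}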
23 | |||
24 | In the subsequent test that was conducted in this study participants were randomly assigned to one of four groups, each receiving a different type of persuasive message. The persuasive messages were focused on either gain (positive actions give positive outcomes), non-gain (negative actions prevent positive outcomes), loss (negative actions give negative outcomes), and non-loss (positive actions prevent negative outcomes) [2]. Again, while this is rather elaborate it could be relevant to consider these different types of persuasive messages, which could perhaps be incorporated into the motivation we want to provide to PwD. Perhaps it would be possible to investigate what kind of persuasive messages might be effective. We do want to, however, stay on the side of positive messages for motivation and still anchor these in either a goal-oriented or emotion-based motivation approach. Further, the study descends into using reinforcement learning and Bayesian networks to achieve these goals of persuasion using the conversational agent, which is not very relevant to our particular project, even if it is highly interesting to learn about. | ||
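
To make the four framings tangible for our setting, the sketch below pairs each framing with a hypothetical walking-related message and reproducibly assigns a participant to one condition. The wording is our own illustration, not taken from the study [2].

{{code language="python"}}
import random

# Hypothetical walking-related messages for the four framings described in [2];
# the phrasing is our own, not the study's.
FRAMINGS = {
    "gain":     "If you come for a walk now, you will feel fresher and sleep better tonight.",
    "non_gain": "If you skip the walk, you will miss the fresh air and the flowers in bloom.",
    "loss":     "Staying inside all day can leave you feeling stiff and restless.",
    "non_loss": "A short walk now helps you avoid feeling stiff and restless later.",
}

def assign_framing(participant_id: int, seed: str = "pilot-1"):
    """Reproducibly assign a participant to one of the four framing conditions."""
    rng = random.Random(f"{seed}-{participant_id}")  # deterministic per participant
    condition = rng.choice(sorted(FRAMINGS))
    return condition, FRAMINGS[condition]

print(assign_framing(7))
{{/code}}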
25 | |||
26 | |||
===== References =====

[1] Narita, T., Kitamura, Y. (2010). Persuasive Conversational Agent with Persuasion Tactics. In: Ploug, T., Hasle, P., Oinas-Kukkonen, H. (eds) Persuasive Technology. PERSUASIVE 2010. Lecture Notes in Computer Science, vol 6137. Springer, Berlin, Heidelberg. DOI: [[https:~~/~~/doi.org/10.1007/978-3-642-13226-1_4>>https://doi.org/10.1007/978-3-642-13226-1_4]]

[2] Di Massimo, F., Carfora, V., Catellani, P., Piastra, M. (2019). Applying Psychology of Persuasion to Conversational Agents through Reinforcement Learning: an Exploratory Study. //Italian Conference on Computational Linguistics//. Online: [[https:~~/~~/ceur-ws.org/Vol-2481/paper27.pdf>>https://ceur-ws.org/Vol-2481/paper27.pdf]]
32 | |||
33 | |||
34 |