Persuasiveness of conversational agents
**Article:** Persuasive Conversational Agent with Persuasion Tactics. [[https://link.springer.com/chapter/10.1007/978-3-642-13226-1_4]]

A number of studies have been done on the persuasiveness of conversational agents and how convincing an agent can actually be to a human. This paper highlights that, for a conversational agent to be persuasive and influence a person's behavior, it needs to adapt to the outcomes of the conversation and the interactions it has with the human, just as a human who wants to be persuasive and convincing would.

This article also contains useful information on designing a persuasive conversational agent using the Wizard of Oz method. I won't describe it in detail here, but the article can be consulted if needed.

When it comes to designing a persuasive conversational agent, there are several models that can be used. The general approach is to select the response, and the rule for replying, that is most likely to lead to success.

**Goal-oriented conversational model.**

(Quotes from the article)

- The conversation model can be represented as a state transition tree where a statement is represented as a link to change a state from one to another.
- Two different types of states: agent states and user states (the human).
- They are interleaved on a conversation path.
- A conversation path represents the flow of conversation between the agent and one or more users and begins with the initial state and terminates with either success or failure.
- If the input matches a statement on a link to an agent state, the agent chooses a statement that links the agent state to a user state with the greatest success probability.

This might not necessarily be a structure we need to implement in its entirety, but some information could definitely be taken from it.

**Updating the conversation model.**

When the above conversation model needs to be updated, it goes as follows:

- When input from the user does not match any statement on the stored conversation path, the conversation path is branched and the success probability scores are updated depending on persuasion success or failure. (Once again, maybe not something we will be able to implement, but we can try to mimic the idea; see the sketch below.)

The article does, however, go on to mention that the Wizard of Oz approach, where the robot is simply controlled by a human in a wizard-like fashion, managed to persuade 25 out of 60 users, while the conversational agent based on the model only managed 1 out of 10. It is necessary to remember that designing a persuasive conversational agent consists of two important aspects: having the robot follow general human conversational rules, but also applying persuasion tactics. I will attempt to clarify these tactics a bit below.