Deliberation based on reflection processes (NLP (NLU/NLG))
The conversational agent will attempt to help the user reflect on their values through their interaction with it. Conversational agents can influence their users, as computers are known to act as social actors; we see this in the anthropomorphisation of computers and AI agents that has occurred for decades. Conversational agents can prompt humans to rethink their perspective on their actions and thoughts. For example, Vincent, a compassion chatbot, was shown to increase users' self-compassion through their interactions with it (Lee et al., 2019).
We want our agent to have both natural language understanding and natural language generation, as this would allow it to understand how users communicate and to respond in a way that prompts them to reflect on why they said what they said. Existing research on persuasive technology has found that there is a fine line between computers being seen in a positive light and being seen negatively. Two paradigms play a role here: the CASA (computers are social actors) paradigm and the UVE (uncanny valley effect) (Zhang et al., 2020). For CASA, Nass, Steuer and Tauber (1994) found that computers can elicit social responses from their users, and that these social responses are natural reactions to the social situations users find themselves in. The uncanny valley, by contrast, describes the drop in affinity we feel towards systems that approach, but do not quite reach, human-like appearance or communication (Mori et al., 2012). Through our interactions with users, we aim to avoid the uncanny valley displayed by agents such as Replika, and instead have users engage with the system in a social manner that elicits value-laden responses (Ta et al., 2020).
Overviews of chatbots chart the growth of conversational agents from pattern-matching and rule-based agents to generation-based agents and the reinforcement learning approaches used in the creation of ChatGPT. To allow for the best interaction with the system, we want it to learn from users' inputs: it should recognise contextual clues in human text and respond based on those clues. We thus want the system to understand when users are using words to reflect a certain value, so that interaction with the agent feels seamless.
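As a minimal sketch of this kind of value recognition, the snippet below matches user utterances against a small cue-word lexicon. The `VALUE_LEXICON`, the value names, and the `detect_values` function are all hypothetical placeholders for illustration; a real system would use a trained NLU model rather than keyword matching.

```python
# Minimal sketch of contextual value recognition, assuming a simple
# keyword-lexicon approach. VALUE_LEXICON and its value names are
# illustrative assumptions, not part of any existing system.
VALUE_LEXICON = {
    "family": ["family", "parents", "children", "home"],
    "achievement": ["success", "goal", "career", "accomplish"],
    "benevolence": ["help", "care", "support", "kind"],
}

def detect_values(utterance: str) -> list[str]:
    """Return the values whose cue words appear in the user's utterance."""
    tokens = utterance.lower().split()
    return [value for value, cues in VALUE_LEXICON.items()
            if any(cue in tokens for cue in cues)]

print(detect_values("I want to help and support my family"))
# → ['family', 'benevolence']
```

Even this crude lexicon illustrates the design goal: the agent maps surface wording onto candidate values, which it can then probe in its follow-up questions.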
The system's interaction is divided into 4 parts:
1. The user generates text and inputs it into the conversational agent.
2. The input is analysed by the conversational agent to decipher its meaning; in particular, the system tries to decipher the user's intent. It does so using natural language understanding (NLU).
3. The system manages the dialogue by formulating a response that mimics human language. It does so using natural language generation (NLG).
4. The system uses reinforcement learning to refine its responses over time based on how well the conversational agent did in each iteration.
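The four steps above can be sketched as a single interaction loop. Every component below is a hypothetical placeholder: a real system would back `understand` with an intent classifier, `generate` with a generation model, and `update_policy` with an actual reinforcement-learning update, none of which are specified in this section.

```python
# Hypothetical sketch of the four-step interaction loop described above.

def understand(user_input: str) -> dict:
    """Step 2 (NLU): map the raw text to a coarse intent label."""
    intent = "question" if "?" in user_input else "reflection"
    return {"text": user_input, "intent": intent}

def generate(parsed: dict) -> str:
    """Step 3 (NLG): formulate a reflective response from the intent."""
    if parsed["intent"] == "question":
        return "That is worth exploring. What makes you ask?"
    return f"You said: '{parsed['text']}'. Why do you think you said that?"

def update_policy(reward: float, history: list) -> None:
    """Step 4 (RL): placeholder for refining responses from feedback."""
    history.append(reward)

history: list = []
parsed = understand("I value honesty above everything")  # step 1: user input
reply = generate(parsed)                                 # steps 2-3: NLU + NLG
update_policy(reward=1.0, history=history)               # step 4: RL update
print(reply)
```

The loop makes the division of labour explicit: understanding and generation handle each turn, while the policy update accumulates feedback across iterations.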
References:
M. Mori, K. F. MacDorman and N. Kageki, "The Uncanny Valley [From the Field]," in IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 98-100, June 2012, doi: 10.1109/MRA.2012.2192811.
Ta V, Griffith C, Boatfield C, Wang X, Civitello M, Bader H, DeCero E, Loggarakis A. User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic Analysis. J Med Internet Res 2020;22(3):e16235
Zhang J, Oh YJ, Lange P, Yu Z, Fukuoka Y. Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint. J Med Internet Res 2020;22(9):e22845