Conversational Agent

Last modified by Michaël Grauwde on 2023/05/05 12:13

Conversational agents have long fascinated humans, from 1966, when Joseph Weizenbaum's ELIZA made its users feel connected to it as if it were a real therapist, to people having long conversations with Siri and Alexa in the 21st century. From customer service representatives to personal assistants, conversational agents are ubiquitous.

However, after Weizenbaum saw how people reacted to his chatbot ELIZA, he grew concerned with the way humans anthropomorphise technology. Weizenbaum became concerned that users were forming emotional attachments to the computer and believed it was capable of understanding natural language (Weizenbaum, 1976). These concerns have grown over the years with the proliferation of conversational agents in our world.

More recently, since the release of ChatGPT on November 30, 2022, people around the world have been both fascinated and concerned by what conversational agents can and cannot do, and many companies have made chatbots a priority in their businesses.

The conversational agent that we hope to design and develop in this project plays a different role from existing conversational agents.

First, the goal of the conversational agent is not to replace humans, but rather to help humans reflect on their instrumental values. By confronting users with a specific scenario, the agent prompts them to reflect on which values they hold and why they hold them. Users can then deliberate with fellow stakeholders about which values matter in creating AI systems in the domain of public safety, systems that are to be developed based on human values.

Specifically, the conversational agent will focus on the use case of crowd management, and within it on climate protests. The reason for this focus is that, as climate change becomes an ever-present reality of our society, protests over the lack of government action have begun to emerge. In the domain of public safety, this presents a new form of protest for governments to deal with.

Previous work by Hadfi et al. (2021) found that after people engage with a conversational agent, more ideas and issues flow into subsequent deliberation and discussion among humans.

This conversational agent will attempt to stimulate productive deliberation on the possible uses of AI systems in the public safety domain. 

References:

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.

Hadfi, R., Haqbeen, J., Sahab, S., & Ito, T. (2021). Argumentative conversational agents for online discussions. Journal of Systems Science and Systems Engineering, 30, 450-464.