4. Problem Scenario



If governments, police forces, municipalities and national organisations want to create responsible and trustworthy AI for public safety, they need to involve multiple stakeholders: different municipalities, government agencies, civil-society organisations and, very importantly, the public. However, it is often difficult to reach these stakeholders, and even when they are reached, organising discussion and deliberation amongst all parties is another challenge entirely. Nonetheless, it is important that governments put in the effort to deliberate with the public about whether an AI system should be developed at all and, if so, which values should guide its development.

Our problem scenario, then, is that there is currently no way to engage in large-scale deliberation with the public and these diverse stakeholders. Even more complicated is how to elicit the values that the different stakeholders hold across the countless public safety scenarios that could arise. A conversational agent can support deliberation amongst large groups of individuals and present various scenarios within it. In this way, the designers and developers of AI systems can learn how various stakeholders feel about a given public safety scenario, whether technology should be developed to address it, and, if so, how that technology should subsequently be designed.

In a situation prior to the conversational agent's public release, Francien may be unable to reach the emergency services because the telephone lines to the operators are overwhelmed. As a result, she waits longer for help, which puts her at greater risk and increases her stress. She may also make riskier decisions in dealing with her situation, endangering both herself and others.