Changes for page 4. Problem Scenario

Last modified by Michaël Grauwde on 2023/05/03 15:17

From version 4.1
edited by Michaël Grauwde
on 2023/03/27 01:38
Change comment: There is no comment for this version
To version 6.1
edited by Michaël Grauwde
on 2023/05/03 15:17
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,6 +2,14 @@
2 2  
3 3  If governments, police, municipalities, and national organisations want to engage in creating responsible and trustworthy AI for Public Safety, it helps to involve multiple stakeholders. The stakeholders involved should include different municipalities, government agencies, organisations and, very importantly, the public. However, it is often difficult to reach these various stakeholders, and when they are reached, discussion and deliberation amongst all parties is another issue entirely. Nonetheless, it is important that governments put in the effort to deliberate with the public on whether technology should be developed and, if AI systems are developed, which values should be embedded in their development.
4 4  
5 -Our problem scenario, then, is that there is currently no way to engage in large-scale deliberation with the public and these diverse stakeholders. Even more complicated is how to elicit the values that all of the different stakeholders hold regarding the countless public safety scenarios that could exist. A conversational agent can allow for deliberation amongst large groups of individuals and can incorporate various scenarios. In this way, the designers and developers of AI systems can get an idea of how various stakeholders feel about a certain public safety scenario, whether technology should be developed to deal with it, and how that technology should then be developed.
5 +Our problem scenario, then, is that there is currently no way to engage in large-scale deliberation with the public and these diverse stakeholders. Even more complicated is how to elicit the values that all of the different stakeholders hold regarding the countless public safety scenarios that could exist. A conversational agent can allow for deliberation amongst large groups of individuals and can incorporate various scenarios.
6 6  
7 -In a situation prior to the conversational agent's public release, Francien may be unable to contact the emergency services, as the telephone lines to the operators could be overwhelmed. As a result, Francien waits longer for help and becomes more at risk and more stressed about her situation. She may also take riskier decisions in approaching her situation, putting herself and others at risk.
7 +In this way, the designers and developers of AI systems can get an idea of how various stakeholders feel about a certain public safety scenario, whether technology should be developed to deal with it, and, if so, how that technology should be developed.
8 +
9 +The question that then arises is one of value alignment. When we look at embedding human values into AI systems, we have to ask: whose values matter in this context, which values matter, and whether these values are context-dependent or universal. These questions concern not only who we have at the table, but also which questions we ask them, so that values can be extracted from their interaction with the conversational agent and reflected back to them.
10 +
11 +In a situation prior to the conversational agent's public release, stakeholders from the various government agencies and companies making AI systems would have no outside input on the values considered important in developing these systems.
12 +
13 +In this scenario, these stakeholders can receive opinions from a wider range of stakeholders on which values they find important. For example, Suza could get information from Ben on which values he finds important in his life, but also which values he has found important in his work as an ELSA researcher.
14 +
15 +