Changes for page "4. Problem Scenario"
Last modified by Michaël Grauwde on 2023/05/03 15:17
From version 5.1
edited by Michaël Grauwde
on 2023/05/02 17:20
Change comment:
There is no comment for this version
To version 6.1
edited by Michaël Grauwde
on 2023/05/03 15:17
Change comment:
There is no comment for this version
Summary
Page properties (1 modified, 0 added, 0 removed)
Details
- Page properties
- Content
@@ -8,4 +8,8 @@

The question that then arises here is one of value alignment. When we look at embedding human values into AI systems, we have to ask "whose values matter in this context?", "which values?", and "are these values context-dependent or universal?". These are all questions about not only who we have at the table, but also which questions we ask them, so that certain values can be extracted from their interaction with the conversational agent and reflected on.

-In a situation prior to the conversational agent's public release, Francien may be unable to contact the emergency services, as the telephone lines to the operators could be overwhelmed. As a result, Francien waits longer for help and becomes more at risk and more stressed about her situation. She may also make riskier decisions in approaching her scenario that put herself and others at risk.
+In a situation prior to the conversational agent's public release, stakeholders from the various government agencies and companies making AI systems would have no outside input on the values they find important in developing the systems.
+
+In this scenario, these stakeholders can receive opinions from more stakeholders on which values they find important. For example, Suza could get information from Ben on which values he finds important in his life, but also which values he has found important in his work as an ELSA researcher.