Changes for page 4. Problem Scenario
Last modified by Michaël Grauwde on 2023/05/03 15:17
From version 3.1
edited by Michaël Grauwde
on 2023/02/28 12:34
Change comment:
There is no comment for this version
To version 6.1
edited by Michaël Grauwde
on 2023/05/03 15:17
Change comment:
There is no comment for this version
Summary
Page properties (1 modified, 0 added, 0 removed)
Details
- Page properties
- Content
@@ -1,37 +17,15 @@
-__**Problem scenario**__
-
-**a story about the problem domain as it exists prior to technology introduction**
-
-__**Design scenario**__
-
-**a story that conveys a new vision
-activity: narratives of typical or critical services
-information: details on info provision
-interaction: details of user interaction & feedback**
-
-__Conceptual Scenario__
-
-Our conceptual scenario revolves around our member of the public attempting to communicate their position with the conversational agent.
-
-**Example:** As the situation reaches more chaotic stages, one of our members of the public is in dire straits. They are struggling to text the conversational agent and communicate their position with it. As they struggle to text the agent or send a voice note, there is a widget next to the text box which allows the sharing of their location. They can share their live location, which is communicated directly to the emergency services and saves their last known location if their internet is interrupted. They can also simply share their location once, or share it once every 5 minutes to keep the emergency services up to date.
-
 __Problem Scenario__
 
-In a situation prior to the conversational agent's public release, Francien may be unable to contact the emergency services, as the telephone lines to the operators could be overwhelmed. As such, there is a longer wait time for Francien to get help, and she becomes more at risk and more stressed about the situation. She may also take riskier decisions in approaching her scenario that can put herself and others at risk.
-
-__Concrete Scenario & Design Scenario__
-
-**Sharing Your Location**
-
-Francien van der Feest is stuck in her apartment during a flood and needs to share her information with the conversational agent to let the emergency services know where she is. She is not sure that her internet and the data services of the telecommunications provider will hold up. She is not very savvy using her phone and computer, but she has downloaded the app to receive updates and communication from the government in the case of an emergency.
-
-In doing so, Francien decides that the best plan for her is to stay in her apartment for now, as she is on the second floor, so she decides to share her current location with the conversational agent. [1] Here she is directed by the conversational agent to click the widget next to the text box which has a pin. [2] Upon clicking this pin, she is able to make three choices: 1. sharing her live location; 2. sharing her current location; 3. sharing her location every 5 minutes.
-
-[3] She chooses to share her current location, and that information is sent to the emergency services.
-
-**Some notes to the scenario**
-
-Some problems with this scenario may arise in Francien's navigation towards the location-sharing widget, as she is not so tech savvy. This can lead to a technical mishap, which is something to keep in mind for our designers as they design the widget but also the overall architecture of the conversational agent.
-
-__Use Case__
-
-Our use case has to describe what people do and what the system does - Page 67 in book.
+If governments, police, municipalities, and national organisations want to engage in creating responsible and trustworthy AI for Public Safety, it helps to involve multiple stakeholders: different municipalities, government agencies, organisations and, very importantly, the public.
+
+However, it is often difficult to reach these various stakeholders, and once they are reached, discussion and deliberation amongst all parties is another issue entirely. Nonetheless, it is important that governments put in the effort to deliberate with the public on whether technology should be developed at all and, if AI systems are developed, which values should be involved in their development.
+
+Our problem scenario, then, is that there is currently no way to engage in large-scale deliberation with the public and these diverse stakeholders. Even more complicated is how to elicit the values that all of the different stakeholders hold regarding the countless public safety scenarios that could exist. A conversational agent can allow for deliberation amongst large groups of individuals and incorporate various scenarios within it.
+
+In this way, the designers and developers of AI systems can get an idea of how various stakeholders feel about a certain public safety scenario, whether technology should be developed to deal with it, and, after the fact, how that technology should be developed.
+
+The question that then arises is one of value alignment. When we look at embedding human values into AI systems, we have to ask "whose values matter for this context", "which values", and "are these values context-dependent or universal".
+
+These are all questions about not only who we have at the table but also which questions we ask them, so that certain values can be extracted from their interaction with the conversational agent and they can be given the chance to reflect on those values.
+
+In a situation prior to the conversational agent's public release, stakeholders from the various government agencies and companies making AI systems would have no outside input on the values that each stakeholder finds important in developing the systems.
+
+In this scenario, these stakeholders can receive opinions from more stakeholders on which values they find important. For example, Suza could get information from Ben on which values he finds important in his life, but also which values he has found important in his work as an ELSA researcher.