Changes for page Step 3: Effects
Last modified by Mark Neerincx on 2023/04/13 12:11
From version 6.1
edited by Michaël Grauwde
on 2023/02/28 16:08
Change comment:
There is no comment for this version
Summary
-
Page properties (2 modified, 0 added, 0 removed)
-
Objects (0 modified, 1 added, 0 removed)
Details
- Page properties
-
- Author
-
... ... @@ -1,1 +1,1 @@
1 -XWiki.grauwde
1 +xwiki:XWiki.MarkNeerincx
- Content
-
... ... @@ -13,9 +13,9 @@
13 13
14 14
15 15 )))|(% style="width:566px" %)(((
16 -The positive effects expected from the conversational agent are a speedy evacuation of, or response to, the affected area. The positive effects then revolve around the efficiency of the evacuation and of the communication that the citizens receive from the authority figures.
16 +The positive effects expected from the conversational agent are that it allows for better deliberation and discussion between multiple stakeholders.
17 17
18 -The conversational agent can relieve stress in these moments as well as increase the efficiency of communication. This understanding of the situation can lead to saving lives and preventing further impacts from the disaster and its economic consequences.
18 +The conversational agent can also help the stakeholders reflect on their values and why they hold these values in the first place.
19 19 )))
20 20 |(% style="width:176px" %)(((
21 21 [[image:9.png]]
... ... @@ -30,7 +30,7 @@
30 30
31 31 What are they?
32 32 )))|(% style="width:566px" %)(((
33 -Potential negative effects of the conversational agent could be a miscommunication between the agent and the citizen. This could lead to misunderstanding and distrust between the citizen and the agent. This could cause the citizen to not receive the required help on time and could hamper the emergency services' responses to the crisis. This could lead to distrust in the system.
33 +Potential negative effects of the conversational agent could be that it creates an echo chamber for its users, in that they feel that they are now boxed into a group that they weren't in before. It may also make users more distrusting of AI systems if they do not get a result they wanted.
34 34
35 35 On the ethical and societal level, this could lead to a breakdown between the authority figures in the public safety domain and the citizens. A breakdown in the communication between the two can lead to a damaging response in the views of technology in the wider public sphere. This can impede the ability for AI, and in particular conversational agents, to be trusted in the future.
36 36 )))
- XWiki.XWikiComments[0]
-
- Date
-
... ... @@ -1,0 +1,1 @@
1 +2023-04-13 12:11:26.443
- Author
-
... ... @@ -1,0 +1,1 @@
1 +xwiki:XWiki.MarkNeerincx
- Comment
-
... ... @@ -1,0 +1,1 @@
1 +See comment in previous page on "value awareness". Stakeholders provide better, value-based arguments on design, acquisition and/or deployment choices concerning AI-technology, explicating their own values and relating them to the relevant values of others.