Changes for page Step 3: Effects
Last modified by Mark Neerincx on 2023/04/13 12:11
From version 7.1
edited by Michaël Grauwde
on 2023/03/27 03:44
Change comment:
There is no comment for this version
To version 6.1
edited by Michaël Grauwde
on 2023/02/28 16:08
Change comment:
There is no comment for this version
Summary
Page properties (1 modified, 0 added, 0 removed)
Details
- Page properties
- Content
@@ -13,9 +13,9 @@


 )))|(% style="width:566px" %)(((
-The positive effects expected from the conversational agent are that it allows for better deliberation and discussion between multiple stakeholders.
+The positive effect expected from the conversational agent is a speedy evacuation of, or response to, the affected area. The positive effects then revolve around the efficiency of the evacuation and of the communication that citizens receive from the authority figures.

-The conversational agent can also help the stakeholders reflect on their values and on why they hold these values in the first place.
+The conversational agent can relieve stress in these moments as well as increase the efficiency of communication. This understanding of the situation can lead to saving lives and preventing further impacts from the disaster, including economic consequences.
 )))
 |(% style="width:176px" %)(((
 [[image:9.png]]
@@ -30,7 +30,7 @@

 What are they?
 )))|(% style="width:566px" %)(((
-Potential negative effects of the conversational agent could be that it creates an echo chamber for its users, in that they feel they are now boxed into a group they weren't in before. It may also make users more untrusting of AI systems if they do not get a result they wanted.
+Potential negative effects of the conversational agent could be a miscommunication between the agent and the citizen. This could lead to misunderstanding and distrust between the citizen and the agent, which could prevent the citizen from receiving the required help on time and could hamper the emergency services' response to the crisis. This could lead to distrust in the system.

 From an ethical and societal perspective, this could lead to a breakdown between the authority figures responsible for public safety and the citizens. A breakdown in communication between the two can damage views of technology in the wider public sphere. This can impede the ability of AI, and in particular conversational agents, to be trusted in the future.
 )))