Changes for page Step 3: Effects

Last modified by Mark Neerincx on 2023/04/13 12:11

From version 7.1
edited by Michaël Grauwde
on 2023/03/27 03:44
Change comment: There is no comment for this version
To version 6.1
edited by Michaël Grauwde
on 2023/02/28 16:08
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -13,9 +13,9 @@
13 13  
14 14  
15 15  )))|(% style="width:566px" %)(((
16 -The positive effects expected from the conversational agent are that it allows for better deliberation and discussion among multiple stakeholders.
16 +The positive effects expected from the conversational agent are a speedy evacuation of, or response to, the affected area. The positive effects therefore centre on the efficiency of the evacuation and of the communication that citizens receive from authority figures.
17 17  
18 -The conversational agent can also help the stakeholders reflect on their values and on why they hold these values in the first place.
18 +The conversational agent can relieve stress in these moments and increase the efficiency of communication. This improved understanding of the situation can save lives and prevent further disaster impacts and economic consequences.
19 19  )))
20 20  |(% style="width:176px" %)(((
21 21  [[image:9.png]]
... ... @@ -30,7 +30,7 @@
30 30  
31 31  What are they?
32 32  )))|(% style="width:566px" %)(((
33 -Potential negative effects of the conversational agent could be that it creates an echo chamber for its users, in that they feel boxed into a group they were not part of before. It may also make users more distrustful of AI systems if they do not get the result they wanted.
33 +A potential negative effect of the conversational agent is miscommunication between the agent and the citizen. Such a misunderstanding could erode trust, so that the citizen does not receive the required help on time, and could hamper the emergency services' response to the crisis, ultimately leading to distrust in the system.
34 34  
35 35  From an ethical and societal perspective, this could lead to a breakdown between the authority figures responsible for public safety and the citizens. A breakdown in communication between the two can damage views of technology in the wider public sphere, and can impede the ability of AI, and in particular conversational agents, to be trusted in the future.
36 36  )))