Wiki source code of Step 3: Effects
Version 7.1 by Michaël Grauwde on 2023/03/27 03:44
(% style="width:961px" %)
|(% style="width:176px" %)**Topic**|(% style="width:270px" %)**Question**|(% style="width:566px" %)**Answer**
|(% style="width:176px" %)(((
[[image:10.png]]

//Positive consequences//
)))|(% style="width:270px" %)(((
Which positive effects are expected from the AI functions:

- to the performance of the actors who work with the AI (e.g., accuracy, speed, ...)?

- to their state (e.g., stress, understanding, trust, ...)?

)))|(% style="width:566px" %)(((
The main positive effect expected from the conversational agent is that it enables better deliberation and discussion among multiple stakeholders.

The conversational agent can also help the stakeholders reflect on their values and on why they hold these values in the first place.
)))
|(% style="width:176px" %)(((
[[image:9.png]]

//Negative consequences//
)))|(% style="width:270px" %)(((
Do you foresee potential negative effects of the AI functions:

- on the performance, state and/or values of the actors?

- on more general ethical or societal aspects?

What are they?
)))|(% style="width:566px" %)(((
A potential negative effect of the conversational agent is that it could create an echo chamber for its users, leaving them feeling boxed into a group they were not part of before. It may also make users more distrustful of AI systems if they do not get the result they wanted.

From an ethical and societal perspective, this could lead to a breakdown between the authority figures in the public safety domain and the citizens. A breakdown in communication between the two can damage views of technology in the wider public sphere, which can impede the ability of AI, and in particular conversational agents, to be trusted in the future.
)))
|(% style="width:176px" %)(((
[[image:11.png]]

//Impact on use-case//
)))|(% style="width:270px" %)(((
What is the impact of the AI functions on the overall use case?

What does it add to the use case, and how does it improve the use case as a whole?
)))
|(% style="width:176px" %) |(% style="width:270px" %)