8. Types of explanations

As research on artificial intelligence grows, we need ways to build trust in AI systems. Explainable artificial intelligence (XAI) is the concept of an AI system explaining its own behaviour, which enhances a user's trust in the system. Ensuring that the PwD trust our system is vital to making our concept successful. F. Kaptein et al. [1] compare goal-based explanations with belief-based explanations. Goal-based explanations communicate the desired outcome of the actor (the system), whereas belief-based explanations give information about the context and circumstances that made the actor choose one action over another. They also state that any good explanation is a personalised one, and that different explanation strategies are needed for children and adults. To personalise the robot's actions, one can build a goal hierarchy tree, which helps the agent choose an action based on its goals and beliefs. The figures below show how this works.

[Figures: example goal hierarchy trees used to choose and explain actions]
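
To make the goal hierarchy idea concrete, the sketch below shows one possible way to generate goal-based and belief-based explanations from such a tree. The node structure, example goals, and phrasing are illustrative assumptions, not the exact model from [1].

```python
# Minimal sketch of a goal hierarchy tree for explanation generation.
# The node structure and wording are illustrative assumptions, not the
# exact model used by Kaptein et al. [1].

class GoalNode:
    def __init__(self, goal, belief=None, parent=None):
        self.goal = goal          # desired outcome at this level
        self.belief = belief      # context that justifies choosing this sub-goal
        self.parent = parent
        self.children = []

    def add_child(self, goal, belief=None):
        child = GoalNode(goal, belief, parent=self)
        self.children.append(child)
        return child

def goal_based_explanation(action_node):
    # Explain an action by the parent goal it serves (the desired outcome).
    return f"I suggested '{action_node.goal}' because I want to {action_node.parent.goal}."

def belief_based_explanation(action_node):
    # Explain an action by the belief (context) that made it the chosen option.
    return f"I suggested '{action_node.goal}' because {action_node.belief}."

# Example: a small hierarchy for a health-support agent (hypothetical content).
root = GoalNode("keep the user healthy")
sub = root.add_child("encourage physical activity",
                     belief="the user has been inactive today")
action = sub.add_child("propose a short walk",
                       belief="the weather is good and a walk is low effort")

print(goal_based_explanation(action))    # refers to the parent goal
print(belief_based_explanation(action))  # refers to the context
```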

Adults prefer goal-based explanations more strongly than children do, but both groups prefer them over belief-based explanations. This contradicts findings from other studies; the difference may be due to the participants' level of expertise. M. Harbers et al. [2] state that for subjects who are not familiar with the task, belief-based explanations are better because they provide more non-objective information. The participants in [1] all had diabetes mellitus and were thus already familiar with the domain.

As a general rule, explanations should not be too long [3].

S.A. Döring [4] argues that, next to beliefs and desires, emotions are necessary to properly explain the intentions behind actions. Firstly, simulating an agent's emotions can be used to ensure that the most important beliefs and desires are communicated when explaining an action's motivation [3] ("I ran away because there was a man holding a gun"). Emotions also ensure that motivations are formulated in a more human-like manner, for example "I ran away because I was afraid that I might be killed". Finally, an underlying appraisal process can be used to explain why an agent has an emotion ("I was scared because there was a man holding a gun, and guns can kill someone") [5].
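
As an illustration, the sketch below contrasts the three explanation styles using the running example: belief-based, emotion-based, and appraisal-based. The function names and sentence templates are assumptions made for demonstration and do not reproduce the implementations from [3] or [5].

```python
# Illustrative templates for the three explanation styles discussed above.
# The phrasing rules are assumptions for demonstration, not the models from [3] or [5].

def belief_based(action, belief):
    # Explain the action by the belief (observed context) alone.
    return f"I {action} because {belief}."

def emotion_based(action, emotion, feared_outcome):
    # Simulated emotion makes the motivation sound more human-like.
    return f"I {action} because I was {emotion} that {feared_outcome}."

def appraisal_based(emotion, belief, appraisal):
    # The appraisal step explains why the belief triggered the emotion.
    return f"I was {emotion} because {belief}, and {appraisal}."

print(belief_based("ran away", "there was a man holding a gun"))
print(emotion_based("ran away", "afraid", "I might be killed"))
print(appraisal_based("scared", "there was a man holding a gun",
                      "guns can kill someone"))
```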

In [6], Kaptein et al. evaluated cognitive versus affective explanations in a health-support application for children with type 1 diabetes. Cognitive explanations are based on beliefs and goals, whereas affective explanations are based on emotions. Affective explanations had not been properly tested in XAI before. Kaptein et al. found that children follow task suggestions more often when no explanation is given. There are three possible explanations for this. First, children may not want to read long explanations and instead simply choose a random task from the menu. Second, it is possible that the children do read the explanations, but already feel like they know what the task is supposed to teach them. Finally, children may become stubborn because of the explanation ("I don't feel like doing that").

References

[1] Frank Kaptein et al. “Personalised self-explanation by robots: The role of goals versus beliefs in robot-action explanation for children and adults”. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (2017). doi: 10.1109/roman.2017.8172376.

[2] Maaike Harbers et al. “Guidelines for developing explainable cognitive models”. In: Proceedings of ICCM. Citeseer. 2010, pp. 85–90.

[3] Maaike Harbers et al. “Guidelines for developing explainable cognitive models”. In: Proceedings of ICCM. Citeseer. 2010, pp. 85–90.

[4] Sabine A. Döring. “Explaining action by emotion”. In: The Philosophical Quarterly 53.211 (2003), pp. 214–230.

[5] Frank Kaptein et al. “The role of emotion in self-explanations by cognitive agents”. In: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE. 2017, pp. 88–93.

[6] Frank Kaptein et al. “Evaluating Cognitive and Affective Intelligent Agent Explanations in a Long-Term Health-Support Application for Children with Type 1 Diabetes”. In: 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE. 2019, pp. 1–7.