6. Perception of Robot's Emotions

To better understand what could further improve our robot's ability to persuade and motivate people with dementia (PwDs) to take part in an activity (walking, in our case), and to make its conversation and social behaviour more satisfying and natural, we need to consider how the Pepper robot, and robots in general, can portray specific emotions through cues such as eye colour and body posture. The following section explores these possibilities in more depth.

Being able to express recognizable emotions in different social settings is crucial [1]. Robots like the iCat, which has humanoid facial features, can portray emotions that humans recognize [1]. Robots without movable facial components, such as Pepper and Nao, must instead rely on cues like eye colour and body posture [1]. The study by Cohen, Looije, and Neerincx, referenced heavily in this section, examines Nao's ability to portray emotions, but since our chosen Pepper robot has a similar range of capabilities, its findings are highly relevant to our project as well.

Ultimately, socially assistive robots are meant to provide motivating and engaging interactions for the individuals who use and need them, and for this purpose they require human-oriented skills [1]. Regardless of age or technological proficiency, humans tend to project social qualities onto the behaviour of technology [1]. Mimicking human-human interaction is therefore essential to designing an autonomous robot that users can successfully understand and engage with [1].

Furthermore, the Nao robot can use its eye colour to support the emotions it expresses [1], which is essential for our work since Pepper has the same capability. The colours associated with each emotion in the study were based on Kaya and Epps's research on colour-emotion associations [2]. Table 3 below, retrieved from their work, shows the colour corresponding to each emotion.

table3_1.PNG

table3_2.PNG

(Table 3: colour-emotion associations, reproduced from Kaya and Epps [2])
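Since Pepper exposes its eye LEDs through the NAOqi API, a colour-emotion mapping like the one in Table 3 can be implemented directly. The following is a minimal sketch using the ALLeds module of the NAOqi Python SDK; the robot's IP address and port are placeholders, and the specific colour values are illustrative guesses to be tuned against Table 3 rather than values taken from Kaya and Epps [2].

```python
# Minimal sketch: fading Pepper's eye LEDs to a colour associated with an
# emotion, via NAOqi's ALLeds module. The connection details and the colour
# map are placeholders/assumptions, not values from the cited studies.
from naoqi import ALProxy

PEPPER_IP = "192.168.1.10"   # placeholder robot address
PEPPER_PORT = 9559           # default NAOqi port

# Hypothetical emotion -> 0x00RRGGBB colour map, to be tuned against Table 3.
EMOTION_COLOURS = {
    "happiness":  0x00FFFF00,  # yellow
    "calmness":   0x000000FF,  # blue
    "relaxation": 0x0000FF00,  # green
}

def show_emotion_colour(emotion, duration=1.0):
    """Fade the LED rings around both of Pepper's eyes to the mapped colour."""
    if emotion not in EMOTION_COLOURS:
        raise ValueError("No colour mapped for emotion: %s" % emotion)
    leds = ALProxy("ALLeds", PEPPER_IP, PEPPER_PORT)
    # "FaceLeds" is the LED group covering both eyes; the last argument is
    # the fade duration in seconds.
    leds.fadeRGB("FaceLeds", EMOTION_COLOURS[emotion], duration)

show_emotion_colour("happiness")
```

Because fadeRGB takes a fade duration, transitions between emotional states can be made gradual rather than abrupt, which may come across as more natural during conversation.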

For the robot's body language and posture we rely on a study by Coulson [3], which is also drawn on in [1]. Coulson examines body postures in detail, for example how positive emotions are generally associated with rounded rather than sharp shapes [3]. The tables below (Figure 2, continued in a second image) show body postures that tend to be associated with the listed emotions, along with the percentages at which each was recognized; for instance, the front view of the anger posture was the most recognizable [3]. In our case we do not need to worry about different viewing angles, nor, hopefully, about negative emotions, and can assume the robot will generally be viewed from the front. See Figure 2 below [3].

bodySCE1.PNG

bodySCE2.PNG

(Figure 2: body postures with recognition percentages per emotion, reproduced from Coulson [3])
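To illustrate how such postures could be translated to Pepper, the sketch below moves the robot into an open, head-up, arms-raised pose of the kind Coulson associates with happiness [3]. The joint angles are rough illustrative guesses at such a posture, not values taken from the paper, and the connection constants are the same placeholders as in the previous sketch.

```python
# Minimal sketch: an approximation of an open "happiness" posture on Pepper
# via NAOqi's ALMotion module. Joint angles are illustrative guesses, not
# values from Coulson [3].
from naoqi import ALProxy

PEPPER_IP = "192.168.1.10"   # placeholder robot address
PEPPER_PORT = 9559           # default NAOqi port

def strike_happy_posture():
    """Move Pepper into a head-up, arms-raised-and-open pose."""
    motion = ALProxy("ALMotion", PEPPER_IP, PEPPER_PORT)
    motion.wakeUp()  # make sure the motors are stiff and the robot is upright

    names = ["HeadPitch",
             "LShoulderPitch", "LShoulderRoll",
             "RShoulderPitch", "RShoulderRoll"]
    # Head tilted slightly up, both arms lifted and opened outwards (radians).
    angles = [-0.2,
              -1.0, 0.8,
              -1.0, -0.8]
    motion.angleInterpolationWithSpeed(names, angles, 0.2)  # 20% of max speed

strike_happy_posture()
```

Keeping the speed fraction low makes the movement slow and flowing, which loosely fits Coulson's observation that positive emotions are associated with rounder rather than sharper shapes.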

Furthermore, the results of this study with Nao and the iCat show that the ability to recognize emotions appears to decline in older adults, with fear and anger affected the most [1]. Such findings matter for us since we are working with older adults, and more specifically older adults with dementia. Emotions may not be recognized as easily, so we may need to be especially clear and intentional in how we program and portray them. Fortunately, the walking activity is meant to be positive and leisurely, so the robot should rarely, if ever, need to portray emotions like fear or anger.
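One way to be clearer and more intentional is to express the same emotion through several redundant channels at once. The sketch below combines the illustrative eye-colour and posture helpers from the two previous sketches with a spoken line via NAOqi's ALTextToSpeech module; the spoken phrase is hypothetical.

```python
# Minimal sketch: portraying happiness through redundant cues (eye colour,
# posture, speech) so the emotion is easier for older adults to recognize.
# Assumes show_emotion_colour() and strike_happy_posture() from the sketches
# above; connection constants are the same placeholders.
from naoqi import ALProxy

PEPPER_IP = "192.168.1.10"   # placeholder robot address
PEPPER_PORT = 9559           # default NAOqi port

def express_happiness():
    """Combine eye colour, body posture, and speech into one happy display."""
    show_emotion_colour("happiness")  # yellow eye LEDs (first sketch)
    strike_happy_posture()            # open, raised-arms pose (second sketch)
    tts = ALProxy("ALTextToSpeech", PEPPER_IP, PEPPER_PORT)
    tts.say("I would love to go for a walk with you!")  # hypothetical line

express_happiness()
```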

Finally, the study with the iCat and Nao concludes that recognition rates for the iCat's facial expressions were significantly higher than for Nao's bodily expressions, but that across the entire set of emotional expressions no difference was found between facial and bodily expression [1]. This shows that robot platforms like Nao are sufficient for expressing basic emotions even though they lack movable facial features [1], which is useful for our project given the Pepper robot's similar functionality. It is worth noting that this research was conducted with children recognizing robots' emotions, but its findings can still inform how we portray emotions to older adults.

References

[1] Cohen, I., Looije, R., & Neerincx, M. A. (2014). Child’s Perception of Robot’s Emotions: Effects of Platform, Context and Experience. International Journal of Social Robotics, 6, 507-518. DOI: https://doi.org/10.1007/s12369-014-0230-6

[2] Kaya, N., & Epps, H. H. (2004). Relationship between color and emotion: a study of college students. College Student Journal, 38(3), 396+. Available at: https://link.gale.com/apps/doc/A123321897/AONE?u=googlescholar&sid=bookmark-AONE&xid=96b09980

[3] Coulson, M. (2004). Attributing Emotion to Static Body Postures: Recognition Accuracy, Confusions, and Viewpoint Dependence. Journal of Nonverbal Behavior, 28, 117-139. DOI: https://doi.org/10.1023/B:JONB.0000023655.25550.be