Inclusive Design
In general, humanoid robots are designed to be as broadly accessible as possible. By supporting audio, visual, and haptic interaction, a robot can remain usable even when the user's abilities are limited. In our design choices, we did not specifically consider inclusivity, and this shows in the downsides discussed below. The robot itself is designed specifically for people with dementia, so they are included by default. Other disabilities are discussed in the following sections.
Perceptual disabilities
Perceptual disabilities are impairments of the senses, such as blindness and deafness.
The main interaction between the patient and the robot takes the form of a spoken conversation, so deaf patients are entirely excluded. One way of mitigating this would be to add video support through a tablet or phone. The NAO also has touch sensors on its head and limbs, which could serve as an input channel (for example, touching the front of the head to answer "No" and the back of the head for "Yes"), but this does not help with communicating the question itself, which is the main issue. As the focus of our design is associating music with tasks, we decided not to prioritize accessibility for deaf people, and instead assume that patients can hear both the robot and the music.
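The tactile yes/no fallback described above could be sketched as follows. The event names mirror NAOqi's head-sensor naming, but the handler itself is a simplified, hypothetical stand-in for illustration; no real robot connection is assumed:

```python
# Hypothetical mapping from NAO head-touch events to yes/no answers.
# The event names follow NAOqi's tactile-sensor naming convention;
# the mapping and function are illustrative assumptions, not the
# actual implementation.

TOUCH_ANSWERS = {
    "FrontTactilTouched": "no",   # front of the head answers "No"
    "RearTactilTouched": "yes",   # back of the head answers "Yes"
}

def interpret_touch(event_name):
    """Translate a head-touch event into an answer, or None if unmapped."""
    return TOUCH_ANSWERS.get(event_name)
```

Such a mapping only covers the patient's answers; as noted above, conveying the robot's question to a deaf patient would still require a visual channel.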
Patients with other perceptual disabilities should have no trouble interacting with the robot. As long as the patient can hear and speak, the interaction should flow smoothly.
Motor disabilities
Motor disabilities raise similar concerns to the perceptual ones. Interacting with the robot itself does not require any movement other than speaking, but the patient may struggle to carry out the tasks themselves. Physically assisting with tasks is not something the robot is designed to do, and since NAO is quite small, it would also be challenging. However, for tasks that involve carrying a small object around, the robot can actually help.
Cognitive disabilities
People with dementia are the main users of the system, so the robot is designed around this cognitive disability. Other cognitive disabilities might affect the user's ability to speak and/or process audio, which in turn makes interacting with the robot more difficult. That said, the robot does not expect full-sentence answers to its questions; it only listens for keywords associated with the concept. For example, when the robot plays the song associated with going to the bathroom, it does not expect to hear specifically "I am going to the bathroom", but instead one of "bathroom", "toilet", "loo", etc. Since the robot's speech recognition is imperfect, users might need to repeat themselves, possibly multiple times, before the robot understands them, which can become frustrating and affect patients negatively.
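The keyword-based matching described above could look like the following minimal sketch. The activity names and synonym lists here are illustrative assumptions, not the system's actual vocabulary:

```python
# Illustrative keyword spotting for activity confirmation.
# The activities and their synonym sets are assumptions for this example;
# the real system's vocabulary may differ. Punctuation handling is
# deliberately omitted to keep the sketch short.

ACTIVITY_KEYWORDS = {
    "bathroom": {"bathroom", "toilet", "loo"},
    "eating": {"eat", "food", "lunch", "dinner"},
}

def matches_activity(utterance, activity):
    """Return True if any keyword for the activity appears in the utterance."""
    words = set(utterance.lower().split())
    return not words.isdisjoint(ACTIVITY_KEYWORDS.get(activity, set()))
```

Matching individual keywords rather than full sentences means a fragmented or partial answer (e.g. just "loo") still counts as a confirmation, which is exactly what makes this approach more forgiving for users with speech or cognitive impairments.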