f. Effects

Version 2.8 by Manali Shah on 2023/02/21 12:29

Upside

The AI system should help the patient with daily activities, such as reminding them to take medication or have meals. By reminding caregivers to perform their activities, it improves effectiveness and helps build a schedule of activities that can benefit both the patient and the caregivers. Furthermore, it can sustain the patient's interest through storytelling or music, which motivates the user to perform daily activities. The AI system could also help placate the patient during episodes of anxiety. This might help build a sense of trust and mutual understanding between the humans and the system.
Downside

The system needs constant input and feedback to learn the patients' requirements and improve. This requires continuous data collection, which raises questions about the privacy of the human users.

Long-term use could lead to heavy dependence on the system: users may be unable to function without its reminders and motivation when the system is unavailable (for example, taken down for maintenance or upgrades).

If caregivers rely too much on the AI system, they may disregard the benefits of human touch and care, which could lead to negative consequences for the patient.

Use Cases

Overall, the system should have a positive impact on the patient and caregivers. It gives the patient someone to talk to and a constant companion in a way that humans cannot be. It can motivate the patient through personalized stories and activities and, with this, build a sense of trust through which the system can encourage users to carry out their day-to-day activities.

For the caregivers, it provides automatic reminders to spend time with the patient and reduces their workload by performing tasks the caregiver would otherwise perform, such as narrating stories, reminding the patient to take medicines, and motivating the patient to perform their activities.

Tests

- Implicit feedback through patients' usage patterns, measured using a mood graph with a threshold value to quantify mood.

Since the patient (Georgina) is likely to retain some mobility and remains in control of her choices, the system could infer her mood from her choice of stories.

- Explicit feedback from caregivers 

The system could ask the caregivers to enter a Yes/No for whether each task was performed. For example, did the patient take medicine after being reminded? Or did the patient eat their meal happily?

- To measure the users' dependency on the system, the system could be taken down for a day or two. The caregiver (Eleana) could aid the patient instead and then answer questions on whether she was able to effectively perform the tasks otherwise automated by the AI system.

- For the mood graph, with values between 1 and 10, we could set a benchmark of around 5-6 so that the system improves its performance to match the patient's preferences. While the patient's mental state is not fully within the system's control, the system could act as a stabilizing factor.
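The mood-graph check above could be sketched as follows. This is a minimal illustration, not the system's actual implementation: the function name, the 7-day window, and the use of 5.5 (the midpoint of the 5-6 band) as the threshold are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the mood-graph benchmark check.
# Mood values are on the 1-10 scale described in the text;
# the rolling window and exact threshold are assumptions.
from statistics import mean

MOOD_BENCHMARK = 5.5  # midpoint of the 5-6 target band


def mood_meets_benchmark(daily_moods: list[float], window: int = 7) -> bool:
    """Return True if the rolling average of recent mood scores
    (inferred daily from story choices) is at or above the benchmark."""
    recent = daily_moods[-window:]
    return mean(recent) >= MOOD_BENCHMARK


# Example: a week of mood scores inferred from story choices
print(mood_meets_benchmark([4, 5, 6, 6, 7, 5, 6]))  # True (average is about 5.57)
```

A rolling average rather than a single day's value keeps the check from overreacting to one bad day, which fits the "stabilizing factor" framing above.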

- For the explicit feedback, we could set a benchmark of around 70-80% positive feedback, which would imply that the patient performed 70-80% of the tasks successfully.

(Scenario A)

- To measure dependency, we could use the same explicit feedback but set a lower benchmark of 65-70%, since the system is removed from the interaction.

(Scenario B)
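The two explicit-feedback benchmarks could be checked with the same small routine. This is a sketch under stated assumptions: the function names are invented here, and each benchmark is taken as the lower bound of its band (70% for Scenario A with the system present, 65% for Scenario B with the system withdrawn).

```python
# Hypothetical sketch of the explicit-feedback benchmarks.
# Caregivers answer Yes/No per task (e.g. "Did the patient take
# medicine after being reminded?"); thresholds are the lower
# bounds of the 70-80% and 65-70% bands from the text.

def positive_rate(responses: list[bool]) -> float:
    """Fraction of Yes answers in the caregiver questionnaire."""
    return sum(responses) / len(responses)


def meets_benchmark(responses: list[bool], system_present: bool) -> bool:
    """Scenario A (system present) uses 70%; Scenario B (system
    withdrawn) uses 65%."""
    threshold = 0.70 if system_present else 0.65
    return positive_rate(responses) >= threshold


# Example: 8 of 10 tasks performed with the system running
print(meets_benchmark([True] * 8 + [False] * 2, system_present=True))  # True
```

Keeping both scenarios in one function makes the comparison explicit: the same questionnaire is scored against a deliberately lower bar once the system is taken away.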