== Week 7: Evaluation ==

Our hypothesis was that **a more interactive, i.e. conversational, robot (experiment scenario) would be better at improving the PwD's mood as well as at creating a more immersive and enjoyable storytelling session, which would motivate the PwD to finish their meal enthusiastically.**

Due to the limited number of participants and the limited time available for evaluation, my team and I decided to conduct a **within-subjects study**. We invited fellow students taking this course as well as other TU Delft students to participate in our experiment. Each participant was first asked to sign a consent form, after which we explained how the evaluation would be conducted. They then performed one story session with the robot and reported their evaluation through a questionnaire, performed a second story session with the robot, and filled in the same questionnaire once again. In both scenarios, the participant took the role of the PwD, while the roles of the formal caretaker and family member were played by one of us within the team.

Each participant took part in and evaluated two types of storytelling sessions:

~1. Experiment scenario, in which the robot narrated the story and asked questions in between to spark conversation

2. Control scenario, in which the robot narrated the story and enacted conversations through voice modulation to portray different characters

We were able to run the experiment with 14 participants. Half of them started with the control scenario and the other half with the experiment scenario, so that our results and analysis would not be influenced by carry-over effects. The questionnaire used for evaluation was based on the Godspeed questionnaire, which measures the perceived anthropomorphism, animacy, safety, and threat of the robot. We modified it to also evaluate the mood of the patient (participant) after each story session as well as how much they enjoyed the story.
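
As a minimal sketch of how such a counterbalanced ordering could be assigned (the participant labels, the fixed random seed, and the use of Python's random module are illustrative assumptions, not part of our actual procedure):

{{code language="python"}}
import random

# 14 hypothetical participant labels (illustrative only, not our real participants)
participants = [f"P{i:02d}" for i in range(1, 15)]

# Half of the slots start with the control scenario, half with the experiment scenario
orders = ["control-first"] * 7 + ["experiment-first"] * 7

# Shuffle which participant gets which order; the fixed seed only keeps the example reproducible
random.seed(42)
random.shuffle(orders)

for participant, order in zip(participants, orders):
    print(participant, order)
{{/code}}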

We conducted a **one-tailed paired t-test (dependent t-test)** to check the statistical significance of our results. All three of our added questions showed significant differences, indicating that the conversational robot **significantly improved the patient's mood** compared to the non-conversational robot. Participants also generally perceived the conversational robot as **more natural and responsible**, and they **liked** it more than the non-conversational robot.
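
A minimal sketch of this kind of analysis with SciPy is shown below; the mood ratings are made-up placeholder values for illustration, not our actual questionnaire data:

{{code language="python"}}
from scipy import stats

# Placeholder per-participant mood ratings on a 5-point scale, paired by participant
# (these numbers are invented for illustration, not our measured results)
mood_control = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 4, 3, 2, 3]
mood_experiment = [4, 3, 5, 3, 4, 4, 5, 4, 3, 4, 4, 4, 3, 5]

# One-tailed dependent (paired) t-test:
# H1 = mood after the experiment scenario is higher than after the control scenario
result = stats.ttest_rel(mood_experiment, mood_control, alternative="greater")

print(f"t = {result.statistic:.3f}, one-tailed p = {result.pvalue:.4f}")
print("Significant at alpha = 0.05" if result.pvalue < 0.05 else "Not significant at alpha = 0.05")
{{/code}}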

== Week 8: Final Presentation ==

My teammates presented our project. They began with a quick recap of our problem scenario and personas, then moved on to our design scenario, elaborating on the theories on which we based our design. Next they explained our experiment and control scenarios, followed by our evaluation procedure and results, and finally wrapped up with our takeaways and limitations.

We received some feedback on our choice of evaluation questionnaire and statistical analysis, which we then added to our XWiki report as part of our critical analysis and takeaways for future projects.