Changes for page Simran - Self Reflection

Last modified by Simran Kaur on 2023/04/11 20:03

From version 10.1
edited by Simran Kaur
on 2023/04/11 20:00
Change comment: There is no comment for this version
To version 8.1
edited by Simran Kaur
on 2023/04/11 12:20
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -79,37 +79,37 @@
79 79  ====
80 80  Lab session: Planning Evaluations ====
81 81  
82 -We also learnt how evaluations should be planned for our prototype. The purpose of the evaluations is to assess the claims at the task level and check the usability of the interaction design. We were introduced to various approaches, such as qualitative analysis to formulate hypotheses and quantitative analysis to test them.
82 +We also learnt how evaluations should be planned for our prototype. The purpose of the evaluations is to assess the claims at the task level and check the usability of the interaction design. We were introduced to various approaches, such as qualitative analysis to formulate hypotheses and quantitative analysis to test them.
83 83  
84 -We explored the formative and summative evaluations that could be conducted for our system and decided to focus on a summative evaluation to assess its overall effects; a formative evaluation would also not have been feasible within the limited time.
85 85  
86 -For our measurements and metrics, we used the Godspeed questionnaire for a standardized assessment of our robotic agent. It addresses factors like being interactive, inert, animate, etc., which were relevant for our use case. For the effects, we prioritized measuring the following: the mood of the patient after the activity (subjective) and whether, and how quickly, the patient finished their meal (objective). However, we identified that, due to the practical issue of not being able to test with actual dementia patients, the objective measures could not be collected accurately.
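As a rough illustration of how questionnaire responses of this kind can be aggregated (the item names, the 5-point scale and the numbers below are assumptions for the sketch, not our actual survey), Godspeed-style semantic-differential items are typically averaged into per-participant subscale scores:

{{code language="python"}}
# Minimal sketch: averaging Godspeed-style items (rated 1-5) into
# per-participant subscale scores. Item names and data are placeholders.
import pandas as pd

# Hypothetical responses: one row per participant.
responses = pd.DataFrame({
    "anthropomorphism_1": [3, 4, 2],
    "anthropomorphism_2": [4, 4, 3],
    "animacy_1":          [2, 5, 3],
    "animacy_2":          [3, 4, 4],
    "likeability_1":      [5, 4, 4],
    "likeability_2":      [4, 5, 3],
})

subscales = {
    "anthropomorphism": ["anthropomorphism_1", "anthropomorphism_2"],
    "animacy":          ["animacy_1", "animacy_2"],
    "likeability":      ["likeability_1", "likeability_2"],
}

# Each subscale score is the mean of its items for every participant.
scores = pd.DataFrame({name: responses[items].mean(axis=1)
                       for name, items in subscales.items()})
print(scores)
{{/code}}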
85 +Ethics
87 87  
88 -We wanted to host our questionnaires for online experimentation, and for collecting the responses we decided to use the GDPR-compliant Qualtrics survey tool.
87 +Formative and Summative evaluation
89 89  
90 -This week was quite informative and learning-intensive in terms of all the factors, practical and ethical, that need to be considered when designing an evaluation for a system.
89 +Measures and metrics
91 91  
92 -=== Week 6: Prototype Implementation, Pilot Testing ===
91 +Online experimentation
93 93  
94 -(% class="wikigeneratedid" id="HLecture:Ontologies" %)
95 -The focus of this week was implementing our prototype and testing it within our group. We used Interactive Robots to program the Pepper robot with two pre-filled story templates - Picnic and Thanksgiving - to test our Interactive Storytelling use case. During this process, we built two versions of each story: a non-interactive version with simple narration, and an interactive version with built-in prompts to spark conversation. We planned to gauge the usefulness of the interaction design in our evaluations by comparing the experimental (interactive) scenario with the control (non-interactive) scenario.
96 96  
97 -(% class="wikigeneratedid" %)
98 -During implementation, we faced some challenges with the Pepper robot's tablet and with getting it to recognize our speech and touch input. We mitigated these by coming up with alternatives, such as triggering touch actions through clicks on a remote laptop screen, in order to preserve the flow of the interaction. Working with the Pepper robot and being able to self-test the functionality we envisioned was quite interesting and further helped refine the design we had been building through our incremental, iterative design process.
94 +=== Week 6: Prototype Implementation and Initial Testing ===
99 99  
96 +==== Lecture: Ontologies ====
100 100  
98 +====
99 +Lab session: Implementing prototype, Evaluation Study Planning ====
100 +
101 +We were given time during the lab session to work on our prototype and plan our evaluation.
102 +
103 +
101 101  === Week 7: ===
102 102  
103 -The focus of this week was conducting the evaluation of our prototype with participants and analyzing the results. We had prepared the participation consent form and the measurement questionnaire, along with the prototype on the Pepper robot. Since we had limited time and a limited number of participants, we decided to conduct a within-subjects evaluation, wherein each participant would evaluate both the control and the experimental scenario. We also decided that for half of the evaluation sessions we would present the control scenario first, and for the other half the experimental scenario first. This counterbalancing strategy was employed to mitigate carry-over effects.
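As a minimal sketch of the counterbalancing idea (the participant IDs and the alternating assignment below are illustrative, not our actual scheduling procedure):

{{code language="python"}}
# Minimal sketch of counterbalancing condition order in a within-subjects
# design: half of the participants start with the control (non-interactive)
# story, the other half with the experimental (interactive) story,
# to mitigate carry-over effects.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]  # illustrative IDs

for i, pid in enumerate(participants):
    if i % 2 == 0:
        order = ["non-interactive", "interactive"]
    else:
        order = ["interactive", "non-interactive"]
    print(f"{pid}: {order[0]} first, then {order[1]}")
{{/code}}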
106 +==== Lecture: Human-Agent/Robot Teamwork ====
104 104  
105 -For each evaluation session, the participants first signed the consent form. They then engaged in the first storytelling session, filled in the questionnaire, engaged in the second storytelling session, and filled in the questionnaire again. It was really interesting and informative to see how the participants responded to our interaction design and how they perceived our system. Further, participating in other groups' evaluation sessions provided a broader view of the kinds of effects our peers were trying to achieve with their designs and what they considered worthwhile to measure.
108 +====
109 +Lab session: Conducted Evaluation ====
106 106  
107 -With the evaluation completed, we analyzed the results with a statistical test to determine whether they were significant. With this, our claims about improving the mood of the person with dementia through interactive storytelling were confirmed. The entire process taught me a lot about how to assess a system design in terms of the claims it makes about the effects it wants to achieve.
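Purely as an illustration (the choice of test and the numbers below are assumptions, not our actual data or analysis), a paired comparison of post-session mood ratings between the interactive and non-interactive scenarios could look like this:

{{code language="python"}}
# Illustrative only: a paired test comparing hypothetical mood ratings (1-5)
# collected after the interactive vs. the non-interactive storytelling
# session. The numbers are invented.
from scipy import stats

interactive     = [4, 5, 4, 3, 5, 4, 5, 3, 4, 4, 5, 4]
non_interactive = [3, 3, 2, 2, 4, 3, 4, 2, 3, 2, 4, 3]

# Wilcoxon signed-rank test: a common choice for paired ordinal ratings.
res = stats.wilcoxon(interactive, non_interactive)
print(f"W = {res.statistic:.2f}, p = {res.pvalue:.3f}")
{{/code}}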
111 +We used the lab session to prepare our evaluation.
108 108  
109 109  === Week 8: Group Presentation - Endterm ===
110 110  
111 -(% class="wikigeneratedid" id="H" %)
112 -We compiled our project work and the results from the evaluation, and I, along with two of my teammates, presented them to the class. We received interesting questions from our peers and the professors, which further helped us critically reflect on our design decisions. I also had an interesting discussion about designing the system in a way that minimises the possible negative effects it could have.
113 -
114 -(% class="wikigeneratedid" %)
115 -To conclude, designing a robotic intervention for people with dementia took us through the entire SCE process. It was a fruitful journey in which we learned by experimenting, analyzing and reflecting.
115 +==== ====