Changes for page Manali - Self Reflection
Last modified by Manali Shah on 2023/04/11 12:30
From version 6.2
edited by Manali Shah on 2023/04/11 12:30
Change comment: There is no comment for this version
To version 5.2
edited by Manali Shah on 2023/04/11 11:58
Change comment: There is no comment for this version
Summary

- Page properties (1 modified, 0 added, 0 removed)

Details

- Page properties
  - Content
... ... @@ -59,9 +59,7 @@
59 59 
60 60 We also gained an insight into ontologies: what they mean and how they can represent the users and their properties in one diagram. It is an explicit representation of knowledge: the entities used in the system, what they are and their structure. Ontologies represent classes (patient with dementia), their instance (Georgina), relationship (mother of Sam) and properties (80 years old).
61 61 
62 -This week we performed the pilot study of our project. We worked with Pepper at the Insyght Lab to test the basic connections and familiarize ourselves with the robot. The tablet did not work, which we later realized is an issue in the connection with InteractiveRobots. We tested voice inputs and touch inputs, and came to the conclusion that voice input may not be so accurate, so went ahead with a backup touch input instead.
63 63 
64 -
65 65 **Week 6**
66 66 
67 67 We understood the meaning of an inclusive design, and how we can use it in our projects as well. An inclusive design is one which not only considers the "average users" but also special cases, like minorities and persons with disabilities. We were encouraged to look into our own projects and see how we can better our design and make it inclusive. We must not assume anything about users, but quite the opposite, think outside the box and see how the design can cater to a larger population, including people with disabilities.
... ... @@ -68,17 +68,10 @@
68 68 
69 69 Secondly, we must gather users' needs BEFORE the start of the design, and not assume what they may or may not need, because more often than not, designs turn out to be redundant for the users themselves.
70 70 
71 -This week, we tested the interactive version of the robot with the final code. We had two story versions: Thanksgiving and a Picnic. Both of them were tested with all participants involved: nurse, family member and the patient. We tested the prompts and focused on whether they made logical sense in the context. Since voice input was not reliable, the nurse had the job of clicking on the correct button (on screen) to make choices. It gave chances for the family members to interact with each other, and when the interaction was done, the nurse would pat Pepper on the head, prompting it to continue with the story. This session was a success in terms of testing out the code that was written.
69 +This week we performed the pilot study of our project. We worked with Pepper at the Insyght Lab to test the basic connections and familiarize ourselves with the robot. The tablet did not work, which we later realized is an issue in the connection with InteractiveRobots. We tested voice inputs and touch inputs, and came to the conclusion that voice input may not be so accurate, so went ahead with a backup touch input instead.
72 72 
73 73 **Week 7**
74 74 
75 -We learn about the concepts of human-agent teamwork, and the different team aspects. We learn about the requirements of performing a joint activity: interpredictability, common ground and directability. The theory of mind plays a role here, which states that humans attribute agency characteristics to self-moving objects. Human-agent teamwork uses shared memory models and transactive memory systems, where both humans and agents have certain knowledge, and they share it to achieve a common goal. This is similar to human-human teams as well. We learn about situation awareness in high-risk domains, and how the Shared Memory Model and Transactive Memory System help in gaining better awareness. We gain an insight into coordination in teams and task management: the ways in which teams work together to fulfil a goal.
73 +This week, we tested the interactive version of the robot with the final code. We had two story versions: Thanksgiving and a Picnic. Both of them were tested with all participants involved: nurse, family member and the patient. We tested the prompts and focused on whether they made logical sense in the context. Since voice input was not reliable, the nurse had the job of clicking on the correct button (on screen) to make choices. It gave chances for the family members to interact with each other, and when the interaction was done, the nurse would also pat Pepper on the head, prompting it to continue with the story. This session was a success in terms of testing out the code that was written.
76 76 
77 -In this week, we carried out our evaluations of the robot with 14 participants. It was quite an experience to see how different people have unique reactions to the robot. I also participated in other group evaluations, and it was interesting to see how everyone used different features of Pepper to achieve different goals.
78 -
79 -After the evaluations, I carried out the statistical test to determine significance and also helped in creating the presentation.
80 -
81 -
82 -**Week 8**
83 -
84 -My teammates presented our final presentation this week. It consisted of a recap, the use cases (TDP and IDP), the evaluation method and the final results. I believe it was a fulfilling team experience, where everyone played their role for the final outcome.
75 +
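The notes above mention carrying out "the statistical test to determine significance" after the evaluations, but the page does not name the test or show any data. As a minimal sketch only: a two-sample comparison such as Welch's t-test could be computed as below. The choice of test, the `thanksgiving`/`picnic` names, and all numbers are invented for illustration and are not taken from the study.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = variance(a), variance(b)   # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical participant ratings for the two story versions (not real data)
thanksgiving = [4, 5, 4, 5, 5, 4, 5]
picnic       = [2, 3, 2, 3, 2, 2, 3]
t, df = welch_t(thanksgiving, picnic)
print(f"t = {t:.2f}, df = {df:.1f}")  # prints: t = 7.50, df = 12.0
```

With df = 12, a |t| above roughly 2.18 corresponds to p < 0.05 (two-tailed), so this made-up sample would count as a significant difference; in practice a library routine such as SciPy's Welch option would also report the exact p-value.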