PDF_export

Version 15.1 by Vladimir Rullens on 2025/11/08 22:27

0. Tunes Beat Dementia

Reflections

Aleks Reflection

Week 1 and 2:

It was interesting, because normally in Computer Science we were given a problem and had to solve it; in this course we had to pick the problem we wanted to solve. One of the challenges was constantly asking "is this actually needed?" and "will this help dementia patients?". Thoughts such as "to what extent can we help?" also kept appearing in my mind. We would have an idea for some sort of technology, such as a pill dispenser, and then come to the realization that it was not so meaningful after all. In the context of dementia patients, if they are at the stage of heavily forgetting to take their pills, they need a caregiver either way, and pill dispensing might be complicated for both the patient and the caregiver while not bringing much positive, meaningful value. So in the first weeks we really had to think about what matters to such patients and caregivers. It was very helpful to follow the XWiki format, as it forced us to first consider stakeholder values. We came up with a few bad ideas at first, but eventually, in week 2, our dancing-robot idea got finalized.

Week 3:
Week 3 was interesting in terms of content, because it made me reflect on my own memory. We discussed different memory impairments, and I could see how I am already affected by some of them to some extent, since our memory can't be perfect. I also reflected on my own dimensions of recollective experience. In the context of studying computer science, the topics that stuck with me the most were the ones I identified with to some extent. Most often I also attached some emotional value to events or people that surrounded me while learning certain topics. Some topics, like AI, I found slightly disturbing from the start, and since I attached negative connotations to them, I now don't remember a lot from those topics.

Discussing Memory Support Technology also stuck with me, because it made me realize that we can't have technology for everything. For example, recording all our interactions, and even extracting information from those recordings, could be overwhelming. I also realized that some of those technologies are super complex, and that memory is actually such a complex process. I felt grateful that my brain can do all of that subconsciously.

Project-wise, I feel like we got slightly confused about what we wanted to evaluate, so we were also reiterating on that this week.

Week 4:
Content-wise I haven't learned much this week, because all of it was revision of material I had already covered in other courses. I had taken a course on Human-Computer Interaction, in which we spent a few weeks learning about different evaluation methods, and for which we also had to do an HREC evaluation.

But the lectures really made us think in terms of our own project. That was the week we had to formulate our claims into something testable, and we created the first draft for that. Besides that, one of my big tasks that week was preparing the presentation slides for the midterm. I worked on the foundation, but that forced us to reiterate again and put everything together.

Week 5
Time flies, presentation time. I was proud of our slides, and we ended up with a clear storyline. It was also nice to put everything together and see what we had and what was still missing. After this week, we were 90 percent done with foundations and specifications; in the final weeks, we only had to improve those sections based on feedback. But we were now fully ready to start evaluating our prototype.

Week 6
I had never heard the term ontology until then. Or maybe I had, but I had never used it in the context of computer science. Having the ontology as a visual creates a nice and simple overview of the entire system. Besides that, we covered topics of inclusivity again. It was nice because it made us think once again about our system design. However, at that stage we had already considered so many specifications and design patterns that we believed our system was as inclusive as possible.

We decided to use the Miro robot for our prototype. We scheduled a meeting that week to get instructions regarding its use.
 

Week 7
This week we covered even more theory, and a lot of it related to my bachelor thesis, which was on the topic of theory of mind. That week we further covered how a robot interacts with its environment, so we could go back, reflect on it, and see what other design patterns we could implement for our robot. I think around that time we looked more into the literature, because we were really thinking about how our robot would interact with the outside world.

That week I wrote 2 reflections: 
Today we worked with Miro for the first time. We managed to successfully connect to Miro, but controlling the sensors is challenging. Furthermore, connecting the AI to Miro requires Python 3.8, but Miro only operates with Python 3.5, and we cannot update it because that would conflict with its sensors.
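One general workaround for this kind of interpreter conflict (a sketch, not our project's actual code) is to keep the two interpreters in separate processes and let them talk over a local socket: the robot-side code stays on its Python 3.5, while the AI-side code runs under 3.8+. All names here (handle_command, the commands, the port handling) are illustrative assumptions.

```python
import queue
import socket
import threading

def handle_command(cmd: str) -> str:
    # Stand-in for robot-side behaviour; real commands would drive Miro.
    actions = {"wag_tail": "tail wagging", "play_song": "song started"}
    return actions.get(cmd, "unknown command")

def robot_server(port_q: "queue.Queue[int]") -> None:
    # Would run under the robot's Python 3.5; handles a single request.
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))        # let the OS pick a free port
        srv.listen(1)
        port_q.put(srv.getsockname()[1])  # tell the client where we are
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode()
            conn.sendall(handle_command(cmd).encode())

def send_command(port: int, cmd: str) -> str:
    # Would run in the AI-side process under Python 3.8+.
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(cmd.encode())
        return cli.recv(1024).decode()

# Simulate the two processes with a thread so the sketch is runnable.
port_q: "queue.Queue[int]" = queue.Queue()
server = threading.Thread(target=robot_server, args=(port_q,))
server.start()
reply = send_command(port_q.get(), "play_song")
server.join()
print(reply)  # song started
```

The appeal of this pattern is that neither interpreter needs upgrading; each process only has to agree on a tiny text protocol.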

Week 7 Final touches:
We created the prototype, which is actually really cool. Everyone can run it from a computer, and it actually talks and interacts with you. As a non-AI student, I had never done that before, so it was shocking to me that we (mainly Deniz) managed to create one Python file that leads to my computer playing music off my phone and talking to me.
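The flow of such a session can be sketched as a small loop: greet the user, recall their favourite song, play it, and invite them to move along. This is an illustrative sketch under my own assumptions, not the prototype's code; speech and playback are injected as callables so the logic runs without any audio hardware.

```python
from typing import Callable, List

def dance_session(name: str,
                  favourite_song: str,
                  speak: Callable[[str], None],
                  play: Callable[[str], None]) -> None:
    """Hypothetical dance-session flow: greet, recall a song, play, invite."""
    speak(f"Hello {name}! Shall we listen to some music together?")
    speak(f"I remember you like {favourite_song}. Let's play it!")
    play(favourite_song)
    speak("Feel free to move along, even just a little!")

# Collect the actions instead of producing sound, e.g. for a dry run.
log: List[str] = []
dance_session("Anna", "Dancing Queen",
              speak=lambda text: log.append(f"say: {text}"),
              play=lambda song: log.append(f"play: {song}"))
print(log[2])  # play: Dancing Queen
```

In a real run, speak and play would wrap a text-to-speech engine and a music player; keeping them as parameters is what makes the session logic easy to test on its own.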

We also conducted the evaluation; each team member did it individually. The participants were shy, but once their favourite song was played, they would dance a bit. No one actually stood up to dance, but they moved their bodies a bit while sitting down.

Overall reflection:
I was very happy with our process because it was very iterative. It was quite unfortunate that we couldn't get Miro to work, especially since, when thinking about the robot, we saw the value of it being a physical object that interacts with someone. In our evaluation, we could have done better by showing recordings of a moving Miro, but that still requires the user to use their imagination, which is not ideal. The nice thing about this project was that we focused on one isolated claim, and the functionality was very simple. It still made me realise how much we had to think about for such a simple design. In general, I now have a better way of approaching such projects: I know from the start that I have to really consider personal values (before, I would kind of skip this step), and I also have those nice frameworks for writing our claims in terms of causes and effects. For this course we were kind of blindly following the framework, but now I've built an intuition for it, and I can see that this approach works!

 

Week 2: 

Time to reflect weekly!

Week 3:
I feel like we got slightly confused about what we want to evaluate. Now we have this idea for the dance session.

Week 4: 
Not much work was done, mainly putting the XWiki together.

Week 5:
Time flies, presentation time. I was proud of our slides, and we ended up with a clear storyline. It was also nice to put everything together and see what we had and what was still missing.

Week 6:

We decided on the prototype: we will use the actual Miro robot. We scheduled a meeting this week to get instructions. Unfortunately, I won't be able to be there.
 

Week 7:

Today we worked with Miro for the first time. We managed to successfully connect to Miro, but controlling the sensors is challenging. Furthermore, connecting the AI to Miro requires Python 3.8, but Miro only operates with Python 3.5, and we cannot update it because that would conflict with its sensors.

Week 7 Final touches:
We created the prototype, which is actually really cool. Everyone can run it from a computer, and it actually talks and interacts with you. As a non-AI student, I had never done that before, so it was shocking to me that we (mainly Deniz) managed to create one Python file that leads to my computer playing music off my phone and talking to me.

We also conducted the evaluation; each team member did it individually. The participants were shy, but once their favourite song was played, they would dance a bit. No one actually stood up to dance, but they moved their bodies a bit while sitting down.

Week 1:
I learned that cognition is always situated, and technology should fit naturally into human contexts. Our upcoming project seems very interesting, not just as a technical task, but as a way to design meaningful human-robot interactions.
Week 2:
We chose our prototype: a robot to help with dispensing pills. The lecture on value-sensitive design reminded me that good design balances functionality with human values and motivation. It encouraged me to think about how our system could support user autonomy and engagement.
Week 3:
Learning about human memory helped me see how technology can extend cognitive abilities. As a group, we decided to scrap the first idea and continue with a dancing robot. A companion that helps the user stay active made more sense as a use-case and is probably also more relevant for the user.
Week 4:
We worked on our wiki. I realized that clear documentation and evidence-based reasoning are key to improving both the design and its credibility. Making the slides also helped me see what we decided on so far and what we are missing.
Week 5:
Presentation week (I didn't present because I made the slides, so it was easy for me). The other presentations made me think about points that we mention but haven't yet thought about deeply. They also made what we are missing a bit clearer.
Week 6:
We made preparations to work with the robot and divided tasks; not much more was new this week.
Week 7:
We worked with the robot (Miro) and designed the dance partner. Working with the actual robot was very tough: there is a lot you need to be familiar with to actually make the robot do what you want. We think the robot is mostly for visual purposes, and that the voice interaction and music are the backbone of the prototype. Deniz coded the dancing-session interaction. In hindsight, I'm not sure why we thought coding Miro was going to be easy. It makes sense that coding and interacting with a robot is hard; otherwise there would be a lot more of them...

Contributions (Excluding group work):
Initial version of section 1.b.2.
Initial version of the TDP (diagram)
Slides for Specifications (of the first presentation)
Initial results and conclusion
Updated (final) modifications to Design scenario: 2.a1 (final paragraph showing a normal use-case loop). Additions and formatting to 2.b, d.

Overall:
- It made me realize how important it is to first think about why we do something. I knew about its importance from other courses where we created human-computer interactions (and from creativity courses), but coming up with a new idea on our own (that we all agreed with) was way tougher than I expected before this course, probably because so far I had done the why and the how separately. There is so much to consider when creating an idea, not just the product.
- Robots seem very easy but are way harder than I thought. Making multiple systems work in unison and coordinate was too hard. I was too much of an idealist and didn't really think this would be a problem until we hit it. Sticking to just the music, and selecting voice as the medium for the interactions, is (to me at least) the most rational way of testing whether the idea has any chance of being practical.
- Breaking problems apart into smaller problems (be it for design ideas, implementation, or testing) is so much more important when tasks are seemingly abstract. A very big part of our work felt like understanding what to achieve and then how to achieve it. What to achieve was easier to understand, as the lectures and labs helped a lot with that; how to achieve it is still hard. Continuous work is probably the only feasible way, as there will always be new problems we didn't think about, and getting started is much harder than just keeping going once you have started. It is also very clear why there is so much work done on the process of creating products, as doing just what feels intuitive or right is never going to be enough.

Also, looking back at it all, we mostly built a custom version of a home assistant. The main differences are that we propose using a robot rather than a speaker, because it helps maintain human values, and that we consider LLMs as the companion instead of a scripted ML algorithm. Maybe home assistants would sell a lot better if companies could make a cheap standing robot that acts like a voice assistant but moves (if you want it to). It would be a lot more engaging, and using local LLMs could make it personalized to your preferences.