Alex's Reflection

Last modified by Alex Pacurar on 2025/11/09 19:26

Week 1:
I learned that cognition is always situated, and technology should fit naturally into human contexts. Our upcoming project seems very interesting, not just as a technical task, but as a way to design meaningful human-robot interactions.
Week 2:
We chose a pill-dispensing robot as our prototype. The lecture on value-sensitive design reminded me that good design balances functionality with human values and motivation. It encouraged me to think about how our system could support user autonomy and engagement.
Week 3:
Learning about human memory helped me see how technology can extend cognitive abilities. As a group, we decided to scrap the first idea and continue with a dancing robot. Companionship and staying active made more sense as a use case and are probably more relevant for the user.
Week 4:
We worked on our wiki. I realized that clear documentation and evidence-based reasoning are key to improving both the design and its credibility. Making the slides also helped me see what we have decided so far and what we are still missing.
Week 5:
Presentation week (I didn't present because I made the slides, so it was easy for me). The other groups' presentations made me think about points we mention but haven't thought deeply about yet, and made what we are missing a bit clearer.
Week 6:
We made preparations to work with the robot and divided tasks; not much else new this week.
Week 7:
We worked with the robot (Miro) and designed the dance partner. Working with the actual robot was very tough; there is a lot you need to be familiar with to make the robot do what you want. We think the robot is mostly for visual purposes and that the voice interaction and music are the backbone of the prototype. Deniz coded the dancing session interaction. In hindsight, I'm not sure why we thought coding Miro would be easy. It makes sense that coding and interacting with a robot is hard, otherwise there would be a lot more of them...

Contributions (Excluding group work):
Initial version of section 1.b.2.
Initial version of the TDP (diagram)
Slides for Specifications (of the first presentation)
Initial results and conclusion
Updated (final) modifications to Design scenario: 2.a1 (final paragraph showing a normal use-case loop). Additions and formatting to 2.b, d.

Overall:
- It made me realize how important it is to first think about why we do something. I knew about this from other courses where we created human-computer interactions (and from creativity courses), but coming up with a new idea on our own (that we all agree with) was way tougher than I expected before this course, probably because I had always treated the why and the how separately. There is so much to consider when creating an idea, not just the product.
- Robots seem very easy but are way harder than I thought. Making multiple systems work in unison and coordinate was harder than expected. I was too much of an idealist and didn't really think this would be a problem until we hit it. Sticking to just the music and selecting voice as the medium for interaction is (to me at least) the most rational way of testing whether the idea has any chance of being practical.
- Breaking problems apart into smaller problems (be it for design ideas, implementation, or testing) is so much more important when tasks are seemingly abstract. A very big part of our work felt like understanding what to achieve and then how to achieve it. What to achieve was easier to understand, as the lectures and labs helped a lot with that. But how to achieve it is still hard. Continuous work is probably the only feasible way, as there will always be new problems we didn't think about, but that makes getting started much harder than keeping going once you have begun. It is also very clear why so much work has gone into the process of creating products, as doing just what feels intuitive or right is never going to be enough.

Also, looking back at it all, we mostly built a custom version of a home assistant. The main differences are that we propose a robot rather than a speaker, because it helps maintain human values, and that we consider LLMs as the companion instead of a scripted ML algorithm. Maybe home assistants would sell a lot better if manufacturers could make a cheap standing robot that acts like a voice assistant but moves (if you want it to). It would be a lot more engaging, and using local LLMs could make it personalized to your preferences.