This week feels like it flew by. Starting on Monday, I felt like I had gathered a solid base to start analyzing the first set of videos my grad student collected prior to this summer. I'm a pretty detail-oriented person, so I found categorizing people's interactions with robots really interesting. I plan on iterating over the videos a few more times, making sure I don't skip any details and seeing whether any more important categories come to mind.

When it comes to Human-Robot Interaction (HRI), it's pretty difficult to come up with a way to quantify the quality of each person's interaction. On one level, people are just hard to read in general (and I'm not socially inept, I promise); on another level, even if you ask someone upfront whether they thought the experience was effective/comfortable/etc., they're likely to exaggerate how positive their interaction was in order to … make the researcher feel better or something? You'd think usable data would make a researcher feel better than just being told what other people think they want to hear.
Last week, each SRP student was assigned a variety of technologies to become familiar with and then use to review some dummy data as practice. I misunderstood my assigned technology (Trifacta) and accidentally started learning an obscure Haskell library instead. I ended up doing some work with the real Trifacta before the meeting scheduled to show my work, but not as much as I should have. However, I had some sick motivation to learn a new language, and I think I'll keep it up with Haskell. I will also be finishing up my project with Trifacta so I can have a well-rounded perspective on it.