This week, I was excited to read some papers on human-robot interaction (HRI) studies and receive some guidance from Ohmnilabs on working with their (our?) robot. The grad student working with NIMBUS's telepresence robots sent four papers my way, and they were all pretty interesting. I will be analyzing videos and sorting people's reactions to a telepresence robot into categories that I develop, so I mostly focused on how other researchers have qualified/quantified reactions to their robots in non-laboratory settings. One study used a robot called a Robovie (if you haven't seen this cutie, look it up) inside a mall to test the effectiveness of an algorithm that decides who should be approached. With the study I have coming up, I'm excited to do some "there's no right answer" work. It's hard to get to the "good-enough" point with tech, but not so much with subjective work.
A developer from Ohmnilabs also emailed me to discuss a temporary solution for controlling the Ohmni with a ROS node. Since their company will be releasing an update with full ROS integration in August, I won't be able to help with a long-term solution for integrating the Ohmni into NIMBUS lab's architecture. So my temporary solution will probably be to build a web app that uses the Ohmni API and listens to a ROS node, which in turn listens to whatever software we use to control where the Ohmni should look and go. Sound convoluted? Realistically, a developer who knows what they're doing might take longer than three weeks to build a usable system; that's all the time I have left, and I don't know what I'm doing. Hopefully I'll be making more progress next week. On the other hand, I have the Fourth of July to look forward to, and along with that Wednesday off, a long bike ride in the country.
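To make the convoluted-sounding plan a little more concrete, here is a minimal sketch of the bridge idea: a ROS-side component delivers motion commands, and a small adapter translates them into payloads for the web app that would talk to the Ohmni. Since I haven't built any of this yet, everything here is an assumption for illustration: the real setup would use a rospy subscriber on a geometry_msgs/Twist topic and an actual HTTP call, but the message is modeled as a plain dict and the send step as a callback so the mapping logic stands on its own.

```python
# Sketch of the ROS-to-Ohmni bridge, with the ROS and Ohmni sides mocked out.
# In a real node, on_command would be the callback of a rospy.Subscriber on a
# geometry_msgs/Twist topic, and `send` would POST to the web app's endpoint.

def twist_to_ohmni(twist):
    """Translate a Twist-like command into a payload for a hypothetical
    Ohmni web API (these field names are assumptions, not the real API)."""
    return {
        "forward_speed": twist["linear_x"],   # m/s, forward/backward
        "turn_speed": twist["angular_z"],     # rad/s, left/right
    }

class OhmniBridge:
    """Receives ROS-style commands and forwards them to the web app."""

    def __init__(self, send):
        self.send = send      # stand-in for an HTTP POST to the web app
        self.sent = []        # keep a log of forwarded payloads

    def on_command(self, twist):
        payload = twist_to_ohmni(twist)
        self.send(payload)
        self.sent.append(payload)

# Usage: pretend ROS delivered a "drive forward while turning" command.
received = []
bridge = OhmniBridge(send=received.append)
bridge.on_command({"linear_x": 0.5, "angular_z": 0.1})
```

The point of splitting the translation out into `twist_to_ohmni` is that when the real API details arrive (or the August ROS integration lands), only that one function has to change.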