Researchers

Hetherington, N. J. (2020). Design and evaluation of nonverbal motion cues for human-robot spatial interaction (T). University of British Columbia. Retrieved from https://open.library.ubc.ca/collections/ubctheses/24/items/1.0394282
Supervisors: Dr. Elizabeth A. Croft and Dr. H.F. Machiel Van der Loos

Supervisor: Dr. Mike Van der Loos
My study is a continuation of FEATHERS, a project from the RREACH Lab at UBC that focuses on developing and evaluating novel physical exercise technologies for kids with motor disabilities. I am now studying how immersive VR technology can benefit upper-limb rehabilitation for persons with hemiplegia. Specifically, I would like to see how error augmentation (i.e., adding visual or game-element feedback that accentuates deviation from the desired exercise motion) might encourage persons with hemiplegia to engage their affected side more effectively, as measured by the symmetry between the stronger and weaker limbs. I’m hoping that the immersive environment of VR and the ability to provide 1:1 direct visual feedback will increase active engagement.
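As a rough illustration of the error-augmentation idea (a minimal sketch with an assumed gain, symmetry metric, and function names, not the FEATHERS or study implementation), the VR feedback could exaggerate the affected limb's deviation from a mirrored target trajectory and score bimanual symmetry:

```python
# Illustrative sketch of error augmentation for bimanual VR rehab
# (assumed formulation, not the project's actual algorithm).
import numpy as np

def augmented_cursor(affected_pos, target_pos, gain=2.0):
    """Exaggerate deviation from the desired motion before rendering it.

    affected_pos, target_pos: 3D positions (m) of the affected hand and of the
    mirrored/desired trajectory point at the current time step.
    gain > 1 amplifies the visual error shown to the user.
    """
    error = affected_pos - target_pos
    return target_pos + gain * error  # position rendered in the VR scene

def symmetry_index(strong_traj, weak_traj):
    """Simple RMS-difference symmetry metric between the two limbs' paths."""
    return float(np.sqrt(np.mean(np.sum((strong_traj - weak_traj) ** 2, axis=1))))

# Example: the weaker limb lags 4 cm behind the mirrored target point.
target = np.array([0.30, 0.00, 0.20])
affected = np.array([0.26, 0.00, 0.20])
print(augmented_cursor(affected, target))   # deviation shown doubled
```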


Supervisor: Dr. Mike Van der Loos.

Current industrial robots lack the abilities (dexterity, complex sensing and cognitive processes) that skilled workers rely on to perform many manufacturing tasks such as product assembly, inspection and packaging. For example, in the automotive industry, robots are used to perform tasks that are entirely repeatable and require little or no human intervention, such as painting, welding and pick-and-place operations. Such robots work in confined spaces isolated from human workers, as improper interactions could result in severe injury or death. Having optimized production efficiency under these conditions, however, industries are now directing efforts toward achieving similar improvements in worker efficiency through the development of safe robotic assistants that are able to co-operate with workers.
The research project proposed in this document seeks to exploit this emerging paradigm shift in manufacturing systems. It is in this context that I propose to develop robot controllers and intuitive interaction strategies to facilitate cooperation between intelligent robotic assistants and non-expert human workers. This work will focus on developing motion control models for interactions between participants that involve safe contact, sharing and hand-off of common payloads. These control systems will allow intelligent robots to co-operate with non-expert workers safely, intuitively and effectively within a shared workspace.
To attain this goal, I intend to draw on elements of safe, collaborative human-robot interaction (HRI) explored through previously conducted research [2, 3] to develop a preliminary motion control framework. Much of the hardware, communication algorithms and generalized interaction strategies necessary for designing this HRI already exist. However, a wide range of technological advancements necessary to support specific task-driven HRI, such as real-time gesture recognition, interaction role negotiation and robust safety systems [1], must still be developed. Thus, studies investigating typical human-human collaborative interaction methods will be used to supplement this work. Specific focus will be given to examining how humans use non-verbal communication to negotiate leading and following roles. Several basic gestures and behaviours will be studied, including co-operative lifting, hand-offs and trajectory control of objects. I aim to leverage these findings by developing a library of motion control strategies for mobile manipulator-type robots that are safe, ergonomic, and allow for the efficient use of the worker's and the robotic assistant's skills and abilities.
The control models constructed from these methods will be applied in the context of a specific use case representative of a typical production operation. The use case will consist of non-value added activities within an automotive manufacturing process having component tasks deemed to be complex and diverse. Motion control strategies will be evaluated and refined on a robot platform through human participant studies involving component tasks typical of those seen in the use case. These control strategies will be assessed both subjectively as they relate to the user (e.g., intuitiveness, perceived robot intelligence, ease of use) and objectively through performance measures (e.g., time trials).
The significance of this research lies in the advancement of HRI and the development and deployment of a new class of industrial robots intended to work alongside human counterparts beyond the laboratory. Novel forms of admittance control will be developed with the explicit intention of driving HRI, cooperation and shared object handling. This work is expected to produce useful data and methods contributing to the development and application of safe, collaborative HRI and human-in-the-loop control systems. Although this research is directed towards applications in manufacturing, the knowledge acquired will be extendable to HRI in other domains including rehabilitation, homecare and early child development.
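To make the admittance-control idea concrete, here is a minimal, generic sketch (assumed virtual mass and damping values and a hypothetical admittance_step function, not the controller to be developed in this project): the measured human interaction force is passed through a virtual mass-damper to produce a velocity command for the robot.

```python
# Generic 1-DOF admittance controller sketch (assumed parameters):
# external force -> virtual mass-damper -> velocity command.
def admittance_step(f_ext, v_prev, dt=0.002, m_virtual=5.0, d_virtual=20.0):
    """One control cycle: integrate M*v_dot + D*v = F_ext.

    f_ext     : measured human interaction force (N)
    v_prev    : previous commanded velocity (m/s)
    m_virtual : virtual mass (kg)      -- larger feels heavier
    d_virtual : virtual damping (Ns/m) -- larger feels more viscous
    """
    v_dot = (f_ext - d_virtual * v_prev) / m_virtual
    return v_prev + v_dot * dt  # new velocity setpoint for the robot

# Example: a worker pushes on the shared payload with 10 N for 0.5 s.
v = 0.0
for _ in range(250):
    v = admittance_step(10.0, v)
print(f"commanded velocity after 0.5 s: {v:.3f} m/s")
```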
[1] Breazeal, C. et al. “Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork”, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 383-388, 2005.
[2] Fischer, K., Muller, J.P., Pischel, M., “Unifying Control in a Layered Agent Architecture,” Int. Joint Conf. on AI (IJCAI’95), Agent Theory, Architecture and Language Workshop, 1995.
[3] Moon, A., Panton, B., Van der Loos, H.F.M., Croft, E.A., “Safe and Ethical Human-Robot Interaction Using Hesitation Gestures,” IEEE Conf. on Robotics and Automation, pp. 2, May 2010.
Today, the vast majority of user interfaces in consumer electronic products are based around explicit channels of interaction with human users. Examples include keyboards, gamepads, and various visual displays. Under normal circumstances, these interfaces provide a clear and controlled interaction between user and device [1]. However, problems with this paradigm arise when these interfaces divert a significant amount of the user's attention from more important tasks. For example, consider a person trying to adjust the car stereo while driving: the driver must divert some attention away from the primary task of driving in order to operate the device. Explicit interaction with peripheral devices can thus impair the user's ability to perform primary tasks effectively.
The goal of my research was to design and implement a fundamentally different approach to device interaction. Rather than relying on explicit modes of communication between a user and a device, I used implicit channels to decrease the device's demand on the user's attention. It is well known that human affective (emotional) states can be characterized by psycho-physiological signals that can be measured by off-the-shelf biometric sensors [2]. We proposed measuring user affective response and incorporating these signals into the device's control loop so that it can recognize and respond to user affect. We also put forward the notion that haptic stimuli delivered through a tactile display can serve as an immediate yet unobtrusive channel for the device to communicate that it has responded to the user's affective state, thereby closing the feedback loop [3]. Essentially, this research defined a model for a unique user-device interface driven by implicit, low-attention communication. We theorized that this new paradigm would be minimally invasive and would not require the user to be distracted by the peripheral device. We termed this process, in which affect recognition leads to changes in device behaviour that are then signalled back to the user through haptic stimuli, the Haptic-Affect Loop (HALO).
My focus within the HALO concept was on the design and analysis of the overall control loop. This required me to measure, model and optimize latency, flow and habituation between the user's affective state and HALO's haptic display. A related problem I needed to address was dimensionality: which aspects of a user's biometrics should be used to characterize affective response? For example, what combination of skin conductance, heart rate, muscle twitch, etc. best indicates that a user is happy or depressed? As an extension to this problem, how can the environmental context surrounding a user be established to calibrate affect recognition, for example, jogging in the park versus working in the office? Similarly, I needed to specify the dimensionality of the haptic channel that notifies the user of the device's response while maintaining the goal of not distracting the user: where (e.g., back of the neck, fingertip) and with what stimulus (e.g., soft tapping vs. an aggressive buzzer) should the haptic feedback be delivered?
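As an illustration of this dimensionality question (hypothetical features and window lengths, not HALO's actual affect model), raw biometric streams might be reduced to a small candidate feature vector like this:

```python
# Sketch of candidate affect features from raw biometric streams
# (hypothetical choices, not the HALO system's actual feature set).
import numpy as np

def affect_features(skin_conductance, heart_ibi, emg, fs=100.0):
    """Summarize raw signals over a window into candidate affect features.

    skin_conductance : galvanic skin response samples (microsiemens)
    heart_ibi        : inter-beat intervals (s)
    emg              : rectified muscle activity samples (mV)
    """
    return {
        "scl_mean": float(np.mean(skin_conductance)),            # tonic arousal
        # crude count of rapid conductance rises per second
        "scr_rate": float(np.sum(np.diff(skin_conductance) > 0.05) * fs / len(skin_conductance)),
        "hr_mean": float(60.0 / np.mean(heart_ibi)),              # beats per minute
        "hrv_rmssd": float(np.sqrt(np.mean(np.diff(heart_ibi) ** 2))),
        "emg_rms": float(np.sqrt(np.mean(emg ** 2))),             # muscle tension
    }

# Synthetic 10-second window just to exercise the function.
rng = np.random.default_rng(0)
print(affect_features(2.0 + 0.1 * rng.standard_normal(1000),
                      0.8 + 0.05 * rng.standard_normal(12),
                      0.2 * np.abs(rng.standard_normal(1000))))
```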
To validate the HALO concept, it was implemented in two use-cases – both showcasing HALO’s value in information network environments where attention is highly fragmented: the navigation of streaming media on a computer or portable device and background communication in distributed meetings. The results of the research included new lightweight affect sensing technologies, tactile displays and interaction techniques. This work complements and applies research in the areas of communications, haptics, and biometric sensing.
For more information on this project, please refer to my Master's thesis, which can be found in the UBC cIRcle archives.
[1] C. D. Wickens and J. G. Hollands, Engineering Psychology and Human Performance, 3rd ed. Prentice Hall, 1999.
[2] M. Pantic and L. J. M. Rothkrantz, “Toward an affect-sensitive multimodal human-computer interaction,” Proceedings of the IEEE, vol. 91, no. 9, pp. 1370-1390, Sep. 2003.
[3] S. Brewster and L. M. Brown, “Tactons: structured tactile messages for non-visual information display,” Proceedings of the fifth conference on Australasian user interface, vol. 28, pp. 15-23, 2004.

My research focuses on developing communication mechanisms for human-robot manipulation interaction. I enjoy working in detail-oriented, multidisciplinary teams to translate research from HRI and computer vision into mission-critical systems. Thanks to my multidisciplinary background, I am able to turn a novel idea into a functional prototype.
In recent years, robots have started to migrate from industrial settings to unstructured human environments; examples include home robotics, search and rescue robotics, assistive robotics, and service robotics. However, this migration has been slow, with only a few successes. One key reason is that current robots do not have the capacity to interact well with humans in dynamic environments. Finding natural communication mechanisms that allow humans to interact and collaborate with robots effortlessly is a fundamental research direction for integrating robots into our daily lives. In this thesis, we study pointing gestures for cooperative human-robot manipulation tasks in unstructured environments. By interacting with a human, the robot can solve tasks that are too complex for current artificial intelligence agents and autonomous control systems. Inspired by human-human manipulation interaction, in particular how humans use pointing and gestures to simplify communication during collaborative manipulation tasks, we developed three novel non-verbal, pointing-based interfaces for human-robot collaboration.
1) Spatial pointing interface: Human and robot are collocated and communicate through gestures. We studied human pointing gestures in the context of human manipulation and, using computer vision, quantified the accuracy and precision of human pointing in household scenarios. Furthermore, we designed a robot and vision system that can see, interpret and act using a gesture-based language.
2) Assistive vision-based interface: We designed an intuitive 2D image-based interface for persons with upper-body disabilities to manipulate everyday household objects through an assistive robotic arm (human and robot are collocated, sharing the same environment). The proposed interface reduces operation complexity by providing different levels of autonomy to the end user.
3) Vision-force interface for path specification in tele-manipulation: This remote visual interface allows a user to specify, on-line, a path constraint for a remote robot. Using the proposed interface, the operator can guide and control a 7-DOF remote robot arm along the desired path using only 2-DOF.
We validate each of the proposed interfaces through user studies. The proposed interfaces explore the important direction of letting robots and humans work together and the importance of a good communication channel/interface during the interaction. Our research involved the integration of several knowledge areas; in particular, we studied and developed algorithms for vision control, object detection, object grasping, object manipulation and human-robot interaction.
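As a simplified illustration of how a pointing gesture can be turned into a target estimate (a geometric sketch with an assumed eye-to-fingertip ray and table plane, not the exact pipeline used in the thesis):

```python
# Hypothetical sketch of interpreting a pointing gesture: cast a ray from the
# eye through the fingertip and intersect it with a horizontal table plane to
# estimate the referenced target.
import numpy as np

def pointing_target(eye, fingertip, table_height=0.75):
    """Intersect the eye->fingertip ray with the plane z = table_height."""
    eye, fingertip = np.asarray(eye, float), np.asarray(fingertip, float)
    direction = fingertip - eye
    if abs(direction[2]) < 1e-9:
        return None                        # ray parallel to the table
    t = (table_height - eye[2]) / direction[2]
    if t <= 0:
        return None                        # table is behind the pointer
    return eye + t * direction             # estimated 3D target on the table

# Example: a standing user points forward and down toward the table.
print(pointing_target(eye=[0.0, 0.0, 1.6], fingertip=[0.3, 0.0, 1.4]))
```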

People share spaces and objects with each other every day. When conflicts regarding access to these shared resources occur, people communicate with each other to negotiate a solution. But what should a robot do when such conflicts occur between a human and a robotic assistant? Answers to this question depend on the context of the situation. In order for robots to be successfully deployed in homes and workplaces, it is important for robots to be equipped with the ability to make socially and morally acceptable decisions about the conflict at hand. However, robots today are not very good at making such decisions. The objective of my research is to investigate an interactive paradigm of human-robot conflict resolution that does not involve complicated, artificial moral decision making. I am currently working on a robotic system that can communicatively negotiate about resource conflicts with its human partner using nonverbal gestures.
Studies suggest that people feel more positively toward robots that work with people rather than those that replace them. This means that in order to create robots that can collaborate and share tasks with humans, human-human interaction dynamics must be understood – key components of which could be replicated in human-robot interaction.
My master’s research project focused on how a simple non-verbal gesture (like the jerky, hesitant motion of your hand when you and another person reach for the same last piece of chocolate at the same time) can be superimposed on the functional reaching motions of a robot, so that the robot can express its uncertainty to human users. This research project led to the development of a characteristic motion profile, called the Acceleration-based Hesitation Profile (AHP), which a robotic manipulator can use to generate humanlike hesitation motions as a response to resource conflicts (e.g., reaching for the same thing at the same time).
Take a look at how the designed hesitations look in contrast to abrupt collision avoidance responses:
[Video: designed hesitation responses (AHP)]
[Video: abrupt stopping responses]
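For a rough sense of the difference, here is a hand-tuned, illustrative stand-in for a hesitation-like response (not the published AHP parameterization): instead of stopping abruptly, the end-effector brakes sharply, pulls back slightly, and then holds.

```python
# Illustrative hesitation-like acceleration profile (hand-tuned shape,
# not the AHP itself): decelerate, briefly retract, then hold.
import numpy as np

def hesitation_acceleration(t, t_trigger=0.5, a_brake=-4.0, a_retract=-1.0):
    """Piecewise acceleration (m/s^2) after a conflict is detected at t_trigger."""
    if t < t_trigger:
        return 0.0                     # nominal reach at constant velocity
    tau = t - t_trigger
    if tau < 0.10:
        return a_brake                 # sharp braking cancels the reach speed
    if tau < 0.20:
        return a_retract               # gentle pull-back that reads as "hesitant"
    return 0.0                         # hold until the conflict is resolved

dt, v = 0.01, 0.4                      # 0.4 m/s nominal reach speed
for t in np.arange(0.0, 1.0, dt):
    v += hesitation_acceleration(t) * dt
print(f"velocity after hesitation: {v:.2f} m/s")  # slight pull-back, not a hard stop
```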




Supervisors: Drs. Elizabeth Croft, Mike Van der Loos, and Jean-Sébastien Blouin
Researchers have suggested different objectives for our neural balance controller: minimizing sway of our center of pressure, center of mass or head, or minimizing motor effort. Optimal control is an attractive architecture for modelling balance because it can achieve a weighted combination of these control objectives and includes mechanisms for controller adaptation. However, it has yet to be shown that human balance control is actually optimal. For my research, I am testing whether human balance control is optimally adaptive, using manipulated balance dynamics simulated by a robotic balance platform.
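As a minimal sketch of the optimal-control framing (a single-link inverted-pendulum approximation with assumed anthropometric values and cost weights, not my experimental model), an LQR controller trades off sway against motor effort:

```python
# LQR sketch for a single-link inverted-pendulum model of quiet standing
# (assumed parameters and weights, for illustration only).
import numpy as np
from scipy.linalg import solve_continuous_are

m, h, g = 70.0, 0.9, 9.81              # body mass (kg), CoM height (m), gravity
I = m * h ** 2                         # point-mass inertia about the ankle

# Linearized dynamics: state x = [sway angle, angular velocity], input u = ankle torque
A = np.array([[0.0, 1.0],
              [m * g * h / I, 0.0]])
B = np.array([[0.0],
              [1.0 / I]])

Q = np.diag([500.0, 50.0])             # penalty on sway and sway velocity
R = np.array([[0.01]])                 # penalty on motor effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain: u = -K x
print("LQR gain:", K)
```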

My goal is to create a bedsheet that has flexible, wireless sensing technology to measure physiological signals of whoever is on it. Measurements for heart rate, pulse oximetry, respiratory rate, and body position will help with the diagnosis of sleep conditions for children with neurodevelopmental disorders.
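As a rough sketch of the kind of processing such a sheet would need (assumed signal shape and parameters, not the actual sensor hardware or algorithms), heart rate can be estimated from a pressure or PPG-like signal by peak detection:

```python
# Illustrative heart-rate estimation from an evenly sampled pressure/PPG-like
# signal (synthetic data and assumed thresholds, not the sheet's real pipeline).
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(signal, fs):
    """Estimate beats per minute from peak-to-peak intervals."""
    # require peaks at least 0.4 s apart (caps detection at 150 bpm)
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return None
    ibi = np.diff(peaks) / fs            # inter-beat intervals in seconds
    return 60.0 / float(np.mean(ibi))

# Synthetic 10 s test signal: a 1.2 Hz "heartbeat" plus noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(heart_rate_bpm(sig, fs))           # approximately 72 bpm
```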

Supervisors: Dr. Machiel Van der Loos and Dr. Elizabeth Croft.
I am currently involved in the FEATHERS project. My research focuses on investigating the effect of integrating vibrotactile feedback into a rehabilitation therapy system that corrects users’ movements.

Standing balance is controlled by several inputs, including vision, vestibular sense, and ankle proprioception. Research studies in this field actively engage and manipulate these input mechanisms to examine their effects on the balance output, mainly muscle actuation in the lower limbs. While significant progress has been made, it is often difficult to isolate a single input and measure its effect on the output. The unique Robot for Interactive Sensor Engagement and Rehabilitation (RISER) has been developed in the UBC CARIS laboratory to control each sense independently, both to further our understanding of human balance control and to present new possibilities for the control of bipedal robots. We intend to use this system and the strategies developed to help safely rehabilitate people who have lost the ability to balance.
Researchers in our lab examine the human balance systems involved in maintaining anterior-posterior standing balance using a unique approach: subjects stand on a six-axis force plate mounted on a six-axis Stewart platform. The subjects are secured to the platform, so they cannot move independently of it. The forces that the subject applies to the force plate are fed back to the platform controller, creating a simulation of standing balance in which the subject has no risk of falling. Immersive 3D stereo display goggles provide visual balance cues, and galvanic vestibular stimulation (GVS) can be employed to produce vestibular input. Additionally, a two-axis ‘ankle-tilt’ system has been mounted on top of the platform to control ankle angle in the sagittal plane. This decouples ankle proprioception from vestibular input, as the ankles can be moved independently of the head.
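A minimal sketch of the closed loop this creates (assuming a single-link inverted-pendulum model and made-up subject parameters, not the RISER implementation): the ankle torque measured by the force plate drives a simulated body, and the resulting sway is what the platform renders back to the subject.

```python
# Closed-loop balance simulation sketch: measured ankle torque -> simulated
# inverted pendulum -> sway angle commanded to the platform (illustrative only).
import numpy as np

m, h, g, dt = 75.0, 0.95, 9.81, 0.001      # hypothetical subject, 1 kHz loop
I = m * h ** 2                             # point-mass inertia about the ankle

def simulate_step(theta, omega, ankle_torque):
    """Advance the simulated body one time step (theta = forward sway, rad)."""
    alpha = (m * g * h * np.sin(theta) - ankle_torque) / I
    omega += alpha * dt
    theta += omega * dt
    return theta, omega                    # new sway angle -> platform command

# Stand in for the subject with a simple proportional-derivative ankle response.
theta, omega = np.deg2rad(1.0), 0.0        # start with 1 degree of forward lean
for _ in range(1000):                      # 1 s of simulated standing
    torque = 1200.0 * theta + 300.0 * omega
    theta, omega = simulate_step(theta, omega, torque)
print(f"sway after 1 s: {np.rad2deg(theta):.2f} deg")
```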

In the ongoing effort to make robots more humanesque, studying how people move and perform actions is a necessity. However, dynamic data collection is tricky when it comes to humans, due to a distinct lack of built-in software and USB ports. Happily, things have just gotten a whole lot easier for us in the CARIS Lab with the installation of the new Open Stage motion capture system in room X209.
Open Stage uses colour differentiation to generate a voxel (3D pixel) cloud of your subject, from which a wire-frame skeleton is derived. Translation and rotation data are captured for 21 joints on the skeleton and then stored in MATLAB as matrices. And the best part: Open Stage is markerless, so just step into the capture area, strike a pose, and let the magic* begin!
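As a small example of the kind of post-processing this enables (hypothetical joint names and data layout, not the Open Stage export format), a joint angle can be computed from three captured joint positions:

```python
# Compute a joint angle from captured 3D joint positions (illustrative layout,
# not the actual Open Stage/MATLAB export).
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# One captured frame: shoulder, elbow, wrist positions in metres.
shoulder, elbow, wrist = [0.0, 0.0, 1.4], [0.0, 0.3, 1.4], [0.25, 0.3, 1.4]
print(f"elbow flexion: {joint_angle(shoulder, elbow, wrist):.1f} deg")  # 90.0
```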

CHARM is a large multi-disciplinary, multi-institutional project in collaboration with General Motors of Canada (GM), which aims to advance safe human-robot interaction (HRI) in vehicle manufacturing industries. We investigate (1) robotic technology development: communication, control, and perception; and (2) system design methodology: interaction design, information coordination (situational awareness), and integration.
In CHARM, I initiate and conduct collaborative research on human-robot interaction design with other members of the team, manage documentation and reporting, and coordinate the project between UBC and the rest of the CHARM team.


Handing over objects is a basic routine in any cooperative scenario. We humans perform many handovers in our everyday lives and, even though we rarely think about each one, we generally execute them efficiently and with ease. However, object handover is still a challenging task for many current robot systems. When handing over an object to a person, it is very important for the robot to time the release of the object carefully. Letting go too soon could result in dropping the object, and letting go too late may result in the receiver pulling very hard on the object.
The goal of my research is to teach robots how to hand over objects to humans safely, efficiently, and intuitively, through understanding the haptic interaction in human-to-human handovers. By enabling robots to perform handovers well, we can allow for more natural human-robot interaction.
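As a simplified illustration of the release-timing problem (illustrative thresholds and a hypothetical should_release helper, not the controller developed in this research), the robot could open its gripper once the receiver has taken up most of the object's weight:

```python
# Simplified release rule for a robot-to-human handover: release once the load
# force felt at the robot's wrist has dropped, i.e. the receiver is supporting
# most of the object (illustrative thresholds only).
def should_release(load_force, object_weight, grip_contact, take_up_ratio=0.8):
    """Decide whether the robot should open its gripper.

    load_force    : current vertical force (N) supported by the robot
    object_weight : weight of the object (N)
    grip_contact  : True if the receiver's grasp has been detected
    """
    transferred = 1.0 - load_force / object_weight
    return grip_contact and transferred >= take_up_ratio

# The receiver has grasped the 5 N object and is now supporting 4.5 N of it.
print(should_release(load_force=0.5, object_weight=5.0, grip_contact=True))  # True
```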


Robots are quickly becoming incorporated into our daily lives. The development of prototype robotic assistants such as the Willow Garage PR2 and the NASA-GM Robonaut, and the rise of commercial robotic products such as the Roomba, have demonstrated both interest in and applications for robots that can function successfully in human environments. Important to the successful adoption of robots in human workspaces is the ability of the robot to work in semi-structured and even unstructured environments, which are far different from current robotic workcells. Enabling this move is ongoing research in vision-guided robot control, or visual servoing, which allows robots to operate within the “vision-based” world that humans work in. Almost all robot assistants to date incorporate one or more vision systems, typically a camera mounted on the robot.
One common problem associated with using a camera as the feedback sensor is losing sight of the object that the camera was viewing. In surveillance, a suspect may run away from the camera field of view. In a rescue mission an obstacle may occlude the victim from the camera. In such situations the robot needs to acquire new data to locate the lost object. The new data could be obtained from other sensor platforms if available; alternatively the robot could acquire new data by searching for the target, based on the past data it has collected.
Irrespective of the visual task at hand prior to the target being lost, we want robots to find the lost target efficiently and then robustly locate it within a safe region of the acquired image. Search efficiency requires a high-speed search along an optimized trajectory, while robustness requires a cautious transition between the completed search and the restarted visual task once the target is back in view. This research will equip robots with an algorithm to handle lost-target scenarios and switch back to their visual tasks autonomously.
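A schematic sketch of this behaviour (illustrative gains and logic only, not the proposed search algorithm): track the target while it is visible, otherwise bias a slower, cautious scan toward where it was last seen.

```python
# Schematic lost-target handling for a camera on a pan-tilt unit
# (illustrative gains and logic, not the actual search algorithm).
def camera_command(target_px, last_seen_px, image_center=(320, 240), gain=0.002):
    """Return a (pan, tilt) velocity command in rad/s.

    target_px    : (u, v) pixel location of the target, or None if lost
    last_seen_px : last (u, v) where the target was observed
    """
    if target_px is not None:
        # Visual servoing: drive the target toward the image centre.
        du = target_px[0] - image_center[0]
        dv = target_px[1] - image_center[1]
        return (-gain * du, -gain * dv)
    # Target lost: bias the search toward where it was last seen.
    du = last_seen_px[0] - image_center[0]
    dv = last_seen_px[1] - image_center[1]
    return (-0.5 * gain * du, -0.5 * gain * dv)   # slower, cautious re-acquisition

print(camera_command(target_px=None, last_seen_px=(500, 240)))  # pan toward last sighting
```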
Support


Supervisors: Drs. Mike Van der Loos and Jaimie Borisoff
My project involves the development of a virtual prototyping tool using OpenSim for the evaluation of new assistive devices for those with mobility impairments. In particular, this tool will be used to study the power requirements of a new mobility device which aims to merge the benefits of a manual wheelchair with those of a walking exoskeleton.
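As a back-of-the-envelope example of the kind of quantity this tool will estimate (synthetic torque and velocity traces, not OpenSim output), mean positive mechanical power at a joint can be computed as:

```python
# Mean positive mechanical power of one actuated joint over a cycle
# (synthetic data for illustration only).
import numpy as np

def mean_positive_power(torque, ang_vel):
    """Mean positive mechanical power (W) from torque (N*m) and velocity (rad/s)."""
    instantaneous = torque * ang_vel                      # W at each sample
    return float(np.mean(np.clip(instantaneous, 0.0, None)))

# Synthetic 1 s propulsion cycle sampled at 100 Hz.
t = np.arange(0.0, 1.0, 0.01)
torque = 30.0 * np.sin(np.pi * t)                         # N*m at the drive joint
ang_vel = 2.0 * np.sin(np.pi * t)                         # rad/s
print(f"mean positive power: {mean_positive_power(torque, ang_vel):.1f} W")  # ~30 W
```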
