Postdoctoral Fellow

Ph.D. Thesis: Research on Affordance-Focused Learning and Generalization through Observation of Proper Handovers and Object Usages in Robot-Human Interactions

Object handover is a task that arises frequently in many cooperative scenarios. Therefore, it is crucial that robots perform handovers well when working with people. However, determining the proper handover method for an object is a difficult problem, since the method varies depending on each object’s affordances. Towards enabling effective human-robot cooperation, this thesis contributes a framework that enables robots to automatically determine handover methods for various objects by observing human handovers and object usages.

This thesis first documents a user study conducted to characterize and compare the handover orientations used by humans in different conditions. It puts forth the novel idea of object affordance axes for identifying patterns in handover orientations, and a distance minimizing method for computing mean handover orientation from a set of observations.
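Distance-minimizing averaging of orientations can be sketched as follows. This is a minimal illustration using one common approach, the eigen-average of unit quaternions (the rotation minimizing the sum of squared chordal distances to the observations); the thesis's exact formulation and its affordance-axis representation may differ.

```python
import numpy as np

def mean_orientation(quaternions):
    """Estimate a mean orientation from unit quaternions by minimizing the
    sum of squared chordal distances (eigen-average). Because the outer
    product q q^T is invariant to the sign of q, antipodal quaternions
    (q and -q, the same rotation) need no special alignment."""
    M = np.zeros((4, 4))
    for q in quaternions:
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)        # ensure unit norm
        M += np.outer(q, q)
    # The mean is the eigenvector of M with the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]                # eigh sorts eigenvalues ascending

# Example: observations clustered near the identity rotation (w, x, y, z).
obs = [[1.0, 0.0, 0.0, 0.0], [0.998, 0.05, 0.0, 0.0], [-1.0, 0.0, 0.0, 0.0]]
mean_q = mean_orientation(obs)
```

Note that the `-1` sign flip in the last observation does not disturb the average, which is why this formulation is convenient for orientation data.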

Next, this thesis presents an object grouping and classification method based on observed object usage for generalizing learned handover methods to new objects. Until now, a demonstrated method for generalizing handover methods to new objects has been lacking. The presented method focuses on a set of action features extracted from the movement patterns and inter-object interactions observed during usage. An experiment demonstrates the effectiveness of the method at grouping objects, classifying new objects, and computing proper handover methods for them.
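Generalization by grouping over action features can be sketched as follows. The nearest-mean rule and the two-dimensional feature vectors here are illustrative assumptions, not the thesis's exact features or classifier.

```python
import numpy as np

def classify_by_action_features(new_feats, groups):
    """Assign a new object to the known group whose mean action-feature
    vector is nearest in Euclidean distance; the new object then inherits
    that group's learned handover method. `groups` maps a group name to a
    list of feature vectors observed for objects in that group."""
    best_group, best_dist = None, float("inf")
    for name, feats in groups.items():
        centroid = np.mean(feats, axis=0)            # group's mean features
        dist = np.linalg.norm(np.asarray(new_feats) - centroid)
        if dist < best_dist:
            best_group, best_dist = name, dist
    return best_group

# Hypothetical usage: two feature dimensions, two learned groups.
groups = {"cups": [[0.9, 0.1], [0.8, 0.2]], "knives": [[0.1, 0.9], [0.2, 0.8]]}
label = classify_by_action_features([0.85, 0.15], groups)
```

Once the new object is assigned to a group, the handover method learned for that group (for example, a mean handover orientation) can be reused directly.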

The described framework for learning and generalizing handover methods is implemented on a Kawada Industries HRP2V robot, and this thesis also documents the verification experiments. The implementation overcomes the robot-perception challenge of identifying a held object’s pose at handover by detecting the object in its pre-occluded state and tracking its pose using a sequential Monte Carlo method. Results show that the framework allows robots to learn handover methods from demonstrations and to compute proper handover methods for new objects. This is the first demonstrated system capable of automatically learning and generalizing handover methods from observations. Finally, integration into a household service robot application shows how this work can enhance the capabilities of robots working in the real world by enabling them to work effectively with humans.
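The predict-weight-resample cycle of a sequential Monte Carlo (particle filter) tracker can be sketched as follows. For clarity this toy uses a 1-D state and a Gaussian likelihood; the actual system tracks a full 6-DOF object pose with an image-based likelihood, so the state, motion model, and observation model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, motion_std=0.05, obs_std=0.1):
    """One predict-weight-resample cycle of a sequential Monte Carlo tracker
    on a 1-D state (a stand-in for a 6-DOF object pose)."""
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: likelihood of the observation under each particle hypothesis.
    weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(-1.0, 1.0, size=500)   # broad initial belief
for z in [0.2, 0.25, 0.3, 0.32]:               # noisy pose observations
    particles = particle_filter_step(particles, z)
estimate = particles.mean()                    # point estimate of the state
```

The broad initial belief plays the role of the detection at the pre-occluded state: once the filter is seeded, it can keep tracking through partial occlusion by the holder's hand, since only the likelihood term depends on what is visible.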

By enabling better human-robot object handovers, this thesis contributes towards improving the interaction between humans and robots, allowing safer, more natural, and more efficient human-robot cooperation.

M.A.Sc. Thesis: A Human-Inspired Controller for Robot-Human Object Handovers – A Study of Grip and Load Forces in Handovers and the Design and Implementation of a Novel Handover Controller

Handing over objects is a common basic task that arises between people in many cooperative scenarios. On a daily basis, we effortlessly and successfully perform countless unscripted handovers without any explicit communication. However, handing over an object to a person is a challenging task for robotic “hands”, and the resulting interaction is often unnatural. To improve human-robot cooperation, the work described in this thesis has led to the design of a human-inspired handover controller based on analysis and characterization of the haptic interaction during human-to-human object handover.

The first experiment in this thesis documents novel experimental work done to measure the dynamic interaction in human-human handovers. The grip forces and load forces experienced by the giver and the receiver during a handover are examined, and the key features are identified. Based on these experimental results, guidelines for designing human-robot handovers are proposed. Next, this thesis describes a handover controller model that enables robots to hand over objects to people in a safe, efficient, and intuitive manner, and an implementation of the handover controller on a Willow Garage PR2 robot is documented. Finally, a second experiment is presented, which compares various tunings of the novel controller in a user study. Results show that the novel controller yields more efficient and more intuitive robot-to-human handovers when compared to existing handover controllers.

My research focuses on developing communication mechanisms for human-robot manipulation interaction. I enjoy working in detail-oriented, multidisciplinary teams to translate research from HRI and computer vision into mission-critical systems. Thanks to my multidisciplinary background, I have the capacity to turn a novel idea into a functional prototype.

Pointing gestures for Cooperative Human-Robot Manipulation Tasks in Unstructured Environments

In recent years, robots have started to migrate from industrial settings to unstructured human environments; examples include home robotics, search and rescue robotics, assistive robotics, and service robotics. However, this migration has proceeded at a slow pace and with only a few successes. One key reason is that current robots do not have the capacity to interact well with humans in dynamic environments. Finding natural communication mechanisms that allow humans to interact and collaborate with robots effortlessly is a fundamental research direction for integrating robots into our daily living. In this thesis, we study pointing gestures for cooperative human-robot manipulation tasks in unstructured environments. By interacting with a human, the robot can solve tasks that are too complex for current artificial intelligence agents and autonomous control systems. Inspired by human-human manipulation interaction, in particular how humans use pointing and gestures to simplify communication during collaborative manipulation tasks, we developed three novel non-verbal, pointing-based interfaces for human-robot collaboration.

1) Spatial pointing interface: In this interface, both human and robot are collocated, and communication is carried out through gestures. We studied human pointing gesturing in the context of human manipulation and, using computer vision, we quantified the accuracy and precision of human pointing in household scenarios. Furthermore, we designed a robot and vision system that can see, interpret, and act using a gesture-based language.

2) Assistive vision-based interface: We designed an intuitive 2D image-based interface for persons with upper-body disabilities to manipulate daily household objects through an assistive robotic arm (both human and robot are collocated, sharing the same environment). The proposed interface reduces operation complexity by providing different levels of autonomy to the end user.

3) Vision-force interface for path specification in tele-manipulation: This is a remote visual interface that allows a user to specify, in an on-line fashion, a path constraint for a remote robot. By using the proposed interface, the operator can guide and control a 7-DOF remote robot arm through the desired path using only 2-DOF.

We validated each of the proposed interfaces through user studies. The proposed interfaces explore the important direction of letting robots and humans work together and the importance of using a good communication channel/interface during the interaction. Our research involved the integration of several knowledge areas. In particular, we studied and developed algorithms for vision control, object detection, object grasping, object manipulation, and human-robot interaction.


Ph.D. Candidates and Students

Ph.D. Thesis: Design and development of a mobility assistive technology to improve autonomy of wheeled mobility assistive device users

Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff

There are a large number of people around the world who rely on wheeled mobility assistive devices (WMADs) to perform their daily life activities. The use of WMADs impacts various aspects of people’s lives, including their personal autonomy. In many cases, autonomy – that is, people’s choices and control over what they want to do – is determined by the type of mobility assistive device they are using. Therefore, it is essential to recognize, assess, and address the true autonomy-related needs of mobility device users in the process of assistive device development.
In my research, I’m reviewing the literature to identify the main contributing factors to the autonomy of WMAD users. Next, I compare the design and performance characteristics of existing WMADs across these factors. This knowledge provides insight into the existing gap between the users’ needs and what is available to them. To address this gap, I plan to establish an autonomy-based framework for mobility assistive technology development. Use of this framework could lead to the design and development of mobility assistive devices that provide a more balanced sense of autonomy to their users.

M.A.Sc. Thesis: Developing Control Strategies to Mitigate Injury after Falling with a Lower Limb Exoskeleton

Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff

Powered lower limb exoskeletons (LLEs) are wearable robotic aids that provide mobility assistance for people with mobility impairments. Despite their advanced design, LLEs are still far from being effective assistive devices for performing activities of daily living. The main challenge in the operation of an LLE is ensuring that balance is maintained. However, maintaining an upright stance is not always achievable, and regardless of user skill and training, falls will inevitably occur. Currently, no control strategy has been developed or implemented in LLEs to help reduce the user’s risk of injury in the case of an unexpected fall.
In this thesis, an optimization methodology was developed and used to create a safer strategy for exoskeletons falling backwards in a simulation environment. Given the data available on the biomechanics of human falls, the optimization methodology was first developed to study falls with simulation parameters characteristic of healthy people. The resulting optimal fall strategy had kinematic and dynamic characteristics similar to the findings of previous studies on human falls; rapid knee flexion at the onset of the fall and knee extension prior to ground contact are examples of these characteristics. Following this, the optimization methodology was extended to include the characteristics of an exoskeleton. The results revealed that hip impact velocity was reduced by 58% when the optimal fall strategy was employed, compared to the case where the exoskeleton fell with locked joints. It was also shown that in both the optimal human and human-exoskeleton falls, the models contacted the ground with an upright trunk and near-zero trunk angular velocity to avoid head impact. These results achieved the thesis goal of developing an effective safe-fall control strategy. The strategy was then implemented in a prototype exoskeleton test device. The experimental results validated the simulation outcomes and support the feasibility of implementing this control strategy. Future studies are needed to further examine the effectiveness of applying this strategy in an actual LLE.

Upper-body Motion Coordination after Stroke: Insights from Motor Synergies

Ph.D. Thesis: Human-Robot Shared Object Manipulation

Current industrial robots lack the abilities (dexterity, complex sensing, and cognitive processes) possessed by skilled workers and needed to perform many manufacturing tasks such as product assembly, inspection, and packaging. For example, in the automotive industry, robots are used to perform tasks that are entirely repeatable and require little or no human intervention, such as painting, welding, and pick-and-place operations. Such robots work in confined spaces isolated from human workers, as improper interactions could result in severe injury or death. However, since robots have optimized production efficiency under these conditions, industries are now directing efforts to achieve similar improvements in worker efficiency through the development of safe robotic assistants that are able to co-operate with workers.

The research project proposed in this document seeks to exploit this emerging paradigm shift for manufacturing systems. It is in this context that I propose to develop robot controllers and intuitive interaction strategies to facilitate cooperation between intelligent robotic assistants and non-expert human workers. It is expected that this work will focus on developing motion control models for interactions between different participants which involve safe contact, sharing, and hand-off of common payloads. These control systems will allow intelligent robots to co-operate with non-expert workers safely, intuitively, and effectively within a shared workspace.

To attain this goal, I intend to draw on elements of safe, collaborative human-robot interaction (HRI) explored through previously conducted research [2, 3] to develop a preliminary motion control framework. Much of the hardware, communication algorithms, and generalized interaction strategies necessary for designing this HRI already exist. However, a wide range of technological advancements necessary to support specific task-driven HRI, such as real-time gesture recognition, interaction role negotiation, and robust safety systems [1], must still be developed. Thus, studies investigating typical human-human collaborative interaction methods will be used to supplement this work. Specific focus will be given to examining how humans use non-verbal communication to negotiate leading and following roles. Several basic gestures and behaviors will be studied, including co-operative lifting, hand-offs, and trajectory control of objects. I aim to leverage these findings by developing a library of motion control strategies for mobile manipulator-type robots which are safe, ergonomic, and allow for the efficient use of the worker’s and robotic assistant’s skills and abilities.

The control models constructed from these methods will be applied in the context of a specific use case representative of a typical production operation. The use case will consist of non-value added activities within an automotive manufacturing process having component tasks deemed to be complex and diverse. Motion control strategies will be evaluated and refined on a robot platform through human participant studies involving component tasks typical of those seen in the use case. These control strategies will be assessed both subjectively as they relate to the user (e.g., intuitiveness, perceived robot intelligence, ease of use) and objectively through performance measures (e.g., time trials).

The significance of this research lies in the advancement of HRI and the development and deployment of a new class of industrial robots intended to work alongside human counterparts beyond the laboratory. Novel forms of admittance control will be developed with the explicit intention of driving HRI, cooperation and shared object handling. This work is expected to produce useful data and methods contributing to the development and application of safe, collaborative HRI and human-in-the-loop control systems. Although this research is directed towards applications in manufacturing, the knowledge acquired will be extendable to HRI in other domains including rehabilitation, homecare and early child development.

[1] Breazeal, C. et al. “Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork”, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 383-388, 2005.
[2] Fischer, K., Muller, J.P., Pischel, M., “Unifying Control in a Layered Agent Architecture,” Int. Joint Conf. on AI (IJCAI’95), Agent Theory, Architecture and Language Workshop, 1995.
[3] Moon, A., Panton, B., Van der Loos, H.F.M., Croft, E.A., “Safe and Ethical Human-Robot Interaction Using Hesitation Gestures,” IEEE Conf. on Robotics and Automation, pp. 2, May 2010.

M.A.Sc. Thesis: The Haptic Affect Loop

Today, the vast majority of user interfaces in consumer electronic products are based on explicit channels of interaction with human users. Examples include keyboards, gamepads, and various visual displays. Under normal circumstances, these interfaces provide a clear and controlled interaction between user and device [1]. However, problems with this paradigm arise when these interfaces divert a significant amount of their users’ attention from more important tasks. For example, consider a person trying to adjust the car stereo while driving: the driver must divert some attention away from the primary task of driving to operate the device. Explicit interaction with peripheral devices can thus impair the user’s ability to perform primary tasks effectively.

The goal of my research was to design and implement a fundamentally different approach to device interaction. Rather than relying on explicit modes of communication between a user and device, I used implicit channels instead to decrease the device’s demand on the user’s attention. It is well known that human affective (emotional) states can be characterized by psycho-physiological signals that can be measured by off-the-shelf biometric sensors [2]. We proposed to measure user affective response and incorporate these signals into the device’s control loop so that it can recognize and respond to user affect. We also put forward the notion that haptic stimuli delivered through a tactile display can be used as an immediate, yet unobtrusive, channel for a device to communicate to a user that it has responded to their affective state, thereby closing the feedback loop [3]. Essentially, this research defined a model for a unique user-device interface driven by implicit, low-attention communication. It is theorized that this new paradigm will be minimally invasive and will not require the user to be distracted by the peripheral device. We have termed this process – affect recognition leading to changes in device behaviour, which is then signalled back to the user through haptic stimuli – the Haptic-Affect Loop (HALO).

My focus within the HALO concept was on the design and analysis of the overall control loop. This required me to measure, model, and optimize latency, flow, and habituation between the user’s affective state and HALO’s haptic display. A related problem I needed to address was dimensionality: what aspects of a user’s biometrics should be used to characterize affective response? For example, what combination of skin conductance, heart rate, muscle twitch, etc. best indicates that a user is happy or depressed? As an extension of this problem, how can the environmental context surrounding a user be established to calibrate affect recognition – for example, jogging in the park versus working in the office? Similarly, I also needed to specify the dimensionality of the haptic channels that notify the user of device response while maintaining the goal of not distracting the user: where (e.g., back of the neck, fingertip) and with what stimulus (e.g., soft tapping vs. an aggressive buzzer) should the haptic feedback be delivered?

To validate the HALO concept, it was implemented in two use-cases – both showcasing HALO’s value in information network environments where attention is highly fragmented: the navigation of streaming media on a computer or portable device and background communication in distributed meetings. The results of the research included new lightweight affect sensing technologies, tactile displays and interaction techniques. This work complements and applies research in the areas of communications, haptics, and biometric sensing.

For more information on this project, please refer to my Masters thesis, which can be found in the UBC cIRcle archives.

[1] C. D. Wickens and J. G. Hollands, Engineering Psychology and Human Performance, 3rd ed. Prentice Hall, 1999.
[2] M. Pantic and L. J. M. Rothkrantz, “Toward an affect-sensitive multimodal human-computer interaction,” Proceedings of the IEEE, vol. 91, no. 9, pp. 1370-1390, Sep. 2003.
[3] S. Brewster and L. M. Brown, “Tactons: structured tactile messages for non-visual information display,” Proceedings of the fifth conference on Australasian user interface, vol. 28, pp. 15-23, 2004.

M.A.Sc. Candidates

Study of Neurological Disorders Through Acquisition and Analysis of Biosignals from Smart Mechatronic Systems (SleepSmart)

Supervisor: Dr. Mike Van der Loos.

Using a combination of engaging games and robotic orthoses to improve rehabilitation of hemiparetic children afflicted with CP (FEATHERS)

Supervisor: Dr. Mike Van der Loos
