Researchers

Postdoctoral Fellow

Ph.D. Thesis: Research on Affordance-Focused Learning and Generalization through Observation of Proper Handovers and Object Usages in Robot-Human Interactions

Object handover is a common task that arises in many cooperative scenarios, so it is crucial that robots perform handovers well when working with people. However, determining the proper handover method for an object is a difficult problem, since the method varies depending on each object's affordances. Towards enabling effective human-robot cooperation, this thesis contributes a framework that enables robots to automatically determine handover methods for various objects by observing human handovers and object usage.

This thesis first documents a user study conducted to characterize and compare the handover orientations used by humans under different conditions. It puts forth the novel idea of object affordance axes for identifying patterns in handover orientations, and a distance-minimizing method for computing a mean handover orientation from a set of observations.

Next, this thesis presents an object grouping and classification method based on observed object usage for generalizing learned handover methods to new objects. Until now, a demonstrated method for generalizing handover methods to new objects has been lacking. The presented method uses a set of action features extracted from the movement patterns and inter-object interactions observed during usage. An experiment demonstrates the effectiveness of the method at grouping objects, classifying new objects, and computing proper handover methods for them.

The described framework for learning and generalizing handover methods is implemented on a Kawada Industries HRP2V robot, and this thesis also documents the verification experiments. The implementation overcomes the robot perception challenge of identifying a held object's pose at handover by detecting the object in its pre-occluded state and then tracking its pose using a sequential Monte Carlo method. Results show that the framework allows robots to learn handover methods from demonstrations and compute proper handover methods for new objects; this is the first demonstrated system capable of automatically learning and generalizing handover methods from observations. Finally, integration into a household service robot application shows how this work can enhance the capabilities of robots working in the real world by enabling them to work effectively with humans.

By enabling better human-robot object handovers, this thesis contributes towards improving the interaction between humans and robots, allowing safer, more natural, and more efficient human-robot cooperation.

M.A.Sc. Thesis: A Human-Inspired Controller for Robot-Human Object Handovers – A Study of Grip and Load Forces in Handovers and the Design and Implementation of a Novel Handover Controller

Handing over objects is a common basic task that arises between people in many cooperative scenarios. On a daily basis, we effortlessly and successfully perform countless unscripted handovers without any explicit communication. However, handing over an object to a person is a challenging task for robotic “hands”, and the resulting interaction is often unnatural. To improve human-robot cooperation, the work described in this thesis has led to the design of a human-inspired handover controller based on analysis and characterization of the haptic interaction during human-to-human object handover.

This thesis first documents novel experimental work measuring the dynamic interaction in human-human handovers. The grip forces and load forces experienced by the giver and the receiver during a handover are examined, and their key features are identified. Based on these experimental results, guidelines for designing human-robot handovers are proposed. Next, this thesis describes a handover controller model that enables robots to hand over objects to people in a safe, efficient, and intuitive manner, and an implementation of the handover controller on a Willow Garage PR2 robot is documented. Finally, a second experiment is presented, comparing various tunings of the novel controller in a user study. Results show that the novel controller yields more efficient and more intuitive robot-to-human handovers than existing handover controllers.

My research focuses on developing communication mechanisms for human-robot manipulation interaction. I enjoy working in detail-oriented, multidisciplinary teams to translate research from HRI and computer vision into advances for mission-critical systems. Thanks to my multidisciplinary background, I am able to turn a novel idea into a functional prototype.

Pointing Gestures for Cooperative Human-Robot Manipulation Tasks in Unstructured Environments

In recent years, robots have started to migrate from industrial to unstructured human environments; examples include home robotics, search and rescue robotics, assistive robotics, and service robotics. However, this migration has been slow and has produced only a few successes. One key reason is that current robots do not have the capacity to interact well with humans in dynamic environments. Finding natural communication mechanisms that allow humans to interact and collaborate with robots effortlessly is a fundamental research direction for integrating robots into our daily lives. In this thesis, we study pointing gestures for cooperative human-robot manipulation tasks in unstructured environments. By interacting with a human, the robot can solve tasks that are too complex for current artificial intelligence agents and autonomous control systems. Inspired by human-human manipulation interaction, in particular how humans use pointing and gestures to simplify communication during collaborative manipulation tasks, we developed three novel non-verbal, pointing-based interfaces for human-robot collaboration.

1) Spatial pointing interface: In this interface, human and robot are collocated, and communication is carried out through gestures. We studied human pointing gestures in the context of human manipulation and, using computer vision, quantified the accuracy and precision of human pointing in household scenarios. Furthermore, we designed a robot and vision system that can see, interpret, and act using a gesture-based language.

2) Assistive vision-based interface: We designed an intuitive 2D image-based interface that lets persons with upper-body disabilities manipulate daily household objects through an assistive robotic arm (human and robot are collocated, sharing the same environment). The proposed interface reduces operation complexity by providing different levels of autonomy to the end user.

3) Vision-force interface for path specification in tele-manipulation: This remote visual interface allows a user to specify, on-line, a path constraint for a remote robot. Using the proposed interface, the operator can guide and control a 7-DOF remote robot arm along the desired path using only 2 DOF.

We validated each of the proposed interfaces through user studies. The proposed interfaces explore the important direction of letting robots and humans work together and demonstrate the importance of a good communication channel and interface during the interaction. Our research involved the integration of several knowledge areas; in particular, we studied and developed algorithms for vision-based control, object detection, object grasping, object manipulation, and human-robot interaction.

Research Associates

Reducing Compensatory Movements in Stroke Therapy through the Use of Robotic Devices and Augmented Feedback

For stroke survivors, the use of compensatory movements can lead to reduced range of motion, pain, and a pattern of “learned non-use”. A common compensatory movement during upper-limb reaching is trunk displacement. Although this motion has been identified as an important one to reduce, few strategies for addressing the problem have been considered, and the existing ones require physically restraining the person to the back of a chair, making them undesirable for unsupervised therapy. As a result, there is a need for alternative methods that promote correct movement patterns both in the clinic and in the home. Here, technology can act as an enabler, creating new ways of reducing trunk compensation; still, there is a gap in the literature, as trunk compensation has only been investigated as a secondary theme in robotic and computer-aided rehabilitation. Consequently, in this project I will investigate the reduction of trunk compensation using robotic devices and commercially available technology, enabling a focus on the quality of movements in unsupervised therapy. The results of this PhD could later be applied and generalized to other modes of compensation in stroke and other neurologically disabled populations.
Supervisor: Machiel Van der Loos

FEATHERS (Functional Engagement in Assisted Therapy through Exercise Robotics)

Stroke rehabilitation professionals acknowledge that about half of upper-limb functional recovery after stroke is spontaneous. Any remaining recovery results from intensive, repetitive therapy over months, stimulating neuroplastic changes in the brain's motor control pathways. From a human perspective, this is painful, frustrating, hard work. Sustaining a treatment over months requires significant motivation and funding: health plans do not provide sufficient coverage, and motivation is highly dependent on a person's support network and inner drive and is often not adequately tapped.
We are combining low-cost robotic devices, a bimanual training program, social media frameworks such as Facebook Games, and on-line performance sharing between therapy clients and their therapists. This combination of components represents a best-practices approach to bidirectional knowledge transfer, development of technology and design of well-coordinated home-based therapy. We believe that together these approaches will yield interventions for people with stroke and children with hemiparetic cerebral palsy that significantly improve functional ability and lead to improved quality of life.
http://caris.mech.ubc.ca/feathers/project-summary/

PhD Candidates and Students

Combined Mobility Base-Orthosis (COMBO)

Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff

The COMBO design concept merges the best features of walking exoskeletons with the benefits of wheeled mobility to create a novel mobility device with the potential to significantly benefit the lives of people with mobility impairments.

Developing Control Strategies to Mitigate Injury after Falling with a Lower Limb Exoskeleton
Upper-body Motion Coordination after Stroke: Insights from Motor Synergies
Ph.D. Thesis: Human-Robot Shared Object Manipulation

Current industrial robots lack the abilities (dexterity, complex sensing, and cognitive processing) that skilled workers need to perform many manufacturing tasks such as product assembly, inspection, and packaging. In the automotive industry, for example, robots perform tasks that are entirely repeatable and require little or no human intervention, such as painting, welding, and pick-and-place operations. Such robots work in confined spaces isolated from human workers, as improper interactions could result in severe injury or death. Having optimized production efficiency under these conditions, however, industries are now directing efforts towards achieving similar improvements in worker efficiency through the development of safe robotic assistants that are able to co-operate with workers.

The research project proposed in this document seeks to exploit this emerging paradigm shift in manufacturing systems. It is in this context that I propose to develop robot controllers and intuitive interaction strategies to facilitate cooperation between intelligent robotic assistants and non-expert human workers. The work will focus on developing motion control models for interactions between different participants that involve safe contact, sharing, and hand-off of common payloads. These control systems will allow intelligent robots to co-operate with non-expert workers safely, intuitively, and effectively within a shared workspace.

To attain this goal, I intend to draw on elements of safe, collaborative human-robot interaction (HRI) explored in previously conducted research [2, 3] to develop a preliminary motion control framework. Much of the hardware, communication algorithms, and generalized interaction strategies necessary for designing this HRI already exist. However, a wide range of technological advancements necessary to support specific task-driven HRI, such as real-time gesture recognition, interaction role negotiation, and robust safety systems [1], must still be developed. Thus, studies investigating typical human-human collaborative interaction methods will be used to supplement this work, with specific focus on how humans use non-verbal communication to negotiate leading and following roles. Several basic gestures and behaviors will be studied, including co-operative lifting, hand-offs, and trajectory control of objects. I aim to leverage these findings by developing a library of motion control strategies for mobile manipulator-type robots that are safe and ergonomic and allow for the efficient use of the worker's and the robotic assistant's skills and abilities.

The control models constructed from these methods will be applied in the context of a specific use case representative of a typical production operation. The use case will consist of non-value added activities within an automotive manufacturing process having component tasks deemed to be complex and diverse. Motion control strategies will be evaluated and refined on a robot platform through human participant studies involving component tasks typical of those seen in the use case. These control strategies will be assessed both subjectively as they relate to the user (e.g., intuitiveness, perceived robot intelligence, ease of use) and objectively through performance measures (e.g., time trials).

The significance of this research lies in the advancement of HRI and the development and deployment of a new class of industrial robots intended to work alongside human counterparts beyond the laboratory. Novel forms of admittance control will be developed with the explicit intention of driving HRI, cooperation and shared object handling. This work is expected to produce useful data and methods contributing to the development and application of safe, collaborative HRI and human-in-the-loop control systems. Although this research is directed towards applications in manufacturing, the knowledge acquired will be extendable to HRI in other domains including rehabilitation, homecare and early child development.

[1] Breazeal, C. et al. “Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork”, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 383-388, 2005.
[2] Fischer, K., Muller, J.P., Pischel, M., “Unifying Control in a Layered Agent Architecture,” Int. Joint Conf. on AI (IJCAI’95), Agent Theory, Architecture and Language Workshop, 1995.
[3] Moon, A., Panton, B., Van der Loos, H.F.M., Croft, E.A., “Safe and Ethical Human-Robot Interaction Using Hesitation Gestures,” IEEE Conf. on Robotics and Automation, May 2010.

M.A.Sc. Thesis: The Haptic Affect Loop

Today, the vast majority of user interfaces in consumer electronic products are based on explicit channels of interaction with human users. Examples include keyboards, gamepads, and various visual displays. Under normal circumstances, these interfaces provide a clear and controlled interaction between user and device [1]. However, problems with this paradigm arise when these interfaces divert a significant amount of their users' attention from more important tasks. For example, consider a person trying to adjust the car stereo while driving: the driver must divert some attention away from the primary task of driving to operate the device. Explicit interaction with peripheral devices can thus impair the user's ability to perform primary tasks effectively.

The goal of my research was to design and implement a fundamentally different approach to device interaction. Rather than relying on explicit modes of communication between a user and device, I used implicit channels to decrease the device's demand on the user's attention. It is well known that human affective (emotional) states can be characterized by psycho-physiological signals that can be measured by off-the-shelf biometric sensors [2]. We proposed to measure user affective response and incorporate these signals into the device's control loop so that it could recognize and respond to user affect. We also put forward the notion that haptic stimuli delivered through a tactile display can serve as an immediate yet unobtrusive channel for a device to communicate to a user that it has responded to their affective state, thereby closing the feedback loop [3]. Essentially, this research defined a model for a unique user-device interface driven by implicit, low-attention communication. It is theorized that this new paradigm will be minimally invasive and will not require the user to be distracted by the peripheral device. We have termed this process of affect recognition leading to changes in device behaviour, which is then signalled back to the user through haptic stimuli, the Haptic-Affect Loop (HALO).

My focus within the HALO concept was the design and analysis of the overall control loop. This required me to measure, model, and optimize latency, flow, and habituation between the user's affective state and HALO's haptic display. A related problem I needed to address was dimensionality: which aspects of a user's biometrics should be used to characterize affective response? For example, what combination of skin conductance, heart rate, muscle twitch, etc. best indicates that a user is happy or depressed? As an extension of this problem, how can the environmental context surrounding a user be established to calibrate affect recognition (for example, jogging in the park versus working in the office)? Similarly, I also needed to specify the dimensionality of the haptic channel that notifies the user of a device response while maintaining the goal of not distracting the user, addressing where (e.g., back of the neck, fingertip) and with what stimulus (e.g., soft tapping vs. an aggressive buzzer) the haptic feedback should be delivered.

To validate the HALO concept, it was implemented in two use cases, both showcasing HALO's value in information-network environments where attention is highly fragmented: the navigation of streaming media on a computer or portable device, and background communication in distributed meetings. The results of the research include new lightweight affect-sensing technologies, tactile displays, and interaction techniques. This work complements and applies research in the areas of communications, haptics, and biometric sensing.

For more information on this project, please refer to my Masters thesis, which can be found in the UBC cIRcle archives.

[1] C. D. Wickens and J. G. Hollands, Engineering Psychology and Human Performance, 3rd ed. Prentice Hall, 1999.
[2] M. Pantic and L. J. M. Rothkrantz, “Toward an affect-sensitive multimodal human-computer interaction,” Proceedings of the IEEE, vol. 91, no. 9, pp. 1370-1390, Sep. 2003.
[3] S. Brewster and L. M. Brown, “Tactons: structured tactile messages for non-visual information display,” Proceedings of the fifth conference on Australasian user interface, vol. 28, pp. 15-23, 2004.

MASc Candidates

Study of Neurological Disorders Through Acquisition and Analysis of Biosignals from Smart Mechatronic Systems (SleepSmart)

Supervisor: Dr. Mike Van der Loos

Using a combination of engaging games and robotic orthoses to improve rehabilitation of hemiparetic children afflicted with CP (FEATHERS)

Supervisor: Dr. Mike Van der Loos


