PhD Candidates and Students
Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff
The COMBO design concept merges the best features of walking exoskeletons with the benefits of wheeled mobility to create a novel mobility device with the potential to significantly benefit the lives of people with mobility impairments.
People share spaces and objects with each other every day. When conflicts over access to these shared resources occur, people communicate with each other to negotiate a solution. But what should a robot do when such a conflict occurs between a human and a robotic assistant? The answer depends on the context of the situation. For robots to be successfully deployed in homes and workplaces, they must be equipped with the ability to make socially and morally acceptable decisions about the conflict at hand. However, robots today are not very good at making such decisions. The objective of my research is to investigate an interactive paradigm of human-robot conflict resolution that does not involve complicated, artificial moral decision making. I am currently working on a robotic system that can negotiate resource conflicts with its human partner using nonverbal gestures.
Studies suggest that people feel more positively toward robots that work with people rather than those that replace them. This means that in order to create robots that can collaborate and share tasks with humans, human-human interaction dynamics must be understood – key components of which could be replicated in human-robot interaction.
My master’s research project focused on how a simple non-verbal gesture (like the jerky, hesitant motion of your hand when you and another person reach for the same last piece of chocolate at the same time) can be superimposed on the functional reaching motions of a robot, so that the robot can express its uncertainty to human users. This project led to the development of a characteristic motion profile, called the Acceleration-based Hesitation Profile (AHP), that a robotic manipulator can use to generate humanlike hesitation motions in response to resource conflicts (e.g., reaching for the same object at the same time).
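The idea of superimposing a hesitation onto a functional reach can be illustrated with a toy one-dimensional trajectory. This is not the published AHP: the pulse shape, timing, and magnitudes below are invented placeholders that only demonstrate the superposition idea (a zero-net-impulse brake-then-recover pulse added to a nominal reach acceleration):

```python
import numpy as np

# Illustrative sketch, NOT the actual AHP coefficients.
dt = 0.001
t = np.arange(0.0, 2.0, dt)

# Nominal reach: constant acceleration for 1 s, then symmetric braking.
a_nominal = np.where(t < 1.0, 0.5, -0.5)

# Hesitation: a zero-net-impulse brake-then-recover pulse injected when a
# conflict is detected (arbitrarily at t = 0.6 s, lasting 0.4 s).
conflict = (t >= 0.6) & (t < 1.0)
a_hesitate = np.zeros_like(t)
a_hesitate[conflict] = -3.0 * np.sin(2.0 * np.pi * (t[conflict] - 0.6) / 0.4)

a = a_nominal + a_hesitate   # superimposed acceleration profile
v = np.cumsum(a) * dt        # integrate once: velocity
x = np.cumsum(v) * dt        # integrate twice: position
```

Because the pulse has zero net impulse, the hand visibly slows (the "hesitation") during the conflict window but rejoins the nominal velocity profile afterwards, reaching slightly short of where the unperturbed motion would have ended.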
Take a look at how the designed hesitations compare to abrupt collision-avoidance responses.
Designed hesitation responses (AHP):
Abrupt stopping responses:
Current industrial robots lack the abilities (dexterity, complex sensing and cognitive processes) that skilled workers need to perform many manufacturing tasks such as product assembly, inspection and packaging. For example, in the automotive industry, robots are used to perform tasks that are entirely repeatable and require little or no human intervention, such as painting, welding and pick-and-place operations. Such robots work in confined spaces isolated from human workers, as improper interactions could result in severe injury or death. Now that robots have optimized production efficiency under these conditions, however, industries are directing efforts toward achieving similar improvements in worker efficiency through the development of safe robotic assistants that can co-operate with workers.
The research project proposed in this document seeks to exploit this emerging paradigm-shift for manufacturing systems. It is in this context that I propose to develop robot controllers and intuitive interaction strategies to facilitate cooperation between intelligent robotic assistants and non-expert human workers. It is expected that this work will focus on developing motion control models for interactions between different participants which involve safe contact, sharing and hand-off of common payloads. These control systems will allow intelligent robots to co-operate with non-expert workers safely, intuitively and effectively within a shared workspace.
To attain this goal, I intend to draw on elements of safe, collaborative human-robot interaction (HRI) explored through previously conducted research [2, 3] to develop a preliminary motion control framework. Much of the hardware, communication algorithms and generalized interaction strategies necessary for designing this HRI already exist. However, a wide range of technological advancements necessary to support specific task-driven HRI, such as real-time gesture recognition, interaction role negotiation and robust safety systems, must still be developed. Thus, studies investigating typical human-human collaborative interaction methods will be used to supplement this work. Specific focus will be given to examining how humans use non-verbal communication to negotiate leading and following roles. Several basic gestures and behaviors will be studied, including co-operative lifting, hand-offs and trajectory control of objects. I aim to leverage these findings by developing a library of motion control strategies for mobile manipulator-type robots which are safe and ergonomic, and which allow for the efficient use of the worker’s and robotic assistant’s skills and abilities.
The control models constructed from these methods will be applied in the context of a specific use case representative of a typical production operation. The use case will consist of non-value added activities within an automotive manufacturing process having component tasks deemed to be complex and diverse. Motion control strategies will be evaluated and refined on a robot platform through human participant studies involving component tasks typical of those seen in the use case. These control strategies will be assessed both subjectively as they relate to the user (e.g., intuitiveness, perceived robot intelligence, ease of use) and objectively through performance measures (e.g., time trials).
The significance of this research lies in the advancement of HRI and the development and deployment of a new class of industrial robots intended to work alongside human counterparts beyond the laboratory. Novel forms of admittance control will be developed with the explicit intention of driving HRI, cooperation and shared object handling. This work is expected to produce useful data and methods contributing to the development and application of safe, collaborative HRI and human-in-the-loop control systems. Although this research is directed towards applications in manufacturing, the knowledge acquired will be extendable to HRI in other domains including rehabilitation, homecare and early child development.
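A classical building block for the kind of compliant co-manipulation described above is admittance control, in which the robot renders a virtual mass-damper so that forces applied by a worker produce smooth compliant motion. A minimal one-degree-of-freedom sketch follows; the mass, damping, and rate values are arbitrary placeholders, not the project’s actual controller:

```python
def admittance_step(v, force, mass=5.0, damping=20.0, dt=0.002):
    """One control cycle of M*dv/dt + B*v = F_ext, integrated with
    forward Euler; returns the new commanded velocity of the virtual
    mass-damper (illustrative parameter values)."""
    dv = (force - damping * v) / mass * dt
    return v + dv

# A worker pushing with a steady 10 N settles the robot at F/B = 0.5 m/s.
v = 0.0
for _ in range(5000):            # 10 s at a 500 Hz control rate
    v = admittance_step(v, force=10.0)
```

The virtual mass sets how sluggish the robot feels to the worker, while the damping sets the steady-state velocity per unit force; tuning this trade-off is one way such controllers are made to feel safe and intuitive.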
[1] Breazeal, C. et al., “Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 383-388, 2005.
[2] Fischer, K., Muller, J.P., Pischel, M., “Unifying Control in a Layered Agent Architecture,” Int. Joint Conf. on AI (IJCAI’95), Agent Theory, Architecture and Language Workshop, 1995.
[3] Moon, A., Panton, B., Van der Loos, H.F.M., Croft, E.A., “Safe and Ethical Human-Robot Interaction Using Hesitation Gestures,” IEEE Int. Conf. on Robotics and Automation, May 2010.
Today, the vast majority of user interfaces in consumer electronic products are based on explicit channels of interaction with human users. Examples include keyboards, gamepads, and various visual displays. Under normal circumstances, these interfaces provide a clear and controlled interaction between user and device [1]. However, problems with this paradigm arise when these interfaces divert a significant amount of their users’ attention from more important tasks. For example, consider a person trying to adjust the car stereo while driving. The driver must divert some attention away from the primary task of driving to operate the device. Explicit interaction with peripheral devices can thus impair the user’s ability to perform primary tasks effectively.
The goal of my research was to design and implement a fundamentally different approach to device interaction. Rather than relying on explicit modes of communication between a user and a device, I used implicit channels to decrease the device’s demand on the user’s attention. It is well known that human affective (emotional) states can be characterized by psycho-physiological signals that can be measured by off-the-shelf biometric sensors [2]. We proposed to measure user affective response and incorporate these signals into the device’s control loop so that it could recognize and respond to user affect. We also put forward the notion that haptic stimuli delivered through a tactile display can serve as an immediate yet unobtrusive channel for a device to communicate to a user that it has responded to their affective state, thereby closing the feedback loop [3]. Essentially, this research defined a model for a unique user-device interface driven by implicit, low-attention communication. It is theorized that this new paradigm is minimally invasive and does not require the user to be distracted by the peripheral device. We have termed this process of affect recognition leading to changes in device behaviour, which is then signalled back to the user through haptic stimuli, the Haptic-Affect Loop (HALO).
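One pass through such a loop can be caricatured in a few lines. Everything below — the signal names, thresholds, device actions, and tactile cue — is invented purely for illustration; the actual system used richer sensing, classification, and tactile displays:

```python
def classify_arousal(skin_conductance_uS, heart_rate_bpm):
    """Crude two-feature arousal detector (hypothetical thresholds)."""
    score = 0
    if skin_conductance_uS > 5.0:
        score += 1
    if heart_rate_bpm > 90:
        score += 1
    return "high" if score == 2 else ("medium" if score == 1 else "low")

def halo_step(biometrics):
    """One implicit control cycle: sense affect, adapt the device,
    acknowledge through an unobtrusive tactile cue (placeholder actions)."""
    arousal = classify_arousal(**biometrics)
    # The device adapts implicitly (e.g., skips an irritating media track)...
    action = {"high": "skip_track", "medium": "lower_volume", "low": "none"}[arousal]
    # ...and signals its response haptically, closing the feedback loop.
    haptic_cue = "soft_tap" if action != "none" else None
    return action, haptic_cue
```

The point of the sketch is the loop shape: implicit sensing drives the device’s behaviour, and the haptic acknowledgment replaces any screen or button interaction that would demand the user’s attention.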
My focus within the HALO concept was on the design and analysis of the overall control loop. This required me to measure, model and optimize latency, flow and habituation between the user’s affective state and HALO’s haptic display. A related problem I needed to address was dimensionality: what aspects of a user’s biometrics should be used to characterize affective response? For example, what combination of skin conductance, heart rate, muscle twitch, etc. best indicates that a user is happy or depressed? As an extension of this problem, how can the environmental context surrounding a user be established to calibrate affect recognition (for example, jogging in the park versus working in the office)? Similarly, I needed to specify the dimensionality of the haptic channel that notifies the user of the device’s response while maintaining the goal of not distracting the user: where (e.g., the back of the neck or a fingertip) and with what stimulus (e.g., soft tapping vs. an aggressive buzzer) should the haptic feedback be delivered?
To validate the HALO concept, it was implemented in two use-cases – both showcasing HALO’s value in information network environments where attention is highly fragmented: the navigation of streaming media on a computer or portable device and background communication in distributed meetings. The results of the research included new lightweight affect sensing technologies, tactile displays and interaction techniques. This work complements and applies research in the areas of communications, haptics, and biometric sensing.
For more information on this project, please refer to my Master’s thesis, which can be found in the UBC cIRcle archives.
[1] C. D. Wickens and J. G. Hollands, Engineering Psychology and Human Performance, 3rd ed. Prentice Hall, 1999.
[2] M. Pantic and L. J. M. Rothkrantz, “Toward an affect-sensitive multimodal human-computer interaction,” Proceedings of the IEEE, vol. 91, no. 9, pp. 1370-1390, Sep. 2003.
[3] S. Brewster and L. M. Brown, “Tactons: structured tactile messages for non-visual information display,” Proceedings of the Fifth Conference on Australasian User Interface, vol. 28, pp. 15-23, 2004.
For stroke survivors, the use of compensatory movements can lead to a reduced range of motion, pain, and a pattern of “learned non-use”. A common compensatory movement during upper limb reaching is trunk displacement. Although this motion has been identified as an important one to reduce, few strategies for addressing the problem have been considered. The existing strategies require physically restraining the person to the back of a chair, making them undesirable for unsupervised therapy. As a result, there is a need for alternative methods that promote correct movement patterns both in the clinic and in the home. Here, technology can act as an enabler, creating new ways of reducing trunk compensation. Still, there is a gap in the literature, as trunk compensation has only been investigated as a secondary theme in robotic and computer-aided rehabilitation. Consequently, in this project I will investigate the reduction of trunk compensation using robotic devices and commercially available technology, enabling a focus on the quality of movements in unsupervised therapy. The potential results of this PhD could later be applied and generalized to other modes of compensation in stroke and other neurologically disabled populations.
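As a sketch of what unsupervised monitoring could look like, consider flagging excessive trunk travel during a reach from a tracked trunk position (e.g., supplied by a commodity depth camera). The 5 cm threshold and the coaching message below are hypothetical choices for illustration, not a clinically validated protocol:

```python
def trunk_compensation(trunk_pos_m, baseline_m, threshold_m=0.05):
    """Return True when the trunk has travelled more than `threshold_m`
    (metres) from its resting baseline, suggesting a compensatory lean.
    Threshold is an illustrative placeholder."""
    return abs(trunk_pos_m - baseline_m) > threshold_m

def frame_feedback(trunk_pos_m, baseline_m):
    """Per-frame coaching cue for unsupervised therapy (placeholder text)."""
    if trunk_compensation(trunk_pos_m, baseline_m):
        return "Try to keep your trunk still and reach with your arm."
    return None
```

Delivered as on-screen or auditory feedback during a home exercise game, a rule of this kind could substitute for physical restraint by prompting, rather than forcing, correct movement patterns.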
Supervisor: Machiel Van der Loos
Stroke rehabilitation professionals acknowledge that about half of upper limb functional recovery after stroke is spontaneous. Any remaining recovery results from intensive, repetitive therapy sustained over months, stimulating neuroplastic changes in the brain’s motor control pathways. From a human perspective, this is painful, frustrating, hard work. Sustaining a treatment over months requires significant motivation and funding. Health plans do not provide sufficient coverage, and motivation, which is highly dependent on a person’s support network and inner drive, is often not adequately tapped.
We are combining low-cost robotic devices, a bimanual training program, social media frameworks such as Facebook Games, and on-line performance sharing between therapy clients and their therapists. This combination of components represents a best-practices approach to bidirectional knowledge transfer, development of technology and design of well-coordinated home-based therapy. We believe that together these approaches will yield interventions for people with stroke and children with hemiparetic cerebral palsy that significantly improve functional ability and lead to improved quality of life.