Object handover is a task that arises frequently in many cooperative scenarios, so it is crucial that robots perform handovers well when working with people. However, determining the proper handover method for an object is difficult because the method varies with each object’s affordances. Towards enabling effective human-robot cooperation, this thesis contributes a framework that enables robots to automatically determine handover methods for various objects by observing human handovers and object usage.
This thesis first documents a user study conducted to characterize and compare the handover orientations used by humans in different conditions. It puts forth the novel idea of object affordance axes for identifying patterns in handover orientations, and a distance minimizing method for computing mean handover orientation from a set of observations.
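The abstract does not specify the orientation representation, but a common distance-minimizing formulation of the mean orientation uses unit quaternions: the mean is the principal eigenvector of the accumulated outer-product matrix (the Markley method), which minimizes the sum of squared chordal distances to the observations. The sketch below illustrates that idea under this assumption; it is not necessarily the exact method used in the thesis.

```python
import numpy as np

def mean_orientation(quats):
    """Distance-minimizing mean of unit quaternions (w, x, y, z).

    Returns the orientation minimizing the sum of squared chordal
    distances to the observations: the eigenvector of the accumulated
    outer-product matrix with the largest eigenvalue.
    """
    q = np.asarray(quats, dtype=float)
    q /= np.linalg.norm(q, axis=1, keepdims=True)  # normalize each observation
    M = q.T @ q                # 4x4 accumulator; invariant to per-sample sign flips
    eigvals, eigvecs = np.linalg.eigh(M)
    mean = eigvecs[:, -1]      # eigenvector of the largest eigenvalue
    return mean if mean[0] >= 0 else -mean  # fix the sign convention

# Hypothetical observations clustered near the identity rotation;
# note the sign-flipped third sample, which the method handles gracefully.
obs = [(1.0, 0.0, 0.0, 0.0), (0.998, 0.05, 0.0, 0.0), (-0.998, 0.05, 0.0, 0.0)]
m = mean_orientation(obs)      # close to [1, 0, 0, 0]
```

A nice property of this formulation is that it is insensitive to the double-cover ambiguity of quaternions (q and -q describe the same rotation), which matters when averaging orientations collected across independent demonstrations.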
Next, this thesis presents an object grouping and classification method, based on observed object usage, for generalizing learned handover methods to new objects. Until now, a demonstrated method for generalizing handover methods to new objects has been lacking. The presented method focuses on a set of action features extracted from the movement patterns and inter-object interactions observed during usage. An experiment demonstrates the effectiveness of the method at grouping objects, classifying new objects, and computing proper handover methods for them.
The described framework for learning and generalizing handover methods is implemented on a Kawada Industries HRP2V robot, and this thesis also documents the verification experiments. The implementation overcomes the perception challenge of identifying a held object’s pose at handover by detecting the object in its pre-occluded state and tracking its pose with a sequential Monte Carlo method. Results show that the framework allows robots to learn handover methods from demonstrations and to compute proper handover methods for new objects. This is the first demonstrated system capable of automatically learning and generalizing handover methods from observations. Finally, integration into a household service robot application shows how this work can enhance the capabilities of robots working in the real world by enabling them to work effectively with humans.
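The sequential Monte Carlo (particle filter) idea behind the pose tracker can be illustrated with a toy one-dimensional example: hypotheses are diffused with process noise, reweighted by the likelihood of each new observation, and resampled. This is only a sketch of the technique, not the thesis’s actual 6-DOF perception pipeline; all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=0.05, obs_std=0.1):
    """One predict-weight-resample step of a sequential Monte Carlo tracker.

    `particles` holds hypothesized 1-D poses (e.g. an object's yaw angle);
    a full pose tracker would use the same structure with SE(3) states.
    """
    # Predict: diffuse the hypotheses with process noise.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation under each hypothesis.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample: draw new particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-1.0, 1.0, 500)      # broad initial pose belief
weights = np.full(500, 1.0 / 500)
for obs in [0.30, 0.32, 0.31, 0.29, 0.30]:   # simulated noisy pose observations
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = particles.mean()                  # converges near 0.3
```

Resampling by weight is what lets the filter recover a held object’s pose through partial occlusion: hypotheses consistent with the last good observations survive, while implausible ones die out.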
By enabling better human-robot object handovers, this thesis contributes towards improving the interaction between humans and robots, allowing safer, more natural, and more efficient human-robot cooperation.
Handing over objects is a basic task that arises between people in many cooperative scenarios. On a daily basis, we effortlessly and successfully perform countless unscripted handovers without any explicit communication. However, handing over an object to a person is a challenging task for robotic “hands”, and the resulting interaction is often unnatural. To improve human-robot cooperation, the work described in this thesis has led to the design of a human-inspired handover controller based on analysis and characterization of the haptic interaction during human-to-human object handover.
The first experiment in this thesis documents novel experimental work done to measure the dynamic interaction in human-human handovers. The grip forces and load forces experienced by the giver and the receiver during a handover are examined, and the key features are identified. Based on these experimental results, guidelines for designing human-robot handovers are proposed. Next, this thesis describes a handover controller model that enables robots to hand over objects to people in a safe, efficient, and intuitive manner, and an implementation of the handover controller on a Willow Garage PR2 robot is documented. Finally, a second experiment is presented, which compares various tunings of the novel controller in a user study. Results show that the novel controller yields more efficient and more intuitive robot-to-human handovers when compared to existing handover controllers.
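One way a human-inspired giver-side release rule can be sketched is as grip force that ramps down as the receiver takes up the object’s load, measured for instance through a wrist force/torque sensor. The parameters below (`f_base`, `k`, `f_min`) are illustrative placeholders, not values or equations from the thesis.

```python
def giver_grip_force(load_share, f_base=2.0, k=0.8, f_min=0.0):
    """Sketch of a load-coupled release rule for the giver.

    load_share: fraction of the object's weight currently borne by the
                receiver, in [0, 1] (e.g. from a wrist force/torque sensor).
    The grip force command decreases roughly linearly as the receiver
    takes the load, clamped at a minimum for safety.
    """
    return max(f_min, f_base * (1.0 - k * load_share))

# As the receiver progressively takes the load, the commanded grip
# force ramps down from its baseline:
profile = [round(giver_grip_force(s), 2) for s in (0.0, 0.5, 1.0)]
# profile == [2.0, 1.2, 0.4]
```

Coupling grip release to the sensed load transfer, rather than to a fixed timer, is what makes the handover feel responsive: the robot neither drops the object early nor tugs against the receiver.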
PhD Candidates and Students
Prediction and Production of Human Reaching Trajectories for Human-Robot Interaction
My M.A.Sc research project at the CARIS lab on “Speed-Independent Path Control for Industrial Robots” focused on path & trajectory planning and control for industrial robots.
Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff
There are a large number of people around the world who rely on wheeled mobility assistive devices (WMADs) to perform their daily life activities. The use of WMADs affects many aspects of people’s lives, including their personal autonomy. In many cases, autonomy, that is, people’s choice and control over what they want to do, is determined by the type of mobility assistive device they are using. Therefore, it is essential to recognize, assess, and address the true autonomy-related needs of mobility device users in the process of assistive device development.
In my research, I’m reviewing the literature to identify the main factors contributing to the autonomy of WMAD users. Next, I compare the design and performance characteristics of existing WMADs across these factors. This knowledge provides insight into the gap between users’ needs and what is available to them. To address this gap, I plan to establish an autonomy-based framework for mobility assistive technology development. Use of this framework could lead to the design and development of mobility assistive devices that provide a more balanced sense of autonomy to their users.
Supervisors: Drs. Machiel Van der Loos, Jaimie Borisoff
Powered lower limb exoskeletons (LLEs) are wearable robotic aids that provide mobility assistance for people with mobility impairments. Despite their advanced design, LLEs are still far from being effective assistive devices for activities of daily living. The main challenge in operating an LLE is ensuring that balance is maintained. However, maintaining an upright stance is not always achievable, and regardless of user skill and training, falls will inevitably occur. Currently, no control strategy has been developed or implemented in LLEs to help reduce the user’s risk of injury in the case of an unexpected fall.
In this thesis, an optimization methodology was developed and used to create a safer strategy for exoskeletons falling backwards in a simulation environment. Because biomechanical data on human falls were available, the optimization methodology was first developed to study falls with simulation parameters characteristic of healthy people. The resulting optimal fall strategy had kinematic and dynamic characteristics similar to the findings of previous studies on human falls, such as rapid knee flexion at the onset of the fall and knee extension prior to ground contact. The optimization methodology was then extended to include the characteristics of an exoskeleton. The results revealed that hip impact velocity was reduced by 58% when the optimal fall strategy was employed, compared to the case where the exoskeleton fell with locked joints. In both the optimal human and human-exoskeleton falls, the models contacted the ground with an upright trunk and near-zero trunk angular velocity, avoiding head impact. These results achieved the thesis goal of developing an effective safe-fall control strategy. The strategy was then implemented on a prototype exoskeleton test device. The experimental results validated the simulation outcomes and support the feasibility of implementing this control strategy. Future studies are needed to further examine the effectiveness of applying this strategy in an actual LLE.
The first prototype consisted of a triple-link inverted pendulum and a control system, designed and fabricated by an undergraduate Engineering Physics student team at UBC. The mechanical test setup represented a half-plane, half-scale model of a human body. The three joints of the inverted pendulum replicated the motion of the hip, knee, and ankle joints. As in the three-link model of a human fall, the hip and knee joints were actuated and the ankle was a passive joint. The hip and knee joint angles were read through the actuators’ encoders, and the ankle joint angle was read by a potentiometer installed at the joint. The controller was programmed to start the safe-fall control strategy once the ankle angle passed a specified threshold; the ankle angle sensor was therefore monitored continuously after the hip and knee joints were initialized. When the ankle angle exceeded the threshold, the position control strategy was activated to control the hip and knee joint angles throughout the fall.
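The trigger logic described above can be sketched as a simple latch: poll the ankle potentiometer, and once it exceeds the threshold, hand control of the hip and knee over to the fall trajectory tracker for the remainder of the fall. The threshold value and command names below are illustrative, not taken from the prototype’s firmware.

```python
ANKLE_TRIGGER_DEG = 5.0  # illustrative threshold; the real limit is tuned on the rig

def monitor_step(ankle_angle_deg, fall_active):
    """One iteration of the fall-detection loop.

    After the hip and knee joints are initialized, the ankle angle is
    polled each cycle; once it exceeds the threshold, the position
    controller for the hip and knee takes over for the rest of the fall.
    """
    if not fall_active and ankle_angle_deg > ANKLE_TRIGGER_DEG:
        fall_active = True  # latch: the safe-fall routine runs to completion
    if fall_active:
        command = "track_safe_fall_trajectory"  # position control of hip/knee
    else:
        command = "hold_initial_posture"
    return fall_active, command

# Simulated ankle readings as the pendulum starts to tip backwards:
state = False
log = []
for angle in [1.0, 3.0, 6.0, 9.0]:
    state, cmd = monitor_step(angle, state)
    log.append(cmd)
```

Latching the trigger matters: once a fall has begun, a transient dip in the ankle angle should not abort the safe-fall trajectory mid-execution.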
Large deviations were observed between the experimental and optimal joint angular velocities throughout the fall, mainly due to hardware and software limitations of this prototype.
To address these issues and further improve controller performance, a second prototype was built by an undergraduate Mechanical Engineering student team at BCIT. It includes a scaled, adjustable exoskeleton with the same actuation setup as the first prototype to execute the fall routine, as well as a release mechanism to zero the system and initiate the fall. We are currently working on implementing the developed safe-fall strategy on this prototype.
Supervisors: Drs. Elizabeth Croft, Machiel Van der Loos
Robots are increasingly used in many areas of our daily life, and in the near future non-experts will inevitably need to work with them. Robot Learning from Demonstration (LfD) is concerned with programming robots to perform tasks by observing human demonstrations. The current state of the art in LfD uses unintuitive teaching interfaces to program the robot and lacks the ability to generalize what the robot has learned to a wider range of tasks. The goal of the proposed research is to build a simple, intuitive teaching interface for robot LfD and a library of simple tasks that, when combined, allow the robot to perform several complex tasks.
We propose using innovative user interfaces for intuitively teaching robots from human demonstrations. With such interfaces, non-experts can easily teach tasks to a “virtual” robot (a model of the real one), and these skills can then be transferred to the real robot. In addition, such interfaces allow us to collect a large amount of data during the teaching process. These data can be structured as a generalizable taxonomy ranging from simple tasks to complex ones. We will use this taxonomy to teach the robot to learn from its past experience and become a fast learner on new tasks.
Our proposed research democratizes access to robotics. Building a generalizable, well-structured database of tasks will significantly increase the applicability of robots in many areas. With the proposed teaching interfaces, robots can better adapt to changing customer requirements, because users gain control over the behaviour of their robots in an intuitive and easy manner. In addition, such interfaces will make it easier for domain experts who do not know how to program to transfer their skills to robots.
Supervisor: Dr. Carlo Menon
Hand force estimation is critical for applications that involve physical human-machine interaction, such as force monitoring and machine control. Force Myography (FMG) is a promising technique for estimating hand force and torque. FMG signals represent the volumetric changes in the arm muscles as they contract or expand during force/torque exertion. The aim of this thesis is to explore the suitability of FMG for hand force/torque estimation.
The feasibility of using FMG for torque estimation was first investigated by using a 1-DOF torque sensor to label the FMG signals during torque exertion. A custom-designed band of force-sensing resistors (FSRs) was donned over the forearm muscle belly to measure FMG signals while participants exerted torque around three axes. A regression model was created for each torque axis and trained on the corresponding data. The average R² was 0.89 across pronation-supination, flexion-extension, and radial-ulnar deviation.
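The per-axis pipeline (FSR readings in, sensor-labeled torque out, regression model, R² evaluation) can be sketched on synthetic data. The channel count, noise level, and linear model below are assumptions for illustration only; the thesis does not specify its regression model in this abstract, and the synthetic R² here has no relation to the reported 0.89.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for one torque axis: n samples of 8 FSR channels
# (channel count and noise level are illustrative, not from the thesis).
n, n_fsr = 200, 8
X = rng.uniform(0.0, 1.0, (n, n_fsr))        # normalized FSR band readings
true_w = rng.normal(0.0, 1.0, n_fsr)         # hidden muscle-to-torque mapping
y = X @ true_w + rng.normal(0.0, 0.05, n)    # torque label from the sensor

# Fit a linear regression (ordinary least squares) for this torque axis.
X1 = np.hstack([X, np.ones((n, 1))])         # add a bias column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ w

# Coefficient of determination (R²) of the fit.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

In practice one model is trained per torque axis on its own labeled data, exactly as the text describes, and the reported score is the average R² over the three axes.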
Labeling the data with a 1-DOF torque sensor requires a new custom rig for each torque axis. To overcome this limitation, a 6-DOF force/torque load cell was used to label the FMG data during force/torque exertion in any direction. In addition, a total of 60 FSRs were embedded into four bands worn on the arm to measure FMG signals during force/torque exertion. Healthy participants were recruited and asked to exert isometric force along three perpendicular axes, torque about the same three axes, and force and torque freely in any direction. Three cases were considered to explore the performance of the FMG bands in estimating single- and multi-axis force/torque: (1) each of the 6 force/torque axes individually; (2) 3-DOF force and 3-DOF torque; and (3) 6-DOF force and torque simultaneously. In addition, all possible combinations of the four bands were compared to provide guidelines on the best placement of the FMG measurements in each case.
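The band-placement comparison enumerates every non-empty subset of the four bands, 15 combinations in total, and evaluates the estimator on each. The band names below are hypothetical labels for illustration; the abstract does not name the actual placements.

```python
from itertools import combinations

# Hypothetical labels for the four FMG bands (actual placements not
# specified in the abstract).
bands = ["wrist", "mid-forearm", "upper-forearm", "upper-arm"]

# Every non-empty subset of the four bands: 4 + 6 + 4 + 1 = 15 cases,
# each of which would be used to train and score a force/torque estimator.
subsets = [c for r in range(1, len(bands) + 1)
           for c in combinations(bands, r)]
```

Ranking estimation accuracy across these 15 subsets is what yields placement guidelines: it shows which single band suffices for a given axis, and when adding a second or third band actually pays off.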
Supervisors: Drs. Elizabeth Croft, Machiel Van der Loos
We are investigating robot intent communication as a mechanism for socially-acceptable crowd navigation with a mobile robot.
Supervisor: Dr. Mike Van der Loos
My study is a continuation of FEATHERS, a project from the RREACH Lab at UBC that focuses on developing and evaluating novel physical exercise technologies for kids with motor disabilities. At this point, I would like to study how immersive VR technology can be used to benefit upper limb rehabilitation for persons with hemiplegia. Specifically, I would like to see how the use of error augmentation (i.e. adding visual or game element feedback to accentuate deviation from the desired exercise motion) might encourage persons with hemiplegia to engage their affected side more effectively by comparing the symmetry between the stronger and weaker limbs. I’m hoping that the immersive environment of VR and the ability to provide 1:1 direct visual feedback will increase active engagement.