Successful integration of robotics into the human sphere requires significant research on methods for direct human-robot interaction (HRI), unhindered by handheld interfaces, and grounded in the physical world in which people work and play.
The overall goals of this research are to contribute to the development of knowledge, methods, and algorithms for natural, transparent HRI that enable humans and robots to interact effectively and cooperatively in unstructured, shared spaces. Motions, gestures, forces, and other cues are effectively used by pairs working together to manage cooperative tasks, particularly in situations where noise or other audio barriers preclude verbal communication. Other channels, such as physiological sensing, can provide cues around readiness and satisfaction. These cues signal transition-related information essential to the collaboration flow, such as turn taking/giving, role changes (e.g., leader/follower, instructor/trainee), and state changes (e.g., ready/waiting/busy, unsure/confident).
Check out this video demonstration of a robot-human handover controller developed in our lab.
Existing industrial robot programming interfaces, e.g., teach pendants and computer consoles, are often unintuitive, resulting in a slow and tedious teaching process. While kinesthetic teaching provides an alternative for small robots, where interaction can be safe, physical interaction is not an option for large industrial robots. Emerging augmented reality (AR) technology offers the potential for faster, safer, and more intuitive robot programming, as it allows rich visual information to be presented in situ. However, too much information may overload the user's visual perception capacity, and AR alone may not provide adequate feedback about robot state.
This project offers a future-focused approach for robot programming using augmented reality (AR) with the goal of enabling safe and intuitive human-robot interaction in collaborative manufacturing. Using a mixed reality head-mounted display (Microsoft Hololens) and a pair of surface electromyography (EMG) and gesture sensing armbands (MYO Armband), we designed a multimodal AR user interface that eases the robot programming task by providing multiple interactive functions: (1) trajectory specification; (2) virtual previews of robot motion; (3) visualization of robot parameters; (4) online reprogramming during simulation and execution; (5) gesture- and EMG-based control of robot trajectory execution; and (6) online virtual barrier creation and visualization. We present a multimodal system for trajectory programming and online control, merging AR, electromyography, gesture control, speech control, and tactile feedback. We validated our AR robot programming interface by comparing it with kinesthetic teaching and other standard robot control methods, and found promising results for our system.
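To make the gesture- and EMG-based control concrete, here is a minimal sketch of how armband events might be mapped to execution commands. The event names, command strings, and threshold are illustrative assumptions, not the actual MYO or HoloLens API or the system described above.

```python
# Hypothetical multimodal event dispatcher: gesture and EMG events from an
# armband are mapped to robot trajectory-execution commands. All names here
# are illustrative placeholders, not the real device API.

GESTURE_COMMANDS = {
    "fist": "pause_execution",
    "wave_out": "resume_execution",
    "double_tap": "confirm_waypoint",
}

def interpret_event(event_type, value, emg_threshold=0.7):
    """Map a single input event to a robot command string (or None)."""
    if event_type == "gesture":
        return GESTURE_COMMANDS.get(value)
    if event_type == "emg":
        # High muscle activation acts like a dead-man's switch:
        # motion is enabled only while the operator actively grips.
        return "enable_motion" if value >= emg_threshold else "hold_motion"
    return None
```

A dispatcher like this keeps each input channel independent, so additional modalities (speech, gaze) could be added as new event types without changing existing mappings.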
Developed for a project in collaboration with DLR, the German Aerospace Center, this work aims to solve complex manufacturing problems that are labor-intensive, require online expert knowledge, and have been too difficult to automate completely — carbon fiber reinforced polymer manufacturing being a prime example.
We’re currently working on integrating new methods that allow more intuitive and natural operation of robots!
Sidewalks are unique in that the pedestrian-shared space has characteristics of both roads and indoor spaces. Like vehicles on roads, pedestrian movement often manifests as flows in opposing directions. On the other hand, pedestrians also form crowds and can exhibit much more random movements than vehicles. Classical algorithms are insufficient for navigating safely around pedestrians while remaining within the sidewalk space. Our approach takes advantage of natural human motion to allow a robot to adapt to sidewalk navigation in a safe and socially compliant manner. We developed a group surfing method, which imitates the pedestrian group that best brings the robot closer to its goal. For pedestrian-sparse environments, we use a sidewalk edge detection and following method. Underlying these two navigation methods is a human-aware collision avoidance scheme. Components of the navigation stack are demonstrated in simulation, and an integrated simulation and a real-world experiment are discussed.
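The group-selection step above can be illustrated with a minimal sketch: score each tracked pedestrian group by how well its average velocity aligns with the robot's goal direction, and fall back to sidewalk-edge following when no group moves sufficiently goal-ward. This is a simplified illustration under assumed data structures, not the lab's actual controller.

```python
import math

def score_group(group_velocity, robot_pos, goal_pos):
    """Score a pedestrian group by how well its average velocity advances
    the robot toward its goal (cosine similarity of the two directions)."""
    gx, gy = goal_pos[0] - robot_pos[0], goal_pos[1] - robot_pos[1]
    vx, vy = group_velocity
    goal_norm = math.hypot(gx, gy)
    vel_norm = math.hypot(vx, vy)
    if goal_norm == 0 or vel_norm == 0:
        return -1.0  # stationary group, or robot already at goal
    return (gx * vx + gy * vy) / (goal_norm * vel_norm)

def select_group_to_surf(groups, robot_pos, goal_pos, min_score=0.5):
    """Pick the group whose motion best aligns with the robot's goal.
    Returns None when no group scores above min_score, signalling a
    fallback to sidewalk-edge following."""
    best = max(
        groups,
        key=lambda g: score_group(g["velocity"], robot_pos, goal_pos),
        default=None,
    )
    if best is None or score_group(best["velocity"], robot_pos, goal_pos) < min_score:
        return None
    return best
```

In a full system, the cosine score would likely be combined with terms for group distance, crowd density, and social-compliance constraints.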
In addition, we’re investigating the effect of intent communication on the social acceptability of mobile robots in pedestrian-rich environments. Pedestrians naturally engage in joint collision avoidance with others in public spaces, based on a variety of subtle body language cues. Robots, however, lack such legible body language, and pedestrians cannot easily anticipate their movements. We’re developing and testing a variety of cues that mobile robots can use to help pedestrians quickly build enough understanding and trust to feel both comfortable and safe when sharing a space with a robot.
We have two studies planned for this project. The first, a laboratory experiment, will determine the most communicative of a variety of cues. The second, a field study, will validate the cue chosen in the first study in a pedestrian space on campus.