We develop artificial intelligence for mobile-manipulation robots to endow them with capabilities that come naturally to humans. Our methodology is to tightly integrate learning, reasoning, and modelling to create adaptive robots that use both prior knowledge and experience to operate effectively in novel, unstructured environments.
Perception for sequential robot manipulation
One fundamental capability of a mobile manipulator is computing a sequence of manipulation motions to move an object given perceptual input. While deep learning techniques have had significant impact on computer vision tasks, their impact on manipulation planning has been limited by challenges arising from motion feasibility, sequentiality, and representation. We are developing integrated learning and reasoning methods that use deep learning to process high-dimensional sensory data and predict appropriate subgoals and motion constraints, and use motion planning to compute sequential manipulation motions for a diverse set of novel objects.
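As a rough illustration only, and not our actual system, the following Python sketch shows one way such a pipeline could be organized: a hypothetical SubgoalPredictor network maps an RGB-D observation to candidate subgoal poses and constraint labels, and a placeholder plan_motion routine stands in for a sampling-based motion planner that connects consecutive subgoals.

```python
# Hypothetical sketch, not the group's actual code: a learned perception module
# predicts a sequence of subgoal poses and constraint labels from an RGB-D
# observation; a (stubbed) motion planner then connects consecutive subgoals.
import torch
import torch.nn as nn

class SubgoalPredictor(nn.Module):
    """Maps an RGB-D image to K subgoal poses (x, y, z, yaw) and constraint logits."""
    def __init__(self, num_subgoals: int = 3):
        super().__init__()
        self.num_subgoals = num_subgoals
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(64, num_subgoals * 4)        # subgoal poses
        self.constraint_head = nn.Linear(64, num_subgoals * 2)  # e.g. grasp vs. push

    def forward(self, rgbd):
        feat = self.backbone(rgbd)
        poses = self.pose_head(feat).view(-1, self.num_subgoals, 4)
        constraints = self.constraint_head(feat).view(-1, self.num_subgoals, 2)
        return poses, constraints

def plan_motion(start_pose, goal_pose):
    """Placeholder for a motion planner (e.g. an RRT); returns a two-point path here."""
    return torch.stack([start_pose, goal_pose])

# Usage: predict subgoals from one observation, then plan motions between them.
model = SubgoalPredictor()
rgbd = torch.rand(1, 4, 128, 128)     # batch of one RGB-D image
subgoals, constraints = model(rgbd)
current = torch.zeros(4)              # end-effector start pose
paths = []
for k in range(model.num_subgoals):
    paths.append(plan_motion(current, subgoals[0, k]))
    current = subgoals[0, k]
```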
Learning to guide task and motion planning
AlphaGo achieved tremendous success in the game of Go by integrating planning with reinforcement learning (RL). Planning enabled the Go-playing agent to deliberately choose a move from many alternatives, while RL enabled the agent to prioritize promising moves efficiently based on experience. We apply this insight to task and motion planning problems, in which the robot must manipulate multiple objects to achieve a high-level goal. In particular, we are developing representation, planning, and learning algorithms that address the fact that the robot operates in a real-world environment rather than on a game board.
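As a hedged illustration of the general pattern, rather than of our algorithms, the sketch below runs a best-first search over abstract rearrangement actions in which a stub learned_prior (standing in for a policy trained from experience) orders the queue and a stub motion_feasible check stands in for calls to a motion planner. All names and behaviors here are hypothetical.

```python
# Hypothetical sketch: learned priors guide a task-level search, and a motion
# feasibility check prunes actions, echoing AlphaGo's planning + learning mix.
import heapq
import itertools
import random

REGIONS = ("table", "sink", "tray")

def learned_prior(state, action):
    """Stub for a policy learned from experience; scores how promising an action is."""
    return random.Random(hash((state, action))).random()

def motion_feasible(state, action):
    """Stub for a motion-planner call checking reachability and collision-freeness."""
    return True

def successors(state):
    """Abstract task-level actions: move any object to a different region."""
    placements = dict(state)
    for obj, current_region in placements.items():
        for region in REGIONS:
            if region != current_region:
                new = dict(placements, **{obj: region})
                yield f"move {obj} to {region}", tuple(sorted(new.items()))

def guided_search(start, goal_test, max_expansions=1000):
    """Best-first search over task actions, ordered by the learned prior."""
    counter = itertools.count()
    frontier = [(0.0, next(counter), start, [])]
    visited = {start}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited and motion_feasible(state, action):
                visited.add(next_state)
                score = learned_prior(state, action)
                heapq.heappush(frontier, (-score, next(counter), next_state, plan + [action]))
    return None

# Usage: rearrange objects so the cup ends up in the sink.
start = tuple(sorted({"cup": "table", "plate": "table"}.items()))
plan = guided_search(start, lambda s: dict(s)["cup"] == "sink")
print(plan)  # e.g. ['move cup to sink']
```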