
IRIS Lab focuses on advancing robotics through three core research areas: contact-rich manipulation, autonomy alignment with human feedback, and control and optimization methods. In contact-rich manipulation, the lab aims to equip robotic hands with human-like dexterity through a physics-grounded, bidirectional approach that combines AI learning with physics-based methods. For autonomy alignment, it develops certifiable and efficient methods for robots to learn from human demonstrations, corrections, and preferences, making autonomy easier to acquire. Its work on control and optimization designs fundamental algorithms for the safe and efficient operation of autonomous systems, leveraging optimal control, differentiable optimization, and reinforcement learning. Key research outputs include the TwinTrack system for real-time tracking of dynamic objects in contact-rich scenes, along with publications on dexterous manipulation, reward alignment, and control methods, often accompanied by publicly available code and demos. The lab collaborates widely and presents its work at top-tier robotics and machine learning conferences such as RSS, ICML, and ICRA.