
We are the People and Robots Teaching and Learning (PoRTaL) group at Cornell Computer Science.
We build everyday robots for everyday users.
Our mission is to make robots accessible, user-friendly, and practical for everyday tasks from cooking to cleaning. To this end, our research focuses on imitation learning, decision making, and human-robot interaction. We are passionate about both the theory and the algorithms that equip robots with the skills to work with people.
If you are interested in joining our minion army to take over the world, apply here!
news
Mar 6, 2025 | Proud to share two preprints! muCode presents a simple and scalable method for multi-turn code generation leveraging learned verifiers. ORCA formulates rewards as an ordered coverage problem, enabling robots to learn from a single temporally misaligned video demonstration.
Jan 27, 2025 | Two exciting papers at ICRA 2025! 🎉 RHyME enables robots to learn from human videos even when there are gaps between human and robot execution. MotionTrack presents a unified action space by representing actions as 2D trajectories on an image, enabling robots to imitate directly from cross-embodiment datasets.
Jan 22, 2025 | Two amazing papers appearing at ICLR 2025! 😎 SFM presents a non-adversarial approach to inverse reinforcement learning (IRL) and can learn from state-only demonstrations. Robotouille is a challenging benchmark that tests LLM agents' synchronous and asynchronous planning capabilities through diverse long-horizon tasks and time delays.
Dec 20, 2024 | Congratulations to Juntao Ren for being runner-up in the CRA Outstanding Undergraduate Researcher Award!
Nov 5, 2024 | Thrilled to share that a number of our works have been accepted to CoRL 2024, with two conference papers (MOSAIC and APRICOT) and two workshop papers (One-Shot Imitation under Mismatched Execution and 'Time Your Rewards')!
May 17, 2024 | Proud to announce that, at ICRA 2024, MOSAIC won the best paper award at the VLNMN workshop and the best poster award at the MoMa workshop!
May 12, 2024 | We received an OpenAI Superalignment grant! Thank you, OpenAI!
May 1, 2024 | Congratulations to Juntao and Gokul on their paper at ICML 2024! Check out their paper, Hybrid Inverse Reinforcement Learning, and the explainer tweet here!
Apr 6, 2024 | We got a Google Research Scholar Award to work on LLMs and Planning! Thank you, Google!
Apr 6, 2024 | Excited to finally share MOSAIC, our multi-robot collaborative cooking system. Check out the explainer tweet!
Apr 6, 2024 | Congratulations to Prithwish Dan on being named a 2024 Merrill Presidential Scholar, with an honorable mention for the CRA Outstanding Undergraduate Researcher Award!
Apr 6, 2024 | Cool paper at ICRA 2024, Interact, on training transformer models to predict human motion conditioned on robot actions. Congratulations to Kushal and team!
Apr 1, 2024 | Fun talk at Andrea's class at CMU on "To RL or not to RL"!
Feb 24, 2024 | We got an award from NASA to work on learning human-drone teaming policies. Thank you, NASA! Read more here.
Jan 4, 2024 | Invited talk at ISAIM Deep RL Workshop on “To RL or not to RL”!
Oct 2, 2023 | Congratulations to Yuki, Gonzalo, and Yash on their first paper at NeurIPS 2023, Demo2Code!
Sep 27, 2023 | New NSF FRR grant on Inverse Task Planning with Jeannette Bohg!
Sep 15, 2023 | Exciting paper at CoRL 2023, ManiCast, on learning human forecasts for collaborative human-robot manipulation. Congratulations to Kushal and team! Check out the explainer tweet.
Aug 15, 2023 | New NSF grant on superhuman imitation learning with the O.G.!
Jul 15, 2023 | Exciting new work, Demo2Code, on learning from demonstrations with LLMs! Check out the explainer tweet.