Interactive Machine Learning: From Classifiers to Robotics
Slides from the 3.5-hour tutorial at AAAI-17 are here. It was presented as SUP1 on Sunday, February 5th, from 2-6pm.
This 3.5-hour tutorial was presented at AAMAS-17 as T1 on Monday, May 8th, 9am-1pm in WTC Forum Sala B. Slides are here.
We will present this tutorial at IJCNN-17 as T1 on Sunday, May 14, 10:20am-12:20pm in Room #1 + 13 + 14. Slides for this 2-hour tutorial are here.
To make virtual agents and physical robots solve real-world tasks, it often becomes necessary to learn not only from static datasets or simulated oracles, but directly from humans. Unfortunately, some of the assumptions underlying traditional statistical machine learning approaches become invalid when learning from data provided by slow, inaccurate, or inconsistent trainers. Furthermore, many additional considerations that are typically outside the purview of machine learning experts, such as user interface design, become critical. This tutorial will (1) survey selected existing work in this exciting and growing field; (2) propose a framework to classify and understand different types of work in this area, as well as highlight important opportunities for additional work; and (3) cover a selection of practical considerations, such as participant recruitment and compensation, and useful toolkits and testbeds.
The tutorial outline is as follows:
- Broad overview of interactive machine learning: definitions, and examples of seminal works
- Using the crowd for supervised learning tasks: labeling, model improvement, and interactive development of classifiers
- Teaching robots to perform sequential tasks: leveraging explanations and demonstrations for both single-agent and cooperative tasks
- Training and learning in sequential tasks: combining human-provided demonstrations or rewards with reinforcement learning
- Experimental design & other practical considerations: IRBs, interface design, participant recruitment, compensation, and common toolkits/testbeds
- Conclusions: open problems, and pointers to additional information
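The outline item on combining human-provided rewards with reinforcement learning can be illustrated with a shaped-reward Q-learning loop, in the spirit of TAMER-style approaches. The sketch below is not material from the tutorial: the chain environment, the simulated "human trainer" function, and all parameter values are illustrative assumptions.

```python
import random

# Minimal sketch: Q-learning on a 5-state chain MDP where the update uses
# the environment reward plus a weighted, human-provided reward signal.
# Here the "human" is simulated; in an interactive setting this signal
# would come from a trainer observing the agent.

N_STATES = 5          # states 0..4, goal at state 4
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Deterministic chain environment: +1 reward only at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def human_feedback(state, action):
    """Simulated trainer: approves actions that move toward the goal."""
    return 0.5 if action == +1 else -0.5

def train(episodes=200, alpha=0.1, gamma=0.9, epsilon=0.1, h_weight=1.0):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, env_r, done = step(state, action)
            # Shaped reward: environment reward plus weighted human signal.
            r = env_r + h_weight * human_feedback(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
```

Because the human signal is dense while the environment reward is sparse, the trainer's feedback speeds up early learning; the weighting term (`h_weight`) controls how much the agent trusts the human relative to the environment.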
Prerequisite Knowledge: The majority of the material in this tutorial will be understandable without any domain expertise. There will be sections in which a background in machine learning at the level of a one-semester graduate class will be useful for understanding algorithmic details.
Bradley H. Hayes is a postdoctoral associate in the Interactive Robotics Group at MIT. Focusing on enabling fluent human-robot collaboration and interpretable machine learning, his work develops the algorithms necessary to build capable, supportive, and interactive autonomous robotic systems that operate safely and legibly around humans.
Ece Kamar is a researcher in the Adaptive Systems and Interaction group at Microsoft Research Redmond. Ece earned her PhD in computer science from Harvard University. She works on a number of subfields of AI, including planning, machine learning, multiagent systems, and human-computer teamwork.
Matthew E. Taylor is an assistant professor at Washington State University and holds the Allred Distinguished Professorship in Artificial Intelligence. His group, the Intelligent Robot Learning Lab, researches topics including intelligent agents, multiagent systems, reinforcement learning, transfer learning, and robotics.