SUP1: Interactive Machine Learning: From Classifiers to Robotics
An AAAI-17 Tutorial
Time: Sunday, February 5th, 2-6pm
Room: Continental 1-3
To make virtual agents and physical robots solve real-world tasks, it often becomes necessary to learn not only from static datasets or simulated oracles, but directly from humans. Unfortunately, some of the assumptions underlying traditional statistical machine learning approaches become invalid when learning from data provided by slow, inaccurate, or inconsistent trainers. Furthermore, many additional considerations that are typically outside the purview of machine learning experts, such as user-interface design, become critical. This tutorial will (1) survey selected existing work in this exciting and growing field; (2) propose a framework to classify and understand different types of work in this area, as well as highlight important opportunities for additional work; and (3) cover a selection of practical considerations, such as participant recruitment and compensation, and useful toolkits or testbeds.
The tutorial outline is as follows:
- Broad overview of interactive machine learning: definitions, and examples of seminal works
- Using the crowd for supervised learning tasks: labeling, model improvement, and interactive development of classifiers
- Teaching robots to perform sequential tasks: leveraging explanations and demonstrations for both single-agent and cooperative tasks
- Training and learning in sequential tasks: combining human-provided demonstrations or rewards with reinforcement learning
- Experimental design & other practical considerations: IRBs, interface design, participant recruitment, compensation, and common toolkits/testbeds
- Conclusions: open problems, and pointers to additional information
Prerequisite Knowledge: The majority of the material in this tutorial will be understandable without any domain expertise. There will be sections in which a background in machine learning at the level of a one-semester graduate class will be useful for understanding algorithmic details.
Slides for download: Here!
Contact us: InteractiveML.firstname.lastname@example.org
Submit questions here: googl/slides/jpmfvm
Bradley H. Hayes is a postdoctoral associate in the Interactive Robotics Group at MIT. Focusing on enabling fluent human-robot collaboration and interpretable machine learning, his work develops the algorithms necessary to build capable, supportive, and interactive autonomous robotic systems that operate safely and legibly around humans.
Ece Kamar is a researcher in the Adaptive Systems and Interaction group at Microsoft Research Redmond. Ece earned her PhD in computer science from Harvard University. She works on a number of subfields of AI, including planning, machine learning, multiagent systems, and human-computer teamwork.
Matthew E. Taylor is an assistant professor at Washington State University and holds the Allred Distinguished Professorship in Artificial Intelligence. His group, the Intelligent Robot Learning Lab, researches topics including intelligent agents, multiagent systems, reinforcement learning, transfer learning, and robotics.