Matthew E. Taylor

taylorm@eecs.wsu.edu
EME 137
509-335-6457 (but I prefer email)

Assistant Professor
Allred Distinguished Professorship in Artificial Intelligence
School of Electrical Engineering and Computer Science
Washington State University
Pullman, WA 99164
 
Director of the IRL Lab








News

Matt was elected to serve as one of the 27 members of the IFAAMAS Board of Directors.
 
Our NSF National Robotics Initiative grant was accepted and is being funded by the USDA:
  • Intelligent In-Orchard Bin Managing System for Tree Fruit Production with Qin Zhang (PI) and Geoff Hollinger (Co-PI)
We're very fortunate to have won two grants from the Air Force Research Laboratory:
  • Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes with Eric Eaton (Co-PI) and Paul Ruvolo (Co-PI)
  • Curriculum Development for Transfer Learning in Dynamic Multiagent Settings with Peter Stone (PI)
Recent paper acceptances:
  • ECML-14: Agents Teaching Agents in Reinforcement Learning (Nectar Abstract) by Matthew E. Taylor and Lisa Torrey
  • RO-MAN-14: Learning Something from Nothing: Leveraging Implicit Human Feedback Strategies by Robert Loftin, B. Peng, J. MacGlashan, M. Littman, M. E. Taylor, D. Roberts, and J. Huang
  • IAT-14: CLEANing the Reward: Counterfactual Actions Remove Exploratory Action Noise in Multiagent Learning by Chris HolmesParker, Matthew Taylor, Adrian Agogino, and Kagan Tumer
  • AAAI-14: Combining Multiple Correlated Reward and Shaping Signals by Measuring Confidence by Tim Brys, A. Nowe, D. Kudenko, and M. E. Taylor
  • AAAI-14: A Strategy-Aware Technique for Learning Behaviors from Discrete Human Feedback by Robert Loftin, J. MacGlashan, M. Littman, M. E. Taylor, D. Roberts, and J. Huang
  • AAAI-14 MLIS workshop: Training an Agent to Ground Commands with Reward and Punishment by James MacGlashan, M. Littman, R. Loftin, B. Peng, D. Roberts, and M. E. Taylor
  • AAAI-14 MLIS workshop: An Automated Measure of MDP Similarity for Transfer in Reinforcement Learning by Haitham Bou Ammar, E. Eaton, M. E. Taylor, D. C. Mocanu, K. Driessens, G. Weiss, and K. Tuyls
  • IJCNN-14: Multi-Objectivization of Reinforcement Learning Problems by Reward Shaping by Tim Brys, A. Harutyunyan, P. Vrancx, M. E. Taylor, D. Kudenko, and A. Nowe.
  • ICML-14: Online Multi-Task Learning for Policy Gradient Methods by Haitham Bou Ammar, P. Ruvolo, M. E. Taylor, and E. Eaton
  • Journal of Connection Science: Reinforcement Learning Agents Providing Advice in Complex Video Games by Matthew E. Taylor, Nicholas Carboni, Anestis Fachantidis, Ioannis Vlahavas and Lisa Torrey.


Research

I previously worked with Milind Tambe as part of the TEAMCORE research group and am a former member of the Learning Agents Research Group, directed by Peter Stone.

My research focuses on agents, physical or virtual entities that interact with their environments. My main goals are to enable individual agents, and teams of agents, to

  1. learn tasks in real world environments that are not fully known when the agents are designed;
  2. perform multiple tasks, rather than just a single task; and
  3. robustly coordinate with, and reason about, other agents.
Additionally, I am interested in exploring how agents can learn from humans, whether the human is explicitly teaching the agent, the agent is passively observing the human, or the agent is actively cooperating with the human on a task.

A selection of current and past research projects follows.


Transfer Learning

My dissertation focused on leveraging knowledge from a previous task to speed up learning in a novel task, primarily in reinforcement learning domains.
I gave a talk at AGI-08 that gives a brief introduction to, and motivation for, transfer learning.

Representative Publication:
Transfer Learning via Inter-Task Mappings for Temporal Difference Learning (JMLR-07)
Full list of relevant publications
 

Reinforcement Learning

Much of my graduate work centered on reinforcement learning (RL) tasks, where agents learn to perform (initially) unknown tasks by optimizing a scalar reward. RL is well suited to allowing both virtual and physical agents to learn when humans are unable (or unwilling) to design optimal solutions themselves.
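As a minimal illustration of this setting, the sketch below runs tabular Q-learning on a toy five-state chain; the environment, states, and reward are made up for illustration and are not from any of the papers above.

```python
import random

random.seed(0)

# Hypothetical 5-state chain: the agent starts at state 0 and is
# rewarded only for reaching the rightmost state.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment transition: the agent sees only (next state, reward)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # temporal-difference update toward the bootstrapped target
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

Nothing about the transition function is given to the learner; the value estimates are built purely from sampled experience, which is what makes RL applicable when a designer cannot specify the solution directly.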

Representative Publication:
Critical Factors in the Empirical Performance of Temporal Difference and Evolutionary Methods for Reinforcement Learning (JAAMAS-09)
Full list of relevant publications
 
 

Multi-agent Exploration and Optimization

While at USC, one of the most exciting projects we worked on was a version of the Distributed Constraint Optimization Problem (DCOP) in which the agents have unknown rewards. This may also be thought of as a multi-agent, multi-armed bandit. The problem is relevant for tasks that require coordination under uncertainty, such as in wireless sensor networks.
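The unknown-reward aspect can be sketched in the simplest single-agent case as an epsilon-greedy multi-armed bandit; the arm payoffs below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical arms with hidden Bernoulli payoff probabilities.
TRUE_MEANS = [0.2, 0.5, 0.8]
counts = [0] * len(TRUE_MEANS)
values = [0.0] * len(TRUE_MEANS)   # running mean of observed rewards per arm

def pull(arm):
    """Noisy payoff drawn from the arm's hidden mean; the agent never sees the mean."""
    return 1.0 if random.random() < TRUE_MEANS[arm] else 0.0

EPSILON = 0.1
for t in range(2000):
    # explore with probability EPSILON, otherwise exploit the best estimate
    arm = random.randrange(len(TRUE_MEANS)) if random.random() < EPSILON \
          else max(range(len(TRUE_MEANS)), key=lambda a: values[a])
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]  # incremental mean update

best = max(range(len(TRUE_MEANS)), key=lambda a: values[a])
```

The DCOP variant couples many such learners through a shared reward structure, so each agent's exploration also perturbs its neighbors' observations, which is exactly the difficulty in settings like sensor networks.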

Representative Publication:
DCOPs Meet the Real World: Exploring Unknown Reward Matrices with Applications to Mobile Sensor Networks (IJCAI-09)
Full list of relevant publications
 


Teaching

Fall 2014: CptS 483: Introduction to Robotics
Spring 2014: CptS 580-03: Intelligent Agents
 
Previous courses: here  
 

CV

View my CV as:
pdf


Code

Links

Brief Biography

Matthew E. Taylor graduated magna cum laude with a double major in computer science and physics from Amherst College in 2001. After working for two years as a software developer, he began his Ph.D. work at the University of Texas at Austin with an MCD fellowship from the College of Natural Sciences. He received his doctorate from the Department of Computer Sciences in the summer of 2008, supervised by Peter Stone. Matt then completed a two-year postdoctoral research position at the University of Southern California with Milind Tambe and spent 2.5 years as an assistant professor in the computer science department at Lafayette College. He is currently an assistant professor at Washington State University in the School of Electrical Engineering and Computer Science and is a recipient of the National Science Foundation CAREER award. His current research interests include intelligent agents, multi-agent systems, reinforcement learning, transfer learning, and robotics.