Matthew E. Taylor
509-335-6457 (but I prefer email)
Allred Distinguished Professorship in Artificial Intelligence
School of Electrical Engineering and Computer Science
Washington State University
Pullman, WA 99164
Director of the IRL Lab
If you'd like to meet (physically or virtually),
please schedule a meeting: meetme.so/taylorm
Recent paper acceptances:
- IJCAI-15: Reinforcement Learning from Demonstration through Shaping by Tim Brys, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Matthew E. Taylor, and Ann Nowé (poster + long talk)
- RLDM-15: Reward Shaping by Demonstration by Halit Bener Suay, Tim Brys, Matthew E. Taylor, and Sonia Chernova (poster)
- RLDM-15: Ensembles of Shapings by Tim Brys, Anna Harutyunyan, Matthew E. Taylor, and Ann Nowé (poster + talk)
- AAMAS-15: Policy Transfer using Reward Shaping by Tim Brys, Anna Harutyunyan, Matthew E. Taylor, and Ann Nowé
- AAMAS-15: Bidding in Non-Stationary Energy Markets by Pablo Hernandez-Leal, Matthew E. Taylor, Enrique Munoz de Cote, and L. Enrique Sucar
- ALA workshop at AAMAS-15: Learning Against Non-Stationary Opponents in Double Auctions by Pablo Hernandez-Leal, Matthew E. Taylor, Enrique Munoz de Cote, and L. Enrique Sucar
- IUI-15: Towards integrating real-time crowd advice with reinforcement learning by Gabriel V. de la Cruz Jr., Bei Peng, Walter S. Lasecki, and Matthew E. Taylor
- AAAI-15 workshop: Generating real-time crowd advice to improve reinforcement learning agents by Gabriel V. de la Cruz Jr., Bei Peng, Walter S. Lasecki, and Matthew E. Taylor
- AAAI-15: Unsupervised Cross-Domain Transfer in Policy Gradient Reinforcement Learning via Manifold Alignment by Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E. Taylor
- Adaptive Behavior: Transfer Learning with Probabilistic Mapping Selection by Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas
I have worked with Milind Tambe as part of the TEAMCORE research group and am a former member of the Learning Agents Research Group, directed by Peter Stone.
My research focuses on agents: physical or virtual entities that interact with their environments. My main goals are to enable individual agents, and teams of agents, to
- learn tasks in real-world environments that are not fully known when the agents are designed;
- perform multiple tasks, rather than just a single task; and
- robustly coordinate with, and reason about, other agents.
Additionally, I am interested in exploring how agents can learn from humans, whether the human is explicitly teaching the agent, the agent is passively observing the human, or the agent is actively cooperating with the human on a task.
A selection of current research projects can be found at the IRL Lab website.
Spring 2015: CptS 580: Reinforcement Learning
Fall 2014: CptS 483: Introduction to Robotics
Previous courses: here
View my CV as: pdf
Matthew E. Taylor
graduated magna cum laude with a double major in computer
science and physics from Amherst College in 2001. After working for
two years as a software developer, he began his Ph.D. work at the University of Texas at Austin with an MCD
fellowship from the College of Natural Sciences. He received his
doctorate from the Department of Computer Sciences in the summer of 2008, supervised by Peter Stone.
Matt then completed a two-year postdoctoral research position at the University of Southern California with Milind Tambe and spent 2.5 years as an assistant professor in the computer science department at Lafayette College. He is currently an assistant professor in the School of Electrical Engineering and Computer Science at Washington State University and a recipient of the National Science Foundation CAREER award.
His current research interests include intelligent agents, multi-agent systems, reinforcement learning, transfer learning, and robotics.