Matthew E. Taylor's Publications

Sorted by Date

2014

  1. Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E. Taylor. Online Multi-Task Learning for Policy Gradient Methods. In Proceedings of the 31st International Conference on Machine Learning (ICML), June 2014. 25% acceptance rate
    Details     Download: [pdf] (3.1MB )  

  2. Haitham Bou Ammar, Eric Eaton, Matthew E. Taylor, Decebal C. Mocanu, Kurt Driessens, Gerhard Weiss, and Karl Tuyls. An Automated Measure of MDP Similarity for Transfer in Reinforcement Learning. In Proceedings of the Machine Learning for Interactive Systems workshop (at AAAI-14), July 2014.
    Details     Download: [pdf] (456.0kB )  

  3. Tim Brys, Ann Nowé, Daniel Kudenko, and Matthew E. Taylor. Combining Multiple Correlated Reward and Shaping Signals by Measuring Confidence. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), July 2014. 28% acceptance rate
    Details     Download: [pdf] (529.7kB )  

  4. Tim Brys, Anna Harutyunyan, Peter Vrancx, Matthew E. Taylor, Daniel Kudenko, and Ann Nowé. Multi-Objectivization of Reinforcement Learning Problems by Reward Shaping. In Proceedings of the IEEE 2014 International Joint Conference on Neural Networks (IJCNN), July 2014. 59% acceptance rate
    Details     Download: [pdf] (524.2kB )  

  5. Tim Brys, Matthew E. Taylor, and Ann Nowé. Using Ensemble Techniques and Multi-Objectivization to Solve Reinforcement Learning Problems. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI), August 2014. 41% acceptance rate for short papers
    Details     Download: [pdf] (151.7kB )  

  6. Tim Brys, Kristof Van Moffaert, Ann Nowé, and Matthew E. Taylor. Adaptive Objective Selection for Correlated Objectives in Multi-Objective Reinforcement Learning (Extended Abstract). In The 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2014. Extended abstract: 24% acceptance rate for papers, additional 22% for extended abstracts
    Details     Download: [pdf] (182.1kB )  

  7. Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas. An Autonomous Transfer Learning Algorithm for TD-Learners. In Proceedings of the 8th Hellenic Conference on Artificial Intelligence (SETN), May 2014. 50% acceptance rate
    Details     Download: [pdf] (249.9kB )  

  8. Chris HolmesParker, Matthew E. Taylor, Adrian Agogino, and Kagan Tumer. CLEANing the Reward: Counterfactual Actions Remove Exploratory Action Noise in Multiagent Learning. In Proceedings of the 2014 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT), August 2014. 43% acceptance rate
    Details     Download: [pdf] (560.5kB )  

  9. Chris HolmesParker, Matthew E. Taylor, Adrian Agogino, and Kagan Tumer. CLEANing the Reward: Counterfactual Actions Remove Exploratory Action Noise in Multiagent Learning (Extended Abstract). In The 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2014. Extended abstract: 24% acceptance rate for papers, additional 22% for extended abstracts
    Details     Download: [pdf] (195.4kB )  

  10. Chris HolmesParker, Matthew E. Taylor, Yusen Zhan, and Kagan Tumer. Exploiting Structure and Agent-Centric Rewards to Promote Coordination in Large Multiagent Systems. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-14), May 2014.
    Details     Download: [pdf] (586.4kB )  

  11. Robert Loftin, Bei Peng, James MacGlashan, Michael Littman, Matthew E. Taylor, David Roberts, and Jeff Huang. Learning Something from Nothing: Leveraging Implicit Human Feedback Strategies. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 2014.
    Details     Download: [pdf] (434.7kB )  

  12. Robert Loftin, Bei Peng, James MacGlashan, Michael L. Littman, Matthew E. Taylor, Jeff Huang, and David L. Roberts. A Strategy-Aware Technique for Learning Behaviors from Discrete Human Feedback. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), July 2014. 28% acceptance rate
    Details     Download: [pdf] (667.3kB )  

  13. James MacGlashan, Michael L. Littman, Robert Loftin, Bei Peng, David Roberts, and Matthew E. Taylor. Training an Agent to Ground Commands with Reward and Punishment. In Proceedings of the Machine Learning for Interactive Systems workshop (at AAAI-14), July 2014.
    Details     Download: [pdf] (439.2kB )  

  14. Matthew E. Taylor, Nicholas Carboni, Anestis Fachantidis, Ioannis Vlahavas, and Lisa Torrey. Reinforcement learning agents providing advice in complex video games. Connection Science, 26(1):45–63, 2014.
    Details     Download: [pdf] (587.5kB )  

  15. Tim Brys, Tong T. Pham, and Matthew E. Taylor. Distributed learning and multi-objectivity in traffic light control. Connection Science, 26(1):65–83, 2014.
    Details     Download: [pdf] (756.1kB )  

  16. Yusen Zhan, Anestis Fachantidis, Ioannis Vlahavas, and Matthew E. Taylor. Agents Teaching Humans in Reinforcement Learning Tasks. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-14), May 2014.
    Details     Download: [pdf] (422.7kB )  

2013

  1. Haitham Bou Ammar, Matthew E. Taylor, Karl Tuyls, and Gerhard Weiss. Reinforcement Learning Transfer using a Sparse Coded Inter-Task Mapping. In LNAI Post-proceedings of the European Workshop on Multi-agent Systems, Springer-Verlag, 2013.
    Details     Download: [pdf] (535.0kB )  

  2. Haitham Bou Ammar, Decebal Constantin Mocanu, Matthew E. Taylor, Kurt Driessens, Karl Tuyls, and Gerhard Weiss. Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), September 2013. 25% acceptance rate
    Details     Download: [pdf] (519.2kB )  

  3. Haitham Bou Ammar, Decebal Constantin Mocanu, Matthew E. Taylor, Kurt Driessens, Karl Tuyls, and Gerhard Weiss. Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines. In The 25th Benelux Conference on Artificial Intelligence (BNAIC), November 2013.
    Details     Download: [pdf] (85.8kB )  

  4. Ravi Balasubramanian and Matthew E. Taylor. Learning for Mobile-Robot Error Recovery (Extended Abstract). In The AAAI 2013 Spring Symposium --- Designing Intelligent Robots: Reintegrating AI II, March 2013.
    Designing Intelligent Robots
    Details     Download: [pdf] (490.0kB )  

  5. Nicholas Carboni and Matthew E. Taylor. Preliminary Results for 1 vs. 1 Tactics in StarCraft. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-13), May 2013.
    ALA-13
    Details     Download: [pdf] (386.5kB )  

  6. Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas. Autonomous Selection of Inter-Task Mappings in Transfer Learning (extended abstract). In The AAAI 2013 Spring Symposium --- Lifelong Machine Learning, March 2013.
    Lifelong Machine Learning
    Details     Download: [pdf] (215.4kB )  

  7. Tong Pham, Aly Tawfik, and Matthew E. Taylor. A Simple, Naive Agent-based Model for the Optimization of a System of Traffic Lights: Insights from an Exploratory Experiment. In Proceedings of the Conference on Agent-Based Modeling in Transportation Planning and Operations, September 2013.
    Details     Download: [pdf] (2.6MB )  

  8. Tong Pham, Tim Brys, and Matthew E. Taylor. Learning Coordinated Traffic Light Control. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-13), May 2013.
    ALA-13
    Details     Download: [pdf] (471.3kB )  

  9. Lisa Torrey and Matthew E. Taylor. Teaching on a Budget: Agents Advising Agents in Reinforcement Learning. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2013. 23% acceptance rate
    AAMAS-13
    Details     Download: [pdf] (253.0kB )  

2012

  1. Matthew Adams, Robert Loftin, Matthew E. Taylor, Michael Littman, and David Roberts. An Empirical Analysis of RL's Drift From Its Behaviorism Roots. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-12), June 2012.
    ALA-12
    Details     Download: [pdf] (338.4kB )  

  2. Haitham Bou Ammar, Karl Tuyls, Matthew E. Taylor, Kurt Driessens, and Gerhard Weiss. Reinforcement Learning Transfer via Sparse Coding. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), June 2012. 20% acceptance rate
    AAMAS-12
    Details     Download: [pdf] (286.7kB )  

  3. Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas. Transfer Learning via Multiple Inter-Task Mappings. In Scott Sanner and Marcus Hutter, editors, Recent Advances in Reinforcement Learning, Lecture Notes in Artificial Intelligence, pp. 225–236, Springer-Verlag, Berlin, 2012.
    Details     Download: [pdf] (176.6kB )  

  4. Sanjeev Sharma and Matthew E. Taylor. Autonomous Waypoint Generation Strategy for On-Line Navigation in Unknown Environments. In IROS Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, October 2012.
    Details     Download: [pdf] (901.0kB )  

  5. Lisa Torrey and Matthew E. Taylor. Towards Student/Teacher Learning in Sequential Decision Tasks. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), June 2012. Extended Abstract: 20% acceptance rate for papers, additional 23% for extended abstracts
    AAMAS-12
    Details     Download: [pdf] (138.6kB )  

  6. Lisa Torrey and Matthew E. Taylor. Help an Agent Out: Student/Teacher Learning in Sequential Decision Tasks. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-12), June 2012.
    ALA-12
    Details     Download: [pdf] (380.1kB )  

2011

  1. Marcos A. M. Vieira, Matthew E. Taylor, Prateek Tandon, Manish Jain, Ramesh Govindan, Gaurav S. Sukhatme, and Milind Tambe. Mitigating Multi-path Fading in a Mobile Mesh Network. Ad Hoc Networks Journal, 2011.
    Details     Download: [pdf] (1007.7kB )  

  2. Scott Alfeld, Kumera Berkele, Stephen A. Desalvo, Tong Pham, Daniel Russo, Lisa Yan, and Matthew E. Taylor. Reducing the Team Uncertainty Penalty: Empirical and Theoretical Approaches. In Proceedings of the Workshop on Multiagent Sequential Decision Making in Uncertain Domains (at AAMAS-11), May 2011.
    MSDM-11
    Details     Download: [pdf] (604.8kB )  

  3. Haitham Bou Ammar, Matthew E. Taylor, and Karl Tuyls. Common Sub-Space Transfer for Reinforcement Learning Tasks (Poster). In The 23rd Benelux Conference on Artificial Intelligence (BNAIC), November 2011. 44% overall acceptance rate
    BNAIC-11
    Details     Download: (unavailable)

  4. Haitham Bou Ammar, Matthew E. Taylor, Karl Tuyls, and Gerhard Weiss. Reinforcement Learning Transfer using a Sparse Coded Inter-Task Mapping. In Proceedings of the European Workshop on Multi-agent Systems, November 2011.
    EUMAS-11
    Details     Download: [pdf] (359.2kB )  

  5. Haitham Bou Ammar and Matthew E. Taylor. Common Subspace Transfer for Reinforcement Learning Tasks. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-11), May 2011.
    ALA-11
    Details     Download: [pdf] (445.0kB )  

  6. Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas. Transfer Learning via Multiple Inter-Task Mappings. In Proceedings of European Workshop on Reinforcement Learning (at ECML-11), September 2011.
    EWRL-11
    Details     Download: [pdf] (175.5kB )  

  7. W. Bradley Knox, Matthew E. Taylor, and Peter Stone. Understanding Human Teaching Modalities in Reinforcement Learning Environments: A Preliminary Report. In Proceedings of the Agents Learning Interactively from Human Teachers workshop (at IJCAI-11), July 2011.
    ALIHT-11
    Details     Download: [pdf] (372.4kB )  

  8. Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, and Milind Tambe. Towards Addressing Model Uncertainty: Robust Execution-time Coordination for Teamwork (Short Paper). In The IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT), August 2011. Short Paper: 21% acceptance rate for papers, additional 28% for short papers
    IAT-11
    Details     Download: [pdf] (189.7kB )  

  9. Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, and Milind Tambe. Teamwork in Distributed POMDPs: Execution-time Coordination Under Model Uncertainty (Poster). In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2011. Extended Abstract: 22% acceptance rate for papers, additional 25% for extended abstracts
    AAMAS-11
    Details     Download: [pdf] (201.8kB )  

  10. Jun-young Kwak, Zhengyu Yin, Rong Yang, Matthew E. Taylor, and Milind Tambe. Robust Execution-time Coordination in DEC-POMDPs Under Model Uncertainty. In Proceedings of the Workshop on Multiagent Sequential Decision Making in Uncertain Domains (at AAMAS-11), May 2011.
    MSDM-11
    Details     Download: [pdf] (922.2kB )  

  11. Paul Scerri, Balajee Kannan, Pras Velagapudi, Kate Macarthur, Peter Stone, Matthew E. Taylor, John Dolan, Alessandro Farinelli, Archie Chapman, Bernadine Dias, and George Kantor. Flood Disaster Mitigation: A Real-world Challenge Problem for Multi-Agent Unmanned Surface Vehicles. In Proceedings of the Autonomous Robots and Multirobot Systems workshop (at AAMAS-11), May 2011.
    ARMS-11
    Details     Download: [pdf] (765.1kB )  

  12. Matthew E. Taylor, Christopher Kiekintveld, and Milind Tambe. Evaluating Deployed Decision Support Systems for Security: Challenges, Arguments, and Approaches. In Milind Tambe, editor, Security Games: Theory, Deployed Applications, Lessons Learned, pp. 254–283, Cambridge University Press, 2011.
    Details     Download: [pdf] (2.6MB )  

  13. Matthew E. Taylor, Manish Jain, Christopher Kiekintveld, Jun-young Kwak, Rong Yang, Zhengyu Yin, and Milind Tambe. Two Decades of Multiagent Teamwork Research: Past, Present, and Future. In C. Guttmann, F. Dignum, and M. Georgeff, editors, Collaborative Agents - REsearch and Development (CARE) 2009-2010, Lecture Notes in Artificial Intelligence, Springer-Verlag, 2011.
    Details     Download: [pdf] (617.2kB )  

  14. Matthew E. Taylor, Halit Bener Suay, and Sonia Chernova. Integrating Reinforcement Learning with Human Demonstrations of Varying Ability. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2011. 22% acceptance rate
    AAMAS-11
    Details     Download: [pdf] (157.4kB )  

  15. Matthew E. Taylor, Brian Kulis, and Fei Sha. Metric Learning for Reinforcement Learning Agents. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2011. 22% acceptance rate
    AAMAS-11
    Details     Download: [pdf] (250.2kB )  

  16. Matthew E. Taylor, Manish Jain, Prateek Tandon, Makoto Yokoo, and Milind Tambe. Distributed On-line Multi-Agent Optimization Under Uncertainty: Balancing Exploration and Exploitation. Advances in Complex Systems, 2011.
    Details     Download: [pdf] (830.8kB )  

  17. Matthew E. Taylor and Peter Stone. An Introduction to Inter-task Transfer for Reinforcement Learning. AI Magazine, 32(1):15–34, 2011.
    Details     Download: [pdf] (237.0kB )  

  18. Matthew E. Taylor. Model Assignment: Reinforcement Learning in a Generalized Mario Domain. In Proceedings of the Second Symposium on Educational Advances in Artificial Intelligence, August 2011.
    EAAI-11
    Assignment Webpage
    Details     Download: (unavailable)

  19. Matthew E. Taylor. Teaching Reinforcement Learning with Mario: An Argument and Case Study. In Proceedings of the Second Symposium on Educational Advances in Artificial Intelligence, August 2011.
    EAAI-11
    Details     Download: [pdf] (1.3MB )  

  20. Matthew E. Taylor, Halit Bener Suay, and Sonia Chernova. Using Human Demonstrations to Improve Reinforcement Learning. In The AAAI 2011 Spring Symposium --- Help Me Help You: Bridging the Gaps in Human-Agent Collaboration, March 2011.
    HMHY2011
    Details     Download: [pdf] (116.5kB )  

  21. Jason Tsai, Natalie Fridman, Emma Bowring, Matthew Brown, Shira Epstein, Gal Kaminka, Stacy Marsella, Andrew Ogden, Inbal Rika, Ankur Sheel, Matthew E. Taylor, Xuezhi Wang, Avishay Zilka, and Milind Tambe. ESCAPES: Evacuation Simulation with Children, Authorities, Parents, Emotions, and Social Comparison. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2011. 22% acceptance rate
    AAMAS-11
    Details     Download: [pdf] (2.4MB )  

  22. Shimon Whiteson, Brian Tanner, Matthew E. Taylor, and Peter Stone. Protecting Against Evaluation Overfitting in Empirical Reinforcement Learning. In Proceedings of the IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), April 2011.
    ADPRL 2011
    Details     Download: [pdf] (165.3kB )  

2010

  1. Scott Alfeld, Matthew E. Taylor, Prateek Tandon, and Milind Tambe. Towards a Theoretic Understanding of DCEE. In Proceedings of the Distributed Constraint Reasoning workshop (at AAMAS-10), May 2010.
    DCR-10
    Details     Download: [pdf] (378.1kB )  

  2. Samuel Barrett, Matthew E. Taylor, and Peter Stone. Transfer Learning for Reinforcement Learning on a Physical Robot. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-10), May 2010.
    ALA-10
    Details     Download: [pdf] (684.9kB )  

  3. Marc Ponsen, Matthew E. Taylor, and Karl Tuyls. Abstraction and Generalization in Reinforcement Learning. In Matthew E. Taylor and Karl Tuyls, editors, Adaptive Agents and Multi-Agent Systems IV, pp. 1–33, Springer-Verlag, 2010.
    Details     Download: [pdf] (1.5MB )  

  4. Matthew E. Taylor, Katherine E. Coons, Behnam Robatmili, Bertrand A. Maher, Doug Burger, and Kathryn S. McKinley. Evolving Compiler Heuristics to Manage Communication and Contention. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI), July 2010. Nectar Track, 25% acceptance rate
    AAAI-2010. This paper is based on results presented in our earlier PACT-08 paper.
    Details     Download: [pdf] (127.8kB )  

  5. Matthew E. Taylor, Manish Jain, Yanquin Jin, Makoto Yokoo, and Milind Tambe. When Should There be a "Me" in "Team"? Distributed Multi-Agent Optimization Under Uncertainty. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2010. 24% acceptance rate
    Supplemental material is available at http://teamcore.usc.edu/dcop/.
    Details     Download: [pdf] (2.9MB )  

  6. Matthew E. Taylor, Christopher Kiekintveld, Craig Western, and Milind Tambe. A Framework for Evaluating Deployed Security Systems: Is There a Chink in your ARMOR?. Informatica, 34(2):129–139, 2010.
    Details     Download: [pdf] (402.6kB )  

  7. Matthew E. Taylor and Karl Tuyls, editors. Adaptive Agents and Multi-Agent Systems IV, Lecture Notes in Computer Science, Springer-Verlag, 2010.
    Many chapters are extended versions of papers appearing at the AAMAS 2009 workshop on Adaptive and Learning Agents. Publisher's website: http://www.springer.com/computer/ai/book/978-3-642-11813-5
    Details     Download: (unavailable)

  8. Matthew E. Taylor and Sonia Chernova. Integrating Human Demonstration and Reinforcement Learning: Initial Results in Human-Agent Transfer. In Proceedings of the Agents Learning Interactively from Human Teachers workshop (at AAMAS-10), May 2010.
    ALIHT-10
    Details     Download: [pdf] (142.6kB )  

  9. Shimon Whiteson, Matthew E. Taylor, and Peter Stone. Critical Factors in the Empirical Performance of Temporal Difference and Evolutionary Methods for Reinforcement Learning. Journal of Autonomous Agents and Multi-Agent Systems, 21(1):1–27, 2010.
    Details     Download: [pdf] (760.6kB )  

2009

  1. Manish Jain, Matthew E. Taylor, Makoto Yokoo, and Milind Tambe. DCOPs Meet the Real World: Exploring Unknown Reward Matrices with Applications to Mobile Sensor Networks. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI), July 2009. 26% acceptance rate
    IJCAI-2009
    Details     Download: [pdf] (250.8kB )  

  2. Manish Jain, Matthew E. Taylor, Makoto Yokoo, and Milind Tambe. DCOPs Meet the Real World: Exploring Unknown Reward Matrices with Applications to Mobile Sensor Networks. In Proceedings of the Third International Workshop on Agent Technology for Sensor Networks (at AAMAS-09), May 2009.
    ATSN-2009
    Superseded by the IJCAI-09 conference paper DCOPs Meet the Real World: Exploring Unknown Reward Matrices with Applications to Mobile Sensor Networks.
    Details     Download: (unavailable)

  3. Jun-young Kwak, Pradeep Varakantham, Matthew E. Taylor, Janusz Marecki, Paul Scerri, and Milind Tambe. Exploiting Coordination Locales in Distributed POMDPs via Social Model Shaping. In Proceedings of the Fourth Workshop on Multi-agent Sequential Decision-Making in Uncertain Domains (at AAMAS-09), May 2009.
    MSDM-2009
    Superseded by the ICAPS-09 conference paper Exploiting Coordination Locales in Distributed POMDPs via Social Model Shaping.
    Details     Download: [pdf] (449.4kB )  

  4. Matthew E. Taylor. Transfer in Reinforcement Learning Domains, Studies in Computational Intelligence, Springer-Verlag, 2009.
    A book based on my PhD thesis.
    Publisher's Webpage.
    Details     Download: (unavailable)

  5. Matthew E. Taylor and Peter Stone. Transfer Learning for Reinforcement Learning Domains: A Survey. Journal of Machine Learning Research, 10(1):1633–1685, 2009.
    Details     Download: [pdf] (399.8kB )  

  6. Matthew E. Taylor, Manish Jain, Prateek Tandon, and Milind Tambe. Using DCOPs to Balance Exploration and Exploitation in Time-Critical Domains. In Proceedings of the IJCAI 2009 Workshop on Distributed Constraint Reasoning, July 2009.
    DCR-2009
    Details     Download: [pdf] (698.3kB )  

  7. Matthew E. Taylor, Chris Kiekintveld, Craig Western, and Milind Tambe. Is There a Chink in Your ARMOR? Towards Robust Evaluations for Deployed Security Systems. In Proceedings of the IJCAI 2009 Workshop on Quantitative Risk Analysis for Security Applications, July 2009.
    QRASA-2009
    Superseded by the journal article A Framework for Evaluating Deployed Security Systems: Is There a Chink in your ARMOR?.
    Details     Download: [pdf] (939.1kB )  

  8. Matthew E. Taylor and Peter Stone. Categorizing Transfer for Reinforcement Learning. Poster at the Multidisciplinary Symposium on Reinforcement Learning, June 2009.
    MSRL-09.
    Details     Download: [pdf] (144.5kB )  

  9. Matthew E. Taylor, Chris Kiekintveld, Craig Western, and Milind Tambe. Beyond Runtimes and Optimality: Challenges and Opportunities in Evaluating Deployed Security Systems. In Proceedings of the AAMAS-09 Workshop on Agent Design: Advancing from Practice to Theory, May 2009.
    ADAPT-2009
    Details     Download: [pdf] (71.5kB )  

  10. Matthew E. Taylor. Assisting Transfer-Enabled Machine Learning Algorithms: Leveraging Human Knowledge for Curriculum Design. In The AAAI 2009 Spring Symposium on Agents that Learn from Human Teachers, March 2009.
    AAAI 2009 Spring Symposium on Agents that Learn from Human Teachers
    Details     Download: [pdf] (39.8kB )  

  11. Jason Tsai, Emma Bowring, Shira Epstein, Natalie Fridman, Prakhar Garg, Gal Kaminka, Andrew Ogden, Milind Tambe, and Matthew E. Taylor. Agent-based Evacuation Modeling: Simulating the Los Angeles International Airport. In Proceedings of the Workshop on Emergency Management: Incident, Resource, and Supply Chain Management, November 2009.
    EMWS09-2009
    Details     Download: [pdf] (68.6kB )  

  12. Pradeep Varakantham, Jun-young Kwak, Matthew E. Taylor, Janusz Marecki, Paul Scerri, and Milind Tambe. Exploiting Coordination Locales in Distributed POMDPs via Social Model Shaping. In Proceedings of the Nineteenth International Conference on Automated Planning and Scheduling (ICAPS), September 2009. 34% acceptance rate
    ICAPS-2009
    Details     Download: [pdf] (1.2MB )  

  13. Shimon Whiteson, Brian Tanner, Matthew E. Taylor, and Peter Stone. Generalized Domains for Empirical Evaluations in Reinforcement Learning. In Proceedings of the Fourth Workshop on Evaluation Methods for Machine Learning at ICML-09, June 2009.
    Fourth annual workshop on Evaluation Methods for Machine Learning
    Details     Download: [pdf] (90.2kB )  

2008

  1. Katherine E. Coons, Behnam Robatmili, Matthew E. Taylor, Bertrand A. Maher, Kathryn McKinley, and Doug Burger. Feature Selection and Policy Optimization for Distributed Instruction Placement Using Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 32–42, October 2008. 19% acceptance rate
    PACT-2008
    Details     Download: [pdf] (297.8kB )  

  2. Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring Instances for Model-Based Reinforcement Learning. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pp. 488–505, September 2008. 19% acceptance rate
    ECML-2008
    Details     Download: [pdf] (304.9kB )  

  3. Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Autonomous Transfer for Reinforcement Learning. In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 283–290, May 2008. 22% acceptance rate
    AAMAS-2008
    Details     Download: [pdf] (233.7kB )  

  4. Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Transfer Learning and Intelligence: an Argument and Approach. In Proceedings of the First Conference on Artificial General Intelligence (AGI), March 2008. 50% acceptance rate
    AGI-2008
    A video of the talk is available here.
    Details     Download: [pdf] (149.0kB )  

  5. Matthew E. Taylor. Autonomous Inter-Task Transfer in Reinforcement Learning Domains. Ph.D. Thesis, Department of Computer Sciences, The University of Texas at Austin, 2008. Available as Technical Report UT-AI-TR-08-5.
    Details     Download: [pdf] (2.3MB )  

  6. Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring Instances for Model-Based Reinforcement Learning. In The Adaptive Learning Agents and Multi-Agent Systems (ALAMAS+ALAG) workshop at AAMAS, May 2008.
    AAMAS 2008 workshop on Adaptive Learning Agents and Multi-Agent Systems
    Superseded by the ECML-08 conference paper Transferring Instances for Model-Based Reinforcement Learning.
    Details     Download: (unavailable)

2007

  1. Mazda Ahmadi, Matthew E. Taylor, and Peter Stone. IFSA: Incremental Feature-Set Augmentation for Reinforcement Learning Tasks. In Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1120–1127, May 2007. 22% acceptance rate, Finalist for Best Student Paper
    Best Student Paper Nomination at AAMAS-2007.
    Details     Download: [pdf] (261.6kB )  

  2. Matthew E. Taylor and Peter Stone. Cross-Domain Transfer for Reinforcement Learning. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML), June 2007. 29% acceptance rate
    ICML-2007
    Details     Download: [pdf] (220.7kB )  

  3. Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Temporal Difference and Policy Search Methods for Reinforcement Learning: An Empirical Comparison. In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI), pp. 1675–1678, July 2007. Nectar Track, 38% acceptance rate
    AAAI-2007
    Details     Download: [pdf] (99.7kB )  

  4. Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Transfer via Inter-Task Mappings in Policy Search Reinforcement Learning. In Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 156–163, May 2007. 22% acceptance rate
    AAMAS-2007
    Details     Download: [pdf] (222.5kB )  

  5. Matthew E. Taylor, Cynthia Matuszek, Pace Reagan Smith, and Michael Witbrock. Guiding Inference with Policy Search Reinforcement Learning. In Proceedings of the Twentieth International FLAIRS Conference (FLAIRS), May 2007. 52% acceptance rate
    FLAIRS-2007
    Details     Download: [pdf] (138.5kB )  

  6. Matthew E. Taylor, Cynthia Matuszek, Bryan Klimt, and Michael Witbrock. Autonomous Classification of Knowledge into an Ontology. In Proceedings of the Twentieth International FLAIRS Conference (FLAIRS), May 2007. 52% acceptance rate
    FLAIRS-2007
    Details     Download: [pdf] (107.8kB )  

  7. Matthew E. Taylor, Peter Stone, and Yaxin Liu. Transfer Learning via Inter-Task Mappings for Temporal Difference Learning. Journal of Machine Learning Research, 8(1):2125–2167, 2007.
    Details     Download: [pdf] (499.9kB )  

  8. Matthew E. Taylor and Peter Stone. Towards Reinforcement Learning Representation Transfer (Poster). In The Sixth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 683–685, May 2007. Poster: 22% acceptance rate for talks, additional 25% for posters.
    AAMAS-2007.
    Superseded by the symposium paper Representation Transfer for Reinforcement Learning.
    Details     Download: (unavailable)

  9. Matthew E. Taylor, Katherine E. Coons, Behnam Robatmili, Doug Burger, and Kathryn S. McKinley. Policy Search Optimization for Spatial Path Planning. In NIPS-07 workshop on Machine Learning for Systems Problems, December 2007. (Two-page extended abstract.)
    NIPS 2007 workshop on Machine Learning for Systems Problems
    Superseded by the PACT-08 conference paper Feature Selection and Policy Optimization for Distributed Instruction Placement Using Reinforcement Learning.
    Details     Download: (unavailable)

  10. Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Accelerating Search with Transferred Heuristics. In ICAPS-07 workshop on AI Planning and Learning, September 2007.
    ICAPS 2007 workshop on AI Planning and Learning
    Details     Download: [pdf] (139.9kB )  

  11. Matthew E. Taylor and Peter Stone. Representation Transfer for Reinforcement Learning. In AAAI 2007 Fall Symposium on Computational Approaches to Representation Change during Learning and Development, November 2007.
    2007 AAAI Fall Symposium: Computational Approaches to Representation Change during Learning and Development
    Details     Download: [pdf] (144.9kB )  

  12. Shimon Whiteson, Matthew E. Taylor, and Peter Stone. Empirical Studies in Action Selection for Reinforcement Learning. Adaptive Behavior, 15(1), 2007.
    Details     Download: [pdf] (828.6kB )  

  13. Shimon Whiteson, Matthew E. Taylor, and Peter Stone. Adaptive Tile Coding for Value Function Approximation. Technical Report AI-TR-07-339, University of Texas at Austin, 2007.
    Details     Download: [pdf] (329.4kB )  

2006

  1. Peter Stone, Gregory Kuhlmann, Matthew E. Taylor, and Yaxin Liu. Keepaway Soccer: From Machine Learning Testbed to Benchmark. In Itsuki Noda, Adam Jacoff, Ansgar Bredenfeld, and Yasutake Takahashi, editors, RoboCup-2005: Robot Soccer World Cup IX, pp. 93–105, Springer-Verlag, Berlin, 2006. 28% acceptance rate at RoboCup-2005
    Simulations of keepaway referenced in the paper and the keepaway software are available online.
    Official version available from the Publisher's Webpage. © Springer-Verlag
    Details     Download: [pdf] (567.7kB )  

  2. Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Comparing Evolutionary and Temporal Difference Methods for Reinforcement Learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1321–1328, July 2006. 46% acceptance rate, Best Paper Award in GA track (of 85 submissions)
    Best Paper Award (Genetic Algorithms Track) at GECCO-2006.
    Details     Download: [pdf] (235.9kB )  

  3. Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Transfer Learning for Policy Search Methods. In ICML workshop on Structural Knowledge Transfer for Machine Learning, June 2006.
    ICML-2006 workshop on Structural Knowledge Transfer for Machine Learning.
    Superseded by the conference paper Transfer via Inter-Task Mappings in Policy Search Reinforcement Learning.
    Details     Download: (unavailable)

  4. Shimon Whiteson, Matthew E. Taylor, and Peter Stone. Adaptive Tile Coding for Reinforcement Learning. In NIPS workshop on: Towards a New Reinforcement Learning?, December 2006.
    NIPS-2006 (Poster).
    Superseded by the technical report Adaptive Tile Coding for Value Function Approximation.
    Details     Download: (unavailable)

2005

  1. Matthew E. Taylor, Peter Stone, and Yaxin Liu. Value Functions for RL-Based Behavior Transfer: A Comparative Study. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI), July 2005. 18% acceptance rate.
    AAAI-2005.
    Superseded by the journal article Transfer Learning via Inter-Task Mappings for Temporal Difference Learning.
    Details     Download: [pdf] (147.3kB )  

  2. Matthew E. Taylor and Peter Stone. Behavior Transfer for Value-Function-Based Reinforcement Learning. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 53–59, July 2005. 25% acceptance rate.
    AAMAS-2005.
    Superseded by the journal article Transfer Learning via Inter-Task Mappings for Temporal Difference Learning.
    Details     Download: [pdf] (230.4kB )  

2004

  1. Matthew E. Taylor and Peter Stone. Speeding up Reinforcement Learning with Behavior Transfer. In AAAI 2004 Fall Symposium on Real-life Reinforcement Learning, October 2004.
    Superseded by the journal article Transfer Learning via Inter-Task Mappings for Temporal Difference Learning.
    Details     Download: [pdf] (144.9kB )  


Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:11