Matthew E. Taylor's Publications

Representation Transfer for Reinforcement Learning

Matthew E. Taylor and Peter Stone. Representation Transfer for Reinforcement Learning. In AAAI 2007 Fall Symposium on Computational Approaches to Representation Change during Learning and Development, November 2007.
2007 AAAI Fall Symposium: Computational Approaches to Representation Change during Learning and Development

Download

[PDF] 144.9 kB

Abstract

Transfer learning problems are typically framed as leveraging knowledge learned on a source task to improve learning on a related, but different, target task. Current transfer learning methods are able to successfully transfer knowledge from a source reinforcement learning task into a target task, reducing learning time. However, the complementary task of transferring knowledge between agents with different internal representations has not been well explored. The goal in both types of transfer problems is the same: reduce the time needed to learn the target with transfer, relative to learning the target without transfer. This work defines representation transfer, contrasts it with task transfer, and introduces two novel algorithms. Additionally, we show that representation transfer algorithms can also be successfully used for task transfer, providing an empirical connection between the two problems. These algorithms are fully implemented in a complex multiagent domain, and experiments demonstrate that transferring the learned knowledge between different representations is both possible and beneficial.
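
As an illustration of the transfer-learning framing above (time needed to learn the target task with transfer versus without), the following is a minimal Python sketch. The toy chain-walk environment, the tabular q_learn routine, and the direct copy of Q-values are illustrative assumptions for this page, not the representation-transfer algorithms from the paper, whose agents use different internal representations.

# Hypothetical sketch of the with-transfer vs. without-transfer comparison
# described in the abstract; the environment and learner are toy assumptions.
import random
from collections import defaultdict

def q_learn(env_step, actions, episodes, q=None, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning; `q` may be pre-initialized by a transfer step."""
    q = q if q is not None else defaultdict(float)
    steps_per_episode = []
    for _ in range(episodes):
        state, done, steps = 0, False, 0
        while not done:
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env_step(state, action)
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state, steps = next_state, steps + 1
        steps_per_episode.append(steps)
    return q, steps_per_episode

def chain_env(goal):
    """Toy chain-walk task: move left/right along integer states until `goal`."""
    def step(state, action):
        next_state = max(0, state + (1 if action == 1 else -1))
        done = next_state == goal
        return next_state, (1.0 if done else -0.01), done
    return step

actions = [0, 1]

# Source task (short chain) and target task (longer chain).
q_source, _ = q_learn(chain_env(goal=5), actions, episodes=50)

# "Transfer": initialize the target agent's Q-values from the source agent.
# Here the two representations happen to coincide; representation transfer
# would additionally map between differing internal representations.
q_init = defaultdict(float, q_source)
_, with_transfer = q_learn(chain_env(goal=10), actions, episodes=50, q=q_init)
_, without_transfer = q_learn(chain_env(goal=10), actions, episodes=50)

print("avg steps with transfer:   ", sum(with_transfer) / len(with_transfer))
print("avg steps without transfer:", sum(without_transfer) / len(without_transfer))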

BibTeX Entry

@InProceedings(AAAI07-Symposium,
        author="Matthew E.\ Taylor and Peter Stone",
        title="Representation Transfer for Reinforcement Learning",
        Booktitle="AAAI 2007 Fall Symposium on Computational
        Approaches to Representation Change during Learning and
        Development",
        month="November",year="2007",
        abstract={Transfer learning problems are typically framed as
        leveraging knowledge learned on a source task to improve
        learning on a related, but different, target task. Current
        transfer learning methods are able to successfully transfer
        knowledge from a source reinforcement learning task into a
        target task, reducing learning time. However, the
        complementary task of transferring knowledge between agents
        with different internal representations has not been well
        explored. The goal in both types of transfer problems is the
        same: reduce the time needed to learn the target with
        transfer, relative to learning the target without
        transfer. This work defines representation transfer, contrasts
        it with task transfer, and introduces two novel
        algorithms. Additionally, we show representation transfer
        algorithms can also be successfully used for task transfer,
        providing an empirical connection between the two
        problems. These algorithms are fully implemented in a complex
        multiagent domain and experiments demonstrate that
        transferring the learned knowledge between different
        representations is both possible and beneficial.  },
        wwwnote={<a
        href="http://yertle.isi.edu/~clayton/aaai-fss07/index.php/Welcome">2007
        AAAI Fall Symposium: Computational Approaches to
        Representation Change during Learning and Development</a>},
)
