Matthew E. Taylor's Publications

Transfer Learning via Inter-Task Mappings for Temporal Difference Learning

Matthew E. Taylor, Peter Stone, and Yaxin Liu. Transfer Learning via Inter-Task Mappings for Temporal Difference Learning. Journal of Machine Learning Research, 8(1):2125–2167, 2007.

Abstract

Temporal difference (TD) learning has become a popular reinforcement learning technique in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms have often been found slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. Using transfer via inter-task mapping, agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain.
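
To make the transfer step concrete, here is a minimal tabular sketch in Python of initializing a target task's action-value function through handcoded inter-task mappings. The function and variable names here are illustrative assumptions, not the paper's implementation; the paper itself transfers the learned weights of CMAC, ANN, and RBF function approximators in Keepaway rather than entries of a lookup table.

def transfer_q(q_source, target_states, target_actions, map_state, map_action):
    """Initialize a target-task Q-function from a learned source-task one.

    map_state and map_action are the handcoded inter-task mappings: each
    sends a target-task state (or action) to its analogous source-task
    state (or action).
    """
    q_target = {}
    for s in target_states:
        for a in target_actions:
            # Seed each target pair with the value of its source analogue,
            # so TD learning on the target task starts from informed values
            # instead of from scratch.
            q_target[(s, a)] = q_source[(map_state(s), map_action(a))]
    return q_target

For example, in moving from 3 vs. 2 Keepaway to 4 vs. 3, map_state could project the extra keeper's and taker's features onto their closest analogues in the smaller task, so every 4 vs. 3 state-action pair inherits a sensible starting value rather than zero.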

BibTeX Entry

@Article{JMLR07-taylor,
  author   = "Matthew E.\ Taylor and Peter Stone and Yaxin Liu",
  title    = "Transfer Learning via Inter-Task Mappings for Temporal Difference Learning",
  journal  = "Journal of Machine Learning Research",
  year     = "2007",
  volume   = "8",
  number   = "1",
  pages    = "2125--2167",
  abstract = "Temporal difference (TD) learning has become a popular
    reinforcement learning technique in recent years. TD methods, relying
    on function approximators to generalize learning to novel situations,
    have had some experimental successes and have been shown to exhibit
    some desirable properties in theory, but the most basic algorithms
    have often been found slow in practice. This empirical result has
    motivated the development of many methods that speed up reinforcement
    learning by modifying a task for the learner or helping the learner
    better generalize to novel situations. This article focuses on
    generalizing across tasks, thereby speeding up learning, via a novel
    form of transfer using handcoded task relationships. We compare
    learning on a complex task with three function approximators, a
    cerebellar model arithmetic computer (CMAC), an artificial neural
    network (ANN), and a radial basis function (RBF), and empirically
    demonstrate that directly transferring the action-value function can
    lead to a dramatic speedup in learning with all three. Using transfer
    via inter-task mapping, agents are able to learn one task and then
    markedly reduce the time it takes to learn a more complex task. Our
    algorithms are fully implemented and tested in the RoboCup soccer
    Keepaway domain.",
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10