Shimon Whiteson, Brian Tanner, Matthew E. Taylor, and Peter Stone. Generalized Domains for Empirical Evaluations in Reinforcement Learning. In Proceedings of the Fourth Workshop on Evaluation Methods for Machine Learning at ICML-09, June 2009.
Many empirical results in reinforcement learning are based on a very small set of environments. These results often represent the best algorithm parameters that were found after an ad-hoc tuning or fitting process. We argue that presenting tuned scores from a small set of environments leads to method overfitting, wherein results may not generalize to similar environments. To address this problem, we advocate empirical evaluations using generalized domains: parameterized problem generators that explicitly encode variations in the environment to which the learner should be robust. We argue that evaluating across a set of these generated problems offers a more meaningful evaluation of reinforcement learning algorithms.
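As a rough illustration of the abstract's central idea (not taken from the paper itself), a generalized domain can be sketched as a parameterized problem generator: a procedure that samples environment instances from explicit parameter ranges, so that an algorithm is scored across many sampled variants rather than on one hand-tuned environment. The sketch below is a minimal, hypothetical Python example; the EnvVariant fields and the parameter ranges are assumptions loosely modeled on Mountain Car, not the generators the authors used.

    import random
    from dataclasses import dataclass

    @dataclass
    class EnvVariant:
        """One sampled environment instance from a hypothetical generalized domain."""
        gravity: float        # perturbed dynamics parameter (assumed range below)
        goal_position: float  # perturbed goal location (assumed range below)

    def sample_variants(rng: random.Random, n: int) -> list:
        """Sample n environment variants the learner should be robust to.

        The uniform ranges here are illustrative assumptions, not values
        from the paper; a real generalized domain would encode whatever
        variations the benchmark designer considers relevant.
        """
        return [
            EnvVariant(
                gravity=rng.uniform(0.0020, 0.0030),
                goal_position=rng.uniform(0.45, 0.60),
            )
            for _ in range(n)
        ]

    if __name__ == "__main__":
        rng = random.Random(42)  # fixed seed so the evaluation set is reproducible
        # An evaluation would run the learning algorithm on each sampled variant
        # and report aggregate performance, rather than a single tuned score.
        for env in sample_variants(rng, 5):
            print(env)

Scoring an algorithm as the mean (or distribution) of returns over such a sampled set is one way to operationalize the paper's argument that tuned scores on a single fixed environment invite method overfitting.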
@inproceedings{ICMLWS09-Whiteson,
  author    = {Shimon Whiteson and Brian Tanner and Matthew E.\ Taylor and Peter Stone},
  title     = {Generalized Domains for Empirical Evaluations in Reinforcement Learning},
  booktitle = {Proceedings of the Fourth Workshop on Evaluation Methods for Machine Learning at {ICML}-09},
  month     = {June},
  year      = {2009},
  wwwnote   = {<a href="http://www.site.uottawa.ca/ICML09WS/">Fourth annual workshop on Evaluation Methods for Machine Learning</a>},
  abstract  = {Many empirical results in reinforcement learning are based on a very small set of environments. These results often represent the best algorithm parameters that were found after an ad-hoc tuning or fitting process. We argue that presenting tuned scores from a small set of environments leads to method overfitting, wherein results may not generalize to similar environments. To address this problem, we advocate empirical evaluations using generalized domains: parameterized problem generators that explicitly encode variations in the environment to which the learner should be robust. We argue that evaluating across a set of these generated problems offers a more meaningful evaluation of reinforcement learning algorithms.}
}