Learning to shoot goals, Analysing the Learning Process and the Resulting Policies
by Markus Geipel and Michael Beetz
Abstract:
Reinforcement learning is a very general unsupervised learning mechanism. Because of this generality, however, it does not scale well to tasks that require inferring subtasks, in particular when the subtasks change dynamically and the environment is adversarial. One of the most challenging reinforcement learning tasks so far has been the 3-versus-2 keepaway task in the RoboCup simulation league. In this paper we apply reinforcement learning to an even more challenging task: attacking the opponent's goal. The main contribution of this paper is the empirical analysis of a portfolio of mechanisms for scaling reinforcement learning towards learning attack policies in simulated robot soccer.
Reference:
Markus Geipel and Michael Beetz, "Learning to shoot goals, Analysing the Learning Process and the Resulting Policies", In RoboCup-2006: Robot Soccer World Cup X, Springer-Verlag, Berlin, 2006. To be published.
Bibtex Entry:
@InProceedings{geipel06learning,
  author =	 {Markus Geipel and Michael Beetz},
  title =	 {Learning to shoot goals, Analysing the Learning
                  Process and the Resulting Policies},
  editor =	 {Gerhard Lakemeyer and Elizabeth Sklar and Domenico
                  Sorrenti and Tomoichi Takahashi},
  note =	 {to be published},
  year =	 {2006},
  booktitle =	 {RoboCup-2006: Robot Soccer World Cup X},
  organization = {RoboCup},
  publisher =	 {Springer-Verlag, Berlin},
  bib2html_pubtype ={Refereed Conference Paper},
  bib2html_rescat = {Robocup},
  bib2html_groups = {IAS},
  abstract =	 {Reinforcement learning is a very general
                  unsupervised learning mechanism. Because of this
                  generality, however, it does not scale well to
                  tasks that require inferring subtasks, in
                  particular when the subtasks change dynamically
                  and the environment is adversarial. One of the
                  most challenging reinforcement learning tasks so
                  far has been the 3-versus-2 keepaway task in the
                  RoboCup simulation league. In this paper we apply
                  reinforcement learning to an even more challenging
                  task: attacking the opponent's goal. The main
                  contribution of this paper is the empirical
                  analysis of a portfolio of mechanisms for scaling
                  reinforcement learning towards learning attack
                  policies in simulated robot soccer.}
}