Research Output
Generalized Early Stopping in Evolutionary Direct Policy Search
  Lengthy evaluation times are common in many optimization problems, such as direct policy search tasks, especially when they involve conducting evaluations in the physical world, e.g. in robotics applications. Often, when evaluating a solution over a fixed time period, it becomes clear that the objective value will not increase with additional computation time (for example, when a two-wheeled robot continuously spins on the spot). In such cases, it makes sense to stop the evaluation early to save computation time. However, most approaches to stopping the evaluation are problem-specific and need to be specifically designed for the task at hand. Therefore, we propose an early stopping method for direct policy search. The proposed method only looks at the objective value at each time step and requires no problem-specific knowledge. We test the introduced stopping criterion in five direct policy search environments drawn from games, robotics and classic control domains, and show that it can save up to 75% of the computation time. We also compare it with problem-specific stopping criteria and show that it performs comparably while being more generally applicable.
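
  The abstract describes the criterion only at a high level. Below is a minimal, hypothetical Python sketch of an early-stopping monitor that, like the proposed method, inspects nothing but the objective value accumulated at each time step; the class name and the `patience` / `min_improvement` parameters are illustrative assumptions, not the criterion defined in the paper.

  ```python
  # Illustrative sketch only: stop an episode evaluation once the running objective
  # has not improved for a fixed number of time steps. This is NOT the authors'
  # exact stopping criterion; names and parameters here are hypothetical.

  class EarlyStopper:
      """Generic early-stopping check based only on observed objective values."""

      def __init__(self, patience: int = 100, min_improvement: float = 1e-6):
          self.patience = patience                  # steps to wait without improvement
          self.min_improvement = min_improvement    # smallest change counted as progress
          self.best = float("-inf")
          self.steps_since_improvement = 0

      def should_stop(self, objective_so_far: float) -> bool:
          # Record improvements of the running objective; otherwise count stalled steps.
          if objective_so_far > self.best + self.min_improvement:
              self.best = objective_so_far
              self.steps_since_improvement = 0
          else:
              self.steps_since_improvement += 1
          return self.steps_since_improvement >= self.patience


  # Hypothetical usage inside a direct policy search evaluation loop
  # (assuming a Gym-like environment interface):
  #
  # stopper = EarlyStopper(patience=100)
  # total_reward = 0.0
  # for t in range(max_steps):
  #     obs, reward, done, info = env.step(policy(obs))
  #     total_reward += reward
  #     if done or stopper.should_stop(total_reward):
  #         break   # cut the evaluation short and save computation time
  ```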

  • Type:

    Article

  • Date:

    20 March 2024

  • Publication Status:

    In Press

  • DOI:

    10.1145/3653024

  • ISSN:

    2688-299X

  • Funders:

    Engineering and Physical Sciences Research Council (EPSRC)

Citation

Arza, E., Le Goff, L. K., & Hart, E. (in press). Generalized Early Stopping in Evolutionary Direct Policy Search. ACM Transactions on Evolutionary Learning and Optimization. https://doi.org/10.1145/3653024

Authors

Etor Arza, Léni K. Le Goff, Emma Hart

Keywords

Applied computing, Engineering, Mathematics of computing, Mathematical optimization, Computing methodologies, Simulation evaluation, Optimization, Early Stopping, Policy Learning
