
Towards life-long adaptive agents: using metareasoning for combining knowledge-based planning with situated learning

  • Priyam Parashar (a1), Ashok K. Goel (a2), Bradley Sheneman (a3) and Henrik I. Christensen (a4)


We consider task planning for long-lived intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world arising from the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses meta-reasoning to control hierarchical task planning and situated learning, monitor the expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the planning knowledge missing for the new objects. We use occupancy grids as a low-level representation of the high-level expectations, capturing changes in the physical world caused by the added objects, and provide a similarity method for detecting discrepancies between expectations and observations at run-time. The meta-reasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future reuse. We describe experiments in the Minecraft and Gazebo microworlds that demonstrate the efficacy of the architecture and of the learning technique. Tested against an ablated reinforcement learning (RL) version, our results indicate that this form of expectation improves the learning curve for RL while remaining more generic than propositional representations.
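The discrepancy-detection step described above can be sketched in a few lines: compare the occupancy grid the plan expects against the observed grid, and flag a discrepancy when a cell-wise similarity measure falls below a threshold. This is a minimal illustration, not the paper's actual similarity method; the function names, the matching-cells similarity measure, and the threshold value are all assumptions made for the example.

```python
import numpy as np

def grid_similarity(expected: np.ndarray, observed: np.ndarray) -> float:
    """Fraction of cells whose occupancy matches between the expected
    and observed grids (a simple stand-in for the paper's measure)."""
    assert expected.shape == observed.shape
    return float(np.mean(expected == observed))

def detect_discrepancy(expected, observed, threshold=0.95):
    """Flag a discrepancy when similarity drops below the threshold;
    the mismatched cells localize where the learner should focus."""
    sim = grid_similarity(expected, observed)
    if sim < threshold:
        mismatch = np.argwhere(expected != observed)  # cells that differ
        return True, mismatch
    return False, None

# Toy example: a 4x4 expected-free grid vs. an observation in which
# an unknown object occupies one cell the plan expected to be free.
exp = np.zeros((4, 4), dtype=int)
obs = exp.copy()
obs[2, 1] = 1
found, cells = detect_discrepancy(exp, obs)
print(found, cells.tolist())  # True [[2, 1]]
```

In the architecture, the mismatched cells would feed the meta-reasoner, which could form a goal (e.g., learn an action model for the region around the new object) and shape the reinforcement learner's reward accordingly.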



