Towards life-long adaptive agents: using metareasoning for combining knowledge-based planning with situated learning

Published online by Cambridge University Press:  18 December 2018

Priyam Parashar
Contextual Robotics Institute, UC San Diego, La Jolla, CA 92093, USA; e-mail: pparashar@ucsd.edu

Ashok K. Goel
Design & Intelligence Laboratory, Georgia Institute of Technology, Atlanta, GA 30308, USA; e-mail: goel@cc.gatech.edu

Bradley Sheneman
American Family Insurance, Chicago, IL; e-mail: bradsheneman@gmail.com

Henrik I. Christensen
Contextual Robotics Institute, UC San Diego, La Jolla, CA 92093, USA; e-mail: hichristensen@ucsd.edu

Abstract

We consider task planning for long-living intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world due to the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses metareasoning to control hierarchical task planning and situated learning, monitor expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the missing planning knowledge relevant to the new objects. We use occupancy grids as a low-level representation for the high-level expectations to capture changes in the physical world due to the added objects, and provide a similarity method for detecting discrepancies between the expectations and the observations at run-time; the metareasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future reuse. We describe our experiments in the Minecraft and Gazebo microworlds to demonstrate the efficacy of the architecture and the technique for learning. We test our approach against an ablated reinforcement learning (RL) version, and our results indicate that this form of expectation enhances the learning curve for RL while being more generic than propositional representations.
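The discrepancy-detection step described above can be illustrated with a small sketch: an expected occupancy grid (projected from the plan) is compared against an observed grid, and cells that disagree become candidate goal regions for the learner. The function names, the Jaccard-style similarity metric, and the threshold below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def grid_similarity(expected, observed):
    """Jaccard-style overlap between two binary occupancy grids (hypothetical metric)."""
    e = expected.astype(bool)
    o = observed.astype(bool)
    union = np.logical_or(e, o).sum()
    if union == 0:
        return 1.0  # both grids empty: perfect agreement
    return np.logical_and(e, o).sum() / union

def detect_discrepancy(expected, observed, threshold=0.9):
    """Flag a discrepancy when similarity falls below a threshold,
    and return the cells where expectation and observation differ."""
    sim = grid_similarity(expected, observed)
    mismatch = np.argwhere(expected.astype(bool) != observed.astype(bool))
    return sim < threshold, mismatch

# A new, unmodelled object appears in one cell of the observed grid.
expected = np.zeros((4, 4), dtype=int)
observed = expected.copy()
observed[2, 2] = 1
flag, cells = detect_discrepancy(expected, observed)
```

In a full system, the metareasoner would translate `cells` into a goal and a shaped reward for the reinforcement learner rather than simply flagging the mismatch.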

Type
Special Issue Contribution
Copyright
© Cambridge University Press, 2018 


