    Model-Based Reflection for Agent Evolution

    File: GIT-CC-00-34.pdf (176.7 KB)
    Date: 2000
    Author: Murdock, J. William
    Abstract
    Adaptability is a key characteristic of intelligence. My research explores techniques for enabling software agents to adapt themselves as their functional requirements change incrementally. In the domain of manufacturing, for example, a software agent designed to assemble physical artifacts may be given a new goal of disassembling artifacts. As another example, in the internet domain, a software agent designed to browse some types of documents may be called upon to browse a document of another type. In particular, my research examines the use of reflection (an agent's knowledge and reasoning about itself) to accomplish evolution (incremental adaptation of an agent's capabilities). I have developed a language called TMKL (Task-Method-Knowledge Language) that enables modeling of an agent's composition and functioning. A TMKL model of an agent explicitly represents the tasks the agent addresses, the methods it applies, and the knowledge it uses. TMKL models are hierarchical, i.e., they represent tasks, methods, and knowledge at multiple levels of abstraction. I have also developed a reasoning shell called REM (Reflective Evolutionary Mind) which provides support for the execution and evolution of agents represented in TMKL. REM employs a variety of strategies for evolving TMKL agents. Some of these strategies are purely model-based: knowledge of composition and functioning encoded in TMKL directly enables adaptation. REM also employs two traditional artificial intelligence and machine learning techniques: generative planning and reinforcement learning. The combination of model-based adaptation, generative planning, and reinforcement learning constitutes a mechanism for reflective agent evolution which is capable of addressing a variety of problems to which none of these individual approaches alone is suited. My research demonstrates the computational feasibility of this mechanism using experiments involving a variety of intelligent software agents in a variety of domains.
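
    To make the abstract's description of TMKL concrete, the following is a minimal, hypothetical sketch of how a task-method-knowledge model and its hierarchical execution might be represented. The class names, fields, and execution strategy below are illustrative assumptions only; they are not the actual TMKL syntax or the REM shell.

    ```python
    # Hypothetical sketch of a TMKL-style agent model: tasks, methods, and knowledge,
    # with methods either primitive or decomposed into subtasks (the hierarchy).
    # Names and structure are assumptions for illustration, not actual TMKL.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Optional


    @dataclass
    class Task:
        """A goal the agent can pursue, described by required and produced knowledge."""
        name: str
        requires: List[str] = field(default_factory=list)
        produces: List[str] = field(default_factory=list)
        methods: List["Method"] = field(default_factory=list)


    @dataclass
    class Method:
        """One way of accomplishing a task: a primitive procedure,
        or a decomposition into subtasks at a lower level of abstraction."""
        name: str
        subtasks: List[Task] = field(default_factory=list)
        procedure: Optional[Callable[[Dict[str, object]], Dict[str, object]]] = None


    def execute(task: Task, knowledge: Dict[str, object]) -> Dict[str, object]:
        """Execute a task via its first method, recursing through subtasks."""
        for method in task.methods:
            if method.procedure is not None:
                knowledge.update(method.procedure(knowledge))
                return knowledge
            for subtask in method.subtasks:
                knowledge = execute(subtask, knowledge)
            return knowledge
        raise RuntimeError(f"No method available for task {task.name!r}")


    # Toy example echoing the manufacturing domain: an "assemble" task
    # decomposed into two primitive subtasks.
    fetch = Task("fetch-parts", produces=["parts"],
                 methods=[Method("fetch", procedure=lambda k: {"parts": ["a", "b"]})])
    join = Task("join-parts", requires=["parts"], produces=["artifact"],
                methods=[Method("join", procedure=lambda k: {"artifact": tuple(k["parts"])})])
    assemble = Task("assemble-artifact",
                    methods=[Method("assemble-by-steps", subtasks=[fetch, join])])

    print(execute(assemble, {}))  # {'parts': ['a', 'b'], 'artifact': ('a', 'b')}
    ```

    An explicit model of this kind is what makes model-based adaptation possible in principle: because tasks, methods, and knowledge requirements are represented as inspectable data rather than buried in code, a reflective layer can locate and modify the relevant pieces when the agent's goals change.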
    URI: http://hdl.handle.net/1853/6597
    Collections
    • College of Computing Technical Reports [506]
