Simulation of a Motivated Learning Agent
Abstract
In this paper we discuss how to design a simple motivated learning agent with symbolic I/O using a simulation environment built in the NeoAxis game engine. The purpose of this work is to explore the autonomous development of motivations and memory in agents placed in a virtual environment. The approach we took speeds up the development process, bypassing the need to build a physical embodied agent and reducing the learning effort. By abstracting low-level motor actions such as grasping or walking into symbolic commands, we remove the need to learn elementary motions. Instead, we use several basic primitive motor procedures, which can be composed into more complex procedures. Furthermore, by simulating the agent's environment, we both improve and simplify the learning process. Only a few adaptive learning variables are associated with the agent and its environment, so learning takes less time than it would in a more complex real-world environment.
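To make the idea of symbolic I/O concrete, the following is a minimal sketch of how primitive motor procedures might be issued as symbolic commands and composed into higher-level procedures. The primitive names (walk_to, grasp), the fetch procedure, and the environment stub are illustrative assumptions for this sketch, not the paper's actual interface to NeoAxis.

```python
# Sketch: symbolic motor primitives composing into a higher-level procedure.
# All names here are hypothetical, not the paper's actual API.
from typing import Callable


class SymbolicEnvironment:
    """Stand-in for the simulated environment: it accepts symbolic commands
    and reports success, so no low-level motor control has to be learned."""

    def execute(self, command: str, target: str) -> bool:
        print(f"executing: {command}({target})")
        return True  # assume the simulated action succeeds in this sketch


def primitive(command: str) -> Callable[[SymbolicEnvironment, str], bool]:
    """A primitive is just a named symbolic command sent to the environment."""
    def run(env: SymbolicEnvironment, target: str) -> bool:
        return env.execute(command, target)
    return run


walk_to = primitive("walk_to")
grasp = primitive("grasp")


def fetch(env: SymbolicEnvironment, obj: str) -> bool:
    """Composite procedure built from primitives: go to the object, grasp it."""
    return walk_to(env, obj) and grasp(env, obj)


if __name__ == "__main__":
    env = SymbolicEnvironment()
    fetch(env, "food")  # -> executing: walk_to(food), then grasp(food)
```

Because each primitive already resolves to a complete motion in the simulated environment, learning can operate entirely at the level of which symbolic procedures to invoke and compose, rather than on elementary motor control.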