Volume 8, Issue 2, 2013
Lakhman K.V., Burtsev M.S.

Short-Term Memory Mechanisms in the Goal-Directed Behavior of the Neural Network Agents

Mathematical Biology & Bioinformatics. 2013;8(2):419-431.

doi: 10.17537/2013.8.419.

References

  1. Botvinick MM, Niv Y, Barto AC. Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition. 2009;113:262-280. doi: 10.1016/j.cognition.2008.08.011
  2. Sutton RS, Barto AG. Reinforcement Learning: An Introduction. MIT Press; 1998.
  3. Sutton RS, Precup D, Singh S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence. 1999;112:181-211. doi: 10.1016/S0004-3702(99)00052-1
  4. Sutton RS, Rafols EJ, Koop A. Temporal abstraction in temporal-difference networks. In: Proceedings of NIPS-18. MIT Press; 2006. P. 1313-1320.
  5. Sutton RS, Modayil J, Delp M, Degris T, Pilarski PM, White A, Precup D. Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In: The 10th International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems; 2011. V. 2. P. 761-768.
  6. Barto AG, Mahadevan S. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems. 2003;13(1-2):41-77. doi: 10.1023/A:1022140919877
  7. Singh S, Lewis RL, Barto AG. Where do rewards come from? In: Proceedings of the 31st Annual Meeting of the Cognitive Science Society. Cognitive Science Society; 2009. P. 2601-2606.
  8. Sandamirskaya Y, Schöner G. An embodied account of serial order: How instabilities drive sequence generation. Neural Networks. 2010;23(10):1164-1179. doi: 10.1016/j.neunet.2010.07.012
  9. Komarov MA, Osipov GV, Burtsev MS. Adaptive functional systems: Learning with chaos. Chaos. 2010;20(4):045119. doi: 10.1063/1.3521250
  10. Floreano D, Mondada F. Automatic creation of an autonomous agent: genetic evolution of a neural-network driven robot. In: Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats 3. MIT Press; 1994. P. 421-430.
  11. Floreano D, Dürr P, Mattiussi C. Neuroevolution: from architectures to learning. Evolutionary Intelligence. 2008;1:47-62. doi: 10.1007/s12065-007-0002-4
  12. Schrum J, Miikkulainen R. Evolving multimodal networks for multitask games. IEEE Transactions on Computational Intelligence and AI in Games. 2012;4(2):94-111. doi: 10.1109/TCIAIG.2012.2193399
  13. Kaelbling LP, Littman ML, Moore AW. Reinforcement learning: a survey. Journal of Artificial Intelligence Research. 1996;4:237-285.
  14. Hochreiter S, Informatik FF, Bengio Y, Frasconi P, Schmidhuber J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In: Field Guide to Dynamical Recurrent Networks. Eds. Kolen J., Kremer S. IEEE Press; 2001.
  15. Botvinick MM, Plaut DC. Short-term memory for serial order: A recurrent neural network model. Psychological Review. 2006;113:201-233. doi: 10.1037/0033-295X.113.2.201
  16. Grossberg S. Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics. 1973;52(3):213-257.
  17. Anokhin P. Biology and Neurophysiology of the Conditioned Reflex and Its Role in Adaptive Behavior. Pergamon Press; 1974.
  18. Edelman G. Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books; 1987.
  19. Taylor JS, Raes J. Duplication and divergence: the evolution of new genes and old ideas. Annual Review of Genetics. 2004;38:615-643. doi: 10.1146/annurev.genet.38.072902.092831
  20. Stanley KO, Miikkulainen R. Evolving neural networks through augmenting topologies. Evolutionary Computation. 2002;10(2):99-127. doi: 10.1162/106365602320169811
Published in Russian.
