MURAL - Maynooth University Research Archive Library



    Learning state space trajectories in recurrent neural networks


    Pearlmutter, Barak A. (1989) Learning state space trajectories in recurrent neural networks. Neural Computation, 1 (2). pp. 263-269. ISSN 0899-7667

    Text: BP_learning state.pdf
    Download (530kB)
    Abstract

    Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown.
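    The quantity ∂E/∂w_ij that the abstract refers to can be illustrated numerically. The sketch below is my own minimal illustration, not the paper's exact procedure (the paper derives the gradient exactly via an adjoint system; here it is merely estimated by finite differences). All function names and parameters are hypothetical: a small continuous-time network dy/dt = -y + σ(Wy) is Euler-integrated, E is defined as the squared deviation of the state trajectory from a target trajectory, and plain gradient descent in the weights reduces E.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def trajectory(w, y0, steps=50, dt=0.1):
        """Euler-integrate the continuous-time network dy/dt = -y + sigmoid(w @ y)."""
        y = y0.copy()
        traj = [y.copy()]
        for _ in range(steps):
            y = y + dt * (-y + sigmoid(w @ y))
            traj.append(y.copy())
        return np.array(traj)

    def error(w, y0, target):
        """E: summed squared deviation of the state trajectory from the target."""
        return np.sum((trajectory(w, y0, steps=len(target) - 1) - target) ** 2)

    def grad(w, y0, target, eps=1e-5):
        """Estimate dE/dw_ij by central finite differences (a stand-in for the
        exact gradient the paper computes)."""
        g = np.zeros_like(w)
        for i in range(w.shape[0]):
            for j in range(w.shape[1]):
                wp, wm = w.copy(), w.copy()
                wp[i, j] += eps
                wm[i, j] -= eps
                g[i, j] = (error(wp, y0, target) - error(wm, y0, target)) / (2 * eps)
        return g

    rng = np.random.default_rng(0)
    n = 3
    w = rng.normal(scale=0.5, size=(n, n))
    y0 = np.full(n, 0.1)
    # Hypothetical target: the trajectory produced by a fixed "teacher" weight matrix.
    target = trajectory(rng.normal(scale=0.5, size=(n, n)), y0)

    e0 = error(w, y0, target)
    for _ in range(150):                 # gradient descent in the weights
        w -= 0.01 * grad(w, y0, target)
    print(error(w, y0, target), "<", e0)
    ```

    For anything beyond a toy network, finite differences are far too expensive (two full integrations per weight per step); the point of the paper is an exact O(1)-per-weight gradient computed by integrating an adjoint system backward in time.
    
    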
    Item Type: Article
    Keywords: state space trajectories; recurrent neural networks
    Academic Unit: Faculty of Science and Engineering > Computer Science
    Faculty of Science and Engineering > Research Institutes > Hamilton Institute
    Item ID: 5486
    Depositing User: Barak Pearlmutter
    Date Deposited: 13 Oct 2014 15:46
    Journal or Publication Title: Neural Computation
    Publisher: MIT Press
    Refereed: Yes
    Related URLs:
    URI: https://mu.eprints-hosting.org/id/eprint/5486
    Use Licence: This item is available under a Creative Commons Attribution-NonCommercial-ShareAlike licence (CC BY-NC-SA).

