Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example

The ability to store and manipulate information is a hallmark of computational systems. Recent efforts have made progress in modeling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here we drive a recurrent neural network (RNN) with examples of translated, linearly transformed, or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal c that takes a different value for each example. When trained to replicate the Lorenz inputs, the network learns to autonomously evolve about a Lorenz-shaped manifold, and to continuously interpolate and extrapolate the translation, transformation, and bifurcation of this representation far beyond the training data as the control signal is varied. Finally, we provide a simple but powerful mechanism for how these computations are learned, enabling the principled study and precise design of RNNs.
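As a rough illustration of the setup described above, the sketch below generates Lorenz trajectories translated by an amount tied to a control signal c, drives a recurrent network with each trajectory and its c value, trains it to replicate its input one step ahead, and then runs the network autonomously with an unseen c. This is only a minimal sketch under assumed choices: the abstract does not specify the architecture or training procedure, so an echo-state (reservoir) network trained by ridge regression is used here, and the reservoir size, leak rate, scaling, and ridge penalty are illustrative.

```python
# Minimal sketch (not the authors' code): a reservoir-style RNN trained to
# replicate Lorenz trajectories driven by a control signal c, then run
# autonomously with a new c to extrapolate the learned translation.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Training examples: Lorenz trajectories translated along x by the control value c.
dt, T = 0.02, 60.0
t_eval = np.arange(0, T, dt)
examples = []
for c in [-1.0, 0.0, 1.0]:
    sol = solve_ivp(lorenz, (0, T), [1.0, 1.0, 20.0], t_eval=t_eval, rtol=1e-8)
    u = sol.y.T / 10.0              # scale roughly into the tanh range
    u[:, 0] += c                    # translate the attractor with c
    examples.append((u, c))

# Random leaky-tanh reservoir driven by [u(t), c]; output weights are fit by
# ridge regression to reproduce the input one step ahead (echo-state style).
N, leak, ridge = 400, 0.3, 1e-6
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius ~0.9
Win = rng.uniform(-0.5, 0.5, (N, 4))              # 3 Lorenz dims + c

def run_reservoir(u_seq, c):
    r, states = np.zeros(N), []
    for u in u_seq:
        inp = np.concatenate([u, [c]])
        r = (1 - leak) * r + leak * np.tanh(W @ r + Win @ inp)
        states.append(r.copy())
    return np.array(states)

R, Y, washout = [], [], 200
for u, c in examples:
    states = run_reservoir(u[:-1], c)
    R.append(states[washout:])
    Y.append(u[1:][washout:])
R, Y = np.vstack(R), np.vstack(Y)
Wout = np.linalg.solve(R.T @ R + ridge * np.eye(N), R.T @ Y)

# Autonomous mode: feed the prediction back as input and set c to a value
# never seen in training (c = 2.0) to extrapolate the learned translation.
c_new, r = 2.0, np.zeros(N)
u = examples[1][0][0]
trajectory = []
for _ in range(3000):
    inp = np.concatenate([u, [c_new]])
    r = (1 - leak) * r + leak * np.tanh(W @ r + Win @ inp)
    u = r @ Wout
    trajectory.append(u)
trajectory = np.array(trajectory)   # traces a Lorenz-like attractor shifted by roughly c_new
```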

Session:
Authors: 
Jason Kim, Zhixin Lu, Erfan Nozari, George Pappas and Danielle Bassett
Room: 
3
Date: 
Friday, December 11, 2020 - 18:30 to 18:45
