Computational and Systems Neuroscience (COSYNE) I-22, 2007.

Memory lifetime depends on synaptic meta-plasticity and the size of cell assemblies.

R. Kempter and C. Leibold

Changes of synaptic states are thought to underlie learning and memory. The storage of new memories and the associated synaptic changes necessarily impair previously acquired memory traces, and the smaller this impairment, the longer a network's memory lifetime. Two strategies have been suggested to keep old memories from being overwritten too rapidly while preserving receptiveness to new contents: introducing synaptic meta levels that store the history of synaptic state changes [1], or reducing the number of cells that fire together in an assembly to decrease the interference between memory traces [2]. When computing memory lifetimes, synapse models cannot be considered independently of the size of synchronously active cell assemblies (sparseness), because the postsynaptic depolarization depends on both the presynaptic activity and the synaptic states.
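The overwriting of old traces by new storage events can be illustrated with a minimal simulation in the spirit of the binary-synapse model of [2]. This is a sketch, not the model analyzed in the abstract; the synapse count N and plasticity probability q are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000   # number of binary synapses (illustrative choice)
q = 0.1      # probability that a storage event updates a given synapse
T = 200      # number of memories stored after the tracked one

w = rng.integers(0, 2, N)          # binary weights, random initial state

# Imprint the tracked memory: a fraction q of synapses adopt its pattern.
tracked = rng.integers(0, 2, N)
hit = rng.random(N) < q
w[hit] = tracked[hit]
signal = [np.mean(w == tracked)]   # overlap with tracked memory (0.5 = chance)

# Each subsequent memory overwrites a random fraction q of the synapses,
# so the trace of the tracked memory decays roughly as (1 - q)**t.
for t in range(T):
    new = rng.integers(0, 2, N)
    hit = rng.random(N) < q
    w[hit] = new[hit]
    signal.append(np.mean(w == tracked))
```

The initial overlap is about 0.5 + q/2 and relaxes back to the chance level 0.5 as further memories are stored; a smaller q slows forgetting but also weakens the initial trace, which is the trade-off at issue here.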

We derive memory lifetimes in a randomly coupled recurrent network [3] with synaptic meta-plasticity. By simultaneously optimizing assembly size and synaptic complexity, we find that the maximum memory lifetime coincides with a high level of sparseness and a simple two-state synaptic model. If the sparse-coding limit is infeasible, synapses with a large number of meta states can be beneficial. We discuss two alternative synaptic cascade models with binary weights and find that a serial topology of synaptic state transitions gives rise to larger memory capacities than a model with cross transitions between states. The optimal number of synaptic states grows much faster as a function of network size and connectivity for the serial topology than for the cross-transition model. For both cascade models of synaptically stored memories, however, sparseness of representation outweighs the virtues of meta-plasticity by orders of magnitude.
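A serial meta-plasticity topology can be sketched by chaining 2n states behind each binary weight, so that repeated potentiation (or depression) pushes a synapse deeper into a state from which it is harder to dislodge. This is only a rough illustration of the idea, not the cascade models compared above; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 10_000   # synapses (illustrative)
n = 3        # cascade order: n meta states per expressed weight value
q = 0.5      # plasticity probability per storage event

# Serial topology: states 0 .. 2n-1 form a chain; the expressed binary
# weight is 1 for states >= n. A potentiation event moves the state one
# step up the chain, a depression event one step down; chain ends saturate.
state = rng.integers(0, 2 * n, N)

def store(state, pattern):
    """Imprint one memory: pattern[i] = 1 requests potentiation, 0 depression."""
    plastic = rng.random(N) < q
    step = np.where(pattern == 1, 1, -1)
    return np.where(plastic, np.clip(state + step, 0, 2 * n - 1), state)

tracked = rng.integers(0, 2, N)
state = store(state, tracked)
signal_fresh = np.mean((state >= n) == tracked)   # overlap just after storage

for _ in range(50):                                # 50 interfering memories
    state = store(state, rng.integers(0, 2, N))
signal_old = np.mean((state >= n) == tracked)
```

Because only the expressed weight is binary while the hidden state carries history, a synapse deep in the chain needs several opposing events before its weight flips, which is how meta states slow the erasure of old traces.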

As an example, we derive memory lifetimes for parameters corresponding to the hippocampal CA3 region of rats, where assemblies have been estimated to contain thousands of neurons. In this system, a sparser code may be precluded by the requirement of dynamical stability of the replay of sequential activity patterns [4]. Evaluating both meta-plasticity models in a CA3-like parameter regime yields a maximal lifetime of about 7,000 subsequent memories at an optimal cascade order n=2 for the model with cross transitions, and a lifetime of 13,000 memories at n=3 for the serial topology. We thus conclude that low cascade orders are likely to help increase memory longevity in the hippocampus.

Acknowledgments
We thank S. Fusi, T. Gollisch, R. Schaette, and W. Senn for valuable suggestions. This work was supported by DFG grants (Emmy Noether program: Ke 788/1-3, SFB 618) and the BMBF grant 01GQ0410.

References
[1] Cascade models of synaptically stored memories. S. Fusi, P.J. Drew, and L.F. Abbott, Neuron 45:599-611, 2005.
[2] Dynamic learning in neural networks with material synapses. D.J. Amit and S. Fusi, Neural Computation 6:957-982, 1994.
[3] Memory capacity for sequences in a recurrent network with biological constraints. C. Leibold and R. Kempter, Neural Computation 18:904-941, 2006.
[4] Memory of sequential experience in the hippocampus during slow wave sleep. A.K. Lee and M.A. Wilson, Neuron 36:1183-1194, 2002.
