
A Trick to Remember

Phys. Rev. Focus 13, 12
Researchers solve a serious problem with a computer model for how the brain stores memories.
Figure caption (Phys. Rev. Lett. 92, 108101 (2004)): About face. A “memory” pattern can be stored in these pixels, even though each one continues to oscillate from black to white and back again. An improvement in such neural networks allows them to “recall” memories.

The human brain has been called the most complex object in the Universe, so it’s no surprise that researchers have yet to understand how it stores memories. Yet physicists hope to learn something about memory by studying simplified computer models called neural networks, which have some properties in common with real brains. In the 12 March PRL researchers correct a serious flaw in a type of neural network that aims to better simulate real neuron behavior, and show that it can retrieve stable “memories.” Networks such as these demonstrate that complex, brain-like behavior can arise from a collection of simple components, and they remain the best computer model for the operation of memory.

In the early 1980s, John Hopfield, now at Princeton University, proposed a type of neural network with a capacity for recalling patterns, or “memories.” It consists of “neurons” that take values of one or minus one, each updated according to a weighted sum of the values of all the other neurons. If you start by assigning values to each neuron and then let the system run on its own, the values change with time as the neurons continually respond to one another. But eventually the network settles down to an unchanging pattern of values. Each choice of weightings creates a network with several stable “template” patterns of neuron values, dubbed “memories.” When you start the network in some other pattern, it moves to the closest template pattern: it “recalls” the closest memory.
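To make the recall procedure concrete, here is a minimal sketch of such a network in Python. It assumes the standard Hebbian prescription for the weights and simple sign-threshold updates; the pattern sizes, noise level, and variable names are illustrative choices, not details from the paper.

```python
# A minimal sketch of a binary Hopfield-style network (illustrative only).
# Assumptions: Hebbian weights built from the stored patterns and simple
# asynchronous sign-threshold updates; names and sizes are our own choices.
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Build a symmetric weight matrix from +/-1 memory patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)          # no self-coupling
    return w

def recall(w, state, sweeps=20):
    """Repeatedly update each neuron by the sign of its weighted input."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store two random "memories", then recall one from a corrupted version.
memories = rng.choice([-1, 1], size=(2, 100))
w = hebbian_weights(memories)
noisy = memories[0].copy()
noisy[rng.choice(100, size=15, replace=False)] *= -1   # flip 15 of 100 neurons
print("overlap with memory 0:", recall(w, noisy) @ memories[0] / 100)
```

Started from the corrupted input, the network drives the overlap with the nearest stored pattern back toward one, which is the “recall” described above.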

A real neuron “at rest,” however, doesn’t maintain a fixed voltage value; it fires repeated voltage spikes at a steady rate. So researchers have devised networks whose elements are coupled oscillators, with values that cycle in time. For example, a memory state might consist of neurons oscillating in phase or exactly out of phase with one another, rather than remaining as a static pattern.

But these more realistic networks suffer from a stability problem, explains Takashi Nishikawa of Southern Methodist University in Dallas. The desired memory states are no more stable than other states that the network can fall into. When presented with some new pattern, the oscillatory network, unlike the original Hopfield model, does not automatically move toward the memory state closest to that pattern.

Nishikawa and his colleagues at Arizona State University in Tempe have found a simple way to correct this deficiency. They changed the weighting formula, which depends on phase differences between neurons, to give more weight to neurons when they are at certain phases of their cycles. Increasing the magnitude of this extra weight makes the memory patterns of the network stable, the researchers found. However, if the extra weight is too strong, other, random states of the network also become stable. When that happens, the network may not produce a memory state in response to an input pattern. But when the extra weight is adjusted correctly, the researchers concluded, the network shows a recall ability comparable to the classical Hopfield network.
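The following sketch illustrates the general idea with Kuramoto-type phase oscillators: Hebbian couplings encode the memories as in-phase or anti-phase relationships, and an extra second-harmonic coupling of adjustable strength stands in for the “extra weight” that stabilizes them. The precise coupling function and parameter values used by Nishikawa and colleagues may differ; this is only a hedged illustration of the technique.

```python
# A hedged sketch of an oscillatory associative memory (illustrative only).
# Assumptions: Kuramoto-type phase oscillators, Hebbian couplings that encode
# the memories, and a second-harmonic term of strength eps standing in for the
# "extra weight" described above. The exact coupling function and parameter
# values in the PRL paper may differ.
import numpy as np

rng = np.random.default_rng(1)
n, eps = 100, 0.6   # eps too small: memories unstable; too large: spurious stable states

memories = rng.choice([-1, 1], size=(2, n))
c = memories.T @ memories / n          # Hebbian couplings between oscillator phases
np.fill_diagonal(c, 0.0)

def step(theta, dt=0.05):
    """One Euler step: pattern couplings plus the stabilizing second harmonic."""
    diff = theta[None, :] - theta[:, None]            # diff[i, j] = theta_j - theta_i
    drive = (c * np.sin(diff)).sum(axis=1) + (eps / n) * np.sin(2 * diff).sum(axis=1)
    return theta + dt * drive

# Encode memory 0 as phases 0 or pi, perturb them, and let the network relax.
theta = np.pi * (memories[0] < 0) + 0.3 * rng.standard_normal(n)
for _ in range(4000):
    theta = step(theta)

# Overlap of 1 means the phase pattern matches memory 0 (up to a global shift).
overlap = np.abs(np.mean(memories[0] * np.exp(1j * theta)))
print("recall overlap:", round(overlap, 3))
```

With the second-harmonic term switched off, the binary-phase memory pattern is only marginally held; with a moderate value of eps the perturbed phases relax back to it, mirroring the stabilization the researchers describe.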

How any of this relates to real neurological memory is hard to know, Nishikawa admits. Synchronous firing of biological neurons is only one possible way to encode memories, and the connections between neurons that underlie such phenomena are poorly understood. Zhaoping Li of University College London suggests that the techniques devised by Nishikawa’s team may in the end have more value for computer science and engineering than for neuroscience. Nevertheless, she adds, it will take ideas from all these disciplines to bring about a genuine understanding of memory and other brain functions.

–David Lindley

David Lindley is a freelance science writer in Alexandria, Virginia.


Subject Areas

Nonlinear Dynamics
