The most pleasing musical chords have simple mathematical relationships between the sound frequencies within them, but the source of this perception is mysterious. A recent mathematical model suggests that the key may be the rhythmically consistent firing of neurons in response to a harmonious pair of frequencies. Now the researchers who developed the model report in the 2 September Physical Review Letters that they have quantified the effect by calculating the information content of their model’s neural signals and showing that it increases for tone pairs that sound more pleasant. The model may also provide insight into sensations other than hearing.
Going back to Pythagoras in 500 BCE, people have noticed that pairs of notes with simple frequency ratios, such as tones separated by an octave (2:1) or a perfect fifth (3:2), produce a more tranquil sound than, say, a minor second (16:15). Hearing the difference doesn’t require musical training, as even infants and animals respond to it. Recent research suggests that the sensation of harmony, or “consonance,” is not simply the result of the way sound waves combine; it arises from the processing of sound into electrical signals. “The behavioral preference of consonant chords is due to some basic principles of neural functionality,” says Bernardo Spagnolo of the University of Palermo in Italy.
Other physicists have modeled rather complex neural “circuits,” but Spagnolo and his colleagues restricted themselves to a simple three-neuron system that likely reflects the way neural signals travel from the ear to the brain. Two of the neurons are “sensory” neurons, each stimulated by a different audio frequency in the inner ear. Both send their electrical signals to a third, so-called interneuron, which relays a signal to the brain.
In their mathematical model, each neuron obeys the well-known “leaky integrate-and-fire” equations, in which incoming stimuli drive up the voltage across the neural membrane until it reaches some threshold, at which point the neuron fires a voltage spike. After firing, the voltage resets to some initial value, but the neuron must wait a short period before it can fire again. This “down-time” results in a kind of interference: if the two sensory neurons fire at around the same time, then the interneuron will only be able to relay one spike instead of two.
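The three-neuron scheme can be sketched in a few lines of code. This is a minimal illustration of the leaky integrate-and-fire dynamics and of the interneuron’s one-spike-per-refractory-window bottleneck described above; all parameter values (time step, leak time constant, threshold, refractory period, drive amplitudes) are illustrative choices, not the values used in the authors’ model.

```python
import numpy as np

def lif_spike_times(drive, dt=1e-4, tau=0.01, v_thresh=1.0,
                    v_reset=0.0, refractory=0.002):
    """Leaky integrate-and-fire: the drive charges the membrane voltage,
    which leaks toward rest; crossing threshold emits a spike, then the
    voltage resets and the neuron holds for a refractory period."""
    v, hold, spikes = v_reset, 0.0, []
    for i, inp in enumerate(drive):
        if hold > 0.0:                   # "down-time" after a spike
            hold -= dt
            continue
        v += dt * (-v / tau + inp)       # leaky integration step
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
            hold = refractory
    return spikes

def relay(spike_trains, refractory=0.002):
    """Interneuron: merge incoming spike trains, dropping any spike that
    arrives within the refractory period of the last relayed spike --
    near-simultaneous sensory spikes get through as one."""
    out, last = [], float("-inf")
    for s in sorted(s for train in spike_trains for s in train):
        if s - last >= refractory:
            out.append(s)
            last = s
    return out

# Two sensory neurons driven by pure tones an octave apart (2:1);
# amplitudes are hypothetical, chosen only so both neurons fire.
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
spikes_a = lif_spike_times(120.0 * (1 + np.sin(2 * np.pi * 220 * t)), dt=dt)
spikes_b = lif_spike_times(120.0 * (1 + np.sin(2 * np.pi * 440 * t)), dt=dt)
out = relay([spikes_a, spikes_b])
print(len(spikes_a), len(spikes_b), len(out))
```

Because the relay drops coincident spikes, the interneuron’s output train carries a timing pattern that depends on how the two input trains interleave, which is the interference effect the paragraph describes.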
In a previous paper, the researchers calculated the interneuron’s firing statistics for consonant and dissonant inputs in the presence of additional “noise” representing random signals from other, nearby neurons. If the two sensory neurons were excited by a consonant pair of tones, the interneuron fired at well-spaced time intervals. But the firing pattern turned erratic when the pair of audio frequencies was dissonant.
These results, however, were not quantitative: there was no precise mathematical definition to distinguish “orderly” signals from erratic ones, which made it difficult to draw more general conclusions or to apply the results to neurological data. Now the team has reanalyzed the results using information theory, which says that the less random a signal is, the more information it contains. They devised a precise measure, called regularity, which reflects this information content. Their theoretical and numerical calculations show that consonant notes produce higher regularity (greater information) in the interneuron signal than dissonant notes do.
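One common way to operationalize such a regularity measure (not necessarily the authors’ exact definition, which is in the Physical Review Letters paper) is through the Shannon entropy of a spike train’s interspike-interval histogram: a perfectly periodic train has zero entropy, an erratic one spreads its intervals over many bins. A sketch under that assumption:

```python
import numpy as np

def isi_entropy(spike_times, bins=20):
    """Shannon entropy (bits) of the interspike-interval histogram.
    A perfectly periodic train puts all its mass in one bin (entropy 0);
    an erratic train spreads mass out (entropy near log2(bins))."""
    isi = np.diff(np.sort(spike_times))
    counts, _ = np.histogram(isi, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def regularity(spike_times, bins=20):
    """Illustrative 'regularity': how far the ISI entropy sits below its
    maximum -- higher for orderly, information-rich trains."""
    return np.log2(bins) - isi_entropy(spike_times, bins)

periodic = np.arange(100.0)                        # one spike per unit time
np.random.seed(0)
erratic = np.sort(np.random.uniform(0.0, 100.0, 100))  # Poisson-like train
print(regularity(periodic), regularity(erratic))
```

On these toy trains the periodic input scores higher, mirroring the paper’s finding that consonant tone pairs yield a more regular, higher-information interneuron signal than dissonant ones.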
The regularity also behaves like a well-known psychological effect involving the perceived pitch. When subjects listen to a combination of two pure notes, they report hearing a low frequency that isn’t physically present in the sound waves. This perceived pitch increases with the ratio of the two frequencies, just as the regularity does. Spagnolo and his colleagues take this similarity as evidence that their model is capturing some hidden aspect of how sound is processed in our heads.
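This “missing fundamental” effect has a simple arithmetic core: when two pure tones sit at exact integer multiples of a common fundamental, the pitch listeners report corresponds to that fundamental, i.e., the greatest common divisor of the two frequencies. A toy illustration of that arithmetic (not the authors’ analysis), assuming whole-number frequencies in hertz:

```python
from math import gcd

def missing_fundamental(f1_hz, f2_hz):
    """Perceived (virtual) pitch of two tones at integer multiples of a
    common fundamental: the gcd of their frequencies, in Hz."""
    return gcd(f1_hz, f2_hz)

print(missing_fundamental(440, 880))  # octave (2:1) -> 440
print(missing_fundamental(440, 660))  # perfect fifth (3:2) -> 220
```

Note how the simpler ratio (the octave) yields the higher virtual pitch, the trend the regularity measure tracks.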
“This is progress,” says André Longtin of the University of Ottawa in Canada. “But I wouldn’t say that it has nailed the problem shut.” Dante Chialvo of UCLA says this is the first time that the physics and biology of neurons have been put together in a verifiable theory. Because the present model is so generic, he thinks it might apply to neurons tied to other senses. “If these authors are correct, then the basic mechanism of consonance is universal and has little to do with our ears,” Chialvo says.
Michael Schirber is a freelance science writer in Lyon, France.
- Y. V. Ushakov, A. A. Dubkov, and B. Spagnolo, Phys. Rev. E 81, 041911 (2010).