The extraordinary complexity of the brain makes it hard to identify its underlying organizational principles. For example, in many species the cerebral cortex is divided into columns of highly interconnected cells, but the functional significance of this arrangement has been debated for a century. In Physical Review Letters, Ralph Stoop, at the University of Basel, Switzerland, and colleagues used computational models of neural networks to deduce that the details of the organization within individual columns are not very important. Instead, what counts is how different columns are interconnected.
To study how much the “wiring” within a column matters, the team assembled mathematical models of neurons into networks and compared versions with different connectivity patterns. Notably, the ability of these simulated columns to carry out a computational task, such as the classification of Arabic digits, did not improve significantly when the connection strengths or the layered arrangement were chosen to mimic those often seen in biological columns.
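The comparison can be sketched in miniature: fix a random nonlinear “column,” train only a linear readout on a toy classification task, and score two connectivity variants. Everything here is a hypothetical stand-in (synthetic 10-class data instead of real digits, a block-sparse mask as a crude proxy for layering), not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the digit task: 10 class prototypes in 64 dimensions,
# with noisy samples around each (synthetic data, not the paper's input).
n_classes, dim, n_per = 10, 64, 40
protos = rng.normal(size=(n_classes, dim))
X = np.repeat(protos, n_per, axis=0) + 0.5 * rng.normal(size=(n_classes * n_per, dim))
y = np.repeat(np.arange(n_classes), n_per)

def column_accuracy(W):
    """Pass inputs through a fixed nonlinear 'column', then fit a linear
    readout by least squares on one-hot targets and score it."""
    H = np.tanh(X @ W)                       # column activity
    T = np.eye(n_classes)[y]                 # one-hot targets
    readout, *_ = np.linalg.lstsq(H, T, rcond=None)
    pred = (H @ readout).argmax(axis=1)
    return (pred == y).mean()

n_units = 100
# Unstructured column: dense random input weights.
W_rand = rng.normal(size=(dim, n_units)) / np.sqrt(dim)
# "Layered" column: the same weights under a block-sparse mask, a crude
# stand-in for biologically inspired layering (an assumption, not the paper's).
mask = np.kron(np.triu(np.ones((4, 4))), np.ones((dim // 4, n_units // 4)))
W_layered = W_rand * mask

acc_rand = column_accuracy(W_rand)
acc_layered = column_accuracy(W_layered)
print(f"random column:  {acc_rand:.2f}")
print(f"layered column: {acc_layered:.2f}")
```

In runs of this kind, both variants typically classify well above chance, illustrating the synopsis's point that the internal wiring of a single column is not the decisive factor.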
In contrast, the researchers found that the connections between columns in a side-by-side sheet made a big difference to the speed with which information propagated laterally to coordinate activity across the simulated cortex. The authors compared networks with different spatial distributions of connections between simplified columns. For example, in “scale-free” networks, which describe many real-world networks, the number of connections decreases with their length as a single power law, so relatively long links are rare. But Stoop and his colleagues found that, for the same total length of “wires,” signals spread more quickly in a network described by two power laws. This distribution, which was suggested by microscopy investigations in lab animals, includes a larger number of very long connections that help information propagate quickly between distant columns. – Don Monroe
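The wiring-budget argument can be illustrated with a toy simulation: draw shortcut lengths on a ring of columns from a single power law or from a heavier-tailed mixture of two power laws, spend the same total wire length in both cases, and compare how many hops a signal needs to reach every column. The exponents, budget, and network size below are arbitrary assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
N = 400          # columns arranged on a ring
BUDGET = 6000    # total "wire" length for long-range links (arbitrary)

def sample_lengths(pdf):
    """Draw link lengths from the given distribution over 1..N//2 until the
    shared wire budget is spent, so both networks use equal total wiring."""
    L = np.arange(1, N // 2 + 1)
    p = pdf(L.astype(float)); p /= p.sum()
    lengths, total = [], 0
    while total < BUDGET:
        l = int(rng.choice(L, p=p))
        lengths.append(l); total += l
    return lengths

def coverage_time(lengths):
    """Build a ring with nearest-neighbour links plus the sampled shortcuts,
    then BFS from column 0 and return the hops needed to reach all columns."""
    adj = [{(i - 1) % N, (i + 1) % N} for i in range(N)]
    for l in lengths:
        i = int(rng.integers(N)); j = (i + l) % N
        adj[i].add(j); adj[j].add(i)
    dist = [-1] * N; dist[0] = 0; q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] < 0:
                dist[v] = dist[u] + 1; q.append(v)
    return max(dist)

single = lambda l: l ** -3.0                       # one power law: long links rare
double = lambda l: l ** -3.0 + 2e-2 * l ** -1.2    # two power laws: heavier tail

t_single = np.mean([coverage_time(sample_lengths(single)) for _ in range(5)])
t_double = np.mean([coverage_time(sample_lengths(double)) for _ in range(5)])
print(f"single power law: {t_single:.1f} hops to cover the network")
print(f"two power laws:   {t_double:.1f} hops")
```

With the wire budget held fixed, the heavier-tailed distribution buys fewer but much longer shortcuts, and the signal covers the ring in far fewer hops, mirroring the paper's finding that a two-power-law wiring speeds lateral propagation.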