Scientists Meet in the Information Universe
In this era of Big Data, information has taken on a new identity. It no longer has to be about something. It is something, out of which new connections, and new realities, may emerge. To explore this trend, scientists organized the first-ever Information Universe conference in Groningen, Netherlands. The meeting, which ran from October 7 through 9, drew a hundred participants from the fields of physics, astronomy, computer science, and biology. They came with different notions of information: some defining it with statistical measures, others restricting it to meaningful patterns. Still others imagined information as a framework for understanding quantum mechanics, gravity, or the origin of life. In his opening address, Edwin Valentijn, an astronomer from the University of Groningen and head organizer, called the conference “an experiment.” Its goal, he said, was to mix together a wide range of experts and let their diverse viewpoints collide and coalesce into a richer appreciation of the role information plays in the Universe.
Presentations were held in the Infoversum (which connotes “information universe” in Dutch), a 3D dome theater that opened last year as a place where audiences could be fully immersed in scientific visualizations. During the conference, attendees were treated to a full-dome computer simulation of cosmic evolution, a dizzying journey through a map of our galactic neighborhood, and a 3D movie showing the inner architecture of a human brain.
Simulations like these start with large data sets and use powerful computation methods to produce a complex virtual reality. Might the “real world” work in a similar way, with what we see being the output of some giant computational machine? At the conference, the Nobel Laureate Gerard ’t Hooft from Utrecht University, Netherlands, presented his cellular automaton theory, which claims that the Universe at some deep level is described by discrete, classical bits. The values of these bits evolve according to definite rules set by the values of neighboring bits. ’t Hooft isn’t alone in this line of thinking. Edward Fredkin, from Carnegie Mellon University in Pittsburgh, presented his extensive work on cellular automata, demonstrating that they can reproduce the reversibility of nature at the microscopic level. Taken at face value, these classical models seem to butt heads with quantum indeterminacy. But ’t Hooft argued that quantum uncertainty arises because we don’t know the bit values. And even if we did, we couldn’t compute fast enough to predict, for example, where an electron will end up in a double-slit experiment. “Nature itself is the fastest calculator,” he said.
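’t Hooft’s and Fredkin’s models are far more elaborate than anything that fits in a few lines, but the basic ingredient (a deterministic, exactly reversible update rule acting on classical bits) can be sketched generically. The sketch below is a standard second-order reversible cellular automaton, not either researcher’s actual model: each cell’s next value is a function of its neighbors XORed with the cell’s own previous value, which makes every step invertible, mirroring the microscopic reversibility Fredkin emphasized.

```python
# Generic second-order reversible cellular automaton (illustrative only;
# not 't Hooft's or Fredkin's actual construction). The state at time t+1
# is a neighborhood function of the state at time t, XORed with the state
# at time t-1. Because XOR is its own inverse, the rule runs backward
# exactly as easily as forward.

def step(past, present):
    """Advance one time step on a ring of bits; returns (present, future)."""
    n = len(present)
    future = [
        present[(i - 1) % n] ^ present[(i + 1) % n] ^ past[i]
        for i in range(n)
    ]
    return present, future

def step_back(present, future):
    """Invert the rule: recover (past, present) from (present, future)."""
    n = len(present)
    past = [
        present[(i - 1) % n] ^ present[(i + 1) % n] ^ future[i]
        for i in range(n)
    ]
    return past, present
```

Evolving any initial pair of states forward some number of steps and then applying `step_back` the same number of times retraces the history exactly, bit for bit, with no information lost along the way.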
The notion that a “Matrix”-like machine might be running the show behind the scenes may seem a little out there, but the idea can be seen as part of a larger trend that treats information as a physical quantity, on par with mass and charge. This is not information as most people define it, but rather a kind of entropy (often called Shannon entropy) in which different quantum degrees of freedom are thought of as bits. These bits are an important element in black hole studies, where a major concern is the fate of information that falls into a black hole. Erik Verlinde from the University of Amsterdam has taken the information–black hole connection and effectively turned it around to redefine gravity. He claims that gravity is not a real force, but merely emerges from thermodynamic principles: two masses attract because their infall increases the information entropy. At the conference, Verlinde showed that this entropic force becomes stronger at galactic scales, thus removing the need to introduce dark matter to explain galaxy rotation data. Interestingly, other speakers at the conference provided models to explain cosmic acceleration without the need for dark energy. But theorists don’t have much wiggle room, according to Tamara Davis from the University of Queensland, Australia. She provided a brisk overview of the vast astronomical data (from supernovae, galaxy clustering, and gravitational lensing) that constrains dark energy and all its competing alternatives.
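The Shannon entropy mentioned above has a compact definition: for a set of outcome probabilities p_i, it is H = -Σ p_i log₂ p_i, measured in bits. A minimal sketch in plain Python (an illustration of the standard formula, not any speaker’s code):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits.

    Zero-probability outcomes contribute nothing, by the usual
    convention that 0 * log2(0) = 0.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip carries exactly 1 bit:
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A certain outcome carries no information:
print(shannon_entropy([1.0]))        # 0.0
# Four equally likely outcomes carry 2 bits:
print(shannon_entropy([0.25] * 4))   # 2.0
```

On this “physical information” view, each two-valued degree of freedom contributes at most one bit of entropy, which is the sense in which physicists tally up the bits associated with a black hole.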
Other presentations treated information on a more familiar level, as something we humans create and store (in brains and hard drives). Charles Lineweaver from the Australian National University traced the genesis of information from cosmology to biology. He suggested that the value of information could be assessed through natural selection: a DNA code or cultural idea is meaningful when it is copied and passed along.
Frans van Lunteren of Leiden Observatory, Netherlands, gave a historical explanation for the current scientific interest in information. He argued that scientists tend to model the natural world based on the dominant technology of the time. Clocks in the 17th century influenced theories of “motion,” while 19th-century steam engines propelled the concept of “energy.” The modern computer has scientists contemplating “information.”
But Alex Szalay of Johns Hopkins University in Baltimore offered some words of caution about our information obsession. He argued that scientists are in danger of collecting too much data—outstripping our storage and analysis capabilities. To avoid this, Szalay advocates methods, such as active learning principles, that can help identify which data is worth collecting. Szalay was also a little remorseful that science had become so digitized. He fondly remembers a time before robotic telescopes and CCD cameras when an astronomer would sit in an observatory and stare up at the real, nonpixelated Universe.
Assessing the conference, Valentijn considered the “experiment” a success, as evidenced by the lively discussions that stretched beyond the allotted time. “The conference did not fade out, but actually the feeling was that we should continue this in the future,” Valentijn said.