Sailors often took several clocks with them on voyages to try to minimize the error from any individual timepiece. Clocks are pretty reliable these days, but soon we may have to worry about defective microscopic machines. In the 8 July print issue of PRL, researchers take the first steps toward a plan for optimizing the performance of wildly uneven nanotech components, using statistical physics techniques. They find that defective parts can add up to perfectly good devices, with little or no waste.
Error-prone parts are the messy underbelly of modern technology. Computer circuits must have a certain redundancy built in, or malfunctioning wires would hobble them. Hewlett-Packard's Teramac supercomputer contains some 220,000 serious defects, a small fraction of the total machine, but was designed to work around them. In the emerging nanotech world, however, problems might go far beyond the occasional faulty wire. Random fluctuations in the manufacturing process could generate, say, quantum dots or nanoscale transistors with inherently variable performance. "We're thinking about things that are pathological from the beginning," says Neil Johnson of Oxford University.
As a first cut at figuring out how to successfully wire up such a nanotech menagerie, Johnson and Oxford colleague Damien Challet consider imperfect objects whose errors can be combined in a straightforward way, like clocks or thermometers. These could be analog devices spanning a range of errors, or digital ones that either work or don’t. “Naively,” Johnson explains, “you might think that more is better.” Just empty out your bag of defective parts, solder them all together, and you could make their errors cancel out. There’s a flaw in that logic, though, he points out. The more components you pull out of your bag, the more likely you are to get one with a really huge error, which could ruin everything.
For a more precise answer, the pair put the question this way: “Given N parts with a range of errors, how many of them should you use to minimize the total error?” In statistical mechanics, groupings of particles are weighted according to their free energies. The groupings with the least total free energy are those most likely to show up in the real world. So Johnson and Challet used similar equations but minimized total error rather than free energy. The answer, says Johnson, is that no matter how many defective parts you have, it’s best to take about half of them.
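The intuition can be checked with a toy brute-force experiment. The sketch below is an illustration, not the authors' analytic method: it draws N parts with random signed errors, then for each subset size k finds the subset whose errors cancel best, that is, whose summed error is closest to zero. The variable names and the uniform error distribution are assumptions for the sake of the example.

```python
import itertools
import random

random.seed(0)
N = 16
# Each part carries a random signed error in [-1, 1].
errors = [random.uniform(-1.0, 1.0) for _ in range(N)]

best = {}  # subset size k -> smallest achievable |total error|
for k in range(1, N + 1):
    best[k] = min(abs(sum(combo))
                  for combo in itertools.combinations(errors, k))

k_opt = min(best, key=best.get)
print(f"best subset size: {k_opt} of {N}, residual error {best[k_opt]:.2e}")
```

The reason mid-sized subsets win is combinatorial: there are 12,870 ways to pick 8 of 16 parts but only 16 ways to pick one, so subsets near N/2 offer far more chances for large positive and negative errors to cancel.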
The possibility of applying this method to systems that interact in more complicated (so-called nonlinear) ways is what's really interesting, says David Wolpert of the NASA Ames Research Center in Moffett Field, Calif. He envisions future technologies in which many parts work together to achieve some predetermined goal, such as flying an airplane from point A to point B, with no human input other than setting the goal. Such devices would need procedures for more than just summing their errors. "For all anybody knows, N/2 might actually be a magic number" that applies to many other systems, he says, but what the paper clearly does is set a foundation. "They're doing a very detailed analysis of this, and that's the proper way to do science."
JR Minkel is a freelance science writer in New York City.