Quantum Systems Modeled Without Prior Assumptions
Quantum systems are notoriously hard to study, control, and simulate. One key reason is that their full characterization requires a vast amount of information. Fortunately, in the past decade, scientists have shown that many physical properties of a quantum system can be efficiently predicted using much less information [1, 2]. Moreover, researchers have built quantum sensors that can measure these properties with much smaller uncertainty than the best classical sensors [3]. Nevertheless, it has been difficult to achieve both efficient predictions and precise measurements at the same time. Now, building on previous breakthroughs in the field, Hong-Ye Hu at Harvard University and his colleagues have demonstrated a new algorithm that characterizes quantum systems of any size with optimal efficiency and precision [4]. Strikingly, the algorithm needs no prior information or assumptions about the system’s structure, making it suitable for analyzing arbitrary devices and phenomena.
In quantum mechanics, a system’s total energy is described by a mathematical function called the Hamiltonian. This function enables scientists to attain a complete understanding of a system’s static and dynamic properties. The Hamiltonian can always be expressed as a sum of basic terms—each representing a well-defined, measurable physical quantity—weighted by numerical coefficients. The type and number of terms in the Hamiltonian determine its structure, whereas each coefficient quantifies how relevant the associated physical quantity is for describing the system.
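As an illustration, for a system of n qubits this decomposition is often written in terms of Pauli strings. The expression below is a generic textbook form, not necessarily the exact notation used in Ref. [4]:

```latex
% Generic decomposition of an n-qubit Hamiltonian into Pauli strings.
% \lambda_a are the numerical coefficients; each E_a is a "basic term",
% i.e., a tensor product of single-qubit Pauli operators.
H = \sum_a \lambda_a E_a ,
\qquad
E_a \in \{ I, X, Y, Z \}^{\otimes n} .
```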
Given experimental access to a large and completely unknown quantum system, determining the structure and coefficients of its Hamiltonian is, in general, extremely expensive in terms of computational resources. One reason is that the sum includes so-called noncommuting terms. These correspond to physical quantities that cannot be measured in the same experiment, according to the Heisenberg uncertainty principle. The presence of such terms suggests the need to gather an amount of information that grows exponentially as the system’s size increases. In the past few years, scientists have achieved several breakthroughs in simplifying this task, known as Hamiltonian learning.
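To see what “noncommuting” means in practice, consider the single-qubit Pauli operators X and Z. The short check below is only a toy illustration of the general obstruction described above:

```python
import numpy as np

# Single-qubit Pauli matrices -- examples of the elementary terms discussed above.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two terms commute when their commutator [A, B] = AB - BA vanishes.
commutator = X @ Z - Z @ X
print(np.allclose(commutator, 0))  # False: X and Z are noncommuting,
                                   # so they cannot be measured in the same experiment.
```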
The first breakthroughs showed that the learning can be done efficiently by looking at the system’s short-time dynamics [5, 6]. By making simple measurements, one can obtain a set of easily solvable linear equations for the coefficients of the unknown Hamiltonian. These demonstrations proved that efficient learning can be achieved even for large systems. But the resulting precision on the coefficients remained limited to that of a classical sensor.
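A schematic way to see where those linear equations come from (a simplified sketch, not the specific protocols of Refs. [5, 6]) is to expand the expectation value of a measured observable O_j to first order in the evolution time t:

```latex
% First-order (short-time) expansion under H = \sum_a \lambda_a E_a (with hbar = 1).
% Measuring <O_j> for several observables O_j at a small time t yields a set of
% equations that are linear in the unknown coefficients \lambda_a.
\langle O_j \rangle_t \approx
\langle O_j \rangle_0
+ t \sum_a \lambda_a \, \big\langle\, i\,[E_a,\, O_j] \,\big\rangle_0 .
```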
The next leap forward was the development of a learning algorithm that is not only efficient but can also estimate the coefficients with so-called Heisenberg-limited precision—the best possible precision allowed by quantum mechanics [7]. This advance was based on the insight that quantum-enhanced precision can be reached only by waiting for a longer time, at which point the equations for the coefficients get mixed up owing to the presence of noncommuting terms. One can prevent such mixing through a technique known as Hamiltonian reshaping, which isolates groups of commuting terms. Then, over the course of distinct experiments, the coefficients inside each group can be estimated.
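The grouping step at the heart of this idea can be sketched in code. The snippet below is a minimal illustration, not the reshaping procedure of Ref. [7]: it greedily partitions Pauli-string terms (the labels such as "XXI" are hypothetical) into sets whose members mutually commute, so that each set could in principle be probed in a single experiment.

```python
def strings_commute(p: str, q: str) -> bool:
    """Two Pauli strings commute iff they anticommute on an even number of
    qubit positions (sites where both letters are non-identity and differ)."""
    anticommuting_sites = sum(
        1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b
    )
    return anticommuting_sites % 2 == 0


def group_commuting_terms(terms):
    """Greedily partition Pauli-string terms into mutually commuting groups."""
    groups = []
    for term in terms:
        for group in groups:
            if all(strings_commute(term, member) for member in group):
                group.append(term)
                break
        else:
            groups.append([term])
    return groups


# Hypothetical three-qubit example.
print(group_commuting_terms(["XXI", "ZZI", "IXX", "IZZ", "XIX"]))
# [['XXI', 'ZZI'], ['IXX', 'IZZ'], ['XIX']]
```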
As striking as these feats might be, they are all based on the rather strong assumption that the unknown Hamiltonian has just a few noncommuting terms. This assumption can suffice when studying or simulating small quantum systems with only a few basic components (such as small bunches of atoms) or large quantum systems with only short-range interactions (such as long chains of atoms). But it is not appropriate when considering, for example, an entire nanomaterial, in which tens, hundreds, or thousands of atoms might be interacting with one another.
The new study by Hu and his colleagues tackles exactly this issue. The researchers devised an algorithm that can learn a Hamiltonian efficiently and with Heisenberg-limited precision, without making any assumptions about the Hamiltonian’s structure. The team cleverly adapted previous techniques for coefficient learning and combined them with an original technique for structure learning. The algorithm alternates between such structure and coefficient learning until all the terms in the Hamiltonian have been identified and their coefficients estimated.
The structure-learning phase starts by preparing the system in a suitable initial state and letting it evolve for a short time. The researchers’ insight was that the resulting state is a quantum superposition of distinct states, each corresponding to a different term in the Hamiltonian and weighted by that term’s coefficient. The algorithm exploits this fact to identify the terms with the largest coefficients in just a few runs. It then proceeds hierarchically and iteratively. First, it uses Hamiltonian reshaping to estimate the coefficients of the identified terms. Second, it runs the structure-learning phase again with slight modifications that allow it to identify terms with smaller coefficients. These two steps are repeated until the whole Hamiltonian has been learned (Fig. 1).
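The alternation can be summarized by the loop below. This is only a high-level sketch: the threshold-halving schedule and the two callables are illustrative assumptions standing in for the paper’s structure-learning and reshaping-based estimation subroutines, not details taken from Ref. [4].

```python
def learn_hamiltonian(identify_large_terms, estimate_coefficients,
                      threshold, min_threshold):
    """Sketch of the alternating structure/coefficient-learning loop.

    The two callables are hypothetical placeholders:
      identify_large_terms(threshold, known) -> candidate terms whose
          coefficients exceed `threshold` (structure learning)
      estimate_coefficients(terms) -> dict of coefficient estimates obtained
          via Hamiltonian reshaping (coefficient learning)
    """
    learned = {}  # term -> estimated coefficient
    while threshold >= min_threshold:
        # Structure learning: find terms with the largest remaining coefficients.
        new_terms = identify_large_terms(threshold, known=learned)
        # Coefficient learning: estimate the newly identified terms.
        learned.update(estimate_coefficients(new_terms))
        # Halve the threshold so terms with smaller coefficients can be
        # resolved in the next round (an assumed, illustrative schedule).
        threshold /= 2
    return learned
```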
Quantum technologies are, in some sense, a natural evolution of classical technologies, made possible by our understanding and control of atoms and subatomic particles. For this reason, they should be seen as a spectrum of advantageous tools for use in various contexts rather than as a single monolithic instrument. The Hamiltonian learning demonstrated by Hu and his colleagues is emblematic of this mindset. It is as precise as a quantum sensor and as efficient at extracting information as a machine-learning algorithm. Moreover, it can be used to study a wide variety of quantum systems—in virtually all cases, without prior knowledge of their Hamiltonian structure.
As quantum computers grow in size, tools for verifying their calculations become increasingly important. Hamiltonian learning enables scientists to check the inner workings of a quantum circuit and validate its components. As quantum simulators become adept at discovering and optimizing strange new materials, one needs to know what to simulate in the first place. Hamiltonian learning allows scientists to uncover the material’s Hamiltonian structure efficiently and precisely. And as networks of quantum sensors become available to measure physical quantities, larger and more complex quantum systems will be probed and characterized. Hamiltonian learning will also enable scientists to tackle the increased complexity of these systems without compromising the sensors’ precision. What discoveries these applications will lead to is hard to fathom.
References
1. S. Aaronson, “Shadow tomography of quantum states,” Proc. 50th Ann. ACM SIGACT Symp. Theory of Computing 325 (2018).
2. H.-Y. Huang et al., “Predicting many properties of a quantum system from very few measurements,” Nat. Phys. 16, 1050 (2020).
3. V. Giovannetti et al., “Advances in quantum metrology,” Nat. Photonics 5, 222 (2011).
4. H.-Y. Hu et al., “Ansatz-free Hamiltonian learning with Heisenberg-limited scaling,” PRX Quantum 6, 040315 (2025).
5. A. Anshu et al., “Sample-efficient learning of interacting quantum systems,” Nat. Phys. 17, 931 (2021).
6. J. Haah et al., “Optimal learning of quantum Hamiltonians from high-temperature Gibbs states,” 2022 IEEE 63rd Ann. Symp. Foundations of Computer Science (FOCS) 135 (2022).
7. H.-Y. Huang et al., “Learning many-body Hamiltonians with Heisenberg-limited scaling,” Phys. Rev. Lett. 130, 200403 (2023).




