Quantum Leap for Quantum Primacy
In a dramatic tour de force, teams led by Jian-Wei Pan at the University of Science and Technology of China have shown, in two separate studies, remarkable progress toward the demonstration of quantum primacy [1, 2]. Quantum primacy is the goal of showing that a programmable quantum computer solves a computational problem that is currently infeasible for nonquantum, or “classical,” computers [3]. Impressive recent experiments led to claims that this point has been reached [4], but they prompted debates on whether the demonstrated quantum computation was truly beyond the reach of existing classical computers. It has been suggested, for example, that these experiments didn’t involve a comparison with the best possible classical algorithms or implementations [5]. The two major results by the Pan group push experimental quantum computing to far larger problem sizes, making it much harder to find classical algorithms and classical computers that can keep up. The results take us further toward trusting claims that we have indeed reached the age of computational quantum primacy.
In practice, the approach to demonstrating quantum primacy is based on “sampling problems”—computational problems whose solutions are random instances, or samples, of a given probability distribution [6]. The quantum advantage is established if generating these instances is feasible for the quantum computer but infeasible for a classical one. Every claim of quantum advantage prompts a healthy debate as to whether the particular classical algorithm used for comparison is the best possible. This debate is the basis of IBM’s challenge to Google’s claim of primacy, for example [5].
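To make the notion concrete, here is a minimal Python sketch (a toy illustration, not code from either experiment) of what a sampling problem asks for: given a probability distribution over bit strings, produce random instances drawn from it. The primacy claim is that the quantum device samples from a distribution that no classical machine can reproduce efficiently at the experiment’s scale.

```python
import numpy as np

# A toy sampling problem: given an explicit probability distribution
# over n-bit strings, produce random instances (samples) drawn from it.
# A quantum sampler does the same, but for a distribution induced by
# its circuit that is believed to be classically intractable at scale.
rng = np.random.default_rng(seed=7)

n_bits = 3
p = rng.random(2 ** n_bits)
p /= p.sum()                              # normalize the distribution

samples = rng.choice(2 ** n_bits, size=10, p=p)
print([format(int(s), f"0{n_bits}b") for s in samples])
```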
Pan and his colleagues may have established a hard-to-question advantage by demonstrating quantum primacy in two separate systems: one photonic, the other superconducting. In each case, the goal is to increase the number of particles (such as the number of photons in the interferometer or the number of qubits in the superconducting circuit) as well as the circuit depth (which is the maximum number of sequential operations between the computer’s input and its output) to the point that classically simulating the result becomes impossible. In so doing, these approaches make counterarguments to quantum primacy increasingly difficult to justify. They also point the way to ever larger quantum sampling experiments that could make the classical-vs-quantum debate truly obsolete.
The photonic experiment solves the problem of boson sampling. The original, rigorously formulated problem (referred to as BosonSampling) involves constructing a many-channel interferometer and injecting either one photon or zero photons into each input port. The interferometer enacts a random transformation of the signal, which is then characterized via multiphoton coincidence measurements at the output ports. The BosonSampling analysis shows that, subject to clear and plausible assumptions and conditions, the problem of sampling the circuit output is hard for classical machines but can be handled efficiently by quantum photonic interferometry.
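The classical hardness here has a concrete source: in ideal BosonSampling, the probability of each photon-coincidence pattern is proportional to the squared modulus of the permanent of a submatrix of the interferometer’s unitary, and computing permanents is #P-hard. The sketch below, a toy illustration only, evaluates a permanent with Ryser’s formula; its exponentially many terms already hint at the wall facing classical simulators.

```python
import itertools

import numpy as np

def permanent_ryser(a: np.ndarray) -> complex:
    """Permanent of an n-by-n matrix via Ryser's formula (exponential cost).

    In ideal BosonSampling, the probability of a given photon-coincidence
    pattern is proportional to |Perm(A)|^2 for a submatrix A of the
    interferometer's unitary, so exact classical evaluation becomes
    infeasible as the photon number grows.
    """
    n = a.shape[0]
    total = 0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Example on a small random unitary (a stand-in for an interferometer).
rng = np.random.default_rng(1)
u = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))[0]
print(abs(permanent_ryser(u[:3, :3])) ** 2)   # one pattern's (unnormalized) weight
```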
Unfortunately, this ideal mathematical formulation is difficult to realize experimentally, so BosonSampling has been generalized to “scattershot” boson sampling [7] and, further, to “Gaussian” boson sampling [8], which is the subject of the current experiment. Gaussian boson sampling is experimentally viable, but proofs of computational primacy are more challenging to obtain. The community therefore focuses on “spoofing” the quantum results, which means devising classical algorithms that can simulate the quantum device’s output and thereby negate the claim of quantum primacy.
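For Gaussian boson sampling, the permanent’s role is played by the hafnian, a related matrix function that sums over perfect matchings and governs the output probabilities [8]. A naive sketch (toy code, exponentially slow by construction):

```python
import numpy as np

def hafnian_naive(a: np.ndarray) -> complex:
    """Hafnian of a symmetric matrix via the sum over perfect matchings.

    In Gaussian boson sampling, output probabilities are governed by
    hafnians of submatrices derived from the Gaussian state [8], much as
    permanents govern ideal BosonSampling. This recursion has (2n - 1)!!
    terms, i.e., it is exponentially slow by construction.
    """
    m = a.shape[0]
    if m == 0:
        return 1 + 0j
    total = 0j
    for j in range(1, m):
        rest = [k for k in range(1, m) if k != j]   # pair mode 0 with mode j
        total += a[0, j] * hafnian_naive(a[np.ix_(rest, rest)])
    return total

# Sanity check: the hafnian of [[0, x], [x, 0]] is x; here x = 2.
print(hafnian_naive(np.array([[0.0, 2.0], [2.0, 0.0]])))   # (2+0j)
```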
One way to keep the quantum sampling experiment well ahead of classical spoofing is to significantly increase the size of the quantum sampling problem. In their new Gaussian boson sampling experiment, which uses stimulated squeezed-light generation plus phase control to ensure that the superposition states are mutually coherent, Pan and colleagues detect up to 113 photons at the output of a 144-mode interferometer (Fig. 1). Based on combinatoric arguments for how many ways the photons can pass through the interferometer modes to yield multiphoton coincidences at the output, they claim to sample a 10⁴³-dimensional Hilbert space. By making reasonable assumptions about the time required to perform arithmetic calculations on a nonquantum computer and about the algorithm being employed, they estimate a factor-of-10²⁴ speedup in computational time for boson sampling with respect to classical computation. These new results are an impressive advance over the state of the art and make it increasingly unlikely that an efficient classical algorithmic alternative for this sampling problem exists.
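A back-of-the-envelope check on the quoted scale (my own arithmetic, not the paper’s full combinatoric argument): with threshold detectors on 144 output modes, there are 2¹⁴⁴ possible click patterns, already about 2 × 10⁴³, consistent with the quoted dimension; and a factor-of-10²⁴ slowdown turns one quantum second into a classical eternity.

```python
import math

modes = 144                      # output modes of the interferometer
patterns = 2 ** modes            # threshold detectors: click / no click
print(f"2^{modes} = 10^{math.log10(patterns):.1f}")   # ~ 10^43.3

# What a factor-of-10^24 speedup means in wall-clock terms:
seconds_per_year = 3600 * 24 * 365
print(f"{1e24 / seconds_per_year:.1e} classical years per quantum second")
# ~ 3.2e16 years -- far longer than the age of the Universe.
```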
The team’s other experiment involves random circuit sampling with a superconducting quantum processor. The circuit can be regarded as a unitary transformation of the input qubits, all set to the logical zero state. The sampling problem consists of generating random instances of measurements of all output qubits, with the circuit chosen randomly. The belief is that, similarly to the photonic implementation, simulating the probability distribution of output-qubit readouts for a random circuit is hard classically but feasible quantumly. Again, the goal is to perform an experiment whose sampling problem has a large size, corresponding to many qubits and a large circuit depth, meaning many quantum logic cycles from input to output.
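A minimal state-vector sketch of random circuit sampling (illustrative only, not the team’s circuit layout or the simulators used to benchmark it): apply randomly chosen two-qubit unitaries for a number of cycles, then sample output bit strings from the squared amplitudes. The 2ⁿ-sized state vector is precisely the quantity that explodes with qubit number.

```python
import numpy as np

def sample_random_circuit(n_qubits: int, depth: int, shots: int, seed: int = 0):
    """Brute-force state-vector simulation of random circuit sampling.

    Stores all 2**n_qubits amplitudes -- exactly the cost that grows
    exponentially and makes large instances classically hard.
    """
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                                   # all qubits start in |0>

    for _ in range(depth):
        # One "cycle" of the toy model: a random two-qubit unitary
        # applied to a randomly chosen pair of qubits.
        q1, q2 = rng.choice(n_qubits, size=2, replace=False)
        z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        gate = np.linalg.qr(z)[0]                    # random unitary
        psi = state.reshape([2] * n_qubits)
        psi = np.moveaxis(psi, (q1, q2), (0, 1)).reshape(4, -1)
        psi = (gate @ psi).reshape([2, 2] + [2] * (n_qubits - 2))
        state = np.moveaxis(psi, (0, 1), (q1, q2)).reshape(dim)

    probs = np.abs(state) ** 2
    return rng.choice(dim, size=shots, p=probs / probs.sum())

# Trivial scale for illustration; at 56 qubits the state vector alone
# would occupy roughly 10^18 bytes of memory.
print(sample_random_circuit(n_qubits=5, depth=20, shots=8))
```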
The team achieves random circuit sampling using 66 functional transmon qubits combined with 110 tunable couplers (Fig. 2). They then test quantum primacy on a subset of 56 of these superconducting qubits with up to 20 quantum logic cycles. This reduced size keeps the numbers large enough to claim a breakthrough while keeping the task feasible to implement. Although this is a seemingly small increase over Google’s 53-qubit demonstration of quantum primacy [4], classically simulating the new 56-qubit test demands orders of magnitude more computational resources than simulating Google’s case, because the required classical resources grow exponentially with a linear increase in the number of qubits.
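As a quick check on that exponential scaling (my own arithmetic, not figures from the paper), each added qubit doubles the number of amplitudes a brute-force state-vector simulator must store:

```python
# Rough memory cost of storing a full state vector, at 16 bytes per
# complex128 amplitude (a hypothetical brute-force simulator, not the
# methods actually used to benchmark these experiments).
for n_qubits in (53, 56):
    n_bytes = (2 ** n_qubits) * 16
    print(f"{n_qubits} qubits: {n_bytes / 1e15:,.0f} petabytes")
# 53 qubits: ~144 PB;  56 qubits: ~1,153 PB.  Three extra qubits alone
# mean 8x the memory, before accounting for the deeper circuit.
```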
These two experiments represent rapid advancement in experimental quantum sampling, making classical spoofing of these demonstrations increasingly unlikely and thus establishing more firmly that we are in an age of quantum primacy for computing. Given that such impressive, large sampling problems are solved by quantum machines in a way that far outperforms classical simulators, could we use these quantum samplers to solve useful computational problems? Researchers have claimed that there are meaningful problems to be tackled by such samplers, in particular in the field of quantum chemistry, but no convincing experimental demonstration has yet been reported. These experiments further motivate efforts to put quantum sampling to practical use.
References
1. H.-S. Zhong et al., “Phase-programmable Gaussian boson sampling using stimulated squeezed light,” Phys. Rev. Lett. 127, 180502 (2021).
2. Y. Wu et al., “Strong quantum computational advantage using a superconducting quantum processor,” Phys. Rev. Lett. 127, 180501 (2021).
3. J. Preskill, “Why I called it ‘quantum supremacy’,” Quanta Magazine (2019); I. Durham et al., “Physicists need to be more careful with how they name things,” Sci. Am. (2021).
4. F. Arute et al., “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505 (2019).
5. E. Pednault et al., “Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits,” arXiv:1910.09534.
6. A. P. Lund et al., “Quantum sampling problems, BosonSampling and quantum supremacy,” npj Quantum Inf. 3, 15 (2017).
7. A. P. Lund et al., “Boson sampling from a Gaussian state,” Phys. Rev. Lett. 113, 100502 (2014); M. Bentivegna et al., “Experimental scattershot boson sampling,” Sci. Adv. 1 (2015).
8. C. S. Hamilton et al., “Gaussian boson sampling,” Phys. Rev. Lett. 119, 170501 (2017).