A Classical Machine Learning Algorithm Goes Quantum
In recent years, computer scientists have used machine learning algorithms known as generative adversarial networks (GANs) to manipulate data to startling effect. Applied to graphics, GANs can open closed eyes in photos and create forged videos of politicians speaking. Now, Seth Lloyd of the Massachusetts Institute of Technology, Cambridge, and Christian Weedbrook of the Canadian startup Xanadu have shown theoretically that the algorithm can be applied to quantum data sets. Like their classical counterparts, quantum GANs, or QGANs, could be used to generate realistic-looking quantum data on quantum computers.
To identify and replicate patterns in data, GANs employ two components—a “generator” and a “discriminator”—that compete in a kind of game. Fed with real data, the generator tries to produce numbers whose statistical distribution mimics that of the real data. The discriminator then examines the generator’s output and guesses whether the numbers are real or fake. Using the discriminator’s feedback, the generator progressively produces more realistic-looking numbers. The game ends when the discriminator can no longer tell the fake numbers from the real ones—that is, the discriminator fails exactly when the generator has fully replicated the statistics of the real data.
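The adversarial loop described above can be sketched in miniature. The toy below is a hypothetical illustration, not the authors' construction: the "real" data are drawn from a Gaussian, the generator holds a single parameter `mu`, and a simple statistic stands in for a trained discriminator. Training stops in practice when the discriminator's score hovers around zero—it can no longer separate real from fake.

```python
import numpy as np

# Hypothetical toy GAN: all names and parameters here are illustrative.
rng = np.random.default_rng(0)
REAL_MEAN = 3.0  # the statistic the generator must learn to reproduce

def real_batch(n=256):
    """Samples from the real data distribution."""
    return rng.normal(REAL_MEAN, 1.0, n)

def fake_batch(mu, n=256):
    """Generator output: the same noise model, shifted by the learned parameter mu."""
    return rng.normal(mu, 1.0, n)

def discriminator_score(real, fake):
    """Stand-in for a trained discriminator: far from zero when the
    generator's statistics visibly differ from the real data's."""
    return real.mean() - fake.mean()

mu, lr = 0.0, 0.5
for step in range(200):
    score = discriminator_score(real_batch(), fake_batch(mu))
    mu += lr * score  # the generator improves using the discriminator's feedback

# After training, mu sits near REAL_MEAN and the score hovers around zero:
# the discriminator can no longer tell fake from real, and the game ends.
```

A real GAN replaces the single parameter with a neural network and the mean-comparison score with a learned classifier, but the feedback structure of the game is the same.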
Lloyd and Weedbrook mathematically proved that a QGAN should operate in a similar fashion: just as in the classical case, the quantum discriminator fails when the quantum generator reproduces the statistics of the real data. The duo says that, in the near future, QGANs could perform quantum simulations of molecules faster than classical computers can and could help improve other applications, including drug discovery, algorithmic trading, and fraud detection.
This research is published in Physical Review Letters.
Sophia Chen is a freelance science writer in Tucson, Arizona.