Opinion

How AI and ML Will Affect Physics

    Sankar Das Sarma
    • Department of Physics, University of Maryland, College Park, College Park, MD
Physics 16, 166
The more physicists use artificial intelligence and machine learning, the more important it becomes for them to understand why the technology works and when it fails.
Digital artist Julius Horsthuis created this work, Gravity Waves, using fractals and artificial intelligence. (Image credit: J. Horsthuis)

The advent of ChatGPT, Bard, and other large language models (LLMs) has naturally excited everybody, including the entire physics community. There are many evolving questions for physicists about LLMs in particular and artificial intelligence (AI) in general. What do these stupendous developments in large-data technology mean for physics? How can they be incorporated into physics? What will be the role of machine learning (ML) itself in the process of physics discovery?

Before I explore the implications of those questions, I should point out that there is no doubt AI and ML will become integral parts of physics research and education. Even so, as with the role of AI in human society at large, we do not know how this new and rapidly evolving technology will affect physics in the long run, just as our predecessors did not know how transistors or computers would affect physics when those technologies were being developed in the early 1950s. What we do know is that the impact of AI/ML on physics will be profound and will keep evolving as the technology develops.

The impact is already being felt. Just a cursory search of Physical Review journals for “machine learning” in articles’ titles, abstracts, or both returned 1456 hits since 2015 and only 64 for the entire period from Physical Review’s debut in 1893 through 2014! The growth is accelerating too: the same search yielded 310 Physical Review articles in 2022 with ML in the title, abstract, or both, while the first 6 months of 2023 alone produced 189 such publications, a pace of nearly 380 for the year.
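That last comparison is simple enough to check by hand; here is the back-of-the-envelope arithmetic in Python, using only the counts quoted above:

```python
# Back-of-the-envelope check that ML usage in Physical Review articles is
# accelerating, using the search counts quoted in the text.
articles_2022 = 310        # full-year 2022 count with ML in title/abstract
articles_h1_2023 = 189     # count for the first 6 months of 2023

projected_2023 = 2 * articles_h1_2023    # naive full-year projection: 378
print(f"Projected 2023 total: {projected_2023} vs {articles_2022} in 2022")
print(f"Year-over-year increase: {projected_2023 - articles_2022}")  # +68
```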

ML is already being used extensively in physics, which is unsurprising, since physics deals with datasets that are often very large, as in some high-energy physics and astrophysics experiments. In fact, physicists have been using some forms of ML for a long time, even before the term ML became popular. Neural networks, the fundamental pillars of AI, also have a long history in theoretical physics, as is apparent from the fact that the term “neural networks” has appeared in hundreds of Physical Review articles’ titles and abstracts since its first usage in 1985 in the context of models for understanding spin glasses. The AI/ML use of neural networks is quite different from the way neural networks appear in spin-glass models, but the basic idea of representing a complex system with a neural network is shared by both. ML and neural networks have thus been woven into the fabric of physics for 40 years or more.
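To make that kinship concrete, here is a minimal Hopfield-style sketch in Python (my own illustration, not drawn from any particular paper): the Hebbian couplings define an Ising-like energy of exactly the kind studied in spin-glass physics, and minimizing that energy retrieves a stored pattern, a rudimentary form of memory.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                       # number of binary "spins"/neurons
patterns = rng.choice([-1, 1], size=(3, N))   # three stored memories

# Hebbian couplings: formally the J_ij of an Ising model,
# E(s) = -1/2 * sum_ij J_ij s_i s_j, but engineered so the
# stored patterns sit at energy minima.
J = (patterns.T @ patterns) / N
np.fill_diagonal(J, 0.0)

# Start from a corrupted copy of pattern 0 (about 20% of spins flipped) and
# relax with zero-temperature single-spin updates, which never raise the energy.
s = patterns[0] * rng.choice([1, -1], size=N, p=[0.8, 0.2])
for _ in range(10):
    for i in rng.permutation(N):
        s[i] = 1 if J[i] @ s >= 0 else -1

print("overlap with stored pattern:", (s @ patterns[0]) / N)  # ~1.0 on success
```

With only three patterns stored among a hundred spins, the network is well below its storage capacity, so the noisy state flows back to the stored memory.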

What has changed is the availability of very large computer clusters with huge computing power, which enables ML to be applied in a practical manner to many physical systems. For my field, condensed-matter physics, these advances mean that ML is increasingly being used to analyze large datasets of materials properties and to make predictions about materials. In such complex situations, AI/ML will become a routine tool for every professional physicist, just like vector calculus, differential geometry, and group theory. Indeed, the use of AI/ML will soon become so widespread that we simply will not remember why it was ever a big deal. At that point, this opinion piece of mine will look a bit naive, much like pontifications in the 1940s about using computers for doing physics.

But what about deeper uses of AI/ML in physics, beyond the everyday tool? Can they help us solve deep problems of great significance? Could physicists, for example, have used AI/ML to come up with the Bardeen-Cooper-Schrieffer theory of superconductivity in 1957 if such tools had been available? Can AI/ML revolutionize theoretical physics by finding ideas and concepts on the order of the general theory of relativity or the Schrödinger equation? Most physicists I talk to firmly believe that this would be impossible. Mathematicians feel this way too: I do not know of any mathematician who believes that AI/ML can prove, say, the Riemann hypothesis or Goldbach’s conjecture. I, on the other hand, am not so sure. All ideas are somehow deeply rooted in accumulated knowledge, and I am unwilling to assert that I already know what AI/ML won’t ever be able to do. After all, I remember the time when there was a widespread feeling that AI could never beat the great champions of the complex game of Go. A scholarly example is the ability of DeepMind’s AlphaFold to predict what structure a protein’s string of amino acids will adopt, a feat that was thought impossible 20 years ago.

This brings me to my final point. Doing physics using AI/ML is happening, and it will become routine soon. But what about understanding the effectiveness of AI/ML, and of LLMs in particular? If we think of an LLM as a complex system that suddenly becomes extremely predictive after it has trained on a huge amount of data, the natural question for a physicist to ask is: What is the nature of that shift? Is it a true dynamical phase transition that occurs at some threshold training point? Or is it just the routine consequence of interpolation among known data, which happens to work empirically, sometimes even when extrapolated? The latter, which is what most professional statisticians seem to believe, involves no deep principle. But the former involves what could be called the physics of AI/ML and constitutes, in my mind, the most important intellectual question: Why does AI/ML work, and when does it fail? Is there a phase transition at some threshold beyond which the AI/ML algorithm simply predicts everything correctly? Or is the algorithm just one huge interpolation, which works because the amount of data being interpolated is so gigantic that most questions simply fall within its regime of validity? As physicists, we should not just be passive users of AI/ML but should also dig into these questions. To paraphrase a famous quote from a former US president, we should not only ask what AI/ML can do for us (a lot, actually) but also what we can do for AI/ML.
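The interpolation picture is easy to caricature in a few lines of Python (a deliberately simple sketch of my own, not a model of any actual LLM): a flexible fit to densely sampled data answers queries inside the training range almost perfectly and fails badly outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Caricature of the "giant interpolation" view: fit a flexible model to
# densely sampled data, then compare errors inside vs outside the data range.
x_train = np.sort(rng.uniform(-3, 3, 400))
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)    # a very flexible fit

x_in = np.linspace(-3, 3, 200)      # queries within the "regime of validity"
x_out = np.linspace(4, 6, 200)      # extrapolation queries

err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).max()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).max()

print(f"max error inside the data range:  {err_in:.3f}")    # stays small
print(f"max error outside the data range: {err_out:.3f}")   # grows large
```

Whether the capabilities of LLMs are fundamentally of this character, or whether something sharper, like a genuine phase transition, happens at some threshold, is precisely the open question.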

About the Author


Sankar Das Sarma is the Richard E. Prange Chair in Physics and a Distinguished University Professor at the University of Maryland, College Park. He is also a fellow of the Joint Quantum Institute and the director of the Condensed Matter Theory Center, both at the University of Maryland. Das Sarma received his PhD from Brown University, Rhode Island, and has been a member of the University of Maryland physics faculty since 1980. His research interests are condensed-matter physics, quantum computation, and statistical mechanics.

