Physics students know from their late nights that working through the classic textbooks is hard. But can this anecdotal sense of “hardness” be given a more rigorous foundation? More specifically, can the basic scientific problem of analyzing experimental data to ascertain the underlying equations be assessed for difficulty? In a paper in *Physical Review Letters*, Toby Cubitt at the Complutense University of Madrid, Spain, and colleagues have applied tools from computer science to link notions of computational hardness to the difficulty of inferring dynamical equations from observational data.

Computer scientists classify computational hardness by how the time needed to solve a problem grows with the problem’s size: problems solvable in time that is a polynomial function of size belong to class P and are said to be “tractable” or “efficiently solvable.” Problems at least as hard as the hardest problems in the class NP are called NP-hard; for these, all known algorithms require time that explodes exponentially with size, and they are widely believed to admit no efficient solution. NP-hard problems are something of a gold standard for computational intractability.
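The gulf between these two growth laws can be made concrete with a toy calculation. The sketch below (a hypothetical illustration, not anything from the paper) counts how many candidates a brute-force search would examine under polynomial versus exponential scaling:

```python
# Toy illustration of P-style vs. NP-hard-style scaling.
# "n" stands for problem size (e.g., number of system components).

def polynomial_candidates(n, degree=3):
    """Tractable growth: n**3 candidates to check."""
    return n ** degree

def exponential_candidates(n):
    """Intractable growth: 2**n candidates to check."""
    return 2 ** n

for n in (10, 30, 60):
    print(n, polynomial_candidates(n), exponential_candidates(n))
```

At n = 60 the polynomial count is 216,000, while the exponential count exceeds 10^18, which is why exponential-time problems are considered unsolvable in practice beyond modest sizes.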

Cubitt *et al.* prove theoretically that, regardless of experimental precision, deducing the dynamical equations that describe a system’s evolution from measured data is an NP-hard problem. The result applies to both classical and quantum systems, whatever the details of the dynamics. Along the way, the authors have also resolved a 70-year-old puzzle in probability theory called the embedding problem, which asks when a matrix of state-to-state transition probabilities can be generated by an underlying Markov process (i.e., one in which the system carries no memory of its past). Although the result may not comfort students burning the midnight oil, nor dissuade physicists from analyzing their data, it does have implications for generic automated inference machines that aim to boil data down to the underlying physics. – *David Voss*
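The embedding problem has a simple, fully solved special case that conveys its flavor. For a two-state system, a classical criterion (due to work predating this paper) says the observed transition matrix can come from a memoryless continuous-time process exactly when its determinant is positive; it is for general matrices that deciding embeddability becomes hard. A minimal sketch of that two-state test, with hypothetical numbers:

```python
# Embedding problem, simplest case: a 2x2 stochastic matrix
#   P = [[p11, 1-p11],
#        [1-p22, p22]]
# is embeddable in a continuous-time Markov process exactly when
# det(P) = p11 + p22 - 1 > 0 (classical two-state criterion).
# For larger matrices, no such simple test exists -- that is the
# regime where the hardness result applies.

def is_embeddable_2x2(p11, p22):
    """Return True if the 2x2 transition matrix built from the
    self-transition probabilities p11, p22 is embeddable."""
    assert 0.0 <= p11 <= 1.0 and 0.0 <= p22 <= 1.0
    return p11 + p22 - 1.0 > 0.0

# Near-identity matrix: little probability flips per step -> embeddable.
print(is_embeddable_2x2(0.9, 0.8))   # True
# Strongly flipping matrix: cannot arise from a memoryless flow.
print(is_embeddable_2x2(0.1, 0.2))   # False
```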