
The Stability Oracle

A sequential computation processes one step before the next: x₁ depends on x₀, x₂ on x₁, and so on. Parallelizing it means computing many steps simultaneously, which requires predicting the intermediate states without actually computing them in order. Recent work has shown this is possible using Newton’s method combined with parallel scans — but when does it work?
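The simplest member of this family of methods can be sketched directly: guess the whole trajectory, then refine every state simultaneously from the previous guess. The map and constants below are my own toy illustration (a contraction, so the stable case), not taken from the paper:

```python
import numpy as np

def f(x):
    # Contractive map: |f'(x)| <= 0.5 everywhere, so the largest
    # Lyapunov exponent is at most log(0.5) < 0 (stable regime).
    return 0.5 * np.tanh(x) + 0.1

def sequential(x0, T):
    # Ground truth: the ordinary step-by-step rollout.
    xs = [x0]
    for _ in range(T):
        xs.append(f(xs[-1]))
    return np.array(xs[1:])

def parallel_sweeps(x0, T, sweeps):
    # Refine all T states at once: x_t <- f(x_{t-1}) using the
    # previous sweep's values. Each sweep is one fully parallel
    # step over all positions instead of T sequential steps.
    xs = np.zeros(T)
    for _ in range(sweeps):
        shifted = np.concatenate(([x0], xs[:-1]))
        xs = f(shifted)
    return xs

x0, T = 2.0, 50
exact = sequential(x0, T)
approx = parallel_sweeps(x0, T, sweeps=12)
print(np.max(np.abs(exact - approx)))  # below 1e-3
```

Because the map is a contraction, 12 parallel sweeps recover a 50-step trajectory to high accuracy; with an expanding map, the same loop would not converge.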

Gonzalez (arXiv:2603.16850) proves the answer is encoded in a number the dynamical system already carries: the largest Lyapunov exponent. When it is negative — the system is stable, perturbations shrink — parallel Newton methods converge quickly. When it is positive — the system is chaotic, perturbations grow — they do not.
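The exponent itself is cheap to estimate for a one-dimensional map: average the log-derivative along a trajectory. A sketch using the logistic map (parameter values chosen by me to land in the stable and chaotic regimes; this estimator is the standard textbook one, not anything specific to the paper):

```python
import numpy as np

def lyapunov_logistic(r, x0=0.3, n=10000, burn=100):
    # Largest Lyapunov exponent of x_{t+1} = r*x_t*(1 - x_t),
    # estimated as the time-average of log|f'(x_t)| with
    # f'(x) = r*(1 - 2x), after discarding a transient.
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(2.5))  # negative: stable, parallel-friendly
print(lyapunov_logistic(4.0))  # positive (near log 2): chaotic
```

At r = 2.5 the trajectory settles onto a fixed point and the estimate comes out near log(0.5) ≈ −0.69; at r = 4 it comes out near log 2 ≈ +0.69, the known value for the fully chaotic logistic map.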

This is not a computational limitation that cleverer algorithms might overcome. The Lyapunov exponent measures how fast nearby trajectories diverge. When they diverge, predicting the intermediate states requires exponentially precise initial guesses — which defeats the purpose of parallelism. The system’s dynamics and its parallelizability are the same property measured in different units.
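The connection can be stated in one line for a scalar recurrence xₜ₊₁ = f(xₜ). The largest Lyapunov exponent is the long-run average log-derivative along the trajectory:

λ = lim_{T→∞} (1/T) Σ_{t=0}^{T−1} log |f′(xₜ)|

A parallel refinement sweep pushes errors through exactly these derivatives, so after k sweeps the error scales roughly like e^{kλ}: geometric decay when λ < 0, geometric growth when λ > 0. (The definition is standard; the error-scaling statement is a one-line simplification of the paper's analysis, not a quotation of it.)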

The result unifies several known methods. Picard iteration, Jacobi-style fixed-point methods, and quasi-Newton variants all converge (or fail to converge) based on the same criterion. The specific algorithm matters for constants, not for feasibility. The system decides; the algorithm complies.

For recurrent neural networks, this means the parallelizability of backpropagation through time depends on the dynamics of the network, not the structure of the computational graph. A stable RNN can be parallelized; a chaotic one resists it — and the Lyapunov exponent tells you which before you try.
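This can be illustrated with a tiny tanh RNN evaluated by the same parallel sweep. The setup is my own toy construction: scaling the recurrent weights to spectral norm below 1 is one *sufficient* condition for stability (since |tanh′| ≤ 1 bounds the step's Jacobian norm by ‖W‖₂), not the paper's criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 40

# Stable regime: scale W so the map h -> tanh(W h + b) is a
# contraction with factor 0.7 in the 2-norm.
W = rng.normal(size=(d, d))
W *= 0.7 / np.linalg.norm(W, 2)
b = 0.1 * rng.normal(size=d)
h0 = rng.normal(size=d)

def step(h):
    return np.tanh(W @ h + b)

# Sequential rollout: the ground truth.
hs = [h0]
for _ in range(T):
    hs.append(step(hs[-1]))
exact = np.stack(hs[1:])

# Parallel sweeps: refine all T hidden states simultaneously,
# feeding each position the previous sweep's value at t-1.
guess = np.zeros((T, d))
for _ in range(20):
    prev = np.vstack([h0, guess[:-1]])
    guess = np.tanh(prev @ W.T + b)

print(np.max(np.abs(exact - guess)))  # small: sweeps converged
```

Twenty parallel sweeps recover the 40-step rollout to within about 0.7²⁰ of the initial guess error. Rescale W to spectral norm above 1 and push the dynamics into the chaotic regime, and the same loop stops converging.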

The computational constraint and the physical constraint are the same constraint. The system that is hard to predict is hard to parallelize, and for the same reason: sensitivity to initial conditions is sensitivity to approximate intermediate states, and parallelism requires exactly those approximations.

