The Lyapunov Threshold
Sequential computation is inherently serial: step n depends on step n-1. Dynamical systems are the canonical example — you must compute the trajectory one timestep at a time. Can you parallelize this?
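The serial dependence is easy to see in code. A minimal sketch, using the logistic map as a stand-in dynamical system (an illustrative choice, not one taken from the paper):

```python
def trajectory(f, x0, T):
    """Compute x_1..x_T one step at a time: x_{t+1} = f(x_t).

    Each iteration reads the result of the previous one, so the loop
    cannot be naively split across processors.
    """
    xs = [x0]
    for _ in range(T):
        xs.append(f(xs[-1]))
    return xs

# Logistic map, chaotic for r = 3.7 (example system only).
logistic = lambda x: 3.7 * x * (1.0 - x)
xs = trajectory(logistic, 0.2, 100)
```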
Gonzalez shows that the answer depends on a single number: the largest Lyapunov exponent. When it's negative (the system is stable), parallel Newton methods converge quickly and the sequential bottleneck breaks. When it's positive (the system is chaotic), parallelization provably cannot help: the sensitivity to initial conditions that defines chaos also destroys the convergence of parallel solvers.
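You can watch the threshold appear in a toy experiment. The sketch below uses a Jacobi-style fixed-point sweep over the whole trajectory, not the paper's quasi-Newton methods, and the stable and chaotic maps are illustrative choices: for a contracting map the whole-trajectory iteration converges in far fewer sweeps than there are timesteps, while for a chaotic map correct values propagate only one timestep per sweep, so parallelism buys nothing.

```python
import math

def parallel_sweeps(f, x0, T, tol=1e-10, max_sweeps=10_000):
    """Jacobi-style iteration: guess the whole trajectory, then update
    every timestep from the previous guess (each update is independent,
    so a sweep could run in parallel). Returns the number of sweeps
    until the iterate stops changing."""
    xs = [x0] * (T + 1)                          # constant initial guess
    for sweep in range(1, max_sweeps + 1):
        new = [x0] + [f(xs[t]) for t in range(T)]  # all reads from old xs
        err = max(abs(a - b) for a, b in zip(new, xs))
        xs = new
        if err < tol:
            return sweep
    return max_sweeps

stable = lambda x: 0.5 * math.sin(x) + 1.0  # |f'| <= 0.5: contracting
chaotic = lambda x: 4.0 * x * (1.0 - x)     # logistic, r = 4: chaotic

T = 200
s_stable = parallel_sweeps(stable, 0.1, T)   # far fewer than T sweeps
s_chaotic = parallel_sweeps(chaotic, 0.1, T) # roughly T sweeps: no win
print(s_stable, s_chaotic)
```

For the contracting map, the error to the true trajectory shrinks by the contraction factor every sweep, so convergence takes on the order of log(1/tol) sweeps regardless of T. For the chaotic map, each sweep only makes one more timestep exact, so the sweep count scales with T, which is exactly the serial cost.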
The central claim: whether a sequential computation can be parallelized is determined by the dynamics it computes, not by the algorithm you use to parallelize it. The Lyapunov exponent is a property of the system, not of the solver. Stable dynamics admit parallel speedup; chaotic dynamics resist it. This is a structural barrier, not a technical one.
The practical contribution is a family of quasi-Newton and trust-region methods that are scalable and stable, implemented via parallel associative scans on GPUs. But the theoretical result is the lasting one: it connects parallelizability, a computational property, to stability, a dynamical property. The same number that tells you whether nearby trajectories diverge also tells you whether your parallel solver converges.
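The scan-based implementation rests on a standard fact: each step of a linearized recursion x_{t+1} = a_t·x_t + b_t is an affine map, and composing affine maps is associative. A minimal sketch of that operator, checked against the sequential recursion (this shows only the associative combine a parallel scan would use, not the paper's full quasi-Newton machinery; the scalar recursion and constants are illustrative):

```python
# Each timestep of x_{t+1} = a_t * x_t + b_t is the affine map x -> a*x + b.
# Composing two such maps is associative, which is what lets a parallel
# prefix scan (e.g. jax.lax.associative_scan on a GPU) compute all
# prefixes in O(log T) depth instead of O(T).
def combine(first, second):
    a1, b1 = first
    a2, b2 = second
    return (a2 * a1, a2 * b1 + b2)  # apply `first`, then `second`

a = [0.9, 1.1, 0.5, 0.8]
b = [1.0, -0.2, 0.3, 0.7]
x0 = 2.0

# Sequential reference recursion.
x, seq = x0, []
for at, bt in zip(a, b):
    x = at * x + bt
    seq.append(x)

# Sequential composition of the affine maps; a real implementation would
# combine them in a balanced tree, which associativity permits.
prefix, scanned = None, []
for step in zip(a, b):
    prefix = step if prefix is None else combine(prefix, step)
    pa, pb = prefix
    scanned.append(pa * x0 + pb)

assert all(abs(s - t) < 1e-12 for s, t in zip(seq, scanned))
```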
The sign of one exponent decides whether the sequential bottleneck is breakable.