The Neural Initializer

The Ginzburg-Landau equations describe the energy landscape of superconductors — vortex configurations, phase transitions, the competition between magnetic field penetration and superconducting order. Finding energy minimizers requires solving a nonlinear PDE for each value of the Ginzburg-Landau parameter κ, the ratio of penetration depth to coherence length, which distinguishes type-I from type-II superconductivity. Different κ values produce qualitatively different vortex patterns.

GLENN trains a neural network that takes κ as input and outputs an approximate energy minimizer. Instead of solving the PDE from scratch for each κ, the network learns the family of solutions across the parameter range simultaneously.
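As a toy illustration of learning a solution family (nothing like GLENN's actual architecture), consider an energy whose minimizer is known in closed form: for E(ψ) = (ψ² − κ)², the minimizer is ψ*(κ) = √κ. A small least-squares polynomial stands in for the network: fit it to a handful of solved parameter values, then query it at unseen κ:

```python
import math

# Toy stand-in for learning a solution family. For E(psi) = (psi^2 - kappa)^2
# the minimizer is psi*(kappa) = sqrt(kappa); we fit a quadratic surrogate to
# a few precomputed solutions and evaluate it at parameter values we never solved.

def true_minimizer(kappa):
    return math.sqrt(kappa)  # closed form for this toy energy only

# "Training data": a handful of solved parameter values.
kappas = [0.2, 0.4, 0.6, 0.8, 1.0]
solutions = [true_minimizer(k) for k in kappas]

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal equations."""
    A = [[1.0, x, x * x] for x in xs]                      # design matrix
    ATA = [[sum(A[r][i] * A[r][j] for r in range(len(xs))) for j in range(3)]
           for i in range(3)]
    ATy = [sum(A[r][i] * ys[r] for r in range(len(xs))) for i in range(3)]

    def det3(M):  # determinant of a 3x3 matrix
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    d = det3(ATA)
    coeffs = []
    for col in range(3):                                   # Cramer's rule
        M = [row[:] for row in ATA]
        for r in range(3):
            M[r][col] = ATy[r]
        coeffs.append(det3(M) / d)
    return coeffs

def surrogate(c, kappa):
    return c[0] + c[1] * kappa + c[2] * kappa * kappa

c = fit_quadratic(kappas, solutions)
# Query at an unseen parameter: rough, but close enough to warm-start a solver.
print(surrogate(c, 0.5), true_minimizer(0.5))
```

The surrogate is deliberately crude. The point is only that one cheap evaluation amortizes many expensive solves across the parameter range, which is the role the network plays.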

The interesting result is not the standalone neural solver but the hybrid: use the network’s output as an initial guess for a classical finite element iteration. The neural network doesn’t need to be accurate — it needs to be in the right basin of attraction. If the initial guess is close enough to the true minimizer, the classical solver converges in far fewer iterations than from a generic starting point.
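A minimal sketch of the basin-of-attraction point, using a scalar stand-in for the PDE: run Newton's method on the uniform Ginzburg-Landau energy E(ψ) = (1 − ψ²)²/4. A guess near the superconducting minimizer ψ = 1 converges to it in a few steps, while a generic guess near zero converges to the normal-state critical point ψ = 0 instead:

```python
# Toy illustration of the warm-start idea (not GLENN itself): Newton's method
# on E'(psi) = psi^3 - psi = 0, the stationarity condition for the uniform
# Ginzburg-Landau energy E(psi) = (1 - psi^2)^2 / 4.

def newton(psi, tol=1e-12, max_iter=100):
    """Newton iteration on E'(psi); returns (critical point, iterations used)."""
    for k in range(1, max_iter + 1):
        grad = psi ** 3 - psi        # E'(psi)
        hess = 3 * psi ** 2 - 1      # E''(psi)
        step = grad / hess
        psi -= step
        if abs(step) < tol:
            return psi, k
    return psi, max_iter

# Hypothetical stand-in for the network: any cheap map from the parameter
# to a rough guess inside the right basin. The value 0.9 is illustrative.
def surrogate_guess(kappa):
    return 0.9

good, n_good = newton(surrogate_guess(0.5))  # warm start near psi = 1
bad, n_bad = newton(0.1)                     # generic start near psi = 0

print(good, n_good)   # converges to the minimizer psi = 1 in a few steps
print(bad, n_bad)     # converges to the normal-state point psi = 0
```

Both runs converge quickly; the difference is *where* they converge. The warm start buys correctness of the basin, and the classical iteration supplies the precision.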

This is a practical insight about the division of labor. Neural networks are good at learning the global structure of solution families — which basin to land in, what the rough shape looks like. Classical solvers are good at local refinement — converging to the precise answer once you’re close. Using the neural network as a standalone solver sacrifices the reliability of the classical method. Using the classical solver alone sacrifices the speed of the neural network’s global pattern recognition.

The neural network is not replacing the solver. It’s telling the solver where to start.
