"The Self-Limiting Controller"

The Self-Limiting Controller

Good control concentrates the system state — a well-controlled robot stays near its target, a well-regulated temperature stays near its setpoint. But concentrated state carries less information. A system sitting at its target in a narrow distribution provides fewer distinguishing observations than one wandering broadly.
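The information cost of concentration can be made concrete with a toy Gaussian channel (this is an illustrative sketch, not the paper's model): if the observation is the state plus independent Gaussian noise, the mutual information between state and observation shrinks as the state distribution narrows.

```python
import math

def mutual_info_nats(state_var: float, noise_var: float) -> float:
    """I(X; Y) in nats for Y = X + N, with X ~ N(0, state_var), N ~ N(0, noise_var).

    Standard Gaussian-channel formula: I = 0.5 * ln(1 + state_var / noise_var).
    """
    return 0.5 * math.log(1.0 + state_var / noise_var)

# As control tightens the state distribution, each observation distinguishes less.
noise_var = 0.1
for state_var in (1.0, 0.5, 0.1, 0.01):
    info = mutual_info_nats(state_var, noise_var)
    print(f"state variance {state_var:5.2f} -> observation info {info:.3f} nats")
```

A well-controlled state with variance comparable to the sensor noise yields well under a nat per observation; the same sensor applied to a wandering state yields far more.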

Pacelli and Theodorou prove this creates a fundamental performance loop: better control → more concentrated state → less informative observations → tighter bounds on future control quality.

Using the Gibbs variational principle from statistical mechanics, they derive exact limits on what sensor-based controllers can achieve. The limit isn’t on the sensors or the actuators — it’s on the feedback loop itself. The controller’s own success constrains how much it can learn from future observations, which constrains how much better it can get.

The result is a fixed point: the optimal controller achieves the best performance consistent with the information it generates about its own system. Push harder, and the system state concentrates further, and the observations become less informative, and the controller can’t justify pushing harder.
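The fixed-point structure can be sketched with a deliberately crude two-function model (my own toy construction, not the authors' derivation): observation informativeness grows with state variance, and achievable state variance shrinks with informativeness; iterating the loop settles at an equilibrium where neither can improve.

```python
import math

def obs_info(state_var: float, noise_var: float = 0.1) -> float:
    # Toy Gaussian channel: a more spread-out state gives more informative observations.
    return 0.5 * math.log(1.0 + state_var / noise_var)

def controlled_var(info: float, open_loop_var: float = 1.0, gain: float = 5.0) -> float:
    # Toy control law: more information per observation permits tighter regulation.
    return open_loop_var / (1.0 + gain * info)

# Iterate the loop: control quality -> state concentration -> information -> control quality.
v = 1.0  # start at the open-loop (uncontrolled) variance
for _ in range(50):
    v = controlled_var(obs_info(v))
print(f"equilibrium state variance: {v:.4f}")
```

The iteration converges to a strictly positive variance: pushing below it would starve the controller of the information needed to justify the push, exactly the self-limiting equilibrium described above.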

The self-limitation is not a design flaw to be engineered around. It’s a thermodynamic-style bound: the information-gathering capacity of the system is coupled to the state distribution, which is coupled to the control quality, which closes the loop. The three constraints form a mutually reinforcing cycle with a unique equilibrium.

Perfect control would require infinite information about a state with zero variance. The controller is limited by its own competence.
