"The Error Normalizer"

Inhibitory neurons in biological neural circuits serve multiple functions — balancing excitation, gating signals, sharpening responses. One well-studied role is divisive normalization: scaling neural activity relative to its neighbors, compressing the dynamic range to handle inputs that vary over orders of magnitude. This explains how we see in both sunlight and starlight.
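To see what the operation does, here is a minimal sketch of the canonical divisive normalization equation, y_i = x_i / (σ + Σ_j x_j); the function name, the σ value, and the summed pool are illustrative choices, not anything specific to the new work:

```python
import numpy as np

def divisive_normalization(x, sigma=0.01):
    # Canonical form (Carandini & Heeger): y_i = x_i / (sigma + sum_j x_j).
    # sigma is the semi-saturation constant; the pooled activity of the
    # neighborhood divides each unit's response down.
    return x / (sigma + x.sum())

pattern = np.array([1.0, 2.0, 4.0])
for gain in (1e-3, 1.0, 1e3):  # inputs spanning six orders of magnitude
    y = divisive_normalization(gain * pattern)
    print(f"gain {gain:g}: {np.round(y, 3)}")
# The relative pattern (1:2:4) survives at every gain while the outputs
# stay bounded: the dynamic range is compressed without losing the shape
# of the response. That is the sunlight-and-starlight trick.
```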

The new finding: normalization applied to error signals used for learning, rather than to the neural activity itself, produces a distinct and measurable improvement in performance. When the signal that drives synaptic plasticity — the error between what happened and what was expected — is normalized by inhibitory circuits, the network learns better under variable conditions.

The mechanism isn’t about compressing activity. It’s about compressing the learning signal. In tasks where input statistics change (variable lighting in image recognition, for example), the raw error signal fluctuates wildly even when the network’s performance is stable. Normalizing the error removes the nuisance variability, letting the learning rule respond to genuine mistakes rather than to changes in signal amplitude.
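To make that concrete, here is a toy sketch, not the paper's actual learning rule: the scene vector, the fixed 90% prediction quality, the L1 error pool, and σ are all assumptions for illustration. The point it demonstrates is the one above: the raw error tracks a nuisance gain even when relative performance is constant, while the normalized error does not.

```python
import numpy as np

def normalize_error(err, sigma=1e-6):
    # The same divisive operation as before, but aimed at the teaching
    # signal: divide each error component by the pooled error magnitude,
    # so plasticity sees the pattern of mistakes, not the amplitude.
    return err / (sigma + np.abs(err).sum())

scene = np.array([0.2, 0.5, 0.3])
quality = 0.9  # the network captures 90% of the signal at every gain
for gain in (0.01, 1.0, 100.0):  # "variable lighting": same scene, rescaled
    signal = gain * scene
    raw_err = signal - quality * signal   # scales with the nuisance gain
    norm_err = normalize_error(raw_err)   # reflects only the mistake pattern
    print(f"gain {gain:g}: raw |err| = {np.abs(raw_err).sum():.4g}, "
          f"normalized |err| = {np.abs(norm_err).sum():.4g}")
# Raw error spans four orders of magnitude even though performance is
# identical in every case; the signal a learning rule would see after
# normalization is essentially constant.
```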

The result separates two functions that normalization could serve: perception and learning. Normalizing activity helps you perceive consistently despite changing inputs. Normalizing errors helps you learn consistently despite changing inputs. Same computational operation, different targets, different consequences.

Inhibitory neurons have long been known to be functionally diverse. This result adds a specific prediction: the inhibitory circuits that modulate plasticity should have different connectivity and dynamics from those that modulate activity. The same trick applied to different signals does different work.

