"The Fitness Peak"
The Fitness Peak
Scaling laws in AI are presented as monotonic: larger models perform better, and performance improves predictably with compute. Baciak, Cellucci, and Falkowski (arXiv:2603.14664) argue this confuses capability with fitness.
They define an Institutional Fitness Manifold with four dimensions: capability, trust, affordability, and compliance. Capability scales with model size. The other three do not; they tend to scale in the wrong direction. Larger models are harder to trust (less interpretable, more prone to unexpected behaviors), more expensive to deploy, and more difficult to certify for regulated environments. Hence the Institutional Scaling Law: fitness is non-monotonic in model scale.
This means there is a peak. Below it, scaling helps. Above it, scaling hurts — not because the model is less capable, but because the institutional friction of deploying it outweighs the capability gains. The curve bends back.
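To make the non-monotonicity concrete, here is a minimal toy sketch. The functional forms and constants are illustrative assumptions of mine, not the paper's: capability grows with diminishing returns while trust, affordability, and compliance decay with scale, so a weighted fitness rises, peaks, and then bends back.

```python
import numpy as np

# Toy fitness model. The specific functions below are assumptions for
# illustration only, not the ones defined by Baciak et al.
def capability(s):      # grows with scale, diminishing returns
    return np.log1p(s)

def trust(s):           # larger models are harder to interpret and audit
    return 1.0 / (1.0 + 0.5 * s)

def affordability(s):   # deployment cost rises with scale
    return 1.0 / (1.0 + 0.2 * s)

def compliance(s):      # certification burden rises with scale
    return 1.0 / (1.0 + 0.3 * s)

def fitness(s, w=(1.0, 1.0, 1.0, 1.0)):
    # Weighted sum over the four manifold dimensions; the weights stand in
    # for how a given institution values each dimension.
    dims = (capability(s), trust(s), affordability(s), compliance(s))
    return sum(wi * di for wi, di in zip(w, dims))

scales = np.linspace(0.1, 50.0, 500)   # abstract "model scale" axis
f = np.array([fitness(s) for s in scales])
peak = scales[f.argmax()]
print(f"fitness peaks near scale {peak:.1f}, then declines")  # non-monotonic
```

Where the peak sits depends entirely on the weights: an institution that discounts compliance and cost pushes the peak toward larger models, while a regulated buyer pulls it back toward smaller ones.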
The analogy to punctuated equilibria is deliberate. Biological evolution does not proceed smoothly — long periods of stasis are interrupted by rapid transitions driven by environmental shifts. The authors identify five eras in AI history since 1943, each separated by a phase transition rather than a smooth capability ramp. The current Generative AI era has its own internal epochs, each triggered not by capability improvements but by institutional responses: regulation, trust crises, cost barriers.
The practical implication inverts the scaling paradigm. Orchestrated systems of smaller, domain-adapted models can outperform frontier generalists on aggregate fitness in most deployment contexts: not because they are more capable per parameter, but because they are more trustworthy, cheaper, and easier to certify. The smaller model wins not by being better but by being deployable.
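A deliberately simplified scorecard makes the same point. The per-dimension scores and weights below are invented for illustration, not figures from the paper; only the ordering matters.

```python
# Hypothetical per-dimension scores in [0, 1] for a single frontier model
# versus an orchestrated system of small, domain-adapted models.
frontier  = {"capability": 0.95, "trust": 0.40, "affordability": 0.30, "compliance": 0.35}
orchestra = {"capability": 0.75, "trust": 0.80, "affordability": 0.85, "compliance": 0.90}

# Assumed weights for a regulated deployment: capability matters, but trust
# and compliance are gating concerns.
weights = {"capability": 0.25, "trust": 0.30, "affordability": 0.15, "compliance": 0.30}

def institutional_fitness(scores, weights):
    # Simple weighted sum over the four manifold dimensions.
    return sum(weights[d] * scores[d] for d in weights)

print("frontier :", round(institutional_fitness(frontier, weights), 3))
print("orchestra:", round(institutional_fitness(orchestra, weights), 3))
# The less capable but more deployable system scores higher on aggregate fitness.
```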
The deeper claim: capability is not the scarce resource. Trust is. And trust does not scale.