"The Learned Interaction"
The Learned Interaction
A swarm of agents moves through space. Each agent’s trajectory is shaped by interactions with others — attraction, repulsion, alignment — plus noise. You observe the trajectories. Can you recover the interaction rules?
Albi, Alla, and Calzola build a framework that solves this inverse problem: given trajectory data from a stochastic multi-agent system, reconstruct the interaction and diffusion kernels without assuming their functional form. The method uses sparse regression over compactly supported basis functions, fitting the observed dynamics to a nonlocal equation.
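A minimal sketch of the regression idea, not the authors' implementation: simulate a simple 1D first-order model with a known interaction kernel, expand the unknown kernel in compactly supported hat functions, and recover its nodal values by least squares on finite-difference velocities. The model, the kernel exp(-r), and the basis are illustrative assumptions; the paper's framework additionally uses sparsity-promoting regression and also reconstructs diffusion kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a 1D first-order interacting particle system (Euler scheme) ---
# dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) * (x_j - x_i)
# ground-truth kernel, hidden from the regression (an illustrative choice):
phi_true = lambda r: np.exp(-r)
N, steps, dt = 30, 200, 0.01

x = rng.uniform(0.0, 4.0, size=N)
traj = [x.copy()]
for _ in range(steps):
    diff = x[None, :] - x[:, None]           # diff[i, j] = x_j - x_i
    x = x + dt * (phi_true(np.abs(diff)) * diff).mean(axis=1)
    traj.append(x.copy())
traj = np.array(traj)                        # shape (steps + 1, N)

# --- compactly supported hat basis on [0, 4] ---
K = 9
centers = np.linspace(0.0, 4.0, K)
h = centers[1] - centers[0]
hat = lambda r, c: np.maximum(0.0, 1.0 - np.abs(r - c) / h)

# --- regression: match finite-difference velocities to basis predictions ---
rows, targets = [], []
for t in range(steps):
    xt = traj[t]
    vel = (traj[t + 1] - xt) / dt            # observed velocity of each agent
    diff = xt[None, :] - xt[:, None]
    r = np.abs(diff)
    # feature k for agent i: (1/N) * sum_j B_k(r_ij) * (x_j - x_i)
    feats = np.stack([(hat(r, c) * diff).mean(axis=1) for c in centers], axis=1)
    rows.append(feats)
    targets.append(vel)

A = np.vstack(rows)                          # (steps * N, K) design matrix
b = np.concatenate(targets)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# recovered kernel at the basis centers vs. the ground truth
for c, v in zip(centers, coef):
    print(f"r = {c:.1f}:  learned {v: .3f}   true {phi_true(c): .3f}")
```

With noiseless data the learned nodal values track exp(-r) closely wherever pair distances are well sampled; swapping the plain least squares for a sparsity-promoting solver is what keeps the expansion compact when the basis is much richer than the data warrants.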
The central claim: the interaction kernel — the rule that says “how much does agent A push agent B as a function of their distance?” — is recoverable from partial observations, but only if you choose the right level of description. They offer two strategies: random-batch sampling (work with individual particles) and mean-field approximation (work with the empirical density). Both achieve comparable accuracy on benchmark systems.
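The random-batch idea can be sketched in a few lines: shuffle the agents and let each one interact only within its small batch, which cuts the per-step interaction cost from O(N²) to O(N·p) for batch size p. The helper below is a hypothetical illustration of the batching step alone, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)

# random-batch idea: instead of summing interactions over all N - 1 partners,
# shuffle the agents into small batches and interact only within each batch
def random_batches(n, batch_size, rng):
    idx = rng.permutation(n)
    return [idx[i:i + batch_size] for i in range(0, n, batch_size)]

# each step of the dynamics would draw a fresh partition like this one
for batch in random_batches(10, 3, rng):
    print(batch)
```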
What’s interesting is the mean-field pathway. Instead of tracking N interacting particles, you define a continuous density and solve a regression problem over that density field. The individual agents disappear; the interaction emerges from the collective. The data is particular (these particles, these trajectories), but the recovered kernel is universal (the rule governing any configuration).
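The switch in viewpoint can be made concrete with a toy calculation (illustrative, not the paper's discretization): bin the particles into an empirical density on a grid, then compute the nonlocal force as an integral against that density instead of a sum over agents. The kernel exp(-r) and the grid are assumptions; the point is that the two forces agree, because the particle sum is exactly the integral against the empirical measure.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = lambda r: np.exp(-r)                   # illustrative kernel (assumption)

# particle positions and their empirical density on a grid
N = 500
x = rng.uniform(0.0, 4.0, size=N)
edges = np.linspace(0.0, 4.0, 401)           # 400 bins of width 0.01
counts, _ = np.histogram(x, bins=edges)
cells = 0.5 * (edges[:-1] + edges[1:])       # bin centers
weights = counts / N                         # empirical measure on the grid

# mean-field force at a probe point p: integral of phi(|y - p|)(y - p) rho(dy)
def force_density(p):
    return np.sum(weights * phi(np.abs(cells - p)) * (cells - p))

# particle-level force at p: direct average over agents
def force_particles(p):
    return np.mean(phi(np.abs(x - p)) * (x - p))

for p in (0.5, 2.0, 3.5):
    print(p, force_density(p), force_particles(p))
```

Once the force is a functional of the density, the regression no longer cares which particle was which — which is exactly why the recovered kernel generalizes beyond the observed configuration.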
The method works on bounded-confidence models (where agents only influence neighbors within a threshold) and attraction-repulsion dynamics (where the same interaction switches sign at different distances). In both cases, the learned kernel faithfully captures the distance-dependent structure.
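The two kernel families can be written down directly. The forms below are standard illustrative choices — a hard cutoff for bounded confidence and a Morse-type difference of exponentials for attraction-repulsion — not the specific kernels used in the paper's benchmarks, and the parameters are assumptions.

```python
import numpy as np

# bounded-confidence: constant influence inside a radius, zero outside
def phi_bc(r, radius=1.0):
    return np.where(np.abs(r) <= radius, 1.0, 0.0)

# attraction-repulsion (Morse-type): negative (repulsive) at short range,
# positive (attractive) at long range, so the kernel changes sign
def phi_ar(r, Ca=1.0, la=2.0, Cr=2.0, lr=0.5):
    return Ca * np.exp(-np.abs(r) / la) - Cr * np.exp(-np.abs(r) / lr)

print(phi_bc(0.5), phi_bc(1.5))              # inside vs. outside the threshold
print(phi_ar(0.0), phi_ar(2.0))              # repulsive near, attractive far
```

The hard cutoff is what makes bounded-confidence models a stress test for kernel learning: the discontinuity has to be resolved by the basis, which is where compactly supported functions earn their keep.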
The rule is in the data, not in the model assumptions. The framework lets the data speak.