"The Cheapening Channel"

Shannon’s channel coding theorem treats each message as independent. The capacity is a fixed ceiling: you can approach it but never exceed it, and no message is cheaper to send because you sent a previous one.

Tang, Yang, Park, Zhang, Huang, and Gunduz break this assumption by introducing memory. Their system inverts a pre-trained generative model (SemanticStyleGAN) and caches the resulting disentangled semantic components—face shape, hair, expression—at both transmitter and receiver. For a new image, only the components the receiver has not yet seen need transmission; everything it can reconstruct from its cache stays unsent.
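The cache-and-send-the-delta idea can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: the component names and string values are hypothetical placeholders for the inverted latent components, and `transmit` simply diffs a new image's components against what the receiver already holds.

```python
# Hedged sketch of cache-based semantic transmission.
# Component names and values are illustrative, not from the paper.

def transmit(image_components, cache):
    """Return only components the receiver has not cached; update the cache."""
    novel = {k: v for k, v in image_components.items() if cache.get(k) != v}
    cache.update(novel)  # both ends keep the cache in sync
    return novel         # only these components cross the channel

cache = {}
stream = [
    {"face_shape": "A", "hair": "H1", "expression": "smile"},
    {"face_shape": "A", "hair": "H1", "expression": "neutral"},  # expression is new
    {"face_shape": "A", "hair": "H2", "expression": "neutral"},  # hair is new
]
costs = [len(transmit(img, cache)) for img in stream]
print(costs)  # components sent per image: [3, 1, 1]
```

The first image pays full price; later images pay only for what is novel, which is the mechanism behind the per-message cost falling over time.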

With the cache, the compression ratio for a single image is 1/1024, versus 1/128 without it. More striking: the system improves with use. Each transmitted image populates the cache with components that reduce the cost of future transmissions, so the per-message cost decreases over time.

This inverts the fundamental assumption of classical information theory. Shannon capacity is a property of the channel. Here, the effective capacity depends on the communication history. Two agents who have exchanged many messages can communicate the same content at a fraction of the cost of two strangers—not because the channel changed, but because shared context eliminates redundancy at the semantic level.

The system also adapts to new channel conditions without retraining, by folding channel knowledge into the generative inversion rather than learning new weights.

In a communication system with memory, the cost of transmitting information can decrease with experience. The ceiling is not capacity but novelty—and novelty depletes.

