The Weaving Draft

Researchers at multiple Chinese universities built FabricGen, a system that generates realistic woven fabric materials from text prompts. The key architectural choice was to split the problem in two: a diffusion model handles macro-scale texture while a separate procedural model synthesizes micro-scale yarn geometry, including sliding and flyaway fibers. Bridging the two is WeavingLLM, a language model fine-tuned on annotated weaving drafts that translates natural-language descriptions into the precise parameters governing thread interlacement. The combined output produces fabrics with richer detail than any single generative approach achieves alone.
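The paper's internal draft format isn't reproduced here, but conventional shaft-loom drafts already show what "precise parameters governing thread interlacement" look like: three small discrete tables — a threading (which shaft each warp thread is on), a tie-up (which shafts each treadle lifts), and a treadling (which treadle each weft pick uses) — from which the full over/under grid, the drawdown, follows mechanically. A minimal sketch of that expansion, with illustrative names and structure not taken from the paper:

```python
def drawdown(threading, tieup, treadling):
    """Expand a shaft-loom draft into the full interlacement grid.

    threading[j] = shaft that warp thread j is threaded on
    tieup[t][s]  = 1 if treadle t lifts shaft s
    treadling[i] = treadle pressed for weft pick i
    Returns grid[i][j] = 1 where the warp passes over the weft.
    """
    return [[tieup[treadling[i]][threading[j]]
             for j in range(len(threading))]
            for i in range(len(treadling))]

# Plain weave on two shafts: odd and even warps alternate shafts,
# and successive picks lift opposite shafts.
plain = drawdown(
    threading=[0, 1, 0, 1],
    tieup=[[1, 0], [0, 1]],
    treadling=[0, 1, 0, 1],
)
```

This is the kind of output a model like WeavingLLM has to emit: a handful of small integer tables with hard structural semantics, rather than pixels.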

The structural insight is that faithful reproduction requires matching the level of description to the level of structure. Texture is a statistical property — it emerges from distributions and can be captured by diffusion. But weave pattern is a logical property — it follows rules about which threads pass over which, and no amount of statistical sampling will consistently produce valid interlacement. WeavingLLM exists precisely at this boundary, converting fuzzy intent into discrete constraints. The system works not because one model is better than the other but because each model operates at the scale where its representational capacity matches the structure it needs to capture.

This principle recurs wherever systems contain both statistical regularities and logical constraints. Protein structure prediction separates backbone geometry from side-chain packing. Urban planning separates zoning logic from traffic flow simulation. The recurring lesson: when a system has structure at multiple scales, a single model trained end-to-end will learn the easier scale and hallucinate the harder one. Splitting the problem along the natural joints of the domain is not an engineering convenience — it is a fidelity requirement.

(arXiv:2603.07240)


