The Woven Constraint

Generating realistic fabric images sounds like a texture synthesis problem. It’s not. Woven fabrics have structure at two scales: the macro-scale pattern (plaid, herringbone, twill) and the micro-scale weave (how individual yarns cross over and under each other). The micro-scale isn’t decoration — it determines the fabric’s drape, strength, and behavior. A pattern that looks like twill but doesn’t follow twill interlacement rules would unravel.
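The micro-scale weave can be pictured as a binary interlacement grid. As a minimal sketch (my own illustration, not from the paper): 1 means the warp thread passes over the weft, 0 means under, and a 2/2 twill repeats "over two, under two" with a one-thread shift on each weft pass, which is what produces the diagonal rib.

```python
# Illustrative sketch: a 2/2 twill interlacement as a binary matrix.
# 1 = warp over weft, 0 = weft over warp. Each weft row is the
# previous row shifted by one warp thread -- the twill diagonal.

def twill_draft(n_weft, n_warp, over=2, under=2):
    period = over + under
    base = [1] * over + [0] * under
    return [[base[(col - row) % period] for col in range(n_warp)]
            for row in range(n_weft)]

draft = twill_draft(4, 8)
for row in draft:
    print("".join("X" if cell else "." for cell in row))
```

An image of the same fabric stores millions of pixels; this grid is the structural content those pixels would merely depict.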

Tang et al. separate the two scales. A specialized language model (WeavingLLM) is trained on annotated weaving drafts — the notation weavers use to specify which warp threads lift for each weft pass. The model generates structurally valid weaving instructions, not just visually plausible textures. A second stage renders the yarn-level detail.
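To make the idea of "structurally valid instructions" concrete, here is a hedged sketch in the spirit of a liftplan: each weft pass lists which warp threads lift, and a simple check rejects passes where the weft never interlaces (a row that is all ups or all downs leaves a float spanning the full width). The function names and the validity rule are my illustration, not the paper's actual notation.

```python
# Hypothetical draft-style instructions: for each weft pass, the set
# of warp threads that lift (warp shows on the face as 1).

def liftplan_to_matrix(liftplan, n_warp):
    """Convert a list of lifted-warp sets into an interlacement grid."""
    return [[1 if w in lifted else 0 for w in range(n_warp)]
            for lifted in liftplan]

def is_interlaced(matrix):
    """Reject any pass where the weft floats across the full width:
    a row of all 0s or all 1s never interlaces with the warp."""
    return all(0 < sum(row) < len(row) for row in matrix)

plan = [{0, 1, 4, 5}, {1, 2, 5, 6}, {2, 3, 6, 7}, {3, 4, 7, 0}]
m = liftplan_to_matrix(plan, 8)
print(is_interlaced(m))  # True: every pass interlaces
```

A pixel generator has no hook for a constraint like `is_interlaced`; a generator that emits the liftplan itself can be checked (or trained) against it directly.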

The through-claim: the constraint IS the representation. Generating fabric by generating pixels produces images that look right but encode no weaving structure. Generating fabric by generating weaving drafts produces fabrics that are right — they could be manufactured. The weaving notation isn’t an intermediate step; it’s the correct representation for the problem. Bypassing it produces photorealistic fakes. Respecting it produces designs.

The specialization of the language model is telling: WeavingLLM is trained on draft notation, not on natural language or images. The structure of the domain demanded its own tokenization. General-purpose models see fabric as texture. WeavingLLM sees it as construction.
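One way to picture domain-specific tokenization (purely my illustration; nothing here reflects WeavingLLM's actual tokenizer): treat each complete lift-state of a weft pass as a single vocabulary item, so the model's tokens are pass patterns rather than characters or word pieces.

```python
# Purely illustrative: a vocabulary whose atoms are whole lift-states.
# Each draft row (one weft pass) becomes one token id, so sequence
# structure maps directly onto weaving structure.

rows = ["XX..XX..", ".XX..XX.", "..XX..XX", "X..XX..X"]  # a 2/2 twill
vocab = {pattern: i for i, pattern in enumerate(sorted(set(rows)))}
tokens = [vocab[r] for r in rows]
print(tokens)
```

Under a vocabulary like this, every token the model can emit is a physically meaningful pass; "see it as construction" is built into the alphabet.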

To generate fabric, you must think like a loom.
