Digital Cloth: You’re Doing It Wrong

One of the areas of interest to Fashion Research Institute is the development of an accurate digital representation of cloth that simulates how cloth moves and drapes. It’s apparently the hot trend in the tech world too, judging by how many queries and requests we receive to ‘just look at’ whatever nifty new release some tech company has come up with. Our usual response is ‘nice try; the gamers and techies will think it’s nice, but it’s not right.’

We’re fashion designers with a long history of creating garments. Our first interest and our true love in fashion is the high art of couture. Couture differs from almost every other kind of design except for runway in that a single individual is focused on all aspects of dressing another individual. It’s true one-to-one work, and a large part of what we do as couturières is to drape cloth on our customers or on physical representations (mannequins) of our customers. A good couturière can create trompe l’oeil effects and make a man’s legs look longer or a woman’s bust bigger or smaller. We can slenderize a wearer’s silhouette or make him (usually) appear larger. As we work in the field, we learn firsthand about cloth: its drape, its handle, what it wants to do on the mannequin.

The longer we work, the more specific textile knowledge we acquire, until after 30 years our fingertips are the ultimate augmented reality device and we have forgotten more about cloth than most people ever know. We can look at a bolt of cloth and know what it will feel like before we ever touch it. We can determine the fiber percentage by running a fingernail over the weave of the cloth and listening to the resonance of the sound it makes. Polyester, for example, has a high, singing note that is unmistakable, and the higher the polyester content, the higher the note.

This is real expertise, and you don’t get it in design school or from anything but manipulating cloth all day, every day, to produce precisely the results you want, until cloth becomes part of you.

This is why we’re always bewildered when our colleagues in the tech world want us to be thrilled about their digital models of cloth. After all, they’ve just created a dancing model with swirling cloth; how fabulous is that? Well, for us it’s not that fabulous. We don’t really know how much hard work went into the model; we just know that when we look at it, it’s not right, by which we mean it’s not accurate. We’re not excited. We look at the models dancing and we think, ‘it’s not right.’ We don’t think about the people who had to code the underlying program: how they had to figure out the soft-body/hard-body deformations, how occlusions will work, how lighting and ray tracing and all the rest of it will get wrapped up and made to work, how they got their simulated cloth to run in a semi-reasonable render time (preferably on a PC), how excited they are, and how this new digital cloth stunt is going to change the (gaming) industry. Fashion designers don’t think that way about cloth. We don’t (usually) know how much work went into the code; all we think is ‘hm, ok,’ and then it’s back to reading Women’s Wear Daily. We don’t think ‘how can we use this?’ because, frankly, we cannot.
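For readers curious about what these digital models usually boil down to under the hood, here is a rough, hypothetical Python sketch of the mass-spring approach that most real-time cloth systems build on. It is our own simplification for illustration only, not any particular team’s code, and the grid size, spring layout, and constants are made up; the actual engines layer collision handling, shear and bending stiffness, lighting, and rendering on top of a core loop like this.

```python
# Minimal mass-spring cloth sketch (hypothetical, simplified illustration).
# Real systems add collision handling, self-collision, shear and bending
# springs, and rendering on top of a core loop like this one.

import numpy as np

W, H = 20, 20                      # particles across and down the cloth grid
REST = 0.05                        # rest length between neighbours (metres)
GRAVITY = np.array([0.0, -9.81, 0.0])
DT = 1.0 / 60.0                    # time step (seconds)
ITERATIONS = 10                    # constraint-relaxation passes per step

# Particle positions: a flat grid hanging in the x/y plane.
pos = np.array([[x * REST, -y * REST, 0.0]
                for y in range(H) for x in range(W)])
prev = pos.copy()                  # previous positions, for Verlet integration
pinned = {0, W - 1}                # pin the two top corners so the cloth hangs
pin_pos = {p: pos[p].copy() for p in pinned}

# Structural springs: each particle linked to its right and lower neighbour.
springs = []
for y in range(H):
    for x in range(W):
        i = y * W + x
        if x + 1 < W:
            springs.append((i, i + 1))
        if y + 1 < H:
            springs.append((i, i + W))

def step():
    """Advance the cloth by one time step."""
    global pos, prev
    # Verlet integration: carry forward velocity, then add gravity.
    velocity = pos - prev
    prev = pos.copy()
    pos = pos + velocity + GRAVITY * DT * DT

    # Relax the spring constraints by nudging linked pairs toward REST length.
    for _ in range(ITERATIONS):
        for a, b in springs:
            delta = pos[b] - pos[a]
            dist = np.linalg.norm(delta)
            if dist == 0.0:
                continue
            correction = delta * (dist - REST) / dist * 0.5
            if a not in pinned:
                pos[a] += correction
            if b not in pinned:
                pos[b] -= correction
        # Keep the anchored corners exactly where they were pinned.
        for p in pinned:
            pos[p] = pin_pos[p]

for _ in range(120):               # simulate two seconds at 60 fps
    step()
print(pos[W * H // 2])             # position of a particle mid-cloth
```

Even a toy like this makes the gap obvious: the springs know nothing about fiber content, weave, weight, or handle, which is exactly the information our fingertips read from real cloth.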

Tech guys (and almost all of you are guys): has it occurred to you that you aren’t asking the right questions? Or that you aren’t asking the right people? And if you keep getting the wrong answers from the wrong people, how do you expect to get the correct one?

A colleague sent us this article to read on New Scientist: “Game characters to get authentically rumpled clothes.” Of course we went and read it, and of course we watched the video, and of course we were disappointed. And we got a chuckle out of these quotes in the article: “According to [Carsten] Stoll [of the Max Planck Institute for Informatics in Saarbrücken, Germany], the results are extremely realistic. When he and his team showed 52 people a video of a woman dancing in a skirt alongside a reconstruction that his software had produced, the majority of viewers said that the reconstruction was ‘almost the same’ as the original.” We have to ask: did they bother to ask the actual experts who work in cloth every day? We think they didn’t, or the responses would have been very different. Mr. Stoll, we’re sorry. You’ve done beautiful work. You should be thrilled, and we’re sure game designers will be too. Fashion designers… not so much. Your cloth model doesn’t look like cloth to our trained eyes. We can’t use it for anything real.

And then this, which we thought really summed up the whole thing but didn’t go far enough: “But to truly fool the eye, [Andy] Lomas [The Foundry, London] would like to see a more sophisticated version of the software reconstruct more challenging items of clothing, like buttoned jackets and well-tailored suits.”

We’re designers. The only way we want to fool the eye (trompe l’oeil) is by making people look better in their clothing through the cut, the styling, and the textiles we use in their garments. And unfortunately, when we look at the digital models out there, no one is fooling our eye.

Lomas goes on to say, “Right now, no one is going to trust a computer graphics expert with no experience of fashion to design a virtual suit.” And we say: well said, Mr. Lomas, you nailed it. So we have to ask: why are all these tech folks and mathematicians trying to create digital cloth that looks like digital cloth without actually asking the real experts?

Digital cloth: guys, you are doing it wrong.

The ‘right’ answer is hard, and it is expensive. We know that. We know how to do it, we know how much it will cost, and we know how long it will take. There are, unfortunately, no shortcuts to realistic, accurate digital cloth simulation, and anything less is not good enough for real apparel design.

4 thoughts on “Digital Cloth: You’re Doing It Wrong”

  1. Worth noting, though, that part of the reason for this dialogue is the fact that we’ve all had, in virtual environments, experience of clothing that felt substantially ‘right’ in certain ways (e.g., perhaps only in one posture, perhaps in a photographed tableau) despite its contrived nature being obvious.

  2. fascinating article and while i would love clothes that look like clothes (and not just a poor body painting application), i don’t know any better and never pay much attention to this issue (i have been wearing the same outfit for a year – ewww) =D

    i suppose once i was exposed to what clothes “could be”, i would then have a new yardstick and “have to have” that type of clothing! =)

  3. John, thanks. We think this is a pretty important thing ourselves.

    There are no incremental steps to cloth. Any effort to use graphics cheats is obvious to the trained eye. Most fashion designers won’t have the background necessary to discuss why it’s wrong, so talking to just any domain expert isn’t going to work.

    Imagine the scene: tech team hopefully shows their latest simulation to a generic fashion designer. Tech team hopefully asks, is it right? Designer says no. Tech team, crestfallen, asks why not? She says it’s not right, and goes back to designing. Tech team drops head in hands and dreams about a world where designers come with USB ports in their heads and factory-installed software, so they can just plug her into a laptop and download her knowledge.

    A cloth simulation either is or isn’t right. There are no half measures. When the simulation model is correct, the digital cloth looks right. It is accurate, and the visualization meets our rather well-defined expectations. There are no graphics cheats involved.

    Also, the process of getting a model to simulate cloth accurately and correctly is incredibly painful and not an entertaining experience. Seriously.

  4. This is an excellent essay, and it needs to be seen by a lot of people. It obviously has implications that go way beyond cloth, too: it speaks to the whole way people tend to think about simulating physics and physical processes, and in a larger sense, how folks are biased to think about simulating anything using computers.

    For what it’s worth, I think it also has implications for how we engineer online experiences more mundane (or ostensibly more mundane) than visual simulation; for example, the way clothing is sold via online retail. Clearly, this is working better than it used to, but advances are incremental and, once again, biased by the limitations (and concern for the performance) of video displays and digital photography, without consideration of the other sensory aspects of worn objects, many of which are critical to the shopper’s making a confident determination of quality before buying.

    Overall, it makes me think at least one counter-intuitive thing — which is that I’d like to see and play with (for example) cloth simulations that are functionally-better, though visually less-good (for now). I suspect that’s the tradeoff, since in general, there are two ways to simulate anything: the high-performance way, which seeks to find efficiently-computable mathematical expressions that can be said to model properties; and the exhaustive way, which takes a tiny chunk of reality and seeks to model it rigorously, then networks it with similar chunks to (try to) achieve higher-order effects.

    I suspect the latter is more along the lines of what you’re talking about: a less-abstract approach that aims at substance and meaning, rather than rendering cost. But I’m cautioned that, historically, when they try to do stuff like this, they tend to hit the wall pretty fast. Back in the day, there was an offshoot of turbulence simulation that was working in this direction, trying to achieve usable results by processing a very, very small number of water or air molecules that were themselves modeled with hysterical attention to detail — the general notion was “What if we simulate the molecules real well, then make them real big and send only 100 or so down the pipe — do we get meaningful results?” Not sure what the result of that was, but it’s clearly not what the game-engine guys are doing.

    Be interesting to see with cloth though. Not saying it would be usable (or even look like cloth). But it would be interesting to see if professional cloth-users could see essential ‘realism’ in a graphically-poor simulation based on solidist principles.
