Announcing Drape2
Mar 31, 2025 - by Uwear.ai
Today, we release Drape2, a next-gen adapter that generates on-model images of clothing products from a single packshot, in seconds. While Drape1 was trained on SDXL, Drape2 is a FLUX Schnell adapter, opening up greater human-model diversity and better clothing fidelity. This release gets us closer to our vision of providing a fast and cheap virtual fashion camera for all.
A New Benchmark in Realism and Diversity
Drape2 makes an incredible leap in the quality and variety of the virtual human models it creates. Today's generative models can produce strikingly realistic humans, but they tend to converge on the same body shapes and facial traits. A key requirement for the ultimate virtual camera is matching the breadth of diversity of humanity itself. Drape2 gets us closer to that goal.
Every Stitch Counts
Faithfully rendering clothing details from a single packshot in seconds is a core challenge, but equally important is achieving that fidelity consistently. With complex items in Drape1, nailing the perfect render could often require generating 5, 10, or even 20 images. Reducing this variability is key.
We stick firmly to our constraints – speed and single-image input – because they are fundamental to the accessible and scalable virtual camera we envision. Our path isn't to compromise these for easier quality gains.
This is where Drape2 makes a dramatic difference. While still generating in ~5 seconds (on an L40S GPU), it significantly improves the reliability of accurate generations. Intricate textures, seams, and embellishments are now captured with high fidelity much more consistently, drastically reducing the number of attempts needed to get a great result.
While perfect 'one-shot' generation for every item remains the ultimate goal, this breakthrough in reliability strongly validates our approach. It fuels our commitment to pushing both quality and consistency within these essential boundaries, making the process far more efficient for our users.
Four consecutive generations of detailed clothing using Drape2, 20 seconds on a single L40S:
"little blonde boy, sitting on the floor of a classroom"
"female model standing in a paint shop, wearing a tshirt with blue jeans and yellow sneakers"
"midshot of an athletic black man, studio photography"
Up Next: Consistent Models and Multi-Clothing
The launch of Drape2 marks significant progress, but we're already pushing towards the next frontiers to realize our full vision for a seamless virtual fashion camera. Here’s what’s on our roadmap:
First, we're enabling human models as inputs. This is crucial for giving users complete control – allowing you to select a specific model or even use a custom avatar, ensuring consistency across multiple generations. Most importantly, this capability opens the door to genuine virtual try-on functionality, allowing shoppers to visualize clothing on models that represent them accurately, in seconds.
Following that, the final key piece is developing native support for multi-clothing inputs. Our goal is for Drape to intelligently generate a complete, cohesive outfit when provided with multiple clothing items simultaneously. While common workarounds involve using inpainting to layer additional garments onto an initial image, we believe this approach inherently limits quality. True outfit realism comes from the AI understanding and generating all pieces together from the start, which is the capability we are building.
These two advancements are the next major steps in delivering the fast, accessible, and truly comprehensive virtual fashion camera we envision.
Try Drape2 today
You can try Drape2 today in our virtual studio or via API:
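The post doesn't document the API itself, but as a rough illustrative sketch, a client calling a single-packshot generation service might assemble a request like the one below. The endpoint URL, field names (`packshot`, `prompt`, `num_images`), and payload shape here are all hypothetical assumptions for illustration, not the documented Drape2 interface.

```python
import json

# Hypothetical endpoint -- illustrative only, not the documented API.
API_URL = "https://api.example.com/v2/generate"

def build_request(packshot_url: str, prompt: str, num_images: int = 4) -> str:
    """Assemble a JSON body for a single-packshot generation call.

    All field names below are assumptions made for this sketch.
    """
    payload = {
        "packshot": packshot_url,   # single product image input
        "prompt": prompt,           # scene / human model description
        "num_images": num_images,   # e.g. four consecutive generations
    }
    return json.dumps(payload)

# Example using one of the prompts from this post:
body = build_request(
    "https://example.com/tshirt-packshot.jpg",
    "female model standing in a paint shop, wearing a tshirt with blue jeans",
)
```

The body would then be sent with any HTTP client (e.g. a POST with a `Content-Type: application/json` header); check the actual API documentation for real endpoints and parameters.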