Compare commits

10 commits: `f28560300a` ... `7065b3ef01`
| Author | SHA1 | Date |
|---|---|---|
| | 7065b3ef01 | |
| | a7e766705a | |
| | ee3a0d13f6 | |
| | 99972601d5 | |
| | d154092cf4 | |
| | efe7472416 | |
| | 74802254a1 | |
| | ebb21a3dd4 | |
| | c704945a88 | |
| | edb0d08c39 | |
.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
+*.pth
LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Style3D
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
README.md (105 lines changed)

@@ -1,23 +1,29 @@
 # Design2GarmentCode: Turning Design Concepts to Tangible Garments Through Program Synthesis
 
-[](https://arxiv.org/abs/2412.08603)
-[](https://style3d.github.io/design2garmentcode/)
-[](https://www.youtube.com/xxx)
+[arXiv](https://arxiv.org/abs/2412.08603) | [Project Page](https://style3d.github.io/design2garmentcode/)
+<span class="author-block"><a href="">Feng Zhou</a>, </span>
+<span class="author-block"><a href="https://walnut-ree.github.io/">Ruiyang Liu</a>, </span>
+<span class="author-block"><a href="">Chen Liu</a>, </span>
+<span class="author-block"><a href="">Gaofeng He</a>, </span>
+<span class="author-block"><a href="https://dirtyharrylyl.github.io/">Yong-Lu Li</a>, </span>
+<span class="author-block"><a href="http://www.cad.zju.edu.cn/home/jin/">Xiaogang Jin</a>, </span>
+<span class="author-block"><a href="https://wanghmin.github.io/">Huamin Wang</a></span>
+
+Feng Zhou, Ruiyang Liu, Chen Liu, Gaofeng He, Yong‑Lu Li, Xiaogang Jin, Huamin Wang. *CVPR 2025.*
 
 <p align="center">
   <img src="https://github.com/Style3D/design2garmentcode-impl/raw/main/assets/img/neural_symbolic-pipeline.png">
 </p>
 
 Official implementation for Design2GarmentCode, a modality-agnostic sewing pattern generation framework that leverages fine-tuned Large Multimodal Models to generate parametric pattern-making programs from multi-modal design concepts.
-
-We propose a novel sewing pattern generation approach, Design2GarmentCode, based on Large Multimodal Models (LMMs), to generate parametric pattern-making programs from multi-modal design concepts.
 
 ---
 
 ## Installation
 
 ### 1. Clone the repository
 
 ```bash
-git clone https://github.com/your-org/design2garmentcode.git # ← replace with the real URL
-cd design2garmentcode
+git clone https://github.com/Style3D/design2garmentcode-impl.git
+cd design2garmentcode-impl
 ```
 
 ### 2. Create the Conda environment
@@ -38,57 +44,30 @@ If you need local cloth simulation and 3‑D visualization, follow the installat
 
 `Design2GarmentCode` communicates with large multimodal models.
 Follow the steps **in the given order**:
 
-1. **Provide API credentials**
-   - **Environment variable (recommended)** – defaults to *ChatGPT‑4o*
+#### 1. **Provide API credentials for MMUA**
+- **Environment variable (recommended)** – defaults to *ChatGPT‑4o*
 
   ```bash
   export OPENAI_API_KEY="sk-..."
   ```
 
 - **Edit `system.json`** (project root) – manually specify `api_key`, `base_url`, and `model` if you prefer a file‑based approach.
 
-2. **Download the required models**:
-   - First, download the base model [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct/tree/main) and place the entire folder at `lmm_utils/Qwen/Qwen2-VL-2B-Instruct/`.
-   - Next, download the fine-tuned weights file [model.pth](lmm_utils/Qwen/qwen2vl_lora_mlp/model.pth), and place it in `lmm_utils/Qwen/qwen2vl_lora_mlp/`.
+#### 2. **Set up the parameter projector**
+- Download the base model [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct/tree/main) and place the model at `lmm_utils/Qwen/Qwen2-VL-2B-Instruct/`.
+- Download the fine-tuned weights file from [Google Drive](https://drive.google.com/file/d/1CL7OLUq6fYcwoDuLRkBxtKNxJ0_G73U-/view?usp=sharing), and place it in `lmm_utils/Qwen/qwen2vl_lora_mlp/`.
 
 ---
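The credential lookup described above (environment variable first, `system.json` as a file-based fallback) can be sketched in a few lines. The field names `api_key`, `base_url`, and `model` come from the README; the loader name `resolve_llm_config` and the default values are illustrative assumptions, not the project's actual code:

```python
import json
import os


def resolve_llm_config(path="system.json"):
    """Resolve LMM credentials: OPENAI_API_KEY env var first, then system.json.

    The defaults below (base_url, model) are illustrative assumptions.
    """
    cfg = {
        "api_key": os.environ.get("OPENAI_API_KEY"),
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o",
    }
    if cfg["api_key"] is None and os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            file_cfg = json.load(f)
        # File values fill in anything the environment did not provide.
        cfg.update({k: v for k, v in file_cfg.items() if v is not None})
    return cfg
```

With this precedence, an exported `OPENAI_API_KEY` always wins, and `system.json` is only consulted when the environment variable is unset.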
-## Quick GUI Demo
+## Testing with GUI
 
 ```bash
 python gui.py
 ```
 
-
-- Input: Free‑form prompt or an image/sketch
-- Output: GarmentCode JSON, preview image, and (optionally) physics simulation
+Launch the GUI with `python gui.py`; you will see the following interface (modified from GarmentCode):
 
 ---
 
 <p align="center">
   <img src="https://github.com/Style3D/design2garmentcode-impl/raw/main/assets/img/gui_example.png">
 </p>
 ### 1. Text-Guided Pattern Generation
 
-Switch to the `PARSE DESIGN` tab and enter your design input (a text description, photograph, or sketch) in the chatbox. The generated sewing pattern will appear on the right side after parsing.
+- Go to the PARSE DESIGN tab.
+- In the input box at the bottom ("Describe your design..."), type a natural language description of the garment, e.g., "a T-shirt".
+- Click SEND to generate patterns based on your description.
 
 ---
 ### 2. Image-Guided Pattern Generation
 
 - Click the upload icon inside the input box to upload a reference image or sketch.
 - Once the image is uploaded, click SEND to parse the design and generate the corresponding patterns.
 
 ---
 ### 3. Modify Patterns in the GUI
 
-Once a pattern is generated, you can refine it directly inside the GUI:
-
-1. Focus the input box at the bottom.
-2. Type `modify: <your-instruction>`, e.g., `modify: make sleeves shorter`.
-3. Press Enter – the system will regenerate the pattern accordingly.
+Once a pattern is generated, you can modify the result by typing `modify: <your-instruction>` in the chatbox.
 
 ---
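The `modify:` convention above amounts to a tiny message protocol: prefixed messages edit the current pattern, everything else starts a fresh design. A sketch of how the chatbox input might be routed (the function name and the `("modify" | "design", payload)` return shape are hypothetical, not the project's actual API):

```python
def route_chat_message(text: str):
    """Classify a chatbox message.

    'modify: <instruction>' edits the current pattern; any other text is
    treated as a fresh design prompt. The labels returned here are
    illustrative assumptions.
    """
    stripped = text.strip()
    prefix = "modify:"
    if stripped.lower().startswith(prefix):
        # Strip the prefix and surrounding whitespace from the instruction.
        return "modify", stripped[len(prefix):].strip()
    return "design", stripped
```

For example, `route_chat_message("modify: make sleeves shorter")` yields `("modify", "make sleeves shorter")`, while `route_chat_message("a T-shirt")` yields `("design", "a T-shirt")`.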
 ## Batch Inference
 
@@ -126,27 +105,23 @@ python lmm_utils/test_picture_batch.py \
 
 ---
-## Get 3D Garment Patterns
+## Simulate 3D Garment
 
 ### 1. Generate from a pattern.json
 
-After generating the pattern data, you can simulate the corresponding 3D output directly from the pattern's JSON file.
+After generating the pattern data, you can simulate the corresponding 3D output directly from the pattern's JSON file with
 
 ```bash
 python test_garment_sim.py --pattern_spec $INPUT_JSON
 ```
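The single-file command above extends naturally to a whole directory of generated patterns. This wrapper is a hypothetical convenience: only `test_garment_sim.py --pattern_spec` comes from the README, while the helper name and the `outputs` directory are assumptions:

```python
import subprocess
from pathlib import Path


def batch_sim_commands(pattern_dir: str):
    """Build one test_garment_sim.py invocation per pattern JSON file."""
    return [
        ["python", "test_garment_sim.py", "--pattern_spec", str(p)]
        for p in sorted(Path(pattern_dir).glob("*.json"))
    ]


if __name__ == "__main__":
    # Run the simulation for every pattern JSON under the (assumed) outputs/ dir.
    for cmd in batch_sim_commands("outputs"):
        subprocess.run(cmd, check=True)
```

Sorting the paths keeps runs deterministic, and `check=True` stops the batch at the first failed simulation.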
-### 2. Generate from gui
-
-You can also run the simulation directly on the GUI to obtain 3D data.
-
-```bash
-python gui.py
-```
+Or run the simulation directly in the `3D View` GUI tab.
 ### Citation
 
 If you find this work useful, please cite:
 
 ```bibtex
-@article{zhou2024design2garmentcode,
+@inproceedings{zhou2025design2garmentcode,
   title={Design2GarmentCode: Turning Design Concepts to Tangible Garments Through Program Synthesis},
   author={Zhou, Feng and Liu, Ruiyang and Liu, Chen and He, Gaofeng and Li, Yong-Lu and Jin, Xiaogang and Wang, Huamin},
-  journal={arXiv preprint arXiv:2412.08603},
-  year={2024}
+  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
+  pages={23712--23722},
+  year={2025}
 }
 ```
BIN assets/img/gui_example.png (new file). Binary file not shown. After: 650 KiB.
BIN assets/img/sim_result.png (new file). Binary file not shown. After: 611 KiB.
requirements.txt (new file, 29 lines)

@@ -0,0 +1,29 @@
+--extra-index-url https://download.pytorch.org/whl/cu121
+numpy==1.26.4
+scipy==1.13.1
+pyyaml==6.0.2
+svgwrite==1.4.3
+psutil==6.0.0
+matplotlib
+svgpathtools
+cairosvg==2.7.1
+nicegui==2.15.0
+trimesh
+cgal
+torch==2.4.0+cu121
+torchvision==0.19.0+cu121
+torchaudio==2.4.0+cu121
+transformers==4.46.2
+tokenizers==0.20.3
+accelerate==1.1.1
+datasets==2.18.0
+huggingface-hub==0.29.2
+safetensors==0.5.3
+tiktoken==0.9.0
+peft==0.13.2
+qwen-vl-utils==0.0.8
+modelscope==1.18.0
+pyrender==0.1.45
+libigl==2.5.1
+cgal==6.0.1.post202410241521
+openai==1.54.4
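A pin list like this one can be checked against the active environment before running the project. A minimal sketch, assuming a simplified parser that skips pip flags (such as `--extra-index-url`) and unpinned entries like `matplotlib`:

```python
from importlib import metadata


def check_pins(lines):
    """Compare 'name==version' pins against installed distributions.

    Returns a list of (name, pinned, installed_or_None) mismatches;
    installed_or_None is None when the package is not installed at all.
    """
    mismatches = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("-") or "==" not in line:
            continue  # skip blanks, pip flags, and unpinned entries
        name, pinned = line.split("==", 1)
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != pinned:
            mismatches.append((name, pinned, installed))
    return mismatches
```

Running `check_pins(open("requirements.txt"))` on a fresh environment would list every package still missing or at the wrong version.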