Update README

Ree Liu
2025-07-04 13:20:19 +08:00
parent ebb21a3dd4
commit 74802254a1


@@ -44,20 +44,17 @@ If you need local cloth simulation and 3D visualization, follow the installat
`Design2GarmentCode` communicates with large multimodal models.
Follow the steps **in the given order**:
#### 1. **Provide API credentials for MMUA**
- **Environment variable (recommended)**: defaults to *ChatGPT-4o*
  ```bash
  export OPENAI_API_KEY="sk..."
  ```
- **Edit `system.json`** (project root): manually specify `api_key`, `base_url`, and `model` if you prefer a file-based approach.
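For the file-based route, `system.json` might look like the sketch below. The field names `api_key`, `base_url`, and `model` are the ones named above; the values are placeholders, and the actual schema shipped with the project may contain additional fields.

```json
{
  "api_key": "sk...",
  "base_url": "https://api.openai.com/v1",
  "model": "gpt-4o"
}
```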
#### 2. **Set up the parameter projector**
- Download the base model [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct/tree/main) and place the model at `lmm_utils/Qwen/Qwen2-VL-2B-Instruct/`.
- Download the fine-tuned weights file [model.pth](lmm_utils/Qwen/qwen2vl_lora_mlp/model.pth) and place it in `lmm_utils/Qwen/qwen2vl_lora_mlp/`.
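One way to fetch the base model is with the `huggingface-cli` tool from the `huggingface_hub` package, pointing `--local-dir` at the folder layout described above. This is a convenience sketch, not the project's official setup script; downloading manually from the model page works just as well.

```shell
# Install the Hugging Face CLI if it is not already available
pip install -U "huggingface_hub[cli]"

# Download Qwen2-VL-2B-Instruct into the folder the project expects
huggingface-cli download Qwen/Qwen2-VL-2B-Instruct \
    --local-dir lmm_utils/Qwen/Qwen2-VL-2B-Instruct
```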
---
## Testing with GUI