39 Commits

Author SHA1 Message Date
zcr
893f5e87b4 3D pattern-making deployment
All checks were successful
git commit AiDA python develop branch build & deploy / scheduled_deploy (push) Has been skipped
2026-04-28 17:17:29 +08:00
zcr
c73bfa7e2a 3D pattern-making deployment
2026-04-28 17:03:04 +08:00
zcr
ad4db736de Add Nacos configuration (testing)
2026-04-24 10:17:42 +08:00
zcr
cfbd9e47ac Add Nacos configuration (testing)
2026-04-23 17:10:22 +08:00
zcr
6892361050 Fix design print section: in overall mode, print tiling now starts from the center of the print image
2026-04-15 17:36:29 +08:00
zcr
f0b73d5fc1 Fix design print section: incorrect mask_inv_print extraction
2026-04-15 17:23:00 +08:00
zcr
7543d6b346 feat: update flux2 klein output examples ; fix:
2026-04-14 10:16:30 +08:00
zcr
3ca4003e30 feat: update flux2 klein output examples ; fix: 2026-03-30 17:22:14 +08:00
zcr
59e8a88a01 feat: update flux2 klein output examples ; fix: 2026-03-30 17:14:18 +08:00
zcr
3414f2c1aa feat: update segmentation model parameters ; fix: 2026-03-27 14:59:27 +08:00
zcr
160bf1a6b1 feat: update segmentation model parameters ; fix: 2026-03-27 14:56:32 +08:00
zcr
a4d55fdb14 feat: add status codes to flux2 ; fix: 2026-03-25 10:29:03 +08:00
zcr
7f2f79d029 feat: add status codes to flux2 ; fix: 2026-03-24 14:35:39 +08:00
zcr
6d9e96305b feat: switch brand DNA logo generation to flux2klein ; fix: 2026-03-23 11:21:50 +08:00
zcr
d93c50ce2b feat: add flux2klein as the localbase model for moodboard ; fix: 2026-03-23 10:46:16 +08:00
zcr
e25f49a776 feat:
fix: remove the API-call counting middleware
2026-03-13 11:22:12 +08:00
zcr
33b4dd4a7f feat:
fix: change the translation model IP
2026-03-05 15:20:40 +08:00
zcr
7e48420ba7 feat:
fix: change the SAM model IP
2026-03-05 15:06:19 +08:00
zcr
09e25f423e feat:
fix: repair the rotation feature for the "others" category
2026-03-05 14:01:29 +08:00
zcr
dcc88adfc0 feat:
fix: replace all mmcv dependencies in the project
2026-02-27 15:26:07 +08:00
zcr
c03b7e263e feat:
fix: replace all mmcv dependencies in the project
2026-02-10 11:17:31 +08:00
zcr
200414e5ad feat: disable flux2 img2product, reuse sdxl img2product
fix:
2026-02-09 17:33:07 +08:00
zcr
4656eeee91 feat: change print logic; by default, skip all print types except overall
fix:
2026-02-03 16:43:33 +08:00
zcr
fe25f5878b feat:
fix: when the sketch type is "others", skipping print application caused a mismatch between the image size and the segmentation size; also fix "others" segmentation producing the back piece
2026-02-03 16:22:47 +08:00
zcr
2cc17a1210 feat:
fix: correct the queue name
2026-02-02 15:37:01 +08:00
zcr
be92d48abb feat:
fix: revert the mirror-rotation logic
2026-01-30 15:45:57 +08:00
zcr
f8382f280f feat:
fix: missing pipeline item when the category is "other"
2026-01-29 16:25:43 +08:00
zcr
c24862507f feat:
fix: migrate the slogan service
2026-01-28 15:37:03 +08:00
zcr
e02ca351b6 feat:
fix: abnormal angle for overall prints
2026-01-27 13:42:34 +08:00
zcr
c987f498bc feat:
fix:
2026-01-27 11:28:36 +08:00
zcr
3aa8dfa0f4 feat:
fix: remove debug print statements
2026-01-27 10:12:23 +08:00
zcr
265f4de50e feat:
fix: update ports
2026-01-26 16:32:30 +08:00
zcr
a996a1853d feat:
fix: update ports
2026-01-26 16:11:10 +08:00
zcr
1cbd019ffd feat: update the translation model
fix:
2026-01-26 15:56:42 +08:00
zcr
e2a49e2f3a feat: add the flux2 version of to-product-img, disable the sdxl version
fix:
2026-01-26 15:26:15 +08:00
zcr
66037c94e6 feat: add the flux2 version of to-product-img, disable the sdxl version
fix:
2026-01-26 15:23:49 +08:00
zcr
754e8d7735 feat: add the flux2 version of to-product-img, disable the sdxl version
fix:
2026-01-26 15:21:51 +08:00
zcr
cdaeb6daac feat: add the flux2 version of to-product-img, disable the sdxl version
fix:
2026-01-26 15:19:28 +08:00
zcr
863d9287dc fix: align parameters
(cherry picked from commit ddef6af1cf)
2026-01-26 14:56:49 +08:00
41 changed files with 966 additions and 565 deletions

@@ -1,2 +1,6 @@
 seg_cache
 test
+.venv
+__pycache__/
+*.pyc
+.git/

@@ -20,7 +20,6 @@
 $ conda activate trinity_client_aida
 $ pip install -r requirements.txt
 $ conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia -y
-$ pip install mmcv==1.4.2 -f https://download.openmmlab.com/mmcv/dist/cu117/torch1.13/index.html
 1. Start the server

@@ -395,7 +395,8 @@ async def seg_anything(request_data: SAMRequestModel):
     Given an image path and clicked point coordinates, returns the segmented mask data.
     ### Parameters:
-    - **user_id**: user id, used to store the segmentation image
+    - **bucket**: minio bucket name
+    - **object_name**: minio object name
     - **image_path**: relative path of the image on the server or in the cloud.
     - **type**: inference type
    - **box**: corner points of the selection rectangle
@@ -408,7 +409,8 @@ async def seg_anything(request_data: SAMRequestModel):
     ```json
     point
     {
-        "user_id": 1,
+        "bucket": "test",
+        "object_name": "7068-400a-ac94-c01647fa5f6f.png",
         "image_path": "aida-users/89/sketch/4e8fe37d-7068-400a-ac94-c01647fa5f6f.png",
         "type":"point",
         "points": [[310, 403], [493, 375], [261, 266], [404, 484]],
@@ -417,7 +419,8 @@ async def seg_anything(request_data: SAMRequestModel):
     box
     {
-        "user_id": 1,
+        "bucket": "test",
+        "object_name": "7068-400a-ac94-c01647fa5f6f.png",
         "image_path": "aida-users/89/sketch/4e8fe37d-7068-400a-ac94-c01647fa5f6f.png",
         "type":"box",
         "box": [350, 286, 544, 520]
@@ -426,7 +429,7 @@ async def seg_anything(request_data: SAMRequestModel):
     """
     try:
         logger.info(f"seg_anything request item is : @@@@@@:{json.dumps(request_data.dict(), indent=4)}")
-        data = requests.post(f"http://{settings.A6000_SERVICE_HOST}:10075/predict", json=request_data.dict())
+        data = requests.post(f"http://{settings.B_4_X_4090_SERVICE_HOST}:10075/predict", json=request_data.dict())
         logger.info(f"seg_anything response @@@@@@:{json.dumps(json.loads(data.content), indent=4)}")
         return ResponseModel(data=json.loads(data.content))
     except Exception as e:
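The hunks above migrate the request body from `user_id` to `bucket`/`object_name`. A minimal, hypothetical client-side sketch of assembling the new payload — the helper name and gateway URL are not from the source, only the field names are:

```python
# Hypothetical helper for building the updated seg_anything request body.
# Field names come from the new SAMRequestModel; everything else is illustrative.
import json

def build_sam_payload(bucket: str, object_name: str, image_path: str,
                      mode: str, points=None, labels=None, box=None) -> dict:
    """Assemble a request body matching the updated schema
    (bucket/object_name replace the old user_id field)."""
    payload = {
        "bucket": bucket,
        "object_name": object_name,
        "image_path": image_path,
        "type": mode,
    }
    if mode == "point":
        payload["points"] = points or []
        payload["labels"] = labels or []
    elif mode == "box":
        payload["box"] = box or []
    return payload

payload = build_sam_payload(
    bucket="test",
    object_name="7068-400a-ac94-c01647fa5f6f.png",
    image_path="aida-users/89/sketch/4e8fe37d-7068-400a-ac94-c01647fa5f6f.png",
    mode="box",
    box=[350, 286, 544, 520],
)
print(json.dumps(payload))
# then e.g. requests.post("http://<gateway>/api/seg_anything", json=payload)
```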

@@ -1,9 +1,12 @@
 import json
 import logging
+import httpx
+import requests
 from fastapi import APIRouter, BackgroundTasks, HTTPException
-from app.schemas.generate_image import GenerateImageModel, GenerateProductImageModel, GenerateSingleLogoImageModel, GenerateRelightImageModel, GenerateMultiViewModel, BatchGenerateProductImageModel, BatchGenerateRelightImageModel, AgentTollGenerateImageModel
+from app.core.config import settings
+from app.schemas.generate_image import GenerateImageModel, GenerateProductImageModel, GenerateSingleLogoImageModel, GenerateRelightImageModel, GenerateMultiViewModel, BatchGenerateProductImageModel, BatchGenerateRelightImageModel, AgentTollGenerateImageModel, Flux2ToProductImgModel, GenerateSloganImageModel, GenerateImageFlux2KleinModel
 from app.schemas.pose_transform import BatchPoseTransformModel
 from app.schemas.response_template import ResponseModel
 from app.service.generate_batch_image.service import start_product_batch_generate, start_relight_batch_generate, start_pose_transform_batch_generate
@@ -20,6 +23,61 @@ logger = logging.getLogger()
 '''generate image'''
+# flux2 klein
+@router.post("/generate_image_flux2_klein")
+async def generate_image_flux2_klein(request_item: GenerateImageFlux2KleinModel):
+    """
+    Create a request body with the following parameters:
+    - **bucket_name**: OSS bucket name (required)
+    - **object_name**: OSS object name / file path (required)
+    - **width**: image width, default 1024 px (optional, default 1024)
+    - **height**: image height, default 1024 px (optional, default 1024)
+    - **prompt**: text prompt used for model inference (optional, default "")
+    - **steps**: number of inference steps, controls the model's generation iterations (optional, default 4)
+    - **guidance**: guidance scale, controls how strongly the prompt influences the result (optional, default 4.0)
+    ### Example request:
+    ```
+    {
+        "bucket_name": "aida-users",
+        "object_name": "89/moodboard/5fdc698c-cb9b-4b36-afa9ce4-1-89.png",
+        "prompt": "a single item of sketch of dress, 4k, white background"
+    }
+    ```
+    ### Example output:
+    ```
+    {
+        "code": 200,
+        "msg": "OK!",
+        "data": {
+            "output_path": "aida-users/89/moodboard/5fdc698c-cb9b-4b36-afa9ce4-1-89.png"
+        }
+    }
+    ```
+    """
+    try:
+        logger.info(f"generate_image_flux2_gen_img request: {json.dumps(request_item.model_dump(), indent=4)}")
+        async with httpx.AsyncClient(timeout=120) as client:
+            resp = await client.post(
+                f"http://{settings.FLUX2_GEN_IMG_MODEL_URL}/predict",
+                json=request_item.model_dump(),
+            )
+            if resp.status_code == 200:
+                result = resp.json()
+                logger.info(f"flux2_gen_img response: {json.dumps(result, indent=4)}")
+                return ResponseModel(data=result)
+            else:
+                error = resp.json()
+                logger.info(f"flux2_gen_img response: {json.dumps(error, indent=4)}")
+                return ResponseModel(data=error, msg="ERROR!", code=500)
+    except Exception as e:
+        logger.warning(f"generate_image_flux2_gen_img Run Exception @@@@@@:{e}")
+        raise HTTPException(status_code=404, detail=str(e))
+# sdxl
 @router.post("/generate_image")
 def generate_image(request_item: GenerateImageModel, background_tasks: BackgroundTasks):
     """
@@ -154,6 +212,62 @@ def generate_single_logo_image(tasks_id: str):
     return ResponseModel(data=data['data'])
+"""slogan """
+@router.post("/generate_slogan")
+async def generate_slogan(request_data: GenerateSloganImageModel):
+    """
+    ### Example request body:
+    ```json
+    {
+        "num_point": 16,
+        "image_url": "aida-slogan/6886785f-0aac-4052-b6fd-7ae20a841d8d.png",
+        "prompt": "123",
+        "tasks_id": "string-89"
+    }
+    ```
+    """
+    try:
+        logger.info(f"generate_slogan request item is : @@@@@@:{json.dumps(request_data.dict(), indent=4)}")
+        data = requests.post(f"http://{settings.A6000_SERVICE_HOST}:10020/api/slogan", json=request_data.dict())
+        logger.info(f"generate_slogan response @@@@@@:{json.dumps(json.loads(data.content), indent=4)}")
+        return ResponseModel(data=json.loads(data.content))
+    except Exception as e:
+        logger.warning(f"generate_slogan Run Exception @@@@@@:{e}")
+"""product image flux2.0"""
+# @router.post("/img_to_product")
+# async def img_to_product(request_data: Flux2ToProductImgModel):
+#     """
+#     Create a request body with the following parameters:
+#     - **tasks_id**: task id, used to cancel the generation task and fetch the result
+#     - **prompt**: description of the image to generate
+#     - **image_path**: S3 or minio url of the source image
+#     - **infer_step**: number of inference steps
+#
+#     ### Example request body:
+#     ```json
+#     point
+#     {
+#         "prompt": "Create realistic studio photo with real people model standing and wearing this garment, in white studio, Keep original model if present, or generate appropriate model, Standing pose, facing camera.",
+#         "image_path":"aida-results/result_38151e0a-f83b-11f0-89f6-0242ac130002.png",
+#         "infer_step":4,
+#         "tasks_id":"123456-123"
+#     }
+#     ```
+#     """
+#     try:
+#         logger.info(f"img_to_product request item is : @@@@@@:{json.dumps(request_data.dict(), indent=4)}")
+#         data = requests.post(f"http://{settings.A6000_SERVICE_HOST}:10090/api/v1/to_product", json=request_data.dict())
+#         logger.info(f"img_to_product response @@@@@@:{json.dumps(json.loads(data.content), indent=4)}")
+#         return ResponseModel(data=json.loads(data.content))
+#     except Exception as e:
+#         logger.warning(f"img_to_product Run Exception @@@@@@:{e}")
 '''product image'''
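The new `/generate_image_flux2_klein` endpoint documented above takes two required fields plus five optional ones with fixed defaults. A hypothetical client-side sketch of building that request body — the defaults mirror the documented schema, while the helper name and gateway host are assumptions:

```python
# Illustrative request-body builder for /generate_image_flux2_klein.
# Default values are taken from the documented schema above.
import json

FLUX2_KLEIN_DEFAULTS = {"width": 1024, "height": 1024, "prompt": "", "steps": 4, "guidance": 4.0}

def build_flux2_klein_body(bucket_name: str, object_name: str, **overrides) -> dict:
    """Merge the two required fields with the optional defaults."""
    body = {"bucket_name": bucket_name, "object_name": object_name, **FLUX2_KLEIN_DEFAULTS}
    body.update(overrides)
    return body

body = build_flux2_klein_body(
    "aida-users",
    "89/moodboard/5fdc698c-cb9b-4b36-afa9ce4-1-89.png",
    prompt="a single item of sketch of dress, 4k, white background",
)
print(json.dumps(body, indent=2))
# then e.g. httpx.post("http://<gateway>/api/generate_image_flux2_klein", json=body)
```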

@@ -11,6 +11,7 @@ from app.api import api_precompute
 from app.api import api_prompt_generation
 from app.api import api_recommendation
 from app.api import api_test
+from app.api import api_sketch_to_garment
 router = APIRouter()
@@ -26,6 +27,7 @@ router.include_router(api_precompute.router, tags=['api_precompute'], prefix="/a
 router.include_router(api_mannequins_edit.router, tags=['api_mannequins_edit'], prefix="/api")
 router.include_router(api_pose_transform.router, tags=['api_pose_transform'], prefix="/api")
 router.include_router(api_clothing_seg.router, tags=['api_clothing_seg'], prefix="/api")
+router.include_router(api_sketch_to_garment.router, tags=['sketch_to_garment'], prefix="/api")
 """Disabled"""
 # from app.api import api_chat_robot

@@ -0,0 +1,104 @@
+import json
+import logging
+from fastapi import APIRouter, HTTPException
+from app.schemas.response_template import ResponseModel
+from app.schemas.sketch_to_garment_schemas import SketchToGarmentModel
+from app.service.sketch2garment.server import submit_sketch_to_garment_task
+logger = logging.getLogger()
+router = APIRouter()
+@router.post("/sketch_to_garment")
+def sketch_to_garment_api(request_item: SketchToGarmentModel):
+    """
+    ### Endpoint description:
+    Converts an image into a 3D model, processed asynchronously. The endpoint returns a task ID immediately after receiving the request; processing runs in the background via Celery, and the result is delivered through RabbitMQ once finished.
+    ### Parameters:
+    - **input_image_path**: input image path
+    - **bucket_name**: bucket name
+    - **user_id**: user id
+    - **callback_url**: callback url
+    - **task_id**: task id
+    - **model**: conversion mode, text or picture; currently only picture is supported
+    ### Example request body:
+    **Single-image mode:**
+    ```json
+    {
+        "input_image_path": "test/53d38bd5-f77b-4034-ada2-45f1e2ebe00c.png",
+        "bucket_name": "test",
+        "user_id": "string-456",
+        "callback_url": "http://18.167.251.121:10015/api/image/webhook/img-to-3d",
+        "task_id": "string12",
+        "model": "picture"
+    }
+    ```
+    ### Example output:
+    ```json
+    {
+        "code": 200,
+        "msg": "OK!",
+        "data": {
+            "state": "success",
+            "task_id": "string12",
+            "message": "Task submitted successfully; processing in the background..."
+        }
+    }
+    ```
+    ### Error output
+    Reference: https://platform.tripo3d.ai/docs/error-handling
+    ```json
+    {
+        "code": 500,
+        "message": "You dont have enough credit to create this task",
+        "data": {
+            "status": "fail",
+            "task_id": "123",
+            "message": "You dont have enough credit to create this task",
+            "error": str(e)
+        }
+    }
+    ```
+    Example callback request payload:
+    ```json
+    {
+        "task_id": "string12",
+        "status": "success",
+        "result": {
+            "pattern": "test/string-456/pattern_making/now_string-456_pattern.png",
+            "texture": "test/string-456/pattern_making/now_string-456_texture.png",
+            "glb": "test/string-456/pattern_making/now_string-456_sim.glb",
+            "texture_fabric": "test/string-456/pattern_making/now_string-456_texture_fabric.png"
+        }
+    }
+    ```
+    """
+    try:
+        logger.info(f"sketch_to_garment request item is : @@@@@@:{json.dumps(request_item.model_dump(), indent=4)}")
+        result = submit_sketch_to_garment_task(
+            task_id=request_item.task_id,
+            callback_url=request_item.callback_url,
+            bucket_name=request_item.bucket_name,
+            input_image_path=request_item.input_image_path,
+            user_id=request_item.user_id,
+            model=request_item.model
+        )
+        result = {
+            "state": "success",
+            "task_id": request_item.task_id,
+            "message": "Task submitted successfully; processing in the background...",
+        }
+        state_code = 200
+        return ResponseModel(data=result, code=state_code)
+    except Exception as e:
+        logger.warning(f"sketch_to_garment Run Exception @@@@@@:{e}")
+        raise HTTPException(status_code=404, detail=str(e))
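Since results are delivered asynchronously to the `callback_url`, the receiving side needs to validate the documented callback payload. A hypothetical sketch of such a handler — only the payload keys come from the callback example above; the function itself is illustrative:

```python
# Illustrative callback validation for /sketch_to_garment results.
# The expected keys are taken from the documented callback payload example.
EXPECTED_RESULT_KEYS = {"pattern", "texture", "glb", "texture_fabric"}

def parse_garment_callback(payload: dict) -> dict:
    """Validate a callback payload and return the artifact paths on success."""
    if payload.get("status") != "success":
        raise ValueError(f"task {payload.get('task_id')} failed: {payload.get('message')}")
    result = payload.get("result", {})
    missing = EXPECTED_RESULT_KEYS - result.keys()
    if missing:
        raise ValueError(f"callback result missing keys: {sorted(missing)}")
    return result

callback = {
    "task_id": "string12",
    "status": "success",
    "result": {
        "pattern": "test/string-456/pattern_making/now_string-456_pattern.png",
        "texture": "test/string-456/pattern_making/now_string-456_texture.png",
        "glb": "test/string-456/pattern_making/now_string-456_sim.glb",
        "texture_fabric": "test/string-456/pattern_making/now_string-456_texture_fabric.png",
    },
}
artifacts = parse_garment_callback(callback)
print(artifacts["glb"])
```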

@@ -1,235 +0,0 @@
-import os
-import pika
-from dotenv import load_dotenv
-from pydantic import BaseSettings
-BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../'))
-load_dotenv(os.path.join(BASE_DIR, '.env'))
-class Settings(BaseSettings):
-    PROJECT_NAME: str = 'FASTAPI BASE'
-    SECRET_KEY: str = ''
-    API_PREFIX: str = ''
-    BACKEND_CORS_ORIGINS: list[str] = ['*']
-    DATABASE_URL: str = ''
-    ACCESS_TOKEN_EXPIRE_SECONDS: int = 60 * 60 * 24 * 7  # Token expired after 7 days
-    SECURITY_ALGORITHM: str = 'HS256'
-    LOGGING_CONFIG_FILE: str = os.path.join(BASE_DIR, 'logging_env.py')
-OSS = "minio"
-DEBUG = False
-if DEBUG:
-    LOGS_PATH = "logs/"
-    CATEGORY_PATH = "service/attribute/config/descriptor/category/category_dis.csv"
-    SEG_CACHE_PATH = "../seg_cache/"
-    POSE_TRANSFORM_VIDEO_PATH = "../pose_transform_video/"
-    RECOMMEND_PATH_PREFIX = "service/recommend/"
-    CHROMADB_PATH = "./chromadb/"
-else:
-    LOGS_PATH = "app/logs/"
-    CATEGORY_PATH = "app/service/attribute/config/descriptor/category/category_dis.csv"
-    SEG_CACHE_PATH = "/seg_cache/"
-    POSE_TRANSFORM_VIDEO_PATH = "/pose_transform_video/"
-    RECOMMEND_PATH_PREFIX = "app/service/recommend/"
-    CHROMADB_PATH = "/chromadb/"
-# RABBITMQ_ENV = ""  # production environment
-RABBITMQ_ENV = os.getenv("RABBITMQ_ENV", "-dev")
-# RABBITMQ_ENV = "-local"  # local test environment
-if RABBITMQ_ENV == "-dev":
-    JAVA_STREAM_API_URL = f"https://develop.api.aida.com.hk/api/third/party/receiveDesignResults"
-elif RABBITMQ_ENV == "-prod":
-    JAVA_STREAM_API_URL = f"https://api.aida.com.hk/api/third/party/receiveDesignResults"
-settings = Settings()
-# minio configuration
-MINIO_URL = "www.minio-api.aida.com.hk"
-MINIO_ACCESS = 'vXKFLSJkYeEq2DrSZvkB'
-MINIO_SECRET = 'uKTZT3x7C43WvPN9QTc99DiRkwddWZrG9Uh3JVlR'
-MINIO_SECURE = True
-# S3 configuration
-S3_ACCESS_KEY = "AKIAVD3OJIMF6UJFLSHZ"
-S3_AWS_SECRET_ACCESS_KEY = "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"
-S3_REGION_NAME = "ap-east-1"
-# redis configuration
-REDIS_HOST = "10.1.1.240"
-REDIS_PORT = "6379"
-REDIS_DB = "2"
-# rabbitmq config
-RABBITMQ_PARAMS = {
-    "host": "18.167.251.121",
-    "port": 5672,
-    "credentials": pika.credentials.PlainCredentials(username='rabbit', password='123456'),
-    "virtual_host": "/"
-}
-# milvus configuration
-MILVUS_URL = "http://10.1.1.240:19530"
-MILVUS_TOKEN = "root:Milvus"
-MILVUS_ALIAS = "default"
-MILVUS_TABLE_KEYPOINT = "keypoint_cache_2"
-MILVUS_TABLE_SEG = "seg_cache"
-# Mysql configuration
-DB_HOST = '18.167.251.121'  # database host address
-# DB_PORT = int( 33006)
-DB_PORT = 33008  # database port
-DB_USERNAME = 'aida_con_python'  # database user
-DB_PASSWORD = '123456'  # database password
-DB_NAME = 'aida'  # database name
-# openai
-os.environ['SERPAPI_API_KEY'] = "a793513017b0718db7966207c31703d280d12435c982f1e67bbcbffa52e7632c"
-OPENAI_STREAM = True
-BUFFER_THRESHOLD = 6  # must be even number
-SINGLE_TOKEN_THRESHOLD = 200
-TOKEN_THRESHOLD = 600
-OPENAI_TEMPERATURE = 0
-# OPENAI_API_KEY = "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"
-OPENAI_API_KEY = "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"
-OPENAI_MODEL = "gpt-3.5-turbo-0613"
-OPENAI_MODEL_LIST = {"gpt-3.5-turbo-0613",
-                     "gpt-3.5-turbo-16k-0613",
-                     "gpt-4-0314",
-                     "gpt-4-32k-0314",
-                     "gpt-4-0613",
-                     "gpt-4-32k-0613", }
-# SR service config
-SR_MODEL_NAME = "super_resolution"
-SR_TRITON_URL = "10.1.1.240:10031"
-SR_MINIO_BUCKET = "aida-users"
-SR_RABBITMQ_QUEUES = f"SuperResolution{RABBITMQ_ENV}"
-# GenerateImage service config
-FAST_GI_MODEL_URL = '10.1.1.243:10011'
-FAST_GI_MODEL_NAME = 'stable_diffusion_xl'
-GI_MODEL_URL = '10.1.1.240:10061'
-GI_MODEL_NAME = 'flux'
-GMV_MODEL_URL = '10.1.1.243:10081'
-GMV_MODEL_NAME = 'multi_view'
-GMV_RABBITMQ_QUEUES = f"GenerateMultiView{RABBITMQ_ENV}"
-GI_MINIO_BUCKET = "aida-users"
-GI_RABBITMQ_QUEUES = f"GenerateImage{RABBITMQ_ENV}"
-GI_SYS_IMAGE_URL = "aida-sys-image/generate_image/white_image.jpg"
-# SLOGAN service config
-SLOGAN_RABBITMQ_QUEUES = f"Slogan{RABBITMQ_ENV}"
-# Generate Single Logo service config
-GSL_MODEL_URL = '10.1.1.243:10041'
-GSL_MINIO_BUCKET = "aida-users"
-GSL_MODEL_NAME = 'stable_diffusion_xl_transparent'
-GEN_SINGLE_LOGO_RABBITMQ_QUEUES = f"GenSingleLogo{RABBITMQ_ENV}"
-# Generate Product service config
-# GPI_RABBITMQ_QUEUES = os.getenv("GEN_PRODUCT_IMAGE_RABBITMQ_QUEUES", f"ToProductImage{RABBITMQ_ENV}")
-# GPI_MODEL_NAME_OVERALL = 'sdxl_ensemble_all'
-# GPI_MODEL_URL = '10.1.1.243:10051'
-# Generate Product service config (legacy product img model)
-GPI_RABBITMQ_QUEUES = f"ToProductImage{RABBITMQ_ENV}"
-BATCH_GPI_RABBITMQ_QUEUES = f"BatchToProductImage{RABBITMQ_ENV}"
-GPI_MODEL_NAME_OVERALL = 'diffusion_ensemble_all'
-GPI_MODEL_NAME_SINGLE = 'stable_diffusion_1_5_cnet'
-GPI_MODEL_URL = '10.1.1.243:10051'
-# Generate Single Logo service config
-GRI_RABBITMQ_QUEUES = f"Relight{RABBITMQ_ENV}"
-BATCH_GRI_RABBITMQ_QUEUES = f"BatchRelight{RABBITMQ_ENV}"
-GRI_MODEL_NAME_OVERALL = 'diffusion_relight_ensemble'
-GRI_MODEL_NAME_SINGLE = 'stable_diffusion_1_5_relight'
-GRI_MODEL_URL = '10.1.1.240:10051'
-# Pose Transform service config
-PS_RABBITMQ_QUEUES = f"PoseTransform{RABBITMQ_ENV}"
-BATCH_PS_RABBITMQ_QUEUES = f"BatchPoseTransform{RABBITMQ_ENV}"
-PT_MODEL_URL = '10.1.1.243:10061'
-# SEG service config
-SEGMENTATION = {
-    "new_model_name": "seg_knet",
-    "name": "seg_ocrnet_hr18",
-    "input": "seg_input__0",
-    "output": "seg_output__0",
-}
-# ollama config
-OLLAMA_URL = "http://10.1.1.240:11434/api/embeddings"
-# design batch
-BATCH_DESIGN_RABBITMQ_QUEUES = f"DesignBatch{RABBITMQ_ENV}"
-# DESIGN config
-DESIGN_MODEL_URL = '10.1.1.240:10000'
-AIDA_CLOTHING = "aida-clothing"
-KEYPOINT_RESULT_TABLE_FIELD_SET = ('neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right', 'armpit_left', 'armpit_right',
-                                   'cuff_left_in', 'cuff_left_out', 'cuff_right_in', 'cuff_right_out', 'waistband_left', 'waistband_right')
-# DESIGN preprocessing
-IF_DEBUG_SHOW = False
-# layer priorities
-PRIORITY_DICT = {
-    'earring_front': 99,
-    'bag_front': 98,
-    'hairstyle_front': 97,
-    'outwear_front': 20,
-    'tops_front': 19,
-    'dress_front': 18,
-    'blouse_front': 17,
-    'skirt_front': 16,
-    'trousers_front': 15,
-    'bottoms_front': 14,
-    'shoes_right': 1,
-    'shoes_left': 1,
-    'body': 0,
-    'bottoms_back': -14,
-    'trousers_back': -15,
-    'skirt_back': -16,
-    'blouse_back': -17,
-    'dress_back': -18,
-    'tops_back': -19,
-    'outwear_back': -20,
-    'hairstyle_back': -97,
-    'bag_back': -98,
-    'earring_back': -99,
-}
-QWEN_API_KEY = "sk-f31c29e61ac2498ba5e307aaa6dc10e0"
-DB_CONFIG = {
-    "host": "18.167.251.121",
-    "port": 3306,
-    "user": "root",
-    "password": "QWa998345",
-    "database": "aida",
-    "charset": "utf8mb4"
-}
-TABLE_CATEGORIES = {
-    "female_dress": "female/dress",
-    "female_outwear": "female/outwear",
-    "female_trousers": "female/trousers",
-    "female_skirt": "female/skirt",
-    "female_blouse": "female/blouse",
-    "male_tops": "male/tops",
-    "male_bottoms": "male/bottoms",
-    "male_outwear": "male/outwear"
-}
-# --- ComfyUI configuration ---
-COMFYUI_SERVER_ADDRESS = "10.1.2.227:8080"  # replace with your ComfyUI server address

@@ -1,5 +1,21 @@
+import logging
+from typing import Dict, Any
+import yaml
 from pydantic import Field
 from pydantic_settings import BaseSettings, SettingsConfigDict
+from v2.nacos import ClientConfigBuilder, GRPCConfig, NacosConfigService, ConfigParam, NacosNamingService, RegisterInstanceParam, DeregisterInstanceParam
+logger = logging.getLogger(__name__)
+# ====================== Nacos configuration ======================
+NACOS_SERVER_ADDRESSES = "18.167.251.121:28848"
+NACOS_NAMESPACE = "zcr"
+NACOS_USERNAME = "nacos"
+NACOS_PASSWORD = "Aidlab123123!"
+NACOS_GROUP = "LOCAL"
+NACOS_DATA_ID = "aida.python"
+SERVICE_NAME = "fastapi-service"  # ←←← must be changed! Suggested format: project-environment, e.g. ai-image-service-dev
 class Settings(BaseSettings):
@@ -36,7 +52,7 @@ class Settings(BaseSettings):
     # --- mysql configuration ---
     MYSQL_HOST: str = Field(default='', description="")
-    MYSQL_PORT: int = Field(default='', description="")
+    MYSQL_PORT: int = Field(default=3306, description="")
     MYSQL_USER: str = Field(default='', description="")
     MYSQL_PASSWORD: str = Field(default='', description="")
     MYSQL_DB: str = Field(default='', description="")
@@ -64,10 +80,16 @@ class Settings(BaseSettings):
     # --- Design callback Java API ---
     JAVA_STREAM_API_URL: str = Field(default='', description="")
+    # --- flux2 klein model url ---
+    FLUX2_GEN_IMG_MODEL_URL: str = Field(default='', description="")
     # --- server IPs ---
     A6000_SERVICE_HOST: str = Field(default='', description="")
     B_4_X_4090_SERVICE_HOST: str = Field(default='', description="")
+    # --- sketch to garment model url ---
+    SKETCH_TO_GARMENT_URL: str = Field(default='', description="")
     # --- other settings; everything below is in-container configuration ---
     LOGS_PATH: str = Field(default="/logs/", description="")
     CATEGORY_PATH: str = Field(default="/app/service/attribute/config/descriptor/category/category_dis.csv", description="")
@@ -128,6 +150,8 @@ OLLAMA_URL = f"http://{settings.A6000_SERVICE_HOST}:11434/api/embeddings"
 # Design
 DESIGN_MODEL_URL = f'{settings.A6000_SERVICE_HOST}:10000'
 DESIGN_MODEL_NAME = 'seg_knet'
+# Seg Product
+SEG_PRODUCT_MODEL_URL = f'{settings.B_4_X_4090_SERVICE_HOST}:30000'
 # Generate Image
 GI_MODEL_URL = f'{settings.A6000_SERVICE_HOST}:10061'
 GI_MODEL_NAME = 'flux'
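The module-level URLs above are all composed the same way from the host settings. A trivial helper, not in the source, shown only to make the pattern explicit:

```python
# Illustrative only: config.py builds each service address as "<host>:<port>",
# occasionally with an "http://" scheme prefix (e.g. OLLAMA_URL).
def service_url(host: str, port: int, scheme: str = "") -> str:
    """Compose a service address the way config.py does (scheme optional)."""
    prefix = f"{scheme}://" if scheme else ""
    return f"{prefix}{host}:{port}"

# Mirrors e.g. DESIGN_MODEL_URL = f'{settings.A6000_SERVICE_HOST}:10000'
print(service_url("10.1.1.240", 10000))          # → 10.1.1.240:10000
print(service_url("10.1.1.240", 11434, "http"))  # → http://10.1.1.240:11434
```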

@@ -1,5 +1,8 @@
 # 1. The order here is critical! This must stay at the very top
 import sys
+from contextlib import asynccontextmanager
+# from app.core.nacos_config import load_nacos_config, register_server, deregister_server
 try:
     import asyncore
@@ -16,7 +19,7 @@ from fastapi.responses import JSONResponse
 from app.api.api_route import router
 from app.core.config import settings
-from app.core.record_api_count import count_api_calls
+# from app.core.record_api_count import count_api_calls
 from app.schemas.response_template import ResponseModel
 from logging_env import LOGGER_CONFIG_DICT
 from dotenv import load_dotenv
@@ -30,8 +33,21 @@ logger = logging.getLogger(__name__)
 load_dotenv()
+# @asynccontextmanager
+# async def lifespan(app: FastAPI):
+#     try:
+#         load_nacos_config()
+#         register_server()
+#
+#         yield
+#     finally:
+#         deregister_server()
+#         logger.info("lifespan down")
 def get_application() -> FastAPI:
     application = FastAPI(
+        # lifespan=lifespan,
         docs_url="/docs",
         redoc_url='/re-docs',
         openapi_url=f"/openapi.json",
@@ -48,7 +64,7 @@ def get_application() -> FastAPI:
         allow_methods=["*"],
         allow_headers=["*"],
     )
-    application.middleware("http")(count_api_calls)
+    # application.middleware("http")(count_api_calls)
     application.include_router(router=router)
     return application
@@ -64,5 +80,11 @@ async def http_exception_handler(exc: HTTPException):
     )
+@app.get("/health", operation_id="health")
+async def health():
+    logger.info("health check")
+    return {"ok": True, "env": settings.APP_ENV}
 if __name__ == '__main__':
     uvicorn.run(app, host="0.0.0.0", port=settings.PORT)

@@ -4,12 +4,13 @@ from pydantic import BaseModel, Field
 class SAMRequestModel(BaseModel):
-    user_id: int = Field(..., description="user id, required")
+    bucket: str = Field(..., description="minio bucket name")
+    object_name: str = Field(..., description="minio object name")
     image_path: str = Field(..., description="image path, required")
     type: str = Field(..., description="inference type, required")
-    points: Optional[List[List[float]]] = None
-    labels: Optional[List[int]] = None
-    box: Optional[List[int]] = None
+    points: Optional[List[List[float]]] | None = None
+    labels: Optional[List[int]] | None = None
+    box: Optional[List[int]] | None = None
 class DesignModel(BaseModel):

@@ -1,6 +1,6 @@
-from typing import List
-from pydantic import BaseModel
+from typing import List, Optional
+from pydantic import BaseModel, Field
 class GenerateMultiViewModel(BaseModel):
@@ -8,6 +8,17 @@ class GenerateMultiViewModel(BaseModel):
     image_url: str
+class GenerateImageFlux2KleinModel(BaseModel):
+    bucket_name: str = Field(..., description="OSS bucket name; None if not provided")
+    object_name: str = Field(..., description="OSS object name / file path; None if not provided")
+    # input_image_paths: Optional[List[str]] = Field(default=[], description="list of input image paths")
+    width: Optional[int] = Field(default=1024, description="image width, default 1024 px")
+    height: Optional[int] = Field(default=1024, description="image height, default 1024 px")
+    prompt: Optional[str] = Field(default="", description="text prompt used for model inference")
+    steps: Optional[int] = Field(default=4, description="number of inference steps, controls the model's generation iterations")
+    guidance: Optional[float] = Field(default=4.0, description="guidance scale, controls how strongly the prompt influences the result")
 class GenerateImageModel(BaseModel):
     tasks_id: str
     prompt: str
@@ -24,6 +35,13 @@ class GenerateSingleLogoImageModel(BaseModel):
     seed: str
+class GenerateSloganImageModel(BaseModel):
+    num_point: int
+    tasks_id: str
+    prompt: str
+    image_url: str
 class GenerateProductImageModel(BaseModel):
     tasks_id: str
     prompt: str
@@ -32,6 +50,13 @@ class GenerateProductImageModel(BaseModel):
     product_type: str
+class Flux2ToProductImgModel(BaseModel):
+    tasks_id: str
+    prompt: str
+    image_path: str
+    infer_step: int | None = None
 class GenerateRelightImageModel(BaseModel):
     tasks_id: str
     prompt: str

@@ -0,0 +1,12 @@
+from typing import List
+from pydantic import BaseModel, Field
+class SketchToGarmentModel(BaseModel):
+    input_image_path: str = Field(..., description="input image path")
+    bucket_name: str = Field(..., description="bucket name")
+    user_id: str = Field(..., description="user id")
+    callback_url: str  # required; callback address supplied by the client
+    task_id: str = Field()
+    model: str = Field(default="single", description="model type: single or multi")

View File

@@ -3,7 +3,6 @@
 from pprint import pprint

 import cv2
-import mmcv
 import numpy as np
 import pandas as pd
 import torch
@@ -12,6 +11,7 @@ from minio import Minio
 from app.core.config import settings, DESIGN_MODEL_URL
 from app.schemas.attribute_retrieve import AttributeRecognitionModel
+from app.service.utils.image_normalize import my_imnormalize
 from app.service.utils.new_oss_client import oss_get_image

 minio_client = Minio(settings.MINIO_URL, access_key=settings.MINIO_ACCESS, secret_key=settings.MINIO_SECRET, secure=settings.MINIO_SECURE)
@@ -109,10 +109,9 @@ class AttributeRecognition:
     @staticmethod
     def preprocess(img):
-        img = mmcv.imread(img)
         img_scale = (224, 224)
         img = cv2.resize(img, img_scale)
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
         preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
         return preprocessed_img

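The diff above swaps `mmcv.imnormalize` for an in-house `my_imnormalize` (dropping the `mmcv` dependency). A minimal numpy sketch of what such a drop-in could look like, assuming the usual mmcv semantics of per-channel `(img - mean) / std` with an optional BGR→RGB flip — the real `app.service.utils.image_normalize` may differ:

```python
import numpy as np

def my_imnormalize(img, mean, std, to_rgb=True):
    """Drop-in sketch of mmcv.imnormalize: per-channel (img - mean) / std,
    with an optional BGR->RGB conversion. Assumes an HxWx3 array."""
    img = np.asarray(img, dtype=np.float32)
    if to_rgb:
        img = img[..., ::-1]  # cv2 loads BGR; the model expects RGB
    return (img - mean.astype(np.float32)) / std.astype(np.float32)

# a 1x1 BGR pixel equal to the channel means normalizes to (near) zeros
px = np.array([[[103.53, 116.28, 123.675]]], dtype=np.float32)  # B, G, R
out = my_imnormalize(px,
                     mean=np.array([123.675, 116.28, 103.53]),
                     std=np.array([58.395, 57.12, 57.375]))
```

Note the mean/std triples are the standard ImageNet statistics used throughout these preprocess functions.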
View File

@@ -10,7 +10,6 @@
 from minio import Minio
 from skimage import transform

 import cv2
-import mmcv
 import numpy as np
 import pandas as pd
 import tritonclient.http as httpclient
@@ -18,6 +17,7 @@ import torch
 from app.core.config import settings, DESIGN_MODEL_URL
 from app.schemas.attribute_retrieve import CategoryRecognitionModel
+from app.service.utils.image_normalize import my_imnormalize
 from app.service.utils.new_oss_client import oss_get_image

 minio_client = Minio(settings.MINIO_URL, access_key=settings.MINIO_ACCESS, secret_key=settings.MINIO_SECRET, secure=settings.MINIO_SECURE)
@@ -39,11 +39,10 @@ class CategoryRecognition:
     @staticmethod
     def preprocess(img):
-        img = mmcv.imread(img)
         # ori_shape = img.shape[:2]
         img_scale = (224, 224)
         img = cv2.resize(img, img_scale)
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
         preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
         return preprocessed_img

View File

@@ -1,7 +1,6 @@
 import logging

 import cv2
-import mmcv
 import numpy as np
 import pandas as pd
 import torch
@@ -9,11 +8,12 @@ import torch.nn.functional as F
 import tritonclient.http as httpclient
 from minio import Minio

-from app.core.config import DESIGN_MODEL_URL
+from app.core.config import DESIGN_MODEL_URL, SEG_PRODUCT_MODEL_URL
 from app.core.config import settings
 from app.schemas.brand_dna import BrandDnaModel
 from app.service.attribute.config import const
 from app.service.utils.generate_uuid import generate_uuid
+from app.service.utils.image_normalize import my_imnormalize
 from app.service.utils.new_oss_client import oss_upload_image, oss_get_image

 minio_client = Minio(settings.MINIO_URL, access_key=settings.MINIO_ACCESS, secret_key=settings.MINIO_SECRET, secure=settings.MINIO_SECURE)
@@ -29,7 +29,7 @@ class BrandDna:
         self.attr_type = pd.read_csv(settings.CATEGORY_PATH)
         # self.attr_type = pd.read_csv(r"E:\workspace\trinity_client_aida\app\service\attribute\config\descriptor\category\category_dis.csv")
         self.att_client = httpclient.InferenceServerClient(url=DESIGN_MODEL_URL)
-        self.seg_client = httpclient.InferenceServerClient(url='10.1.1.243:30000')
+        self.seg_client = httpclient.InferenceServerClient(url=SEG_PRODUCT_MODEL_URL)
         self.const = const
         # self.const = local_debug_const
@@ -202,7 +202,7 @@ class BrandDna:
     # garment segmentation preprocessing
     @staticmethod
     def seg_product_preprocess(image):
-        img = mmcv.imread(image)
+        img = image
         ori_shape = img.shape[:2]
         img_scale_w, img_scale_h = ori_shape
         if ori_shape[0] > 1024:
@@ -211,9 +211,9 @@ class BrandDna:
             img_scale_h = 1024
         # if either side of the image exceeds 1024, resize it to 1024
         if ori_shape != (img_scale_w, img_scale_h):
-            # mmcv.imresize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale: h and w were swapped
+            # my_imnormalize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale: h and w were swapped
             img = cv2.resize(img, (img_scale_h, img_scale_w))
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
         preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
         return preprocessed_img, ori_shape
@@ -227,11 +227,10 @@ class BrandDna:
     # category detection model preprocessing
     @staticmethod
     def category_preprocess(img):
-        img = mmcv.imread(img)
         # ori_shape = img.shape[:2]
         img_scale = (224, 224)
         img = cv2.resize(img, img_scale)
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
         preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
         return preprocessed_img

View File

@@ -1,19 +1,10 @@
-import logging
+import uuid

-import cv2
-import numpy as np
-import tritonclient.grpc as grpcclient
+import httpx
 from langchain_classic.output_parsers import ResponseSchema, StructuredOutputParser
 from langchain_community.chat_models import ChatTongyi
 from langchain_core.prompts import PromptTemplate
 from minio import Minio
-from tritonclient.utils import np_to_triton_dtype

-from app.core.config import GI_MODEL_URL, GI_MODEL_NAME
 from app.schemas.brand_dna import GenerateBrandModel
-from app.service.utils.generate_uuid import generate_uuid
-from app.service.utils.new_oss_client import oss_upload_image
 from app.core.config import settings
@@ -26,14 +17,9 @@ class GenerateBrandInfo:
         # user info init
         self.user_id = request_data.user_id
         self.category = "brand_logo"
-        # generate logo init
-        self.grpc_client = grpcclient.InferenceServerClient(url=GI_MODEL_URL)
-        self.image = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
-        self.batch_size = 1
-        self.mode = 'txt2img'
         # llm generate brand info init
-        self.model = ChatTongyi(model="qwen2.5-14b-instruct", api_key="sk-7658298c6b99443c98184a5e634fe6ab")
+        self.model = ChatTongyi(model="qwen2.5-14b-instruct", api_key=settings.QWEN_API_KEY)
         self.response_schemas = [
             ResponseSchema(name="brand_name", description="Brand name."),
@@ -63,38 +49,20 @@ class GenerateBrandInfo:
         self.generate_logo_prompt = brand_data['brand_logo_prompt']

     def generate_brand_logo(self):
-        prompts = [self.generate_logo_prompt] * self.batch_size
-        modes = [self.mode] * self.batch_size
-        images = [self.image.astype(np.float16)] * self.batch_size
-        text_obj = np.array(prompts, dtype="object").reshape((-1, 1))
-        mode_obj = np.array(modes, dtype="object").reshape((-1, 1))
-        image_obj = np.array(images, dtype=np.float16).reshape((-1, 1024, 1024, 3))
-        input_text = grpcclient.InferInput("prompt", text_obj.shape, np_to_triton_dtype(text_obj.dtype))
-        input_image = grpcclient.InferInput("input_image", image_obj.shape, np_to_triton_dtype(image_obj.dtype))
-        input_mode = grpcclient.InferInput("mode", mode_obj.shape, np_to_triton_dtype(mode_obj.dtype))
-        input_text.set_data_from_numpy(text_obj)
-        input_image.set_data_from_numpy(image_obj)
-        input_mode.set_data_from_numpy(mode_obj)
-        inputs = [input_text, input_image, input_mode]
-        result = self.grpc_client.infer(model_name=GI_MODEL_NAME, inputs=inputs)
-        image = result.as_numpy("generated_image")
-        image_result = cv2.cvtColor(np.squeeze(image.astype(np.uint8)), cv2.COLOR_RGB2BGR)
-        logo_url = self.upload_logo_image(image_result, generate_uuid())
-        self.result_data['brand_logo'] = logo_url
-
-    def upload_logo_image(self, image, object_name):
-        try:
-            _, img_byte_array = cv2.imencode('.jpg', image)
-            object_name = f'{self.user_id}/{self.category}/{object_name}.jpg'
-            oss_upload_image(oss_client=self.minio_client, bucket="aida-users", object_name=object_name, image_bytes=img_byte_array)
-            image_url = f"aida-users/{object_name}"
-            return image_url
-        except Exception as e:
-            logging.warning(f"upload_png_mask runtime exception : {e}")
+        request_item = {
+            "bucket_name": "aida-users",
+            "object_name": f'{self.user_id}/{self.category}/{uuid.uuid4().hex}.png',
+            "prompt": self.generate_logo_prompt,
+            "height": 1024,
+            "width": 1024
+        }
+        with httpx.Client(timeout=120) as client:
+            resp = client.post(
+                f"http://{settings.FLUX2_GEN_IMG_MODEL_URL}/predict",
+                json=request_item,
+            )
+        result = resp.json()
+        self.result_data['brand_logo'] = result.get("output_path", "")

 if __name__ == '__main__':

View File

@@ -23,7 +23,7 @@ class ClothingSeg:
     def __init__(self, request_data):
         self.image_data = request_data.image_data
         self.user_id = request_data.user_id
-        self.triton_client = grpcclient.InferenceServerClient(url="10.1.1.243:10071")
+        self.triton_client = grpcclient.InferenceServerClient(url=f"{settings.B_4_X_4090_SERVICE_HOST}:10071")

     @RunTime
     def get_result(self):
@@ -139,7 +139,7 @@ def get_bounding_box(mask):
 if __name__ == "__main__":
     test_data = ClothingSegModel(
-        user_id=89,
+        user_id="89",
         image_data=[
             # {
             #     "image_url": "test/clothing_seg/dress.jpg",

View File

@@ -13,7 +13,7 @@ from PIL import Image
 from minio import Minio, S3Error
 from moviepy.video.io.VideoFileClip import VideoFileClip

-from app.core.config import settings
+from app.core.config import settings, PS_RABBITMQ_QUEUES
 from app.schemas.comfyui_i2v import ComfyuiPose2VModel
 from app.service.generate_image.utils.mq import publish_status
@@ -622,9 +622,9 @@ class ComfyUIServerPose2V:
         # push the message
         if not settings.DEBUG:
-            publish_status(json.dumps(self.pose_transform_data), settings.COMFYUI_SERVER_ADDRESS)
+            publish_status(json.dumps(self.pose_transform_data), PS_RABBITMQ_QUEUES)
             logger.info(
-                f" [x] Sent to {settings.COMFYUI_SERVER_ADDRESS} data@@@@ {json.dumps(self.pose_transform_data, indent=4)}")
+                f" [x] Sent to {PS_RABBITMQ_QUEUES} data@@@@ {json.dumps(self.pose_transform_data, indent=4)}")
         return "\n🎉 所有任务完成!"

View File

@@ -10,13 +10,13 @@
 import logging

 import cv2
-import mmcv
 import numpy as np
 import torch
 import torch.nn.functional as F
 import tritonclient.http as httpclient

 from app.core.config import DESIGN_MODEL_URL, DESIGN_MODEL_NAME
+from app.service.utils.image_normalize import my_imnormalize
 """
 keypoint
@@ -25,13 +25,13 @@ from app.core.config import DESIGN_MODEL_URL, DESIGN_MODEL_NAME
 def keypoint_preprocess(img_path):
-    img = mmcv.imread(img_path)
+    img = img_path
     img_scale = (256, 256)
     h, w = img.shape[:2]
     img = cv2.resize(img, img_scale)
     w_scale = img_scale[0] / w
     h_scale = img_scale[1] / h
-    img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+    img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
     preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
     return preprocessed_img, (w_scale, h_scale)
@@ -74,7 +74,7 @@ def keypoint_postprocess(output, scale_factor):
 # KNet
 def seg_preprocess(img_path):
-    img = mmcv.imread(img_path)
+    img = img_path
     ori_shape = img.shape[:2]
     img_scale_w, img_scale_h = ori_shape
     if ori_shape[0] > 1024:
@@ -83,9 +83,9 @@ def seg_preprocess(img_path):
         img_scale_h = 1024
     # if either side of the image exceeds 1024, resize it to 1024
     if ori_shape != (img_scale_w, img_scale_h):
-        # mmcv.imresize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale: h and w were swapped
+        # my_imnormalize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale: h and w were swapped
         img = cv2.resize(img, (img_scale_h, img_scale_w))
-    img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+    img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
     preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
     return preprocessed_img, ori_shape

View File

@@ -16,6 +16,9 @@ class OthersItem(BaseItem):
         self.Others_pipeline = [
             LoadImage(minio_client),
             Segmentation(minio_client),
+            Color(minio_client),
+            NoSegPrintPainting(minio_client),
+            PrintPainting(minio_client),
             Scaling(),
             Split(minio_client)
         ]

View File

@@ -12,9 +12,13 @@ class NoSegPrintPainting:
         self.minio_client = minio_client

     def __call__(self, result):
-        single_print = result['print']['single']
+        # single_print = [result['print']['single']]
         overall_print = result['print']['overall']
-        element_print = result['print']['element']
+        # element_print = result['print']['element']
+        single_print = None
+        element_print = None

         result['single_image'] = None
         result['print_image'] = None
@@ -23,9 +27,9 @@ class NoSegPrintPainting:
         # get the tiled + rotated overall print
         painting_dict = self.painting_collection(painting_dict, overall_print)
         result['no_seg_sketch_overall'] = result['no_seg_sketch_print'] = self.printpaint(result, painting_dict, print_=True)
-        result['pattern_image'] = result['no_seg_sketch_overall']
+        # result['pattern_image'] = result['no_seg_sketch_overall']

-        if single_print['print_path_list']:
+        if single_print:
             print_background = np.zeros((result['pattern_image'].shape[0], result['pattern_image'].shape[1], 3), dtype=np.uint8)
             mask_background = np.zeros((result['pattern_image'].shape[0], result['pattern_image'].shape[1], 3), dtype=np.uint8)
             for i in range(len(single_print['print_path_list'])):
@@ -65,7 +69,7 @@ class NoSegPrintPainting:
             single_image = cv2.add(tmp1, tmp2)
             result['no_seg_sketch_print'] = single_image

-        if element_print['element_path_list']:
+        if element_print:
             print_background = np.zeros((result['final_image'].shape[0], result['final_image'].shape[1], 3), dtype=np.uint8)
             mask_background = np.zeros((result['final_image'].shape[0], result['final_image'].shape[1], 3), dtype=np.uint8)
             for i in range(len(element_print['element_path_list'])):
@@ -162,15 +166,17 @@ class NoSegPrintPainting:
         dim_max = max(painting_dict['dim_image_h'], painting_dict['dim_image_w'])
         dim_pattern = (int(dim_max * print_['scale'] / 5), int(dim_max * print_['scale'] / 5))
         gap = print_dict.get('gap', [[0, 0]])[0]
-        painting_dict['tile_print'] = tile_image(pattern=print_['image'],
-                                                 dim=dim_pattern,
-                                                 gap_x=gap[0],
-                                                 gap_y=gap[1],
-                                                 canvas_h=painting_dict['dim_image_h'],
-                                                 canvas_w=painting_dict['dim_image_w'],
-                                                 location=painting_dict['location'],
-                                                 angle=45)
-        painting_dict['mask_inv_print'] = np.zeros(painting_dict['tile_print'].shape[:2], dtype=np.uint8)
+        painting_dict['tile_print'], painting_dict['mask_inv_print'] = tile_image(pattern=print_['image'],
+                                                                                  mask=print_['mask'],
+                                                                                  dim=dim_pattern,
+                                                                                  gap_x=gap[0],
+                                                                                  gap_y=gap[1],
+                                                                                  canvas_h=painting_dict['dim_image_h'],
+                                                                                  canvas_w=painting_dict['dim_image_w'],
+                                                                                  location=painting_dict['location'],
+                                                                                  angle=int(print_.get('print_angle_list', [0])[0]))
+        # painting_dict['mask_inv_print'] = np.zeros(painting_dict['tile_print'].shape[:2], dtype=np.uint8)
+        # painting_dict['mask_inv_print'] = self.get_mask_inv(painting_dict['tile_print'])
         return painting_dict

     def tile_image(self, pattern, dim, scale, dim_image_h, dim_image_w, location, trigger=False):
@@ -251,10 +257,15 @@ class NoSegPrintPainting:
         image = oss_get_image(oss_client=self.minio_client, bucket=bucket_name, object_name=object_name, data_type="PIL")
         # check the image format: if RGBA, paste it onto a pure-white canvas so transparent areas do not turn black
         if image.mode == "RGBA":
+            mask_pil = image.split()[3]
             new_background = Image.new('RGB', image.size, (255, 255, 255))
             new_background.paste(image, mask=image.split()[3])
             image = new_background
+        else:
+            mask_pil = Image.new('L', image.size, 255)  # L = grayscale, 255 = pure white
         print_dict['image'] = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
+        print_dict['mask'] = cv2.threshold(np.array(mask_pil), 127, 255, cv2.THRESH_BINARY)[1]
         return print_dict

     def crop_image(self, image, image_size_h, image_size_w, location, print_shape):
@@ -404,9 +415,12 @@ class NoSegPrintPainting:
     return cropped_img


-def tile_image(pattern, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0):
+def tile_image(pattern, mask, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0):
     """
     Tile the print at the given X/Y spacing, with optional rotation
+    [Revised] uses the CENTER of the tiled pattern as the tiling anchor
+    :param location: [[center_y, center_x]] → coordinates of the first pattern's center
     :param angle: rotation angle (degrees, counter-clockwise)
     """
     # 1. ensure the input is RGBA
@@ -418,35 +432,54 @@ def tile_image(pattern, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0
     rotated_p = rotate_image(resized_p, angle)
     p_h, p_w = rotated_p.shape[:2]

-    # 3. create a transparent unit cell
-    cell_h, cell_w = p_h + gap_y, p_w + gap_x
+    # 3. create a transparent unit cell (pattern centered in the cell)
+    cell_h = p_h + gap_y
+    cell_w = p_w + gap_x
     unit_cell = np.zeros((cell_h, cell_w, 4), dtype=np.uint8)
-    unit_cell[:p_h, :p_w, :] = rotated_p
+    # top-left position of the pattern inside the cell (so it sits centered)
+    start_y = (cell_h - p_h) // 2
+    start_x = (cell_w - p_w) // 2
+    unit_cell[start_y:start_y + p_h, start_x:start_x + p_w, :] = rotated_p

     # 4. tile
-    tiles_y = (canvas_h // cell_h) + 2
-    tiles_x = (canvas_w // cell_w) + 2
+    tiles_y = (canvas_h // cell_h) + 3  # a little extra margin is safer
+    tiles_x = (canvas_w // cell_w) + 3
     full_tiled = np.tile(unit_cell, (tiles_y, tiles_x, 1))

-    # 5. crop the tiled layer
-    offset_x = int(location[0][1] % cell_w)
-    offset_y = int(location[0][0] % cell_h)
+    # 5. compute the offsets (key change: anchor on the pattern center)
+    center_y, center_x = location[0][0], location[0][1]  # center of the first pattern
+    # where to start cropping so that the center lands on the given coordinates
+    offset_y = int((center_y - (p_h // 2)) % cell_h)
+    offset_x = int((center_x - (p_w // 2)) % cell_w)
     tiled_layer = full_tiled[offset_y: offset_y + canvas_h,
                              offset_x: offset_x + canvas_w]

-    # 6. create a pure-white background and composite
-    # create a pure-white BGR canvas
+    # 6. create a pure-white background and composite (same compositing as before)
     white_background = np.full((canvas_h, canvas_w, 3), 255, dtype=np.uint8)
-    # split the tiled layer into its color channels and alpha channel
     tiled_bgr = tiled_layer[:, :, :3]
-    alpha_mask = tiled_layer[:, :, 3] / 255.0  # normalize to 0-1
-    alpha_mask = cv2.merge([alpha_mask, alpha_mask, alpha_mask])  # expand to 3 channels
-    # alpha blend: result = tiled layer * alpha + background * (1 - alpha)
-    result = (tiled_bgr * alpha_mask + white_background * (1 - alpha_mask)).astype(np.uint8)
-    return result
+    alpha_mask = tiled_layer[:, :, 3] / 255.0
+    alpha_mask = cv2.merge([alpha_mask, alpha_mask, alpha_mask])
+    tiled_print = (tiled_bgr * alpha_mask + white_background * (1 - alpha_mask)).astype(np.uint8)
+
+    # ====================== handle the mask ======================
+    # the mask gets the same centered treatment
+    resized_mask = cv2.resize(mask, dim, interpolation=cv2.INTER_NEAREST)
+    rotated_mask = rotate_image(resized_mask, angle)  # note: the mask must be rotated too
+    unit_mask = np.zeros((cell_h, cell_w), dtype=np.uint8)
+    unit_mask[start_y:start_y + p_h, start_x:start_x + p_w] = rotated_mask
+    full_mask_tiled = np.tile(unit_mask, (tiles_y, tiles_x))
+    tiled_mask = full_mask_tiled[offset_y: offset_y + canvas_h,
+                                 offset_x: offset_x + canvas_w]

+    return tiled_print, cv2.bitwise_not(tiled_mask)


 def rotate_image(image, angle):

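Step 6 of `tile_image` composites the RGBA tiled layer over a pure-white canvas. A self-contained numpy sketch of that blend, using broadcasting in place of `cv2.merge` (the pixel values are illustrative):

```python
import numpy as np

# a tiny 2x2 RGBA "tiled layer": one opaque pattern pixel, the rest transparent
canvas_h = canvas_w = 2
tiled_layer = np.zeros((canvas_h, canvas_w, 4), dtype=np.uint8)
tiled_layer[0, 0] = [10, 20, 30, 255]   # fully opaque pattern pixel
tiled_layer[1, 1] = [10, 20, 30, 0]     # fully transparent -> stays white

white = np.full((canvas_h, canvas_w, 3), 255, dtype=np.uint8)
bgr = tiled_layer[:, :, :3].astype(np.float32)
alpha = tiled_layer[:, :, 3:4] / 255.0  # shape (h, w, 1) broadcasts over channels

# alpha blend: out = layer * alpha + background * (1 - alpha)
out = (bgr * alpha + white * (1 - alpha)).astype(np.uint8)
print(out[0, 0], out[1, 1])  # → [10 20 30] [255 255 255]
```

Keeping the alpha as an `(h, w, 1)` array and letting numpy broadcast avoids materializing the 3-channel mask that `cv2.merge` builds in the diff above.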
View File

@@ -12,10 +12,14 @@ class PrintPainting:
         self.minio_client = minio_client

     def __call__(self, result):
-        single_print = result['print']['single']
+        # single_print = result['print']['single']
         overall_print = result['print']['overall']
-        element_print = result['print']['element']
-        partial_path = result['print']['partial'] if 'partial' in result['print'] else None
+        # element_print = result['print']['element']
+        # partial_path = result['print']['partial'] if 'partial' in result['print'] else None
+        single_print = None
+        element_print = None
+        partial_path = None

         result['single_image'] = None
         result['print_image'] = None
         # TODO resize result['pattern_image'] to the resize_scale dimensions
@@ -37,13 +41,13 @@ class PrintPainting:
         if overall_print['print_path_list']:
             overall_print['location'][0] = [x * y for x, y in zip(overall_print['location'][0], result['resize_scale'])]
             painting_dict = {'dim_image_h': result['pattern_image'].shape[0], 'dim_image_w': result['pattern_image'].shape[1]}
-            result['print_image'] = result['pattern_image']
+            result['print_image'] = result['pattern_image'].copy()
             # get the tiled + rotated overall print
             painting_dict = self.painting_collection(painting_dict, overall_print)
             result['print_image'] = self.printpaint(result, painting_dict, print_=True)
             result['single_image'] = result['final_image'] = result['pattern_image'] = result['print_image']

-        if single_print['print_path_list']:
+        if single_print:
             # 2025-9-19 print adjustment: print coordinates follow the sketch's resize scale
             sketch_resize_scale = result['resize_scale']
             print_background = np.zeros((result['pattern_image'].shape[0], result['pattern_image'].shape[1], 3), dtype=np.uint8)
@@ -84,7 +88,7 @@ class PrintPainting:
             tmp2 = (result['final_image'] * (temp_fg / 255)).astype(np.uint8)
             result['single_image'] = cv2.add(tmp1, tmp2)

-        if element_print['element_path_list']:
+        if element_print:
             # 2025-9-19 print adjustment: print coordinates follow the sketch's resize scale
             sketch_resize_scale = result['resize_scale']
             print_background = np.zeros((result['final_image'].shape[0], result['final_image'].shape[1], 3), dtype=np.uint8)
@@ -225,15 +229,15 @@ class PrintPainting:
         dim_max = max(painting_dict['dim_image_h'], painting_dict['dim_image_w'])
         dim_pattern = (int(dim_max * print_['scale'] / 5), int(dim_max * print_['scale'] / 5))
         gap = print_dict.get('gap', [[0, 0]])[0]
-        painting_dict['tile_print'] = tile_image(pattern=print_['image'],
-                                                 dim=dim_pattern,
-                                                 gap_x=gap[0],
-                                                 gap_y=gap[1],
-                                                 canvas_h=painting_dict['dim_image_h'],
-                                                 canvas_w=painting_dict['dim_image_w'],
-                                                 location=painting_dict['location'],
-                                                 angle=45)
-        painting_dict['mask_inv_print'] = np.zeros(painting_dict['tile_print'].shape[:2], dtype=np.uint8)
+        painting_dict['tile_print'], painting_dict['mask_inv_print'] = tile_image(pattern=print_['image'],
+                                                                                  mask=print_['mask'],
+                                                                                  dim=dim_pattern,
+                                                                                  gap_x=gap[0],
+                                                                                  gap_y=gap[1],
+                                                                                  canvas_h=painting_dict['dim_image_h'],
+                                                                                  canvas_w=painting_dict['dim_image_w'],
+                                                                                  location=painting_dict['location'],
+                                                                                  angle=int(print_.get('print_angle_list', [0])[0]))
         return painting_dict

     def tile_image(self, pattern, dim, scale, dim_image_h, dim_image_w, location, trigger=False):
@@ -314,10 +318,15 @@ class PrintPainting:
         image = oss_get_image(oss_client=self.minio_client, bucket=bucket_name, object_name=object_name, data_type="PIL")
         # check the image format: if RGBA, paste it onto a pure-white canvas so transparent areas do not turn black
         if image.mode == "RGBA":
+            mask_pil = image.split()[3]
             new_background = Image.new('RGB', image.size, (255, 255, 255))
             new_background.paste(image, mask=image.split()[3])
             image = new_background
+        else:
+            mask_pil = Image.new('L', image.size, 255)  # L = grayscale, 255 = pure white
         print_dict['image'] = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
+        print_dict['mask'] = cv2.threshold(np.array(mask_pil), 127, 255, cv2.THRESH_BINARY)[1]
         return print_dict

     def crop_image(self, image, image_size_h, image_size_w, location, print_shape):
@@ -467,9 +476,12 @@ class PrintPainting:
     return cropped_img


-def tile_image(pattern, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0):
+def tile_image(pattern, mask, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0):
     """
     Tile the print at the given X/Y spacing, with optional rotation
+    [Revised] uses the CENTER of the tiled pattern as the tiling anchor
+    :param location: [[center_y, center_x]] → coordinates of the first pattern's center
     :param angle: rotation angle (degrees, counter-clockwise)
     """
     # 1. ensure the input is RGBA
@@ -481,35 +493,54 @@ def tile_image(pattern, dim, gap_x, gap_y, canvas_h, canvas_w, location, angle=0
     rotated_p = rotate_image(resized_p, angle)
     p_h, p_w = rotated_p.shape[:2]

-    # 3. create a transparent unit cell
-    cell_h, cell_w = p_h + gap_y, p_w + gap_x
+    # 3. create a transparent unit cell (pattern centered in the cell)
+    cell_h = p_h + gap_y
+    cell_w = p_w + gap_x
     unit_cell = np.zeros((cell_h, cell_w, 4), dtype=np.uint8)
-    unit_cell[:p_h, :p_w, :] = rotated_p
+    # top-left position of the pattern inside the cell (so it sits centered)
+    start_y = (cell_h - p_h) // 2
+    start_x = (cell_w - p_w) // 2
+    unit_cell[start_y:start_y + p_h, start_x:start_x + p_w, :] = rotated_p

     # 4. tile
-    tiles_y = (canvas_h // cell_h) + 2
-    tiles_x = (canvas_w // cell_w) + 2
+    tiles_y = (canvas_h // cell_h) + 3  # a little extra margin is safer
+    tiles_x = (canvas_w // cell_w) + 3
     full_tiled = np.tile(unit_cell, (tiles_y, tiles_x, 1))

-    # 5. crop the tiled layer
-    offset_x = int(location[0][1] % cell_w)
-    offset_y = int(location[0][0] % cell_h)
+    # 5. compute the offsets (key change: anchor on the pattern center)
+    center_y, center_x = location[0][0], location[0][1]  # center of the first pattern
+    # where to start cropping so that the center lands on the given coordinates
+    offset_y = int((center_y - (p_h // 2)) % cell_h)
+    offset_x = int((center_x - (p_w // 2)) % cell_w)
     tiled_layer = full_tiled[offset_y: offset_y + canvas_h,
                              offset_x: offset_x + canvas_w]

-    # 6. create a pure-white background and composite
-    # create a pure-white BGR canvas
+    # 6. create a pure-white background and composite (same compositing as before)
     white_background = np.full((canvas_h, canvas_w, 3), 255, dtype=np.uint8)
-    # split the tiled layer into its color channels and alpha channel
     tiled_bgr = tiled_layer[:, :, :3]
-    alpha_mask = tiled_layer[:, :, 3] / 255.0  # normalize to 0-1
-    alpha_mask = cv2.merge([alpha_mask, alpha_mask, alpha_mask])  # expand to 3 channels
-    # alpha blend: result = tiled layer * alpha + background * (1 - alpha)
+    alpha_mask = tiled_layer[:, :, 3] / 255.0
+    alpha_mask = cv2.merge([alpha_mask, alpha_mask, alpha_mask])
+    tiled_print = (tiled_bgr * alpha_mask + white_background * (1 - alpha_mask)).astype(np.uint8)
result = (tiled_bgr * alpha_mask + white_background * (1 - alpha_mask)).astype(np.uint8)
return result # ====================== 处理 Mask ======================
# Mask 也同样居中处理
resized_mask = cv2.resize(mask, dim, interpolation=cv2.INTER_NEAREST)
rotated_mask = rotate_image(resized_mask, angle) # 注意mask也需要旋转
unit_mask = np.zeros((cell_h, cell_w), dtype=np.uint8)
unit_mask[start_y:start_y + p_h, start_x:start_x + p_w] = rotated_mask
full_mask_tiled = np.tile(unit_mask, (tiles_y, tiles_x))
tiled_mask = full_mask_tiled[offset_y: offset_y + canvas_h,
offset_x: offset_x + canvas_w]
return tiled_print, cv2.bitwise_not(tiled_mask)
def rotate_image(image, angle): def rotate_image(image, angle):
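The center-anchored cropping in step 5 is modular arithmetic over the unit-cell pitch. One subtlety worth noting: the in-cell start position (`start_y`/`start_x`) also shifts where pattern edges land after the crop. A minimal sketch with made-up numbers (not the production values) showing the general form that accounts for the in-cell start:

```python
import numpy as np

# Hypothetical numbers, not the production values.
cell = 30            # unit-cell pitch (pattern size + gap)
start = 5            # pattern's top-left offset inside its (centered) cell
target_top = 37      # desired canvas row for some pattern's top edge

# Crop the tiled array beginning at `offset` so that one pattern's
# top edge lands exactly on `target_top`:
offset = (start - target_top) % cell

tops_in_full = np.arange(start, 300, cell)   # pattern top edges in the tiled array
tops_on_canvas = tops_in_full - offset       # where they land after cropping
print(37 in tops_on_canvas)                  # True
```

The same derivation applies per axis; replacing `target_top` with `center - p // 2` reproduces the center-anchoring idea used above.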


@@ -51,6 +51,8 @@ class Segmentation:
            if not _ or result["image"].shape[:2] != seg_result.shape:
                # Run inference to get the seg result
                seg_result = get_seg_result(result['image'])
+                if result['name'] == 'others':
+                    seg_result = seg_result.clip(max=1)
                self.save_seg_result(seg_result, result['image_id'])
            result['seg_result'] = seg_result
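The added `clip(max=1)` collapses a multi-class segmentation map into a binary foreground mask, since every non-zero class id becomes 1. A tiny NumPy illustration:

```python
import numpy as np

# A made-up 2x3 segmentation map with class ids 0..3 (illustrative only)
seg = np.array([[0, 2, 3],
                [1, 0, 2]])

binary = seg.clip(max=1)  # every non-zero class becomes 1, background stays 0
print(binary.tolist())    # [[0, 1, 1], [1, 0, 1]]
```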


@@ -10,12 +10,12 @@
import logging
import cv2
-import mmcv
import numpy as np
import torch
import tritonclient.http as httpclient
from app.core.config import DESIGN_MODEL_URL, DESIGN_MODEL_NAME
+from app.service.utils.image_normalize import my_imnormalize
"""
keypoint
@@ -24,14 +24,14 @@ from app.core.config import DESIGN_MODEL_URL, DESIGN_MODEL_NAME
def keypoint_preprocess(img_path):
-    img = mmcv.imread(img_path)
+    img = img_path
    img = cv2.copyMakeBorder(img, 25, 25, 25, 25, cv2.BORDER_CONSTANT, value=[255, 255, 255])
    img_scale = (256, 256)
    h, w = img.shape[:2]
    img = cv2.resize(img, img_scale)
    w_scale = img_scale[0] / w
    h_scale = img_scale[1] / h
-    img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+    img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
    preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
    return preprocessed_img, (w_scale, h_scale)
@@ -78,7 +78,7 @@ def keypoint_postprocess(output, scale_factor):
# KNet
def seg_preprocess(img_path):
-    img = mmcv.imread(img_path)
+    img = img_path
    ori_shape = img.shape[:2]
    img_scale_w, img_scale_h = ori_shape
    if ori_shape[0] > 1024:
@@ -87,12 +87,12 @@ def seg_preprocess(img_path):
        img_scale_h = 1024
    # If either side of the image is larger than 1024, it is resized to 1024
    if ori_shape != (img_scale_w, img_scale_h):
-        # mmcv.imresize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale! h and w were swapped
+        # my_imnormalize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale! h and w were swapped
        img = cv2.resize(img, (img_scale_h, img_scale_w))
    # Pad with a 25px white border
    img = cv2.copyMakeBorder(img, 25, 25, 25, 25, cv2.BORDER_CONSTANT, value=[255, 255, 255])
-    img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+    img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
    preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
    return preprocessed_img, ori_shape
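The "cautionary tale" comment above is about argument order: NumPy image shapes are `(height, width, ...)`, while `cv2.resize` takes its `dsize` as `(width, height)`. A minimal guard helper (plain NumPy; the cv2 call itself is only referenced in the comment) makes the swap explicit:

```python
import numpy as np

def dsize_for(img: np.ndarray) -> tuple:
    """Return the (width, height) tuple that cv2.resize expects,
    derived from a NumPy (height, width, ...) array shape."""
    h, w = img.shape[:2]
    return (w, h)

img = np.zeros((480, 640, 3), dtype=np.uint8)  # 480 tall, 640 wide
print(dsize_for(img))  # (640, 480) — pass this as cv2.resize(img, dsize)
```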


@@ -79,7 +79,7 @@ def organize_others(layer):
    front_layer = dict(priority=layer['priority'] if layer.get("layer_order", False) else PRIORITY_DICT.get(f'{layer["name"].lower()}_front', None),
                       name=f'{layer["name"].lower()}_front',
                       image=layer["front_image"],
-                      mask_image=layer['front_mask_image'],
+                      # mask_image=layer['front_mask_image'],
                       image_url=layer['front_image_url'],
                       mask_url=layer.get('mask_url', None),
                       sacle=layer['scale'],
@@ -92,12 +92,14 @@ def organize_others(layer):
                       pattern_print_image_url=layer.get('pattern_print_image_url', None),
                       pattern_image=layer.get('pattern_image', None),
                       # back_perspective_url=layer['back_perspective_url'] if 'back_perspective_url' in layer.keys() else ""
+                      transpose=layer.get("transpose", [1, 1]),  # defaults to [1, 1], meaning no mirroring
+                      rotate=layer.get('rotate', 0),
                       )
    # Back-piece data
    back_layer = dict(priority=-layer.get("priority", 0) if layer.get("layer_order", False) else PRIORITY_DICT.get(f'{layer["name"].lower()}_back', None),
                      name=f'{layer["name"].lower()}_back',
                      image=layer["back_image"],
-                      mask_image=layer['back_mask_image'],
+                      # mask_image=layer['back_mask_image'],
                      image_url=layer['back_image_url'],
                      mask_url=layer.get('mask_url', None),
                      sacle=layer['scale'],
@@ -109,6 +111,8 @@ def organize_others(layer):
                      pattern_overall_image_url=layer.get('pattern_overall_image_url', None),
                      pattern_print_image_url=layer.get('pattern_print_image_url', None),
                      # back_perspective_url=layer['back_perspective_url'] if 'back_perspective_url' in layer.keys() else ""
+                      transpose=layer.get("transpose", [1, 1]),  # defaults to [1, 1], meaning no mirroring
+                      rotate=layer.get('rotate', 0),
                      )
    return front_layer, back_layer


@@ -342,83 +342,33 @@ def update_base_size_priority(layers):
def transpose_rotate(layer, image):
-    """
-    Merge the mirroring (transpose) and rotation (rotate) logic: compute the actual rotation angle, apply the image transform,
-    and adjust the paste position so the visual center stays the same
-    Args:
-        layer: dict containing transpose, rotate, adaptive_position, etc.
-        image: PIL Image object, the image to transform
-    Returns:
-        tuple: (transformed Image object, new paste coordinates (x, y))
-    """
-    # Mirror state (transpose[0] = left-right, transpose[1] = up-down; 1 = normal, -1 = mirrored)
-    transpose = layer.get('transpose', [1, 1])
-    is_mirrored_x = transpose[0]  # left-right mirror state
-    is_mirrored_y = transpose[1]  # up-down mirror state
-    # Original rotation angle and paste position
-    original_rotate = layer.get('rotate', 0)
+    # transpose[0] is left-right, transpose[1] is up-down
+    transpose = layer.get('transpose', [1, 1])  # defaults to [1, 1], meaning no mirroring
+    rotate = layer.get('rotate', 0)
    paste_x, paste_y = layer['adaptive_position'][1], layer['adaptive_position'][0]
    original_w = image.width
    original_h = image.height
-    # ------------------- Key change: compute the actual rotation angle -------------------
-    # Combine the mirror states to compute the rotation that actually needs to be applied
-    actual_rotate = calculate_actual_rotate(original_rotate, is_mirrored_x, is_mirrored_y)
-    print(f"actual_rotate:{actual_rotate}")
-    # ------------------- Apply mirroring -------------------
-    # Left-right mirror (transpose[0] != 1, i.e. -1, means mirrored)
-    if is_mirrored_x != 1:
-        image = image.transpose(0)  # assumes transpose(0) is a left-right flip; check against your PIL version
-    # Up-down mirror (transpose[1] != 1, i.e. -1, means mirrored)
-    if is_mirrored_y != 1:
-        image = image.transpose(1)  # assumes transpose(1) is an up-down flip
-    # ------------------- Apply rotation and adjust the paste position -------------------
-    if actual_rotate != 0:  # only rotate when the actual angle is non-zero
-        # note: the original code used rotate(-rotate); the sign is kept consistent here
-        image = image.rotate(-actual_rotate, expand=True)
-        # Adjust the paste position so the visual center stays the same
-        # Center of the original position
+    # transpose values: 1 = normal, -1 = mirrored (left-right, then up-down)
+    if transpose[0] != 1:
+        # left-right
+        image = image.transpose(0)
+    if transpose[1] != 1:
+        # up-down
+        image = image.transpose(1)
+    if rotate:
+        image = image.rotate(-rotate, expand=True)
+        # 4. Compute the paste position so the visual center stays the same
+        # e.g. (15, 36) was the top-left of a 288*288 image; we compute its center point
        target_center_x = paste_x + original_w // 2
        target_center_y = paste_y + original_h // 2
-        # New size of the rotated image
+        # Get the new size of the rotated image
        new_w, new_h = image.size
-        # New top-left coordinates (keeping the center fixed)
+        # Compute the new top-left coordinates so the rotated image's center stays at the intended center
        paste_x = target_center_x - new_w // 2
        paste_y = target_center_y - new_h // 2
    return image, (paste_x, paste_y)
-def calculate_actual_rotate(before_rotate, is_mirrored_x, is_mirrored_y):
-    """
-    Compute the actual rotation angle from the X/Y mirror states and normalize it to 0-360 degrees
-    Args:
-        before_rotate: original rotation angle (numeric)
-        is_mirrored_x: X-axis mirror state (-1 = mirrored, 1 = normal)
-        is_mirrored_y: Y-axis mirror state (-1 = mirrored, 1 = normal)
-    Returns:
-        float/int: normalized actual rotation angle (0-360 degrees)
-    """
-    actual_rotate = before_rotate
-    # Adjust the rotation angle according to the mirror states
-    if is_mirrored_x == -1 and is_mirrored_y == 1:
-        actual_rotate = -before_rotate
-    elif is_mirrored_x == 1 and is_mirrored_y == -1:
-        actual_rotate = -before_rotate
-    # elif is_mirrored_x == -1 and is_mirrored_y == -1:
-    #     actual_rotate = before_rotate + 180
-    # Normalize the angle to 0-360 degrees
-    normalized_rotate = ((actual_rotate % 360) + 360) % 360
-    return normalized_rotate
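The center-preserving paste adjustment above is plain arithmetic: `rotate(expand=True)` grows the image, so the top-left paste position is recomputed from the unchanged center. A standalone sketch of just that step (no PIL needed; the sizes are made up):

```python
def recenter_paste(paste_x, paste_y, old_w, old_h, new_w, new_h):
    """Recompute the top-left paste position so that an image enlarged by
    rotate(expand=True) keeps the same visual center as before."""
    center_x = paste_x + old_w // 2   # center of the original placement
    center_y = paste_y + old_h // 2
    return center_x - new_w // 2, center_y - new_h // 2

# A 288x288 layer pasted at (15, 36) grows to roughly 408x408 after a 45° rotation
print(recenter_paste(15, 36, 288, 288, 408, 408))  # (-45, -24)
```

Negative coordinates are expected here: the expanded image must overhang the original top-left for its center to stay put.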


@@ -11,7 +11,6 @@ import logging
import uuid
import cv2
-import mmcv
import numpy as np
import pandas as pd
import torch
@@ -21,6 +20,7 @@ from minio import Minio
from tritonclient.utils import np_to_triton_dtype
from app.core.config import settings, FAST_GI_MODEL_URL, GI_MODEL_URL, DESIGN_MODEL_URL, FAST_GI_MODEL_NAME, GI_MODEL_NAME
+from app.service.utils.image_normalize import my_imnormalize
from app.service.utils.new_oss_client import oss_upload_image
logger = logging.getLogger()
@@ -86,10 +86,9 @@ class AgentToolGenerateImage:
    @staticmethod
    def preprocess(img):
-        img = mmcv.imread(img)
        img_scale = (224, 224)
        img = cv2.resize(img, img_scale)
-        img = mmcv.imnormalize(
+        img = my_imnormalize(
            img,
            mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]),
            to_rgb=True)


@@ -189,10 +189,10 @@ if __name__ == '__main__':
        tasks_id="123-89",
        prompt="a single item of sketch of dress, 4k, white background",
        image_url="aida-collection-element/89/Sketchboard/95f20cdc-e059-435c-b8b1-d04cc9e80c3d.png",
-        mode='img2img',
+        mode='txt2img',
        category="sketch",
        gender="Female",
-        version="fast"
+        version="hight"
    )
    server = GenerateImage(rd)
    print(server.get_result())


@@ -2,23 +2,23 @@ import logging
import time
import cv2
-import mmcv
import numpy as np
import torch
import tritonclient.http as httpclient
from app.core.config import settings, DESIGN_MODEL_URL, DESIGN_MODEL_NAME
from app.service.generate_image.utils.upload_sd_image import upload_stain_png_sd, upload_face_png_sd
+from app.service.utils.image_normalize import my_imnormalize
logger = logging.getLogger()
def seg_preprocess(img_path):
-    img = mmcv.imread(img_path)
+    img = img_path
    ori_shape = img.shape[:2]
    img_scale = ori_shape
    img = cv2.resize(img, img_scale)
-    img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+    img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
    preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
    return preprocessed_img, ori_shape
@@ -242,10 +242,9 @@ def stain_detection(image, user_id, category, tasks_id, spot_size=100):
def generate_category_recognition(image, gender):
    def preprocess(img):
-        img = mmcv.imread(img)
        img_scale = (224, 224)
        img = cv2.resize(img, img_scale)
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
        preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
        return preprocessed_img


@@ -1,7 +1,6 @@
import logging
import cv2
-import mmcv
import numpy as np
import torch
import torch.nn.functional as F
@@ -10,6 +9,7 @@ from minio import Minio
from app.core.config import settings
from app.core.config import DESIGN_MODEL_URL
from app.schemas.image2sketch import Image2SketchModel
+from app.service.utils.image_normalize import my_imnormalize
from app.service.utils.new_oss_client import oss_get_image, oss_upload_image
logger = logging.getLogger()
@@ -67,7 +67,7 @@ class LineArtService:
    @staticmethod
    def line_art_preprocess(image):
-        img = mmcv.imread(image)
+        img = image
        ori_shape = img.shape[:2]
        img_scale_w, img_scale_h = ori_shape
        if ori_shape[0] > 1024:
@@ -76,9 +76,9 @@ class LineArtService:
            img_scale_h = 1024
        # If either side of the image is larger than 1024, it is resized to 1024
        if ori_shape != (img_scale_w, img_scale_h):
-            # mmcv.imresize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale! h and w were swapped
+            # my_imnormalize(img, img_scale_h, img_scale_w)  # old code, a cautionary tale! h and w were swapped
            img = cv2.resize(img, (img_scale_h, img_scale_w))
-        img = mmcv.imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
+        img = my_imnormalize(img, mean=np.array([123.675, 116.28, 103.53]), std=np.array([58.395, 57.12, 57.375]), to_rgb=True)
        preprocessed_img = np.expand_dims(img.transpose(2, 0, 1), axis=0)
        return preprocessed_img, ori_shape


@@ -90,7 +90,7 @@ def get_response(messages):
def get_translation_from_llama3(text):
    start_time = time.time()
-    url = f"http://{settings.A6000_SERVICE_HOST}:11434/api/generate"
+    url = f"http://{settings.A6000_SERVICE_HOST}:12434/api/generate"
    # url = "http://10.1.1.240:1143/api/generate"
    # prompt = f"System: {prefix_for_llama}\nUser:[{text}]"
@@ -103,8 +103,8 @@ def get_translation_from_llama3(text):
    # Build the request payload; "translator" is a custom translation model
    payload = {
-        "model": "translator",
-        "prompt": f"[{text}]",
+        "model": "AiDA-translator:latest",
+        "prompt": text,
        "stream": False
    }
    # Convert the payload to JSON
@@ -148,7 +148,7 @@ def get_translation_from_llama3(text):
def get_prompt_from_image(image_path, text):
    start_time = time.time()
    # url = "http://localhost:11434/api/generate"
-    url = "http://10.1.1.243:11434/api/generate"
+    url = f"http://{settings.B_4_X_4090_SERVICE_HOST}:11434/api/generate"
    image_base64 = minio_util.minio_url_to_base64(image_path.img)
    # image_base64 = minio_url_to_base64(image_path)
@@ -180,7 +180,7 @@ def get_prompt_from_image(image_path, text):
def main():
    """Main function"""
-    text = get_translation_from_llama3("[火焰]")
+    text = get_translation_from_llama3("火焰")
    print(text)


@@ -0,0 +1,35 @@
import logging
import httpx
logger = logging.getLogger("app")
async def notify_callback(callback_url: str, task_id: str, status: str, result: dict):
    """
    Call the callback endpoint provided by the client
    """
    try:
        payload = {
            "task_id": task_id,
            "status": status,
            "result": result
        }
        logger.info(payload)
        async with httpx.AsyncClient(timeout=30.0) as client:
            resp = await client.post(
                str(callback_url),
                json=payload,
                headers={"Content-Type": "application/json"}
            )
            if 200 <= resp.status_code < 300:
                logger.info(f"Callback succeeded | task_id: {task_id} | status: {status} | url: {callback_url}")
                return True
            else:
                logger.warning(f"Callback returned a non-2xx status code | task_id: {task_id} | status: {resp.status_code} | url: {callback_url}")
                return False
    except Exception as e:
        logger.error(f"Callback failed | task_id: {task_id} | url: {callback_url} | error: {e}", exc_info=True)
        return False


@@ -0,0 +1,46 @@
from celery import Celery
from kombu import Queue, Exchange
from app.core.config import settings
celery_app = Celery(
"sketch_to_garment",
broker=f"redis://{settings.REDIS_HOST}:{settings.REDIS_PORT}/2",
backend=f"redis://{settings.REDIS_HOST}:{settings.REDIS_PORT}/{settings.REDIS_DB}",
include=["app.service.sketch2garment.tasks"]
)
print(f"redis://{settings.REDIS_HOST}:{settings.REDIS_PORT}/3")
print(f"celery_app: {celery_app}")
celery_app.conf.update(
task_serializer="json",
accept_content=["json"],
result_serializer="json",
timezone="Asia/Hong_Kong",
enable_utc=True,
task_track_started=True,
    task_time_limit=300,  # a single task may run for at most 5 minutes
    task_soft_time_limit=280,
    # Define the queues
task_queues=(
Queue("sketch_to_garment_queue",
exchange=Exchange("sketch_to_garment_exchange", type="direct"),
durable=True),
),
task_routes={
'app.service.sketch2garment.tasks.sketch_to_garment':
{
'queue': 'sketch_to_garment_queue',
            'exchange': 'sketch_to_garment_exchange',  # ← changed here
},
},
task_default_queue="sketch_to_garment_queue",
worker_concurrency=1,
worker_prefetch_multiplier=1,
worker_max_tasks_per_child=1,
task_acks_late=True,
task_reject_on_worker_lost=True,
)


@@ -0,0 +1,44 @@
import logging
from app.service.sketch2garment.tasks import sketch_to_garment
logger = logging.getLogger(__name__)
def submit_sketch_to_garment_task(model: str = "single", task_id: str = "", callback_url: str = "", bucket_name: str = "test", user_id: str = "123", input_image_path: str = ""):
    """Submit an img_to_3D task (with a queue-length limit)"""
    queue_name = "img_to_3d_queue"
    max_queue_length = 10
    try:
        # current_length = get_queue_length(queue_name)
        # if current_length >= max_queue_length:
        #     return {
        #         "state": "queue_full",
        #         "message": "There are many 3D generation requests at the moment; please retry later.",
        #         "queue_length": current_length,
        #         "max_length": max_queue_length
        #     }
        # Submit the task
        task = sketch_to_garment.apply_async(
            args=(task_id, callback_url, bucket_name, input_image_path, user_id, model),
            task_id=task_id,
            queue="sketch_to_garment_queue")
        # logger.info(f"img_to_3d_task submitted | task_id: {task_id} | current queue length: {current_length}")
        return {
            "state": "success",
            "task_id": task_id,
            "message": "Task submitted successfully and is being processed in the background...",
        }
    except Exception as e:
        logger.error(f"Failed to submit img_to_3d_task: {e}", exc_info=True)
        return {
            "state": "fail",
            "message": "Submission failed; please retry later.",
            "error": str(e)
        }


@@ -0,0 +1,57 @@
import asyncio
import logging
from app.core.config import settings
from app.service.sketch2garment.callback import notify_callback
import httpx
from app.service.sketch2garment.celery_app import celery_app
logger = logging.getLogger(__name__)
@celery_app.task(bind=True, queue="sketch_to_garment_queue", max_retries=3, name='app.service.sketch2garment.tasks.sketch_to_garment')
def sketch_to_garment(self, task_id: str, callback_url: str, bucket_name: str, input_image_path: str, user_id: str, category: str = None):
payload = {
"bucket_name": bucket_name,
"category": category or settings.DEFAULT_CATEGORY,
"input_image_path": input_image_path,
"user_id": user_id
}
logger.info(f"payload: {payload}")
try:
        with httpx.Client(timeout=300.0) as client:  # note: a synchronous httpx.Client (not AsyncClient) matches this synchronous Celery task
            # if your LitServe endpoint is synchronous, httpx.Client() is the right choice
response = client.post(settings.SKETCH_TO_GARMENT_URL, json=payload)
if response.status_code == 200:
result = response.json()
result_json = {
"pattern": result[1],
"texture": result[2],
"glb": result[3],
"texture_fabric": result[4]
}
asyncio.run(
notify_callback(callback_url=callback_url, task_id=task_id, result=result_json, status="success")
)
else:
asyncio.run(
notify_callback(
callback_url=callback_url,
task_id=task_id,
result={
"status": "fail",
"task_id": task_id,
"message": "fail",
"error": "fail"
},
status="fail")
)
except Exception as e:
return {
"status": "failed",
"task_id": task_id,
"input": payload,
"error": str(e)
}
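The task above assumes the model service returns a JSON array and maps positions 1-4 onto named outputs. Positional indexing is fragile, so a hedged sketch of the same mapping with a length guard can help (the field names match the task; the helper name and demo values are hypothetical):

```python
def map_garment_result(result: list) -> dict:
    """Map a positional JSON-array response onto named keys,
    failing loudly if the array is shorter than expected."""
    if len(result) < 5:
        raise ValueError(f"expected at least 5 elements, got {len(result)}")
    return {
        "pattern": result[1],
        "texture": result[2],
        "glb": result[3],
        "texture_fabric": result[4],
    }

demo = ["ok", "p.svg", "t.png", "m.glb", "f.png"]  # hypothetical response
print(map_garment_result(demo)["glb"])  # m.glb
```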


@@ -0,0 +1,27 @@
import cv2
import numpy as np
def my_imnormalize(img, mean, std, to_rgb=True):
"""Inplace normalize an image with mean and std.
Args:
img (ndarray): Image to be normalized.
mean (ndarray): The mean to be used for normalize.
std (ndarray): The std to be used for normalize.
to_rgb (bool): Whether to convert to rgb.
Returns:
ndarray: The normalized image.
"""
# cv2 inplace normalization does not accept uint8
img = img.copy().astype(np.float32)
assert img.dtype != np.uint8
mean = np.float64(mean.reshape(1, -1))
stdinv = 1 / np.float64(std.reshape(1, -1))
if to_rgb:
cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace
cv2.subtract(img, mean, img) # inplace
cv2.multiply(img, stdinv, img) # inplace
return img


@@ -1,5 +1,6 @@
services:
  aida_server:
+    container_name: "AiDA_${SERVE_ENV}_Server"
    build:
      context: .
      dockerfile: Dockerfile
@@ -11,3 +12,9 @@ services:
      - ./seg_cache:/seg_cache
    ports:
      - "${SERVE_PORT}:80"
+    networks:
+      - aida_app_net
+networks:
+  aida_app_net:
+    external: true
+    name: aida_app_net


@@ -23,8 +23,9 @@ dependencies = [
    "load-dotenv>=0.1.0",
    "loguru>=0.7.3",
    "minio>=7.2.20",
-    "mmcv>=2.2.0",
    "moviepy==1.0.3",
+    "nacos-sdk-python==2.0.1",
+    "np>=1.0.2",
    "numpy<2",
    "ollama>=0.6.1",
    "opencv-python>=4.11.0.86",



uv.lock generated

@@ -8,15 +8,6 @@ resolution-markers = [
    "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')",
]
-[[package]]
-name = "addict"
-version = "2.4.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/85/ef/fd7649da8af11d93979831e8f1f8097e85e82d5bfeabc8c68b39175d8e75/addict-2.4.0.tar.gz", hash = "sha256:b3b2210e0e067a281f5646c8c5db92e99b7231ea8b0eb5f74dbdf9e259d4e494", size = 9186, upload-time = "2020-11-21T16:21:31.416Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl", hash = "sha256:249bb56bbfd3cdc2a004ea0ff4c2b6ddc84d53bc2194761636eb314d5cfa5dfc", size = 3832, upload-time = "2020-11-21T16:21:29.588Z" },
-]
[[package]]
name = "agentaction"
version = "0.1.7"
@@ -71,6 +62,15 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/7c/91/513971861d845d28160ecb205ae2cfaf618b16918a9cd4e0b832b5360ce7/aio_pika-9.5.8-py3-none-any.whl", hash = "sha256:f4c6cb8a6c5176d00f39fd7431e9702e638449bc6e86d1769ad7548b2a506a8d", size = 54397, upload-time = "2025-11-12T10:37:08.374Z" },
]
+[[package]]
+name = "aiofiles"
+version = "25.1.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/41/c3/534eac40372d8ee36ef40df62ec129bee4fdb5ad9706e58a29be53b2c970/aiofiles-25.1.0.tar.gz", hash = "sha256:a8d728f0a29de45dc521f18f07297428d56992a742f0cd2701ba86e44d23d5b2", size = 46354, upload-time = "2025-10-09T20:51:04.358Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/bc/8a/340a1555ae33d7354dbca4faa54948d76d89a27ceef032c8c3bc661d003e/aiofiles-25.1.0-py3-none-any.whl", hash = "sha256:abe311e527c862958650f9438e859c1fa7568a141b22abcd015e120e86a85695", size = 14668, upload-time = "2025-10-09T20:51:03.174Z" },
+]
[[package]]
name = "aiohappyeyeballs"
version = "2.6.1"
@@ -140,6 +140,154 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/fb/76/641ae371508676492379f16e2fa48f4e2c11741bd63c48be4b12a6b09cba/aiosignal-1.4.0-py3-none-any.whl", hash = "sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e", size = 7490, upload-time = "2025-07-03T22:54:42.156Z" },
]
[[package]]
name = "alibabacloud-credentials"
version = "0.3.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-tea" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fc/92/7cb0807d6d380fa09cbad6d4fe983781e657dcc16d60fc559d6575bf98be/alibabacloud_credentials-0.3.6.tar.gz", hash = "sha256:caa82cf258648dcbe1ca14aeba50ba21845567d6ac3cd48d318e0a445fff7f96", size = 18771, upload-time = "2024-10-28T03:40:03.806Z" }
[[package]]
name = "alibabacloud-darabonba-array"
version = "0.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/50/be/1813d7553e11e20a1422ffaaead392cfa7239a855c7e67c6a6b5776cfa64/alibabacloud_darabonba_array-0.1.0.tar.gz", hash = "sha256:7f9a7c632518ff4f0cebb0d4e825a48c12e7cf0b9016ea25054dd73732e155aa", size = 2306, upload-time = "2022-11-01T06:32:47.928Z" }
[[package]]
name = "alibabacloud-darabonba-encode-util"
version = "0.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b6/d8/22543b2ade9aa68fef028a9f0c4154bfdb970926f918f63d7b85bae527a9/alibabacloud_darabonba_encode_util-0.0.2.tar.gz", hash = "sha256:f1c484f276d60450fa49b4b2987194e741fcb2f7faae7f287c0ae65abc85fd4d", size = 3990, upload-time = "2022-12-10T04:43:48.086Z" }
[[package]]
name = "alibabacloud-darabonba-map"
version = "0.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d5/bc/f11d56adffffade9a0d33ccca155ce82ca950b97cdce27a75228715c4639/alibabacloud_darabonba_map-0.0.1.tar.gz", hash = "sha256:adb17384658a1a8f72418f1838d4b6a5fd2566bfd392a3ef06d9dbb0a595a23f", size = 2056, upload-time = "2021-12-04T03:41:20.369Z" }
[[package]]
name = "alibabacloud-darabonba-signature-util"
version = "0.0.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cryptography" },
]
sdist = { url = "https://files.pythonhosted.org/packages/13/09/2118a2df631eaa06a291013ea61f31e449ba7a3cc3d6061477a43420e93a/alibabacloud_darabonba_signature_util-0.0.4.tar.gz", hash = "sha256:71d79b2ae65957bcfbf699ced894fda782b32f9635f1616635533e5a90d5feb0", size = 4170, upload-time = "2022-12-10T04:44:42.979Z" }
[[package]]
name = "alibabacloud-darabonba-string"
version = "0.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/d4/3d22bd2ff88985f970a10f8cedf2ea326d11d4d95e244f7665dc83d26465/alibabacloud-darabonba-string-0.0.4.tar.gz", hash = "sha256:ec6614c0448dadcbc5e466485838a1f8cfdd911135bea739e20b14511270c6f7", size = 2852, upload-time = "2021-12-13T13:30:06.114Z" }
[[package]]
name = "alibabacloud-endpoint-util"
version = "0.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/92/7d/8cc92a95c920e344835b005af6ea45a0db98763ad6ad19299d26892e6c8d/alibabacloud_endpoint_util-0.0.4.tar.gz", hash = "sha256:a593eb8ddd8168d5dc2216cd33111b144f9189fcd6e9ca20e48f358a739bbf90", size = 2813, upload-time = "2025-06-12T07:20:52.572Z" }
[[package]]
name = "alibabacloud-gateway-pop"
version = "0.0.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-credentials" },
{ name = "alibabacloud-darabonba-array" },
{ name = "alibabacloud-darabonba-encode-util" },
{ name = "alibabacloud-darabonba-map" },
{ name = "alibabacloud-darabonba-signature-util" },
{ name = "alibabacloud-darabonba-string" },
{ name = "alibabacloud-endpoint-util" },
{ name = "alibabacloud-gateway-spi" },
{ name = "alibabacloud-openapi-util" },
{ name = "alibabacloud-tea-util" },
]
sdist = { url = "https://files.pythonhosted.org/packages/18/7d/d521d803ee227499aa5a3044a0ab8bd4ba139a455d10c1a070e745d26b0c/alibabacloud_gateway_pop-0.0.9.tar.gz", hash = "sha256:50aec34abc47b3adc6e43da6fa036bbbd04477a0047435f3728129ede7641628", size = 5981, upload-time = "2025-07-23T07:06:06.717Z" }
[[package]]
name = "alibabacloud-gateway-spi"
version = "0.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-credentials" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ab/98/d7111245f17935bf72ee9bea60bbbeff2bc42cdfe24d2544db52bc517e1a/alibabacloud_gateway_spi-0.0.3.tar.gz", hash = "sha256:10d1c53a3fc5f87915fbd6b4985b98338a776e9b44a0263f56643c5048223b8b", size = 4249, upload-time = "2025-02-23T16:29:54.222Z" }
[[package]]
name = "alibabacloud-kms20160120"
version = "2.2.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-endpoint-util" },
{ name = "alibabacloud-gateway-pop" },
{ name = "alibabacloud-openapi-util" },
{ name = "alibabacloud-tea-openapi" },
{ name = "alibabacloud-tea-util" },
]
sdist = { url = "https://files.pythonhosted.org/packages/18/39/dfb1043f2995523507b03bb23e5db6291508eccbb4f78ea02930ff95f137/alibabacloud_kms20160120-2.2.3.tar.gz", hash = "sha256:fa7991185e92d85f9d224ead0bf82e5673fcfd022714e6c3cd2b1894b59555bf", size = 77350, upload-time = "2024-08-30T10:18:44.012Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cc/04/0668cbc62f3d9239e86d3d97b3de40b92e66730a90fc4c58f0ee38a81399/alibabacloud_kms20160120-2.2.3-py3-none-any.whl", hash = "sha256:51d3d04c75ba93c574ff4e368e51097f180fc05922fbd6336d290ea8113da99e", size = 76701, upload-time = "2024-08-30T10:18:42.524Z" },
]
[[package]]
name = "alibabacloud-openapi-util"
version = "0.2.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-tea-util" },
{ name = "cryptography" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f6/51/be5802851a4ed20ac2c6db50ac8354a6e431e93db6e714ca39b50983626f/alibabacloud_openapi_util-0.2.4.tar.gz", hash = "sha256:87022b9dcb7593a601f7a40ca698227ac3ccb776b58cb7b06b8dc7f510995c34", size = 7981, upload-time = "2026-01-15T08:05:03.947Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/08/46/9b217343648b366eb93447f5d93116e09a61956005794aed5ef95a2e9e2e/alibabacloud_openapi_util-0.2.4-py3-none-any.whl", hash = "sha256:a2474f230b5965ae9a8c286e0dc86132a887928d02d20b8182656cf6b1b6c5bd", size = 7661, upload-time = "2026-01-15T08:05:01.374Z" },
]
[[package]]
name = "alibabacloud-tea"
version = "0.4.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9a/7d/b22cb9a0d4f396ee0f3f9d7f26b76b9ed93d4101add7867a2c87ed2534f5/alibabacloud-tea-0.4.3.tar.gz", hash = "sha256:ec8053d0aa8d43ebe1deb632d5c5404339b39ec9a18a0707d57765838418504a", size = 8785, upload-time = "2025-03-24T07:34:42.958Z" }
[[package]]
name = "alibabacloud-tea-openapi"
version = "0.3.14"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-credentials" },
{ name = "alibabacloud-gateway-spi" },
{ name = "alibabacloud-openapi-util" },
{ name = "alibabacloud-tea-util" },
{ name = "alibabacloud-tea-xml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ff/f5/c7823490a1574d1e3c27c9641aa395710e89d0c15c5a436c96e999e6e2fe/alibabacloud_tea_openapi-0.3.14.tar.gz", hash = "sha256:1e0a67ab3450cf09e26ccc0fb5b0622a6b58fdde25dc3ccb99b45e167c5db717", size = 12993, upload-time = "2025-04-15T12:20:03.363Z" }
[[package]]
name = "alibabacloud-tea-util"
version = "0.3.14"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-tea" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/ee/ea90be94ad781a5055db29556744681fc71190ef444ae53adba45e1be5f3/alibabacloud_tea_util-0.3.14.tar.gz", hash = "sha256:708e7c9f64641a3c9e0e566365d2f23675f8d7c2a3e2971d9402ceede0408cdb", size = 7515, upload-time = "2025-11-19T06:01:08.504Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/72/9e/c394b4e2104766fb28a1e44e3ed36e4c7773b4d05c868e482be99d5635c9/alibabacloud_tea_util-0.3.14-py3-none-any.whl", hash = "sha256:10d3e5c340d8f7ec69dd27345eb2fc5a1dab07875742525edf07bbe86db93bfe", size = 6697, upload-time = "2025-11-19T06:01:07.355Z" },
]
[[package]]
name = "alibabacloud-tea-xml"
version = "0.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alibabacloud-tea" },
]
sdist = { url = "https://files.pythonhosted.org/packages/32/eb/5e82e419c3061823f3feae9b5681588762929dc4da0176667297c2784c1a/alibabacloud_tea_xml-0.0.3.tar.gz", hash = "sha256:979cb51fadf43de77f41c69fc69c12529728919f849723eb0cd24eb7b048a90c", size = 3466, upload-time = "2025-07-01T08:04:55.144Z" }
[[package]]
name = "amqp"
version = "5.3.1"
@@ -1671,43 +1819,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3e/9a/b697530a882588a84db616580f2ba5d1d515c815e11c30d219145afeec87/minio-7.2.20-py3-none-any.whl", hash = "sha256:eb33dd2fb80e04c3726a76b13241c6be3c4c46f8d81e1d58e757786f6501897e", size = 93751, upload-time = "2025-11-27T00:37:13.993Z" },
]
[[package]]
name = "mmcv"
version = "2.2.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "addict" },
{ name = "mmengine" },
{ name = "numpy" },
{ name = "opencv-python" },
{ name = "packaging" },
{ name = "pillow" },
{ name = "pyyaml" },
{ name = "regex", marker = "sys_platform == 'win32'" },
{ name = "yapf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/a2/57a733e7e84985a8a0e3101dfb8170fc9db92435c16afad253069ae3f9df/mmcv-2.2.0.tar.gz", hash = "sha256:ac479247e808d8802f89eadf04d4118de86bdfe81361ec5aed0cc1bf731c67c9", size = 479121, upload-time = "2024-04-24T14:24:28.064Z" }
[[package]]
name = "mmengine"
version = "0.10.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "addict" },
{ name = "matplotlib" },
{ name = "numpy" },
{ name = "opencv-python" },
{ name = "pyyaml" },
{ name = "regex", marker = "sys_platform == 'win32'" },
{ name = "rich" },
{ name = "termcolor" },
{ name = "yapf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/17/14/959360bbd8374e23fc1b720906999add16a3ac071a501636db12c5861ff5/mmengine-0.10.7.tar.gz", hash = "sha256:d20ffcc31127567e53dceff132612a87f0081de06cbb7ab2bdb7439125a69225", size = 378090, upload-time = "2025-03-04T12:23:09.568Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/98/8e/f98332248aad102511bea4ae19c0ddacd2f0a994f3ca4c82b7a369e0af8b/mmengine-0.10.7-py3-none-any.whl", hash = "sha256:262ac976a925562f78cd5fd14dd1bc9b680ed0aa81f0d85b723ef782f99c54ee", size = 452720, upload-time = "2025-03-04T12:23:06.339Z" },
]
[[package]]
name = "mmh3"
version = "5.2.0"
@@ -1792,6 +1903,26 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/79/7b/2c79738432f5c924bef5071f933bcc9efd0473bac3b4aa584a6f7c1c8df8/mypy_extensions-1.1.0-py3-none-any.whl", hash = "sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505", size = 4963, upload-time = "2025-04-22T14:54:22.983Z" },
]
[[package]]
name = "nacos-sdk-python"
version = "2.0.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiofiles" },
{ name = "aiohttp" },
{ name = "alibabacloud-kms20160120" },
{ name = "alibabacloud-tea-openapi" },
{ name = "grpcio" },
{ name = "protobuf" },
{ name = "psutil" },
{ name = "pycryptodome" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9d/e4/c9506551fe699e1f0bc194a9024cc8fb18c8d4ee4f004dfdd5861db07b2d/nacos-sdk-python-2.0.1.tar.gz", hash = "sha256:29fa1dd14f771824b65ae0edd208bb4d20737655ae8b809685194e2f6358c2a7", size = 68582, upload-time = "2025-01-13T14:37:22.981Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e6/14/269a08582090ac1d16ff2c491455a22d4a4c4f47337eb0b142957a93ea0a/nacos_sdk_python-2.0.1-py3-none-any.whl", hash = "sha256:623cfc4645adb44f21c8613d6c0e6f1c41a0110318ce0899d57942009b626044", size = 93265, upload-time = "2025-01-13T14:37:17.808Z" },
]
[[package]]
name = "networkx"
version = "3.6.1"
@@ -1801,6 +1932,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9e/c9/b2622292ea83fbb4ec318f5b9ab867d0a28ab43c5717bb85b0a5f6b3b0a4/networkx-3.6.1-py3-none-any.whl", hash = "sha256:d47fbf302e7d9cbbb9e2555a0d267983d2aa476bac30e90dfbe5669bd57f3762", size = 2068504, upload-time = "2025-12-08T17:02:38.159Z" },
]
[[package]]
name = "np"
version = "1.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/40/7d/749666e5a9976dcbc4d16d487bbe571efc6bbf4cdf3f4620c0ccc52b57ef/np-1.0.2.tar.gz", hash = "sha256:781265283f3823663ad8fb48741aae62abcf4c78bc19f908f8aa7c1d3eb132f8", size = 7419, upload-time = "2017-10-05T11:26:00.956Z" }
[[package]]
name = "numpy"
version = "1.26.4"
@@ -2269,15 +2406,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/96/aaa61ce33cc98421fb6088af2a03be4157b1e7e0e87087c888e2370a7f45/pillow-12.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:7dfb439562f234f7d57b1ac6bc8fe7f838a4bd49c79230e0f6a1da93e82f1fad", size = 2436012, upload-time = "2025-10-15T18:22:23.621Z" },
]
[[package]]
name = "platformdirs"
version = "4.5.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cf/86/0248f086a84f01b37aaec0fa567b397df1a119f73c16f6c7a9aac73ea309/platformdirs-4.5.1.tar.gz", hash = "sha256:61d5cdcc6065745cdd94f0f878977f8de9437be93de97c1c12f853c9c0cdcbda", size = 21715, upload-time = "2025-12-05T13:52:58.638Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/28/3bfe2fa5a7b9c46fe7e13c97bda14c895fb10fa2ebf1d0abb90e0cea7ee1/platformdirs-4.5.1-py3-none-any.whl", hash = "sha256:d03afa3963c806a9bed9d5125c8f4cb2fdaf74a55ab60e5d59b3fde758104d31", size = 18731, upload-time = "2025-12-05T13:52:56.823Z" },
]
[[package]]
name = "posthog"
version = "5.4.0"
@@ -2356,6 +2484,22 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/cc/7e77861000a0691aeea8f4566e5d3aa716f2b1dece4a24439437e41d3d25/protobuf-5.29.5-py3-none-any.whl", hash = "sha256:6cf42630262c59b2d8de33954443d94b746c952b01434fc58a417fdbd2e84bd5", size = 172823, upload-time = "2025-05-28T23:51:58.157Z" },
]
[[package]]
name = "psutil"
version = "7.2.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/aa/c6/d1ddf4abb55e93cebc4f2ed8b5d6dbad109ecb8d63748dd2b20ab5e57ebe/psutil-7.2.2.tar.gz", hash = "sha256:0746f5f8d406af344fd547f1c8daa5f5c33dbc293bb8d6a16d80b4bb88f59372", size = 493740, upload-time = "2026-01-28T18:14:54.428Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/36/5ee6e05c9bd427237b11b3937ad82bb8ad2752d72c6969314590dd0c2f6e/psutil-7.2.2-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ed0cace939114f62738d808fdcecd4c869222507e266e574799e9c0faa17d486", size = 129090, upload-time = "2026-01-28T18:15:22.168Z" },
{ url = "https://files.pythonhosted.org/packages/80/c4/f5af4c1ca8c1eeb2e92ccca14ce8effdeec651d5ab6053c589b074eda6e1/psutil-7.2.2-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:1a7b04c10f32cc88ab39cbf606e117fd74721c831c98a27dc04578deb0c16979", size = 129859, upload-time = "2026-01-28T18:15:23.795Z" },
{ url = "https://files.pythonhosted.org/packages/b5/70/5d8df3b09e25bce090399cf48e452d25c935ab72dad19406c77f4e828045/psutil-7.2.2-cp36-abi3-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:076a2d2f923fd4821644f5ba89f059523da90dc9014e85f8e45a5774ca5bc6f9", size = 155560, upload-time = "2026-01-28T18:15:25.976Z" },
{ url = "https://files.pythonhosted.org/packages/63/65/37648c0c158dc222aba51c089eb3bdfa238e621674dc42d48706e639204f/psutil-7.2.2-cp36-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b0726cecd84f9474419d67252add4ac0cd9811b04d61123054b9fb6f57df6e9e", size = 156997, upload-time = "2026-01-28T18:15:27.794Z" },
{ url = "https://files.pythonhosted.org/packages/8e/13/125093eadae863ce03c6ffdbae9929430d116a246ef69866dad94da3bfbc/psutil-7.2.2-cp36-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:fd04ef36b4a6d599bbdb225dd1d3f51e00105f6d48a28f006da7f9822f2606d8", size = 148972, upload-time = "2026-01-28T18:15:29.342Z" },
{ url = "https://files.pythonhosted.org/packages/04/78/0acd37ca84ce3ddffaa92ef0f571e073faa6d8ff1f0559ab1272188ea2be/psutil-7.2.2-cp36-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b58fabe35e80b264a4e3bb23e6b96f9e45a3df7fb7eed419ac0e5947c61e47cc", size = 148266, upload-time = "2026-01-28T18:15:31.597Z" },
{ url = "https://files.pythonhosted.org/packages/b4/90/e2159492b5426be0c1fef7acba807a03511f97c5f86b3caeda6ad92351a7/psutil-7.2.2-cp37-abi3-win_amd64.whl", hash = "sha256:eb7e81434c8d223ec4a219b5fc1c47d0417b12be7ea866e24fb5ad6e84b3d988", size = 137737, upload-time = "2026-01-28T18:15:33.849Z" },
{ url = "https://files.pythonhosted.org/packages/8c/c7/7bb2e321574b10df20cbde462a94e2b71d05f9bbda251ef27d104668306a/psutil-7.2.2-cp37-abi3-win_arm64.whl", hash = "sha256:8c233660f575a5a89e6d4cb65d9f938126312bca76d8fe087b947b3a1aaac9ee", size = 134617, upload-time = "2026-01-28T18:15:36.514Z" },
]
[[package]]
name = "psycopg2-binary"
version = "2.9.11"
@@ -2746,17 +2890,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/58/ca301544e1fa93ed4f80d724bf5b194f6e4b945841c5bfd555878eea9fcb/referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231", size = 26766, upload-time = "2025-10-13T15:30:47.625Z" },
]
[[package]]
name = "regex"
version = "2025.11.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cc/a9/546676f25e573a4cf00fe8e119b78a37b6a8fe2dc95cda877b30889c9c45/regex-2025.11.3.tar.gz", hash = "sha256:1fedc720f9bb2494ce31a58a1631f9c82df6a09b49c19517ea5cc280b4541e01", size = 414669, upload-time = "2025-11-03T21:34:22.089Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/59/9b/7c29be7903c318488983e7d97abcf8ebd3830e4c956c4c540005fcfb0462/regex-2025.11.3-cp312-cp312-win32.whl", hash = "sha256:3839967cf4dc4b985e1570fd8d91078f0c519f30491c60f9ac42a8db039be204", size = 266194, upload-time = "2025-11-03T21:31:51.53Z" },
{ url = "https://files.pythonhosted.org/packages/1a/67/3b92df89f179d7c367be654ab5626ae311cb28f7d5c237b6bb976cd5fbbb/regex-2025.11.3-cp312-cp312-win_amd64.whl", hash = "sha256:e721d1b46e25c481dc5ded6f4b3f66c897c58d2e8cfdf77bbced84339108b0b9", size = 277069, upload-time = "2025-11-03T21:31:53.151Z" },
{ url = "https://files.pythonhosted.org/packages/d7/55/85ba4c066fe5094d35b249c3ce8df0ba623cfd35afb22d6764f23a52a1c5/regex-2025.11.3-cp312-cp312-win_arm64.whl", hash = "sha256:64350685ff08b1d3a6fff33f45a9ca183dc1d58bbfe4981604e70ec9801bbc26", size = 270330, upload-time = "2025-11-03T21:31:54.514Z" },
]
[[package]]
name = "requests"
version = "2.32.5"
@@ -3224,8 +3357,9 @@ dependencies = [
{ name = "load-dotenv" },
{ name = "loguru" },
{ name = "minio" },
{ name = "mmcv" },
{ name = "moviepy" },
{ name = "nacos-sdk-python" },
{ name = "np" },
{ name = "numpy" },
{ name = "ollama" },
{ name = "opencv-python" },
@@ -3275,8 +3409,9 @@ requires-dist = [
{ name = "load-dotenv", specifier = ">=0.1.0" },
{ name = "loguru", specifier = ">=0.7.3" },
{ name = "minio", specifier = ">=7.2.20" },
{ name = "mmcv", specifier = ">=2.2.0" },
{ name = "moviepy", specifier = "==1.0.3" },
{ name = "nacos-sdk-python", specifier = "==2.0.1" },
{ name = "np", specifier = ">=1.0.2" },
{ name = "numpy", specifier = "<2" },
{ name = "ollama", specifier = ">=0.6.1" },
{ name = "opencv-python", specifier = ">=4.11.0.86" },
@@ -3605,18 +3740,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/54/85/6ec269b0952ec7e36ba019125982cf11d91256a778c7c3f98a4c5043d283/xxhash-3.6.0-cp312-cp312-win_arm64.whl", hash = "sha256:eae5c13f3bc455a3bbb68bdc513912dc7356de7e2280363ea235f71f54064829", size = 27876, upload-time = "2025-10-02T14:34:54.371Z" },
]
[[package]]
name = "yapf"
version = "0.43.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "platformdirs" },
]
sdist = { url = "https://files.pythonhosted.org/packages/23/97/b6f296d1e9cc1ec25c7604178b48532fa5901f721bcf1b8d8148b13e5588/yapf-0.43.0.tar.gz", hash = "sha256:00d3aa24bfedff9420b2e0d5d9f5ab6d9d4268e72afbf59bb3fa542781d5218e", size = 254907, upload-time = "2024-11-14T00:11:41.584Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/37/81/6acd6601f61e31cfb8729d3da6d5df966f80f374b78eff83760714487338/yapf-0.43.0-py3-none-any.whl", hash = "sha256:224faffbc39c428cb095818cf6ef5511fdab6f7430a10783fdfb292ccf2852ca", size = 256158, upload-time = "2024-11-14T00:11:39.37Z" },
]
[[package]]
name = "yarl"
version = "1.22.0"