diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/README.md
index 324b4a37e82a1089de191efaae0878d4abb3bbb6..c936af86477ae2c1951ae6f7a08e169371835d5b 100644
--- a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/README.md
@@ -1,257 +1,257 @@
-# MGN-推理指导
-
-
-- [概述](#ZH-CN_TOPIC_0000001172161501)
-
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
-
-- [快速上手](#ZH-CN_TOPIC_0000001126281700)
-
- - [获取源码](#section4622531142816)
- - [准备数据集](#section183221994411)
- - [模型推理](#section741711594517)
-
-- [模型推理性能](#ZH-CN_TOPIC_0000001172201573)
-
-- [配套环境](#ZH-CN_TOPIC_0000001126121892)
-
- ******
-# 概述
-
-MGN网络是一种多分支深度网络架构的特征识别网络,由一个用于全局特征表示的分支和两个用于局部特征表示的分支组成。将图像均匀地划分为几个条纹,并改变不同局部分支中的特征数量,以获得具有多个粒度的局部特征表示。
-
-- 参考实现:
-
- ```
- url=https://github.com/GNAYUOHZ/ReID-MGN.git
- ```
-
-## 输入输出数据
-
-- 输入数据
-
- | 输入数据 | 数据类型 | 大小 | 数据排布格式 |
- |---------|--------|--------| ------------ |
- | input | Float16 | batchsize x 3 x 384x 128 | ND |
-
-
-- 输出数据
-
- | 输出数据 | 数据类型 | 大小 | 数据排布格式 |
- |------------------|---------| -------- | ------------ |
- | output | FLOAT16 | batchsize x 1000 | ND |
-
-
-
-# 推理环境准备\[所有版本\]
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
-| 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.0.RC1 | - |
-| Python | 3.7.5 | - |
-| PyTorch | >=1.8.0 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
-
-# 快速上手
-
-## 获取源码
-1. 获取本仓源码
-
- ```
- git clone https://gitee.com/ascend/ModelZoo-PyTorch.git
- cd ModelZoo-PyTorch/ACL_PyTorch/built-in/cv/MGN_for_Pytorch
- ```
-
-1. 获取源码。
-
- ```
- git clone https://github.com/GNAYUOHZ/ReID-MGN.git ./MGN
- patch -R MGN/data.py < module.patch
- ```
-
-2. 安装依赖。
-
- ```
- pip3 install -r requirements.txt
- ```
-
-## 准备数据集
-
-1. 获取原始数据集。
-
- 该模型将[Market1501数据集](https://pan.baidu.com/s/1ntIi2Op?_at_=1624593258681) 的训练集随机划分为训练集和验证集,为复现精度这里采用固定的验证集。
-
-2. 数据预处理。
-
- 1.将下载好的数据集移动到./ReID-MGN-master/data目录下
-
- 2.执行预处理脚本,生成数据集预处理后的bin文件
-
- ```
- # 首先在要cd到ReID-MGN-master目录下.
- python3 ./postprocess_MGN.py --mode save_bin --data_path ./data/market1501
- ```
-
-3. 生成数据集信息文件
-
- 1.生成数据集信息文件脚本preprocess_MGN.py
-
- 2.执行生成数据集信息脚本,生成数据集信息文件
-
- ```
- python ./preprocess_MGN.py bin ./data/market1501/bin_data/q/ ./q_bin.info 384 128
- python ./preprocess_MGN.py bin ./data/market1501/bin_data/g/ ./g_bin.info 384 128
-
- python ./preprocess_MGN.py bin ./data/market1501/bin_data_flip/q/ ./q_bin_flip.info 384 128
- python ./preprocess_MGN.py bin ./data/market1501/bin_data_flip/g/ ./g_bin_flip.info 384 128
- ```
-
- 第一个参数为模型输入的类型,第二个参数为生成的bin文件路径,第三个为输出的info文件,后面为宽高信息
-
-## 模型推理
-
-1. 模型转换。
-
- 使用PyTorch将模型权重文件.pth转换为.onnx文件,再使用ATC工具将.onnx文件转为离线推理模型文件.om文件。
-
- 1. 获取权重文件。
-
- 到以下[链接](https://pan.baidu.com/s/12AkumLX10hLx9vh_SQwdyw)下载预训练模型(提取码:mrl5)
-
- 2. 导出onnx文件。
-
- 1. 使用**pth2onnx.py**导出onnx文件。
-
- 运行**pth2onnx.py**脚本。
-
- ```
- #将model.pt模型转为market1501.onnx模型,注意,生成onnx模型名(第二个参数)和batch size(第三个参数)根据实际大小设置.
- python3.7 ./pth2onnx.py ./model/model.pt ./model/model_mkt1501_bs1.onnx 1
- ```
- > **说明:**
- 运行成功后文件夹下生成**model_mkt1501_bs1.onnx**模型文件。
-
- 3. 使用ATC工具将ONNX模型转OM模型。
-
- 1. 配置环境变量。
-
- ```
- source /usr/local/Ascend/ascend-toolkit/set_env.sh
- ```
-
- > **说明:**
- 该脚本中环境变量仅供参考,请以实际安装环境配置环境变量。详细介绍请参见《[CANN 开发辅助工具指南 \(推理\)](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373?category=developer-documents&subcategory=auxiliary-development-tools)》。
-
- 2. 执行命令查看芯片名称($\{chip\_name\})。
-
- ```
- npu-smi info
- #该设备芯片名为Ascend310P3 (自行替换)
- 回显如下:
- +-------------------+-----------------+------------------------------------------------------+
- | NPU Name | Health | Power(W) Temp(C) Hugepages-Usage(page) |
- | Chip Device | Bus-Id | AICore(%) Memory-Usage(MB) |
- +===================+=================+======================================================+
- | 0 310P3 | OK | 15.8 42 0 / 0 |
- | 0 0 | 0000:82:00.0 | 0 1074 / 21534 |
- +===================+=================+======================================================+
- | 1 310P3 | OK | 15.4 43 0 / 0 |
- | 0 1 | 0000:89:00.0 | 0 1070 / 21534 |
- +===================+=================+======================================================+
- ```
-
- 3. 执行ATC命令。
- ```
- atc --framework=5 \
- --model=./model/model_mkt1501_bs1.onnx \
- --input_format=NCHW \
- --input_shape="image:1,3,384,128" \
- --output=mgn_mkt1501_bs1 \
- --log=debug \
- --soc_version=Ascend${chip_name}
- ```
-
- - 参数说明:
-
- --model:ONNX模型文件
-
- --framework:5代表ONNX模型
-
- --output:输出的OM模型
-
- --input_format:输入数据的格式
-
- --input_shape:输入数据的shape
-
- --log:日志级别
-
- --soc_version:处理器型号
-
- --insert_op_conf: aipp预处理算子配置文件
-
- 运行成功后在output文件夹下生成**om**模型文件。
-
-2. 开始推理验证。
-
- a. 安装ais_bench推理工具。
-
- 请访问[ais_bench推理工具](https://gitee.com/ascend/tools/tree/master/ais-bench_workload/tool/ais_bench)代码仓,根据readme文档进行工具安装。
-
-
- b. 执行推理。
-
- ```
- python3 -m ais_bench --model_type=vision --device_id=0 --batch_size=1 --om_path=mgn_mkt1501_bs1.om --input_text_path=./q_bin.info --input_width=384 --input_height=128 --output_binary=False --useDvpp=False
- ```
-
- - 参数说明:
-
- - model:需要推理om模型的路径。
- - input:模型需要的输入bin文件夹路径。
- - output:推理结果输出路径。
- - outfmt:输出数据的格式。
- - output_dirname:推理结果输出子文件夹。
- ...
-
-
- c. 精度验证。
-
-后处理统计mAP精度
-
-调用postprocess_MGN.py脚本的“evaluate_om”模式推理结果与语义分割真值进行比对,可以获得mAP精度数据。
-
-```
-python3.7 ./postprocess_MGN.py --mode evaluate_om --data_path ./data/market1501/
-```
-
-第一个参数为main函数运行模式,第二个为原始数据目录,第三个为模型所在目录。
-查看输出结果:
-
-```
-mAP: 0.9423
-```
-
-经过对bs8的om测试,本模型batch8的精度没有差别,精度数据均如上。
-
- d. 性能验证。
-
- 可使用ais_bench推理工具的纯推理模式验证不同batch_size的om模型的性能,参考命令如下:
-
- ```
- python3 -m ais_bench --model ./output/mgn_mkt1501_bs1.om --loop 1000 --batchsize ${bs}
-
- ```
-
-
-# 模型推理性能&精度
-
-调用ACL接口推理计算,性能参考下列数据。
-
-| 芯片型号 | Batch Size | 数据集 | 精度 | 性能 |
-|-------|------------|----------|-----------------------|---------|
-| 300I Pro | 8 | market1501 | mAP=0.9423 | 1519fps |
+# MGN Inference Guide
+
+- [Overview](#overview)
+- [Inference Environment](#inference-environment)
+- [Getting the Source Code](#getting-the-source-code)
+- [Preparing the Dataset](#preparing-the-dataset)
+- [Model Conversion](#model-conversion)
+- [Directory Layout](#directory-layout)
+- [Model Inference](#model-inference)
+- [Validating the Results](#validating-the-results)
+
+# Overview
+
+MGN is a feature-learning network with a multi-branch deep architecture: one branch learns a global feature representation and two branches learn local ones. The image is split evenly into horizontal stripes, and the number of parts varies across the local branches, yielding local feature representations at multiple granularities.
+
+- Reference implementation:
+
+  ```
+  url=https://github.com/GNAYUOHZ/ReID-MGN.git
+  ```
+
+## Input and Output Data
+
+- Input data
+
+  | Input | Data Type | Shape | Layout |
+  |-------|-----------|---------------------------|--------|
+  | input | FLOAT16 | batchsize x 3 x 384 x 128 | ND |
+
+
+- Output data
+
+  | Output | Data Type | Shape | Layout |
+  |--------|-----------|------------------|--------|
+  | output | FLOAT16 | batchsize x 2048 | ND |
+
+
+
+# Inference Environment
+
+- The model requires the following plugins and drivers.
+
+  **Table 1** Version matrix
+
+| Component | Version | Setup Guide |
+| ------------------------------------------------------------ |------------| ------------------------------------------------------------ |
+| Firmware & driver | 25.0.rc1.1 | [PyTorch inference environment setup](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
+| CANN | 8.1.RC1 | - |
+| Python | 3.9 | - |
+| PyTorch | >=1.8.1 | - |
+| Note: for the Atlas 300I Duo inference card, select the firmware and driver version matching your CANN version. | \ | \ |
+
+
+# Getting the Source Code
+1. Get this repository.
+
+   ```
+   git clone https://gitee.com/ascend/ModelZoo-PyTorch.git
+   cd ModelZoo-PyTorch/ACL_PyTorch/built-in/cv/MGN_for_Pytorch
+   ```
+
+2. Get the model source and apply the patch.
+
+   ```
+   git clone https://github.com/GNAYUOHZ/ReID-MGN.git ./MGN
+   cd MGN
+   git apply ../module.patch
+   cd ..
+   ```
+
+3. Install dependencies.
+
+   ```
+   pip3 install -r requirements.txt
+   ```
+
+# Preparing the Dataset
+
+1. Get the original dataset.
+
+   The model randomly splits the training set of the [Market1501 dataset](https://pan.baidu.com/s/1ntIi2Op?_at_=1624593258681) into training and validation subsets; a fixed validation split is used here so the reported accuracy can be reproduced.
+
+2. Preprocess the data.
+
+   1. Unzip the downloaded dataset `zip` archive to produce the `Market-1501-v15.09.15` folder.
+
+   2. Run the preprocessing script to generate the preprocessed bin files.
+
+      ```
+      # 1. Add the MGN source to the module search path
+      export PYTHONPATH="MGN:$PYTHONPATH"
+      # 2. Run the preprocessing
+      python3 mgn_preprocess.py --data_path=./Market-1501-v15.09.15
+      ```
+   Parameter description:
+   * `--data_path`: path to the dataset; defaults to `./Market-1501-v15.09.15`
+
+   If your settings match the defaults, you can omit the flag and simply run `python3 mgn_preprocess.py`.
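+
+   The flipped copies are plain horizontal mirrors of each image tensor: the script reverses the width axis (dim 3 in NCHW layout). A minimal NumPy sketch of that operation on a dummy batch (the actual script does the same with `torch.index_select`):
+
+   ```python
+   import numpy as np
+
+   # Dummy batch in NCHW layout: (batch, channels, height, width)
+   batch = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
+
+   # Horizontal flip = reverse the width axis
+   flipped = batch[..., ::-1]
+
+   # Flipping twice recovers the original batch
+   assert np.array_equal(flipped[..., ::-1], batch)
+   ```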
+
+# Model Conversion
+1. Download the original model weights from this [link](https://pan.baidu.com/s/12AkumLX10hLx9vh_SQwdyw) (extraction code: mrl5).
+2. Convert the original `pt` model to an `onnx` model:
+   ```
+   # 1. Add the MGN source to the module search path
+   export PYTHONPATH="MGN:$PYTHONPATH"
+   # 2. Run the conversion script
+   python3 mgn_convert.py --model_path=./model --model_weight_file=model.pt --onnx_file=model_mkt1501_bs1.onnx --batchonnx=1
+   ```
+   Parameter description:
+   * `--model_path`: path to the model weight file; defaults to `./model`
+   * `--model_weight_file`: model weight file name; defaults to `model.pt`
+   * `--onnx_file`: name of the generated onnx file; defaults to `model_mkt1501_bs1.onnx`
+   * `--batchonnx`: batch size used when exporting the onnx model; defaults to `1`
+
+   If your settings match the defaults, you can omit the flags and simply run `python3 mgn_convert.py`.
+3. Convert the `onnx` model to an `om` model.
+   1. Set up the environment variables.
+
+      ```
+      source /usr/local/Ascend/ascend-toolkit/set_env.sh
+      ```
+
+      > **Note:**
+      The environment variables in this script are for reference only; configure them according to your actual installation. For details, see the [CANN Auxiliary Development Tools Guide (Inference)](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373?category=developer-documents&subcategory=auxiliary-development-tools).
+
+   2. Query the chip name (${chip_name}).
+
+      ```
+      npu-smi info
+      # The chip on this device is Ascend310P3 (replace with your own)
+      # Example output:
+      +-------------------+-----------------+------------------------------------------------------+
+      | NPU Name          | Health          | Power(W)    Temp(C)           Hugepages-Usage(page)  |
+      | Chip Device       | Bus-Id          | AICore(%)   Memory-Usage(MB)                         |
+      +===================+=================+======================================================+
+      | 0     310P3       | OK              | 15.8        42                0    / 0               |
+      | 0     0           | 0000:82:00.0    | 0           1074 / 21534                             |
+      +===================+=================+======================================================+
+      | 1     310P3       | OK              | 15.4        43                0    / 0               |
+      | 0     1           | 0000:89:00.0    | 0           1070 / 21534                             |
+      +===================+=================+======================================================+
+      ```
+
+   3. Run the ATC command. The `1` in `--input_shape` is the batch size; set it to match the onnx model.
+      ```
+      atc --framework=5 \
+          --model=./model/model_mkt1501_bs1.onnx \
+          --input_format=NCHW \
+          --input_shape="image:1,3,384,128" \
+          --output=mgn_mkt1501_bs1 \
+          --log=debug \
+          --soc_version=Ascend${chip_name}
+      ```
+
+      - Parameter description:
+
+        --framework: framework type of the input model; 5 means ONNX
+
+        --model: ONNX model file
+
+        --output: output OM model
+
+        --input_format: input data format
+
+        --input_shape: input data shape
+
+        --log: log level
+
+        --soc_version: Ascend SoC version
+
+      After a successful run, the **mgn_mkt1501_bs1.om** model file is generated in the current folder.
+
+# Directory Layout
+
+After the preparation steps, the directory layout looks roughly like this:
+
+  ```text
+  📁 MGN_for_Pytorch/
+  ├── 📁 MGN/                        # MGN source code
+  ├── 📁 model/                      # MGN model weights
+  |   |── 📄 model.pt
+  |   └── 📄 model_mkt1501_bs1.onnx
+  ├── 📁 Market-1501-v15.09.15/      # dataset
+  |   |── 📁 bin_data
+  |   |   |── 📁 q                   # query bin files
+  |   |   └── 📁 g                   # gallery bin files
+  |   └── 📁 bin_data_flip
+  |       |── 📁 q                   # flipped query bin files
+  |       └── 📁 g                   # flipped gallery bin files
+  |── 📄 mgn_convert.py
+  |── 📄 mgn_evaluate.py
+  |── 📄 mgn_infer.sh
+  |── 📄 mgn_preprocess.py
+  |── 📄 module.patch
+  └── 📄 om_executor.py
+  ```
+
+# Model Inference
+
+1. Install the ais_bench inference tool.
+
+   Visit the [ais_bench inference tool](https://gitee.com/ascend/tools/tree/master/ais-bench_workload/tool/ais_bench) repository and install it following its readme.
+
+2. Run inference.
+   ```
+   chmod +x mgn_infer.sh
+   ./mgn_infer.sh
+   ```
+   `mgn_infer.sh` runs four `python` inference commands, one each for the `query`, `gallery`, `query_flip`, and `gallery_flip` data.
+
+   `mgn_infer.sh` also accepts custom arguments, for example:
+   ```
+   ./mgn_infer.sh -m "mgn_mkt1501_bs1.om" -b 1 -i "./Market-1501-v15.09.15" -o "./result" -f "TXT"
+   ```
+   - Parameter description:
+     - -m: path of the om model to run.
+     - -b: batch size.
+     - -i: input data path.
+     - -o: output folder for the inference results.
+     - -f: output data format.
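+
+   Each ais_bench run writes one TXT file per input bin, and mgn_evaluate.py later reads these files back with `np.loadtxt`. A quick sketch of that round trip, using a temporary file and a dummy 4-dim feature (the real files hold the exported feature vectors; the file name here is hypothetical):
+
+   ```python
+   import os
+   import tempfile
+
+   import numpy as np
+
+   feat = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)
+   path = os.path.join(tempfile.mkdtemp(), "00000_0.txt")  # hypothetical output name
+
+   np.savetxt(path, feat[None, :])             # one row of whitespace-separated values
+   loaded = np.loadtxt(path, dtype=np.float32)
+
+   assert np.allclose(loaded, feat)
+   ```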
+
+
+# Validating the Results
+
+1. Accuracy validation
+
+   Post-processing computes the mAP accuracy.
+
+   Use mgn_evaluate.py to compare the inference results against the ground-truth labels and obtain the mAP accuracy.
+
+   ```
+   # 1. Add the MGN source to the module search path
+   export PYTHONPATH="MGN:$PYTHONPATH"
+   # 2. Run the evaluation script
+   python3 mgn_evaluate.py --result=./result
+   ```
+   Parameter description:
+   * `--result`: path of the inference outputs; defaults to `./result`
+
+   If your settings match the defaults, you can omit the flag and simply run `python3 mgn_evaluate.py`.
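+
+   For each image, mgn_evaluate.py sums the feature from the original crop and the feature from the flipped crop, then L2-normalizes the sum before computing distances. A minimal NumPy sketch with dummy 4-dim features (the real features come from the om inference outputs):
+
+   ```python
+   import numpy as np
+
+   f_orig = np.array([1.0, 2.0, 2.0, 0.0], dtype=np.float32)  # feature of the original image
+   f_flip = np.array([1.0, 0.0, 0.0, 0.0], dtype=np.float32)  # feature of the flipped image
+
+   ff = f_orig + f_flip          # fuse the two views
+   ff = ff / np.linalg.norm(ff)  # L2-normalize
+
+   # The fused descriptor is unit-length
+   assert np.isclose(np.linalg.norm(ff), 1.0)
+   ```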
+
+
+2. Performance validation.
+
+   You can use the pure-inference mode of the ais_bench tool to measure the om model's performance at different batch sizes, for example:
+
+   ```
+   export bs=1
+   python3 -m ais_bench --model=mgn_mkt1501_bs1.om --loop=1000 --batchsize=${bs}
+   ```
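+
+   Throughput in fps follows from the average pure-inference latency that ais_bench reports. A sketch of the arithmetic with a hypothetical latency value (not a measured number):
+
+   ```python
+   batch_size = 8
+   avg_latency_ms = 5.266  # hypothetical mean latency per batch, for illustration only
+
+   # Each batch of `batch_size` images takes avg_latency_ms milliseconds
+   fps = batch_size * 1000 / avg_latency_ms
+   print(round(fps))  # → 1519
+   ```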
+
+## Inference Performance & Accuracy
+
+Inference is performed through the ACL interface; reference performance numbers are listed below.
+
+| Chip | Batch Size | Dataset | Accuracy | Throughput |
+|-------|------------|----------|-----------------------|---------|
+| 300I Pro | 8 | market1501 | mAP=0.9423 | 1519 fps |
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_convert.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_convert.py
new file mode 100644
index 0000000000000000000000000000000000000000..fb74a670f4b7c1351cd7cfa06bb7d271b9a5e1c5
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_convert.py
@@ -0,0 +1,49 @@
+# Copyright (c) 2025 Huawei Technologies Co., Ltd
+# [Software Name] is licensed under Mulan PSL v2.
+# You can use this software according to the terms and conditions of the Mulan PSL v2.
+# You may obtain a copy of Mulan PSL v2 at:
+# http://license.coscl.org.cn/MulanPSL2
+# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+# See the Mulan PSL v2 for more details.
+
+import os
+import torch
+
+from MGN.opt import opt
+from MGN.data import Data
+from MGN.network import MGN
+
+from om_executor import OMExcutor
+
+
+class Convertor(OMExcutor):
+ def __init__(self, data):
+ super().__init__(data)
+
+ def pth2onnx(self, pt_file_path, onnx_file_path, batch_size):
+ model = MGN()
+ model = model.to('cpu')
+ model.load_state_dict(torch.load(pt_file_path, map_location=torch.device('cpu')))
+ model.eval()
+ input_names = ["image"]
+ output_names = ["features"]
+ dynamic_axes = {'image': {0: '-1'}, 'features': {0: '-1'}}
+ dummy_input = torch.randn(batch_size, 3, 384, 128)
+ torch.onnx.export(model, dummy_input, onnx_file_path, input_names=input_names,
+ dynamic_axes=dynamic_axes, output_names=output_names,
+ opset_version=11, verbose=True)
+ print("Convert to ONNX model file SUCCESS!")
+
+
+if __name__ == '__main__':
+ data = Data()
+ mgn_convertor = Convertor(data)
+ print("start convert to onnx")
+ model_pt_file = os.path.join(opt.model_path, opt.model_weight_file)
+ model_onnx_file = os.path.join(opt.model_path, opt.onnx_file)
+ mgn_convertor.pth2onnx(model_pt_file, model_onnx_file, opt.batchonnx)
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_evaluate.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_evaluate.py
new file mode 100644
index 0000000000000000000000000000000000000000..5d037f8ee7ba71e24f02284fe6fb93346b738170
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_evaluate.py
@@ -0,0 +1,97 @@
+# Copyright (c) 2025 Huawei Technologies Co., Ltd
+# [Software Name] is licensed under Mulan PSL v2.
+# You can use this software according to the terms and conditions of the Mulan PSL v2.
+# You may obtain a copy of Mulan PSL v2 at:
+# http://license.coscl.org.cn/MulanPSL2
+# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+# See the Mulan PSL v2 for more details.
+
+import os
+import numpy as np
+import torch
+from scipy.spatial.distance import cdist
+
+from MGN.opt import opt
+from MGN.data import Data
+from MGN.utils.metrics import mean_ap, cmc, re_ranking
+
+from om_executor import OMExcutor
+
+
+def extract_feature_om(prediction_dir, prediction_flip_dir, output_idx='_0.txt'):
+
+ def get_sorted_files(path, suffix):
+ return sorted([
+ fname for fname in os.listdir(path)
+ if fname.endswith(suffix)
+ ])
+
+ # make the list of files first
+ file_names = get_sorted_files(prediction_dir, output_idx)
+ file_names_flip = get_sorted_files(prediction_flip_dir, output_idx)
+
+ if len(file_names) != len(file_names_flip):
+ raise ValueError("Mismatch in number of original and flipped feature files.")
+
+ features = []
+ for i, (fname, fname_flip) in enumerate(zip(file_names, file_names_flip)):
+ f1 = torch.from_numpy(np.loadtxt(os.path.join(prediction_dir, fname), dtype=np.float32))
+ f2 = torch.from_numpy(np.loadtxt(os.path.join(prediction_flip_dir, fname_flip), dtype=np.float32))
+
+ ff = f1 + f2
+ ff = ff.unsqueeze(0)
+ ff = torch.nn.functional.normalize(ff, p=2, dim=1)
+        features.append(ff)  # torch.cat copies data, so append here and concatenate once at the end
+
+ if i % 100 == 0:
+ print(f"Extracted {i} features...")
+
+ return torch.cat(features, dim=0)
+
+
+class Evaluator(OMExcutor):
+ def __init__(self, data):
+ super().__init__(data)
+
+ def evaluate_om(self):
+
+ def rank(dist):
+ r = cmc(dist, self.queryset.ids, self.testset.ids, self.queryset.cameras, self.testset.cameras,
+ separate_camera_set=False,
+ single_gallery_shot=False,
+ first_match_break=True)
+ m_ap = mean_ap(dist, self.queryset.ids, self.testset.ids, self.queryset.cameras, self.testset.cameras)
+ return r, m_ap
+
+ query_prediction_file_path = os.path.join(opt.result, "q_out")
+ query_prediction_file_path_flip = os.path.join(opt.result, "q_filp")
+ gallery_prediction_file_path = os.path.join(opt.result, "g_out")
+ gallery_prediction_file_path_flip = os.path.join(opt.result, "g_filp")
+ print('extract features, this may take a few minutes')
+ qf = extract_feature_om(query_prediction_file_path, query_prediction_file_path_flip).numpy()
+ gf = extract_feature_om(gallery_prediction_file_path, gallery_prediction_file_path_flip).numpy()
+
+ # re rank
+ q_g_dist = np.dot(qf, np.transpose(gf))
+ q_q_dist = np.dot(qf, np.transpose(qf))
+ g_g_dist = np.dot(gf, np.transpose(gf))
+ dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)
+ r, m_ap = rank(dist)
+ print('[With Re-Ranking] mAP: {:.4f} rank1: {:.4f} rank3: {:.4f} rank5: {:.4f} rank10: {:.4f}'
+ .format(m_ap, r[0], r[2], r[4], r[9]))
+
+ # no re rank
+ dist = cdist(qf, gf)
+ r, m_ap = rank(dist)
+ print('[Without Re-Ranking] mAP: {:.4f} rank1: {:.4f} rank3: {:.4f} rank5: {:.4f} rank10: {:.4f}'
+ .format(m_ap, r[0], r[2], r[4], r[9]))
+
+if __name__ == '__main__':
+ data = Data()
+ mgn_evaluator = Evaluator(data)
+ print("start result evaluate")
+ mgn_evaluator.evaluate_om()
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_infer.sh b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_infer.sh
new file mode 100644
index 0000000000000000000000000000000000000000..e3030ffb8741ceeaee2dc0bf6b264c2c3f071874
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_infer.sh
@@ -0,0 +1,51 @@
+#!/bin/bash
+# Copyright (c) 2025 Huawei Technologies Co., Ltd
+# [Software Name] is licensed under Mulan PSL v2.
+# You can use this software according to the terms and conditions of the Mulan PSL v2.
+# You may obtain a copy of Mulan PSL v2 at:
+# http://license.coscl.org.cn/MulanPSL2
+# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+# See the Mulan PSL v2 for more details.
+
+# Default values
+MODEL="mgn_mkt1501_bs1.om"
+BATCH_SIZE=1
+INPUT_BASE="./Market-1501-v15.09.15"
+OUTPUT_BASE="./result"
+OUTPUT_FMT="TXT"
+
+# Parse command-line options
+while getopts ":m:b:i:o:f:" opt; do
+ case $opt in
+ m) MODEL="$OPTARG" ;;
+ b) BATCH_SIZE="$OPTARG" ;;
+ i) INPUT_BASE="$OPTARG" ;;
+ o) OUTPUT_BASE="$OPTARG" ;;
+ f) OUTPUT_FMT="$OPTARG" ;;
+    \?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
+    :) echo "Option -$OPTARG requires an argument" >&2; exit 1 ;;
+ esac
+done
+
+echo "Model file: $MODEL"
+echo "Batch size: $BATCH_SIZE"
+echo "Input data path: $INPUT_BASE"
+echo "Output result path: $OUTPUT_BASE"
+echo "Output format: $OUTPUT_FMT"
+
+echo "Running inference on q original..."
+python3 -m ais_bench --model="$MODEL" --device=0 --batchsize="$BATCH_SIZE" --input="${INPUT_BASE}/bin_data/q/" --output="$OUTPUT_BASE" --output_dirname=q_out --outfmt="$OUTPUT_FMT"
+
+echo "Running inference on g original..."
+python3 -m ais_bench --model="$MODEL" --device=0 --batchsize="$BATCH_SIZE" --input="${INPUT_BASE}/bin_data/g/" --output="$OUTPUT_BASE" --output_dirname=g_out --outfmt="$OUTPUT_FMT"
+
+echo "Running inference on q flip..."
+python3 -m ais_bench --model="$MODEL" --device=0 --batchsize="$BATCH_SIZE" --input="${INPUT_BASE}/bin_data_flip/q/" --output="$OUTPUT_BASE" --output_dirname=q_filp --outfmt="$OUTPUT_FMT"
+
+echo "Running inference on g flip..."
+python3 -m ais_bench --model="$MODEL" --device=0 --batchsize="$BATCH_SIZE" --input="${INPUT_BASE}/bin_data_flip/g/" --output="$OUTPUT_BASE" --output_dirname=g_filp --outfmt="$OUTPUT_FMT"
+
+echo "All inference tasks completed."
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_preprocess.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_preprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..14c2ab067a0f34c9dbf72be0688bb3efca7c347b
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/mgn_preprocess.py
@@ -0,0 +1,58 @@
+# Copyright (c) 2025 Huawei Technologies Co., Ltd
+# [Software Name] is licensed under Mulan PSL v2.
+# You can use this software according to the terms and conditions of the Mulan PSL v2.
+# You may obtain a copy of Mulan PSL v2 at:
+# http://license.coscl.org.cn/MulanPSL2
+# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+# See the Mulan PSL v2 for more details.
+
+import os
+import torch
+from tqdm import tqdm
+
+from MGN.opt import opt
+from MGN.data import Data
+
+from om_executor import OMExcutor
+
+
+def save_batch_images(save_file_name, dataset_type, loader, need_flip=False):
+    index = 0
+    for inputs, _ in loader:
+        if need_flip:
+            # Horizontal flip: reverse the width axis (dim 3)
+            inputs = inputs.index_select(3, torch.arange(inputs.size(3) - 1, -1, -1))
+        save_path = os.path.join(opt.data_path, save_file_name, dataset_type)
+        os.makedirs(save_path, exist_ok=True)
+        for item in inputs:
+            bin_file_path = os.path.join(save_path, f"{index:05d}.bin")
+            item.numpy().tofile(bin_file_path)
+            index += 1
+
+
+class Preprocessor(OMExcutor):
+ def __init__(self, data):
+ super().__init__(data)
+
+ def data_preprocess(self):
+ file_name = 'bin_data'
+ file_name_flip = 'bin_data_flip'
+ save_batch_images(file_name, 'q', tqdm(self.query_loader))
+ save_batch_images(file_name, 'g', tqdm(self.test_loader))
+ save_batch_images(file_name_flip, 'q', tqdm(self.query_loader), need_flip=True)
+ save_batch_images(file_name_flip, 'g', tqdm(self.test_loader), need_flip=True)
+
+
+if __name__ == '__main__':
+ data = Data()
+ mgn_preprocessor = Preprocessor(data)
+ print("start data preprocess")
+ mgn_preprocessor.data_preprocess()
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/module.patch b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/module.patch
index 568b24c0d71adca761d9fe134750ab437709de26..df089ee4aa71668218dfea8a8a708022c6ae9ba1 100644
--- a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/module.patch
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/module.patch
@@ -1,8 +1,39 @@
-6c6
-< # from opt import opt
----
-> from opt import opt
-12c12
-< def __init__(self, opt):
----
-> def __init__(self):
+diff --git a/opt.py b/opt.py
+index e9f70b5..16f4b11 100644
+--- a/opt.py
++++ b/opt.py
+@@ -18,9 +18,17 @@ parser.add_argument('--freeze',
+ default=False,
+ help='freeze backbone or not ')
+
+-parser.add_argument('--weight',
+- default='weights/model.pt',
+- help='load weights ')
++parser.add_argument('--model_path',
++ default='./model',
++ help='model weights path')
++
++parser.add_argument('--model_weight_file',
++ default='model.pt',
++ help='model weights file name')
++
++parser.add_argument("--onnx_file",
++ default="model_mkt1501_bs1.onnx",
++ help='onnx file name')
+
+ parser.add_argument('--epoch',
+ default=500,
+@@ -46,4 +54,13 @@ parser.add_argument("--batchtest",
+ default=8,
+ help='the batch size for test')
+
++parser.add_argument("--batchonnx",
++ type=int,
++ default=1,
++ help='the batch size for convert onnx')
++
++parser.add_argument("--result",
++ default="./result",
++ help='inference result path')
++
+ opt = parser.parse_args()
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/om_executor.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/om_executor.py
new file mode 100644
index 0000000000000000000000000000000000000000..973e73b79b02d1364fdcf35d4de1aecc217f2810
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/om_executor.py
@@ -0,0 +1,32 @@
+# Copyright (c) 2025 Huawei Technologies Co., Ltd
+# [Software Name] is licensed under Mulan PSL v2.
+# You can use this software according to the terms and conditions of the Mulan PSL v2.
+# You may obtain a copy of Mulan PSL v2 at:
+# http://license.coscl.org.cn/MulanPSL2
+# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+# See the Mulan PSL v2 for more details.
+
+
+class OMExcutor:
+ def __init__(self, data):
+ self.train_loader = data.train_loader
+ self.test_loader = data.test_loader
+ self.query_loader = data.query_loader
+ self.testset = data.testset
+ self.queryset = data.queryset
+
+ def data_preprocess(self):
+ pass
+
+ def pth2onnx(self, pt_file_path, onnx_file_path, batch_size):
+ pass
+
+ def evaluate_om(self):
+ pass
+
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/postprocess_MGN.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/postprocess_MGN.py
deleted file mode 100644
index 018acd5f5442c1b851ec493df9195d56ad6c10be..0000000000000000000000000000000000000000
--- a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/postprocess_MGN.py
+++ /dev/null
@@ -1,163 +0,0 @@
-# Copyright 2024 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.jj
-
-import os
-import sys
-import numpy as np
-import torch
-import argparse
-from scipy.spatial.distance import cdist
-from tqdm import tqdm
-sys.path.append('./MGN')
-from MGN.data import Data
-from MGN.utils.metrics import mean_ap, cmc, re_ranking
-
-
-def save_batch_images(save_file_name, dataset_type, loader, need_flip=False):
- index = 0
- for (inputs, _) in loader:
- if need_flip is True:
- inputs = inputs.index_select(3, torch.arange(inputs.size(3) - 1, -1, -1))
- for i in enumerate(len(inputs)):
- img_name = dataset_type + '/' + "{:0>5d}".format(index)
- save_path = opt.data_path
- if(opt.data_path[-1] != '/'):
- save_path += '/'
- save_path += save_file_name
- inputs[i].numpy().tofile(save_path + '/' + img_name + '.bin')
- index += 1
-
-
-def extract_feature_om(prediction_file_path, prediction_file_path_flip):
- # make the list of files first
- file_names, file_names_flip = [], []
- for file_name in os.listdir(prediction_file_path):
- suffix = file_name.split('_')[-1]
- if suffix == '1.txt':
- file_names.append(file_name)
- file_names.sort()
- print("first 5 txt files: \n",file_names[:10])
- for file_name in os.listdir(prediction_file_path_flip):
- suffix = file_name.split('_')[-1]
- if suffix == '1.txt':
- file_names_flip.append(file_name)
- file_names_flip.sort()
- if len(file_names) != len(file_names_flip):
- print('num of filp features doesnt match that of orig')
- features = torch.FloatTensor()
- for i in enumerate(len(file_names)):
- fea_path = os.path.join(prediction_file_path, file_names[i])
- fea_path_f = os.path.join(prediction_file_path_flip, file_names_flip[i])
- f1 = torch.from_numpy(np.loadtxt(fea_path, dtype=np.float32))
- f2 = torch.from_numpy(np.loadtxt(fea_path_f, dtype=np.float32))
- ff = f1 + f2
- ff = torch.unsqueeze(ff, 0)
- fnorm = torch.norm(ff, p=2, dim=1, keepdim=True)
- ff = ff.div(fnorm.expand_as(ff))
- features = torch.cat((features, ff), 0)
- if i < 8:
- print(i, "th f1: \n", f1.shape, f1)
- print(i, "th f2: \n", f2.shape, f2)
- print(i, "th ff: \n", ff.shape, ff)
- if i % 100 == 0:
- print("the " + str(i) + "th image file is extracted.")
- return features
-
-
-class OMExcutor():
- def __init__(self, data):
- self.train_loader = data.train_loader
- self.test_loader = data.test_loader
- self.query_loader = data.query_loader
- self.testset = data.testset
- self.queryset = data.queryset
-
- def evaluate_om(self):
- query_prediction_file_path, query_prediction_file_path_flip = './result/q_bin/dumpOutput_device0/', \
- './result/q_bin_flip/dumpOutput_device0/'
- gallery_prediction_file_path, gallery_prediction_file_path_flip = './result/g_bin/dumpOutput_device0/', \
- './result/g_bin_flip/dumpOutput_device0/'
- print('extract features, this may take a few minutes')
- qf = extract_feature_om(query_prediction_file_path, query_prediction_file_path_flip).numpy()
- gf = extract_feature_om(gallery_prediction_file_path, gallery_prediction_file_path_flip).numpy()
- print("shape of features, qf: " + str(qf.shape) + "gf: " + str(gf.shape))
- print("arr qf: \n", qf[:10, :10])
- print("arr gf: \n", gf[:10, :10])
-
- def rank(dist):
- r = cmc(dist, self.queryset.ids, self.testset.ids, self.queryset.cameras, self.testset.cameras,
- separate_camera_set=False,
- single_gallery_shot=False,
- first_match_break=True)
- m_ap = mean_ap(dist, self.queryset.ids, self.testset.ids, self.queryset.cameras, self.testset.cameras)
- return r, m_ap
- ######################### re rank##########################
- q_g_dist = np.dot(qf, np.transpose(gf))
- q_q_dist = np.dot(qf, np.transpose(qf))
- g_g_dist = np.dot(gf, np.transpose(gf))
- dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)
- r, m_ap = rank(dist)
- print('[With Re-Ranking] mAP: {:.4f} rank1: {:.4f} rank3: {:.4f} rank5: {:.4f} rank10: {:.4f}'
- .format(m_ap, r[0], r[2], r[4], r[9]))
- #########################no re rank##########################
- dist = cdist(qf, gf)
- r, m_ap = rank(dist)
- print('[Without Re-Ranking] mAP: {:.4f} rank1: {:.4f} rank3: {:.4f} rank5: {:.4f} rank10: {:.4f}'
- .format(m_ap, r[0], r[2], r[4], r[9]))
-
- def save_data(self):
- save_file_name = 'bin_data'
- save_file_name_flip = 'bin_data_flip'
- print('saving images, this may take a few minutes')
- save_batch_images(save_file_name, 'q', tqdm(self.query_loader))
- save_batch_images(save_file_name, 'g', tqdm(self.test_loader))
- save_batch_images(save_file_name_flip, 'q', tqdm(self.query_loader), need_flip=True)
- save_batch_images(save_file_name_flip, 'g', tqdm(self.test_loader), need_flip=True)
-
-
-def parse_func():
- parser = argparse.ArgumentParser()
- parser.add_argument('--data_path',
- default="Market-1501-v15.09.15",
- help='path of Market-1501-v15.09.15')
- parser.add_argument('--mode',
- default='train', choices=['train', 'evaluate', 'evaluate_om', 'save_bin', 'vis'],
- help='train or evaluate ')
- parser.add_argument('--query_image',
- default='0001_c1s1_001051_00.jpg',
- help='path to the image you want to query')
- parser.add_argument("--batchid",
- default=4,
- help='the batch for id')
- parser.add_argument("--batchimage",
- default=4,
- help='the batch of per id')
- parser.add_argument("--batchtest",
- default=8,
- help='the batch size for test')
- return parser.parse_args()
-
-
-if __name__ == '__main__':
- opt = parse_func()
- data = Data(opt)
- main = OMExcutor(data)
- if opt.mode == 'evaluate_om':
- print('start evaluate om')
- main.evaluate_om()
- elif opt.mode == 'save_bin':
- print('start evaluate')
- main.save_data()
- else:
- raise NotImplementedError()
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/preprocess_MGN.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/preprocess_MGN.py
deleted file mode 100644
index 0d1db5b292d1f48ffc1ed38ae74933388e87c9a8..0000000000000000000000000000000000000000
--- a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/preprocess_MGN.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-import cv2
-from glob import glob
-
-
-def get_bin_info(file_path, info_name, width, height):
- bin_images = glob(os.path.join(file_path, '*.bin'))
- with open(info_name, 'w') as file:
- for index, img in enumerate(bin_images):
- content = ' '.join([str(index), img, width, height])
- file.write(content)
- file.write('\n')
-
-
-def get_jpg_info(file_path, info_name):
-    extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']
-    image_names = []
-    for extension in extensions:
-        image_names.extend(glob(os.path.join(file_path, '*.' + extension)))
-    with open(info_name, 'w') as file:
-        # use one continuous index across all extensions, matching get_bin_info
-        for index, img in enumerate(image_names):
-            img_cv = cv2.imread(img)
-            height, width = img_cv.shape[:2]
-            content = ' '.join([str(index), img, str(width), str(height)])
-            file.write(content)
-            file.write('\n')
-
-
-if __name__ == '__main__':
- file_type = sys.argv[1]
- file_path = sys.argv[2]
- info_name = sys.argv[3]
-    if file_type == 'bin':
-        # check the argument count before indexing into sys.argv
-        if len(sys.argv) != 6:
-            print("The number of input parameters must be equal to 5")
-            sys.exit(1)
-        width = sys.argv[4]
-        height = sys.argv[5]
-        get_bin_info(file_path, info_name, width, height)
-    elif file_type == 'jpg':
-        if len(sys.argv) != 4:
-            print("The number of input parameters must be equal to 3")
-            sys.exit(1)
-        get_jpg_info(file_path, info_name)
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/pth2onnx.py b/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/pth2onnx.py
deleted file mode 100644
index 9cf592a866cee79048bc34c03b252b518f55596b..0000000000000000000000000000000000000000
--- a/ACL_PyTorch/built-in/cv/MGN_for_Pytorch/pth2onnx.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-import torch
-sys.path.append("./MGN")
-from MGN.network import MGN
-os.environ['CUDA_VISIBLE_DEVICES'] = '0'
-
-
-def pth2onnx(input_file, output_file, batch_size):
- model = MGN()
- model = model.to('cpu')
- model.load_state_dict(torch.load(input_file, map_location=torch.device('cpu')))
- model.eval()
- input_names = ["image"]
- output_names = ["features"]
- dynamic_axes = {'image': {0: '-1'}, 'features': {0: '-1'}}
- dummy_input = torch.randn(batch_size, 3, 384, 128)
-    torch.onnx.export(model, dummy_input, output_file,
-                      input_names=input_names, output_names=output_names,
-                      dynamic_axes=dynamic_axes, opset_version=11, verbose=True)
-    print("Convert to ONNX model file SUCCESS!")
-
-
-if __name__ == '__main__':
- pth2onnx(sys.argv[1], sys.argv[2], int(sys.argv[3]))
\ No newline at end of file