diff --git a/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README.md b/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README.md
index 694ccc3ad106df32aab5bea94f675c991eb9af95..e5cc170ec23b6e01f63fa60d3654d2eb89babf76 100644
--- a/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README.md
+++ b/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README.md
@@ -1,90 +1,84 @@
+中文|[English](README_EN.md)
+# Resnet50v1.5 TensorFlow离线推理
-# Resnet50v1.5 Inference for Tensorflow
+此链接提供Resnet50v1.5 TensorFlow模型在NPU上离线推理的脚本和方法
-This repository provides a script and recipe to Inference of the Resnet50v1.5 model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。
 | Conditions | Need |
 | --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |
-## Quick Start Guide
+## 快速指南
-### 1. Clone the respository
+### 1. 拷贝代码
 ```shell
 git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
 cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL
 ```
-### 2. Download and preprocess the dataset
+### 2. 下载数据集和预处理
-1. Download the ImageNet2012 Validation dataset by yourself. You can get the validation pictures(50000 JPEGS and a ILSVRC2012val-label-index.txt)
+1. 请自行下载ImageNet2012验证数据集。你可以获得验证图片(50000 JPEGS and a ILSVRC2012val-label-index.txt)
 2. Put JPEGS to **'scripts/ILSVRC2012val'** and label text to **'scripts/'**
-3. Images Preprocess:
+3. 图像预处理:
 ```
 cd scripts
 mkdir input_bins
 python3 resnet50v15_preprocessing.py ./ILSVRC2012val/ ./input_bins/
 ```
-The jpegs pictures will be preprocessed to bin fils.
+jpeg图片将被预处理为bin文件。
-### 3. Offline Inference
+### 3. 离线推理
-**Convert pb to om.**
+**离线模型转换**
-- configure the env
+- 环境变量设置
-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量
-- convert pb to om
+- Pb模型转换为om模型
-  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/cv/Resnet50v1.5_for_ACL/resnet50v15_tf.pb)
+  [pb模型下载链接](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/cv/Resnet50v1.5_for_ACL/resnet50v15_tf.pb)
   ```
   atc --model=resnet50v15_tf.pb --framework=3 --output=resnet50v15_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="input_tensor:1,224,224,3" --insert_op_conf=resnet50v15_aipp.cfg --enable_small_channel=1 --log=info
   ```
-- Build the program
+- 编译程序
   ```
   bash build.sh
   ```
-- Run the program:
+- 开始运行:
   ```
   cd scripts
   bash benchmark_tf.sh
   ```
-## Performance
+## 推理结果
-### Result
+### 结果
-Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。
-#### Inference accuracy results
+#### 推理精度结果
 | model | **data** | Top1/Top5 |
 | :---------------: | :-------: | :-------------: |
 | offline Inference | 50000 images | 76.5 %/ 93.1% |
-## Reference
+## 参考
 [1] https://github.com/IntelAI/models
diff --git a/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba5015856960285981fa245e0fcdc2724d7191a7
--- /dev/null
+++ b/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL/README_EN.md
@@ -0,0 +1,84 @@
+English|[中文](README.md)
+
+# Resnet50v1.5 Inference for Tensorflow
+
+This repository provides a script and recipe to run inference of the Resnet50v1.5 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the run may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/Resnet50v1.5_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Download the ImageNet2012 Validation dataset by yourself. You can get the validation pictures (50000 JPEGs and an ILSVRC2012val-label-index.txt)
+
+2. Put JPEGS to **'scripts/ILSVRC2012val'** and label text to **'scripts/'**
+
+3. Images Preprocess:
+```
+cd scripts
+mkdir input_bins
+python3 resnet50v15_preprocessing.py ./ILSVRC2012val/ ./input_bins/
+```
+The jpeg pictures will be preprocessed to bin files.
+
+### 3. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+- convert pb to om
+
+  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/cv/Resnet50v1.5_for_ACL/resnet50v15_tf.pb)
+
+  ```
+  atc --model=resnet50v15_tf.pb --framework=3 --output=resnet50v15_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="input_tensor:1,224,224,3" --insert_op_conf=resnet50v15_aipp.cfg --enable_small_channel=1 --log=info
+  ```
+
+- Build the program
+
+  ```
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd scripts
+  bash benchmark_tf.sh
+  ```
+
+## Performance
+
+### Result
+
+Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+| model | **data** | Top1/Top5 |
+| :---------------: | :-------: | :-------------: |
+| offline Inference | 50000 images | 76.5 %/ 93.1% |
+
+## Reference
+
+[1] https://github.com/IntelAI/models
diff --git a/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README.md b/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README.md
index f7d191f51b5dacb69876896cff016d0293c8afb1..78e184231791911a4c889ef889fa6f54ae54a074 100644
--- a/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README.md
+++ b/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README.md
@@ -1,40 +1,40 @@
+中文|[English](README_EN.md)
+# Yolov3 TensorFlow离线推理
-# Yolov3 Inference for Tensorflow
+此链接提供Yolov3 TensorFlow模型在NPU上离线推理的脚本和方法
-This repository provides a script and recipe to Inference the Yolov3 model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。
 | Conditions | Need |
 | --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |
-## Quick Start Guide
+## 快速指南
-### 1. Clone the respository
+### 1. 拷贝代码
 ```shell
 git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
 cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL
 ```
-### 2. Requirements
+### 2. 必要条件
 opencv-python==4.2.0.34
-### 3. Download and preprocess the dataset
+### 3. 下载数据集和预处理
-1. dataset
-   To compare with official implement, for example, we use [get_coco_dataset.sh](https://github.com/pjreddie/darknet/blob/master/scripts/get_coco_dataset.sh) to prepare our dataset.
+1. 数据集
+   为了与官方实现进行对比,我们使用 [get_coco_dataset.sh](https://github.com/pjreddie/darknet/blob/master/scripts/get_coco_dataset.sh) 准备数据集。
-2. annotation file
+2. 注释文件
   cd scripts
@@ -50,7 +50,7 @@ opencv-python==4.2.0.34
   - `label_index x_min y_min x_max y_max`. (The origin of coordinates is at the left top corner, left top => (xmin, ymin), right bottom => (xmax, ymax).)
   - `image_index` is the line index which starts from zero. `label_index` is in range [0, class_num - 1].
-   For example:
+   例如:
   ```
   0 xxx/xxx/a.jpg 1920 1080 0 453 369 473 391 1 588 245 608 268
   1 xxx/xxx/b.jpg 1920 1080 1 466 403 485 422 2 793 300 809 320
   ...
   ```
-### 3. Offline Inference
+### 4. 离线推理
-**Convert pb to om.**
+**离线模型转换**
-- Configure the env according to your installation path
+- 环境变量设置
-  ```
-  #Please modify the environment settings as needed
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量
-- convert pb to om
+- Pb模型转换为om模型
-  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov3_tf.pb)
+  [pb模型下载链接](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov3_tf.pb)
   For Ascend310:
   ```
@@ -87,7 +80,7 @@ opencv-python==4.2.0.34
   atc --model=yolov3_tf.pb --framework=3 --output=yolov3_tf_aipp --output_type=FP32 --soc_version=Ascend310P3 --input_shape="input:1,416,416,3" --log=info --insert_op_conf=yolov3_tf_aipp.cfg
   ```
-- Build the program
+- 编译程序
   For Ascend310:
   ```
@@ -100,7 +93,7 @@ opencv-python==4.2.0.34
   bash build.sh
   ```
-- Run the program:
+- 开始运行:
   ```
   cd scripts
@@ -109,18 +102,18 @@ opencv-python==4.2.0.34
-## Performance
+## 推理结果
-### Result
+### 结果
-Our result were obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。
-#### Inference accuracy results
+#### 推理精度结果
 IoU=0.5
 | model | Npu_nums | **mAP** |
 | :----: | :------: | :-----: |
 | Yolov3 | 1 | 55.3% |
-## Reference
+## 参考
 [1] https://gitee.com/ascend/ModelZoo-TensorFlow/tree/master/TensorFlow/built-in/cv/detection/YoloV3_ID0076_for_TensorFlow
diff --git a/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..efdce86789ab839948fdba5f5c45caefba9e0260
--- /dev/null
+++ b/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL/README_EN.md
@@ -0,0 +1,120 @@
+English|[中文](README.md)
+
+# Yolov3 Inference for Tensorflow
+
+This repository provides a script and recipe to run inference of the Yolov3 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the run may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv3_for_ACL
+```
+
+### 2. Requirements
+
+opencv-python==4.2.0.34
+
+
+### 3. Download and preprocess the dataset
+
+1. dataset
+   To compare with the official implementation, we use, for example, [get_coco_dataset.sh](https://github.com/pjreddie/darknet/blob/master/scripts/get_coco_dataset.sh) to prepare our dataset.
+
+2. annotation file
+
+   cd scripts
+
+   Use the script to generate the `coco2014_minival.txt` file. Modify the paths in `coco_minival_anns.py` and `5k.txt`, then execute:
+
+   ```
+   python3 coco_minival_anns.py
+   ```
+
+   One line for one image, in the format like `image_index image_absolute_path img_width img_height box_1 box_2 ... box_n`.
+   Box_x format:
+
+   - `label_index x_min y_min x_max y_max`. (The origin of coordinates is at the left top corner, left top => (xmin, ymin), right bottom => (xmax, ymax).)
+   - `image_index` is the line index which starts from zero. `label_index` is in range [0, class_num - 1].
+
+   For example:
+
+   ```
+   0 xxx/xxx/a.jpg 1920 1080 0 453 369 473 391 1 588 245 608 268
+   1 xxx/xxx/b.jpg 1920 1080 1 466 403 485 422 2 793 300 809 320
+   ...
+   ```
+
+
+### 4. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+- convert pb to om
+
+  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov3_tf.pb)
+
+  For Ascend310:
+  ```
+  atc --model=yolov3_tf.pb --framework=3 --output=yolov3_tf_aipp --output_type=FP32 --soc_version=Ascend310 --input_shape="input:1,416,416,3" --log=info --insert_op_conf=yolov3_tf_aipp.cfg
+  ```
+  For Ascend310P3:
+  ```
+  atc --model=yolov3_tf.pb --framework=3 --output=yolov3_tf_aipp --output_type=FP32 --soc_version=Ascend310P3 --input_shape="input:1,416,416,3" --log=info --insert_op_conf=yolov3_tf_aipp.cfg
+  ```
+
+- Build the program
+
+  For Ascend310:
+  ```
+  unset ASCEND310P3_DVPP
+  bash build.sh
+  ```
+  For Ascend310P3:
+  ```
+  export ASCEND310P3_DVPP=1
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd scripts
+  bash benchmark_tf.sh --batchSize=1 --modelType=yolov3 --imgType=raw --precision=fp16 --outputType=fp32 --useDvpp=1 --deviceId=0 --modelPath=yolov3_tf_aipp.om --trueValuePath=instance_val2014.json --imgInfoFile=coco2014_minival.txt --classNamePath=../../coco.names
+  ```
+
+
+
+## Performance
+
+### Result
+
+Our results were obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+IoU=0.5
+| model | Npu_nums | **mAP** |
+| :----: | :------: | :-----: |
+| Yolov3 | 1 | 55.3% |
+
+## Reference
+[1] https://gitee.com/ascend/ModelZoo-TensorFlow/tree/master/TensorFlow/built-in/cv/detection/YoloV3_ID0076_for_TensorFlow
diff --git a/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README.md b/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README.md
index 70a81cab78df430bb084e04b2c66b08ca04d2300..17bcc4b9d1a7f2c0857e7759d841805179a38462 100644
--- a/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README.md
+++ b/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README.md
@@ -1,36 +1,36 @@
+中文|[English](README_EN.md)
+# YOLOv4 TensorFlow离线推理
-# YOLOv4 Inference for Tensorflow
+此链接提供YOLOv4 TensorFlow模型在NPU上离线推理的脚本和方法
-This repository provides a script and recipe to Inference of the YOLOv4 model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。
 | Conditions | Need |
 | --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |
-## Quick Start Guide
+## 快速指南
-### 1. Clone the respository
+### 1. 拷贝代码
 ```shell
 git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
 cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL
 ```
-### 2. Download and preprocess the dataset
+### 2. 下载数据集和预处理
-1. Download the COCO-2017 validation dataset by yourself.
+1. 自行下载COCO-2017验证数据集。
-2. Put pictures to **'scripts/val2017'**
+2. 将图片放到 **'scripts/val2017'**
-3. Images Preprocess:
+3. 图像预处理:
 ```
 cd scripts
 mkdir input_bins
@@ -43,55 +43,50 @@ python3 preprocess.py ./val2017/ ./input_bins/
 python3 load_coco_json.py
 ```
-### 3. Offline Inference
+### 3. 离线推理
-**Convert pb to om.**
+**离线模型转换**
-- configure the env
+- 环境变量设置
-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量
+
+- Pb模型转换为om模型
-- convert pb to om
-  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov4_tf.pb)
+  [pb模型下载链接](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov4_tf.pb)
   ```
   atc --model=yolov4_tf.pb --framework=3 --output=yolov4_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="x:1,416,416,3" --log=info
   ```
-- Build the program
+- 编译程序
   ```
   bash build.sh
   ```
-- Run the program:
+- 开始运行:
   ```
   cd scripts
   bash benchmark_tf.sh
   ```
-## Performance
+## 推理结果
-### Result
+### 结果
-Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。
-#### Inference accuracy results
+#### 推理精度结果
 | model | **data** | mAP |
 | :---------------: | :-------: | :-------------: |
 | offline Inference | 5000 images | 60.7% |
-## Reference
+## 参考
 [1] https://github.com/hunglc007/tensorflow-yolov4-tflite
 [2] https://github.com/rafaelpadilla/Object-Detection-Metrics
diff --git a/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f8cc59ac0bd87321e5ca20cb072c15dc4287217
--- /dev/null
+++ b/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL/README_EN.md
@@ -0,0 +1,91 @@
+English|[中文](README.md)
+
+# YOLOv4 Inference for Tensorflow
+
+This repository provides a script and recipe to run inference of the YOLOv4 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the run may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv4_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Download the COCO-2017 validation dataset by yourself.
+
+2. Put pictures to **'scripts/val2017'**
+
+3. Images Preprocess:
+```
+cd scripts
+mkdir input_bins
+python3 preprocess.py ./val2017/ ./input_bins/
+```
+ The pictures will be preprocessed to bin files.
+
+4. Split ground-truth labels from **instances_val2017.json**
+```
+python3 load_coco_json.py
+```
+
+### 3. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+- convert pb to om
+
+  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov4_tf.pb)
+
+  ```
+  atc --model=yolov4_tf.pb --framework=3 --output=yolov4_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="x:1,416,416,3" --log=info
+  ```
+
+- Build the program
+
+  ```
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd scripts
+  bash benchmark_tf.sh
+  ```
+
+## Performance
+
+### Result
+
+Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+| model | **data** | mAP |
+| :---------------: | :-------: | :-------------: |
+| offline Inference | 5000 images | 60.7% |
+
+
+## Reference
+[1] https://github.com/hunglc007/tensorflow-yolov4-tflite
+
+[2] https://github.com/rafaelpadilla/Object-Detection-Metrics
diff --git a/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README.md b/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README.md
index d7d5bed61ad804bbc57ec0db770bc15c2b70514c..bff0e02eecd4b8bcf5cbcc0bbe00e6d955398cc4 100644
--- a/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README.md
+++ b/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README.md
@@ -1,33 +1,33 @@
+中文|[English](README_EN.md)
+# Yolov5 TensorFlow离线推理
-# Yolov5 Inference for Tensorflow
+此链接提供Yolov5 TensorFlow模型在NPU上离线推理的脚本和方法
-This repository provides a script and recipe to Inference of the Yolov5 model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。
 | Conditions | Need |
 | --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |
-## Quick Start Guide
+## 快速指南
-### 1. Clone the respository
+### 1. 拷贝代码
 ```shell
 git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
 cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL
 ```
-### 2. Download and preprocess the dataset
+### 2. 下载数据集和预处理
-1. Refer to this [url](https://github.com/hunglc007/tensorflow-yolov4-tflite/README.md) to download and preprocess the dataset
-The operation is as follows:
+1. 参考此[url](https://github.com/hunglc007/tensorflow-yolov4-tflite/README.md)下载并预处理数据集
+操作如下:
 ```
 # run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset
 # preprocess coco dataset
 cd data
 mkdir dataset
 cd ..
 cd scripts
 python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
 python coco_annotation.py --coco_path ./coco
 ```
-There will generate coco2017 test data set under *data/dataset/*.
+coco2017测试数据集将生成在 *data/dataset/* 目录下。
-### 3. Offline Inference
+### 3. 离线推理
-**Convert pb to om.**
+**离线模型转换**
-- configure the env
+- 环境变量设置
-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量
-- convert pb to om
+- Pb模型转换为om模型
   ```
   atc --model=yolov5_tf2_gpu.pb --framework=3 --output=yolov5_tf2_gpu --soc_version=Ascend310 --input_shape="Input:1,640,640,3" --out_nodes="Identity:0;Identity_1:0;Identity_2:0;Identity_3:0;Identity_4:0;Identity_5:0" --log=info
   ```
-- Build the program
+- 编译程序
   ```
   bash build.sh
   ```
-- Run the program:
+- 开始运行:
   ```
   cd offline_inference
   bash benchmark_tf.sh
   ```
-- Run the post process:
+- 运行后处理:
   ```
   cd ..
   python3 offline_inference/postprocess.py
   ```
-## Performance
+## 推理结果
-### Result
+### 结果
-Our result were obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。
-#### Inference accuracy results
+#### 推理精度结果
 | model | **data** | AP/AR |
 | :---------------: | :-------: | :-----------: |
 | offline Inference | 4952 images | 0.221/0.214 |
-## Reference
+## 参考
 [1] https://github.com/hunglc007/tensorflow-yolov4-tflite
 [2] https://github.com/ultralytics/yolov5
diff --git a/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README_EN.md b/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..7944991204b23c1d1d87f5c4977e4d9456fa8706
--- /dev/null
+++ b/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL/README_EN.md
@@ -0,0 +1,95 @@
+English|[中文](README.md)
+
+# Yolov5 Inference for Tensorflow
+
+This repository provides a script and recipe to run inference of the Yolov5 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the run may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/cv/Yolov5_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Refer to this [url](https://github.com/hunglc007/tensorflow-yolov4-tflite/README.md) to download and preprocess the dataset
+The operation is as follows:
+```
+# run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset
+# preprocess coco dataset
+cd data
+mkdir dataset
+cd ..
+cd scripts
+python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
+python coco_annotation.py --coco_path ./coco
+```
+The coco2017 test data set will be generated under *data/dataset/*.
+
+### 3. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+- convert pb to om
+
+  ```
+  atc --model=yolov5_tf2_gpu.pb --framework=3 --output=yolov5_tf2_gpu --soc_version=Ascend310 --input_shape="Input:1,640,640,3" --out_nodes="Identity:0;Identity_1:0;Identity_2:0;Identity_3:0;Identity_4:0;Identity_5:0" --log=info
  ```
+
+- Build the program
+
+  ```
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd offline_inference
+  bash benchmark_tf.sh
+  ```
+
+- Run the post process:
+
+  ```
+  cd ..
+  python3 offline_inference/postprocess.py
+  ```
+
+## Performance
+
+### Result
+
+Our results were obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+#### Inference accuracy results
+
+| model | **data** | AP/AR |
+| :---------------: | :-------: | :-----------: |
+| offline Inference | 4952 images | 0.221/0.214 |
+
+
+## Reference
+[1] https://github.com/hunglc007/tensorflow-yolov4-tflite
+
+[2] https://github.com/ultralytics/yolov5
+
+[3] https://github.com/khoadinh44/YOLOv5_customized_data
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/cfg.py b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/cfg.py
index 862f8a95766c33d41a709069255d4e55e401f06f..cc68bb6736a80dea809780ae58229da32436ae9e 100644
--- a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/cfg.py
+++ b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/cfg.py
@@ -40,6 +40,7 @@ def make_config(FLAGS):
     custom_op.name = "NpuOptimizer"
     custom_op.parameter_map["use_off_line"].b = True
     custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
+    custom_op.parameter_map["hcom_parallel"].b = True
     config.graph_options.rewrite_options.remapping = RewriterConfig.OFF
 
     ## Auto Tune
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/configs/.keep b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/configs/.keep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/configs/rank_table_8p.json b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/configs/rank_table_8p.json
new file mode 100644
index 0000000000000000000000000000000000000000..cd9041f3efa3eb1a9e1959ac758b60e2313778a0
--- /dev/null
+++ b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/configs/rank_table_8p.json
@@ -0,0 +1,52 @@
+{
+    "server_count":"1",
+    "server_list":[
+        {
+            "server_id":"10.147.179.27",
+            "device":[
+                {
+                    "device_id":"0",
+                    "device_ip":"192.168.100.100",
+                    "rank_id":"0"
+                },
+                {
+                    "device_id":"1",
+                    "device_ip":"192.168.101.100",
+                    "rank_id":"1"
+                },
+                {
+                    "device_id":"2",
+                    "device_ip":"192.168.102.100",
+                    "rank_id":"2"
+                },
+                {
+                    "device_id":"3",
+                    "device_ip":"192.168.103.100",
+                    "rank_id":"3"
+                },
+                {
+                    "device_id":"4",
+                    "device_ip":"192.168.100.101",
+                    "rank_id":"4"
+                },
+                {
+                    "device_id":"5",
+                    "device_ip":"192.168.101.101",
+                    "rank_id":"5"
+                },
+                {
+                    "device_id":"6",
+                    "device_ip":"192.168.102.101",
+                    "rank_id":"6"
+                },
+                {
+                    "device_id":"7",
+                    "device_ip":"192.168.103.101",
+                    "rank_id":"7"
+                }
+            ]
+        }
+    ],
+    "status":"completed",
+    "version":"1.0"
+}
\ No newline at end of file
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/model.py b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/model.py
index d6d1d9079540ccb59d6a2f73ab0a8a2117241027..4e813d77f1869647ded1b3eb44d28fe69be3ffe7 100644
--- a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/model.py
+++ b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/model.py
@@ -127,9 +127,17 @@ class CGAN(object):
         self.g_vars = [var for var in t_vars if 'fusion_model' in var.name]
         print(self.g_vars)
 
+        RANK_SIZE = int(os.getenv('RANK_SIZE'))
+
         with tf.name_scope('train_step'):
-            self.train_fusion_op = tf.train.AdamOptimizer(config.learning_rate).minimize(self.g_loss_total,var_list=self.g_vars)
-            self.train_discriminator_op=tf.train.AdamOptimizer(config.learning_rate).minimize(self.d_loss,var_list=self.d_vars)
+            if int(RANK_SIZE) > 1:
+                self.train_fusion_op = tf.train.AdamOptimizer(config.learning_rate)  # .minimize(self.g_loss_total,var_list=self.g_vars)
+                self.train_fusion_op = npu_distributed_optimizer_wrapper(self.train_fusion_op).minimize(self.g_loss_total,var_list=self.g_vars)
+                self.train_discriminator_op=tf.train.AdamOptimizer(config.learning_rate)  # .minimize(self.d_loss,var_list=self.d_vars)
+                self.train_discriminator_op=npu_distributed_optimizer_wrapper(self.train_discriminator_op).minimize(self.d_loss,var_list=self.d_vars)
+            else:
+                self.train_fusion_op = tf.train.AdamOptimizer(config.learning_rate).minimize(self.g_loss_total,var_list=self.g_vars)
+                self.train_discriminator_op = tf.train.AdamOptimizer(config.learning_rate).minimize(self.d_loss,var_list=self.d_vars)
         #将所有统计的量合起来
         self.summary_op = tf.summary.merge_all()
         #生成日志文件
@@ -149,19 +157,30 @@ class CGAN(object):
         print("Training...")
         perf_list=[]
         fps_list=[]
+        RANK_SIZE = int(os.getenv('RANK_SIZE'))
+        rank_id = int(os.getenv('DEVICE_INDEX'))
+        if int(RANK_SIZE) > 1:
+            rank_id = int(os.getenv('RANK_ID'))
+            input = tf.trainable_variables()
+            bcast_global_variables_op = hccl_ops.broadcast(input, 0)
+            self.sess.run(bcast_global_variables_op)
+        else :
+            rank_id = 0
         for ep in range(config.epoch):
             # Run by batch images
-            batch_idxs = len(train_data_ir) // config.batch_size
+            batch_idxs = len(train_data_ir) // (config.batch_size*RANK_SIZE)
             # print(batch_idxs)
             # print(ep)
             # for idx in range(0, batch_idxs):
+            start_idx=rank_id
             for idx in range(0, batch_idxs - config.info_num):  #add config.info_num=0 默认为0
                 start_time = time.time()
-                batch_images_ir = train_data_ir[idx*config.batch_size : (idx+1)*config.batch_size]
-                batch_labels_ir = train_label_ir[idx*config.batch_size : (idx+1)*config.batch_size]
-                batch_images_vi = train_data_vi[idx*config.batch_size : (idx+1)*config.batch_size]
-                batch_labels_vi = train_label_vi[idx*config.batch_size : (idx+1)*config.batch_size]
+                batch_images_ir = train_data_ir[start_idx*config.batch_size : (start_idx+1)*config.batch_size]
+                batch_labels_ir = train_label_ir[start_idx*config.batch_size : (start_idx+1)*config.batch_size]
+                batch_images_vi = train_data_vi[start_idx*config.batch_size : (start_idx+1)*config.batch_size]
+                batch_labels_vi = train_label_vi[start_idx*config.batch_size : (start_idx+1)*config.batch_size]
+                start_idx=start_idx+RANK_SIZE
                 # print(counter)
                 counter += 1
                 for i in range(2):
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/test/train_full_8p.sh b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/test/train_full_8p.sh
new file mode 100644
index 0000000000000000000000000000000000000000..d7ab858a8dc37665e3fbeb5fec5fd33605178268
--- /dev/null
+++ b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/test/train_full_8p.sh
@@ -0,0 +1,178 @@
+#!/bin/bash
+
+# current path, no need to modify
+cur_path=`pwd`
+
+# collective communication parameters, no need to modify
+
+export RANK_SIZE=8
+export RANK_TABLE_FILE=${cur_path}/../configs/rank_table_8p.json
+export JOB_ID=10087
+RANK_ID_START=0
+
+
+# dataset path, keep empty, no need to modify
+data_path=''
+# pretrained model path
+ckpt_path=''
+
+# default log level, no need to modify
+#export ASCEND_GLOBAL_LOG_LEVEL=3
+#export ASCEND_DEVICE_ID=4
+
+# basic parameters, review and modify per model
+# network name, same as the directory name
+Network="FusionGAN_ID2124_for_TensorFlow"
+# training epochs
+epochs=12
+# training batch_size
+batch_size=64
+
+
+# TF2.X only, review and modify per model
+export NPU_LOOP_SIZE=${train_steps}
+
+# debug parameters; precision_mode needs review per model
+precision_mode="allow_mix_precision"
+# maintenance parameters, no need to modify below
+over_dump=False
+data_dump_flag=False
+data_dump_step="10"
+profiling=False
+
+# help message, no need to modify
+if [[ $1 == --help || $1 == -h ]];then
+    echo "usage:./train_full_8p.sh "
+    echo " "
+    echo "parameter explain:
+    --precision_mode         precision mode(allow_fp32_to_fp16/force_fp16/must_keep_origin_dtype/allow_mix_precision)
+    --over_dump              if or not over detection, default is False
+    --data_dump_flag         data dump flag, default is False
+    --data_dump_step         data dump step, default is 10
+    --profiling              if or not profiling for performance debug, default is False
+    --data_path              source data of training
+    --ckpt_path              model
+    -h/--help                show help message
+    "
+    exit 1
+fi
+
+# parameter validation, no need to modify
+for para in $*
+do
+    if [[ $para == --precision_mode* ]];then
+        precision_mode=`echo ${para#*=}`
+    elif [[ $para == --over_dump* ]];then
+        over_dump=`echo ${para#*=}`
+        over_dump_path=${cur_path}/output/overflow_dump
+        mkdir -p ${over_dump_path}
+    elif [[ $para == --data_dump_flag* ]];then
+        data_dump_flag=`echo ${para#*=}`
+        data_dump_path=${cur_path}/output/data_dump
+        mkdir -p ${data_dump_path}
+    elif [[ $para == --data_dump_step* ]];then
+        data_dump_step=`echo ${para#*=}`
+    elif [[ $para == --profiling* ]];then
+        profiling=`echo ${para#*=}`
+        profiling_dump_path=${cur_path}/output/profiling
+        mkdir -p ${profiling_dump_path}
+    elif [[ $para == --data_path* ]];then
+        data_path=`echo ${para#*=}`
+    elif [[ $para == --ckpt_path* ]];then
+        ckpt_path=`echo ${para#*=}`
+    fi
+done
+# # check whether data_path was passed, no need to modify
+# if [[ $data_path == "" ]];then
+#     echo "[Error] para \"data_path\" must be configured"
+#     exit 1
+# fi
+
+# training start time, no need to modify
+start_time=$(date +%s)
+
+# enter the training script directory, review and modify per model
+cd $cur_path/../
+for((RANK_ID=$RANK_ID_START;RANK_ID<$((RANK_SIZE+RANK_ID_START));RANK_ID++));
+do
+    # set environment variables, no need to modify
+    echo "Device ID: $ASCEND_DEVICE_ID"
+    export RANK_ID=$RANK_ID
+    export ASCEND_DEVICE_ID=$RANK_ID
+    ASCEND_DEVICE_ID=$RANK_ID
+    DEVICE_INDEX=$RANK_ID
+    export DEVICE_INDEX=${DEVICE_INDEX}
+
+    # create the DeviceID output directory, no need to modify
+    if [ -d ${cur_path}/output/${ASCEND_DEVICE_ID} ];then
+        rm -rf ${cur_path}/output/${ASCEND_DEVICE_ID}
+        mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+    else
+        mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+    fi
+
+    # run the training script; the arguments below need no change, others need review per model
+    # 12 epochs, produces the epoch-11 model
+    nohup python3 main.py ${data_path}/dataset ${cur_path} \
+        --epoch=12 \
+        --info_num=0 > ${cur_path}/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log 2>&1 &
+
+done
+wait
+
+export RANK_ID=7
+export ASCEND_DEVICE_ID=$RANK_ID
+ASCEND_DEVICE_ID=$RANK_ID
+DEVICE_INDEX=$RANK_ID
+export DEVICE_INDEX=${DEVICE_INDEX}
+nohup python3 test_one_image.py ${data_path}/dataset ${cur_path} > ${cur_path}/output/${ASCEND_DEVICE_ID}/test_${ASCEND_DEVICE_ID}.log 2>&1 &
+wait
+
+# training end time, no need to modify
+end_time=$(date +%s)
+e2e_time=$(( $end_time - $start_time ))
+
+# print results, no need to modify
+echo "------------------ Final result ------------------"
+# output performance FPS, review and modify per model
+TrainingTime=`grep 'perf_mean:' $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|awk 'END {print $14}'`
+FPS=`grep 'fps_mean:' $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|awk 'END {print $16}'`
+# print, no need to modify
+echo "Final Performance TrainingTime : $TrainingTime"
+echo "Final Performance images/sec : $FPS"
+
+# output training accuracy, review and modify per model
+train_accuracy=`grep all_result $cur_path/output/${ASCEND_DEVICE_ID}/test_${ASCEND_DEVICE_ID}.log|awk -F "vif:" '{print $2}'`
+
+# print, no need to modify
+echo "Final Train Accuracy : ${train_accuracy}"
+echo "E2E Training Duration sec : $e2e_time"
+
+# performance monitoring summary
+# training case info, no need to modify
+BatchSize=${batch_size}
+DeviceType=`uname -m`
+CaseName=${Network}_bs${BatchSize}_${RANK_SIZE}'p'_'acc'
+
+## get performance data, no need to modify
+# throughput
+ActualFPS=${FPS}
+# training time per iteration
+#TrainingTime=`awk 'BEGIN{printf "%.2f\n",'${FPS}'/69}'`
+
+# extract Loss from train_$ASCEND_DEVICE_ID.log into train_${CaseName}_loss.txt, review per model
+grep 'loss_d:' $cur_path/output/$ASCEND_DEVICE_ID/train_$ASCEND_DEVICE_ID.log|awk '{print $10}' >> $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt
+# loss of the last iteration, no need to modify
+ActualLoss=`awk 'END {print}' $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt`
+
+# print key info into ${CaseName}.log, no need to modify
+echo "Network = ${Network}" > $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "RankSize = ${RANK_SIZE}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "BatchSize = ${BatchSize}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "DeviceType = ${DeviceType}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "CaseName = ${CaseName}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualFPS = ${ActualFPS}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "TrainingTime = ${TrainingTime}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualLoss = ${ActualLoss}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "E2ETrainingTime = ${e2e_time}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "TrainAccuracy = ${train_accuracy}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
\ No newline at end of file
diff --git a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/utils.py b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/utils.py
index 0c82b6f8c0e8e37c61a3f9038db70e5089416388..7318d1ce0f986d4517ac463c923f34f785132376 100644
--- a/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/utils.py
+++ b/TensorFlow/contrib/cv/fusiongan/FusionGAN_ID2124_for_TensorFlow/utils.py
@@ -105,6 +105,7 @@ def make_data(sess, data, label, data_dir):
     Depending on 'is_train' (flag value), savepath would be changed.
     # deleted all the _20 suffixes
     """
+    os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
     if FLAGS.is_train:
         #savepath = os.path.join(os.getcwd(), os.path.join('checkpoint',data_dir,'train.h5'))
         savepath = os.path.join(os.path.join(data_dir, 'train.h5'))
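
The model.py change in this patch shards the training data across devices by striding the batch index: each rank starts at its own `rank_id` and advances by `RANK_SIZE` every step, so the 8 ranks consume disjoint, interleaved batches. A minimal sketch of that indexing in plain Python (no NPU/HCCL dependencies; `shard_batches` and the sample sizes are illustrative, not part of the repository):

```python
# Sketch of the per-rank batch sharding introduced in model.py.
# Assumption: this helper and the numbers below are illustrative only.

def shard_batches(num_samples, batch_size, rank_size, rank_id):
    """Batch indices one rank processes: rank_id, rank_id+rank_size, ..."""
    # mirrors: batch_idxs = len(train_data_ir) // (config.batch_size*RANK_SIZE)
    batches_per_rank = num_samples // (batch_size * rank_size)
    indices = []
    start_idx = rank_id                 # mirrors: start_idx = rank_id
    for _ in range(batches_per_rank):
        indices.append(start_idx)
        start_idx += rank_size          # mirrors: start_idx = start_idx + RANK_SIZE
    return indices

# 8 ranks over 6400 samples with batch_size 64 -> 12 batches per rank
per_rank = [shard_batches(6400, 64, 8, r) for r in range(8)]
covered = sorted(i for shard in per_rank for i in shard)
assert covered == list(range(96))  # ranks cover batches 0..95 exactly once
```

As in the patched loop, the floor division in `batches_per_rank` silently drops the tail batches that cannot fill every rank (here 4 of the 100 total batches).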