diff --git a/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README.md b/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README.md
index 41b657481115686cb33f0dc8170837e725a50d8c..14a16cca26b179e27d90e1ef53945b30b405a3a8 100644
--- a/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README.md
+++ b/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README.md
@@ -1,13 +1,13 @@
+中文|[English](README_EN.md)
+# Vgg19 TensorFlow离线推理
-# Vgg19 Inference for Tensorflow
+此链接提供Vgg19 TensorFlow模型在NPU上离线推理的脚本和方法
-This repository provides a script and recipe to Inference of the Vgg19 model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。

| Conditions | Need |
| --- | --- |
@@ -15,75 +15,70 @@ Before starting, please pay attention to the following adaptation conditions. If
| Chip Platform| Ascend310/Ascend310P3 |
| 3rd Party Requirements| Please follow the 'requirements.txt' |

-## Quick Start Guide
+## 快速指南

-### 1. Clone the respository
+### 1. 拷贝代码

```shell
git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL
```

-### 2. Download and preprocess the dataset
-
-1. Download the ImageNet2012 Validation dataset by yourself. You can get the validation pictures(50000 JPEGS and a ILSVRC2012val-label-index.txt)
+### 2. 下载数据集和预处理

-2. Put JPEGS to **'scripts/ILSVRC2012val'** and label text to **'scripts/'**
+1. 请自行下载ImageNet2012测试数据集

-3. Images Preprocess:
-```
-cd scripts
-mkdir input_bins
-python3 vgg19_preprocessing.py ./ILSVRC2012val/ ./input_bins/
-```
-The jpegs pictures will be preprocessed to bin fils.
+### 3. 离线推理

-### 3. Offline Inference

-**Convert pb to om.**
+**离线模型转换**

-- configure the env
+- 环境变量设置

-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量

-- convert pb to om
+- Pb模型转换为om模型

  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/cv/Vgg19_for_ACL.zip)

+  For Ascend310:
  ```
  atc --model=vgg19_tf.pb --framework=3 --output=vgg19_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="input:1,224,224,3" --insert_op_conf=vgg19_tf_aipp.cfg --enable_small_channel=1 --log=info
  ```
-
-- Build the program
-
+  For Ascend310P3:
+  ```
+  atc --model=vgg19_tf.pb --framework=3 --output=vgg19_tf_1batch --output_type=FP32 --soc_version=Ascend310P3 --input_shape="input:1,224,224,3" --insert_op_conf=vgg19_tf_aipp.cfg --enable_small_channel=1 --log=info
+  ```
+- 编译程序
+
+  For Ascend310:
+  ```
+  unset ASCEND310P3_DVPP
+  bash build.sh
+  ```
+  For Ascend310P3:
  ```
+  export ASCEND310P3_DVPP=1
  bash build.sh
  ```

-- Run the program:
+- 开始运行:

  ```
  cd scripts
  bash benchmark_tf.sh
  ```

-## Performance
+## 推理结果

-### Result
+### 结果

-Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。

-#### Inference accuracy results
+#### 推理精度结果

| model | **data** | Top1/Top5 |
| :---------------: | :-------: | :-------------: |
| offline Inference | 50000 images | 71.0 %/ 89.8% |

-## Reference
+## 参考

[1] https://github.com/tensorflow/models/tree/master/research/slim
\ No newline at end of file
diff --git a/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..c463bfe67e28d91ecb05b8642845014620694947
--- /dev/null
+++ b/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL/README_EN.md
@@ -0,0 +1,83 @@
+English|[中文](README.md)
+
+# Vgg19 Inference for TensorFlow
+
+This repository provides a script and recipe for offline inference of the Vgg19 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the inference may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/Vgg19_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Download the ImageNet2012 Validation dataset by yourself. You can get the validation pictures (50000 JPEGs and an ILSVRC2012val-label-index.txt).
+
+2. Put the JPEGs into **'scripts/ILSVRC2012val'** and the label text into **'scripts/'**
+
+3. Image preprocessing:
+```
+cd scripts
+mkdir input_bins
+python3 vgg19_preprocessing.py ./ILSVRC2012val/ ./input_bins/
+```
+The JPEG pictures will be preprocessed into bin files.
+
+### 3. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+- convert pb to om
+
+  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/cv/Vgg19_for_ACL.zip)
+
+  ```
+  atc --model=vgg19_tf.pb --framework=3 --output=vgg19_tf_1batch --output_type=FP32 --soc_version=Ascend310 --input_shape="input:1,224,224,3" --insert_op_conf=vgg19_tf_aipp.cfg --enable_small_channel=1 --log=info
+  ```
+
+- Build the program
+
+  ```
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd scripts
+  bash benchmark_tf.sh
+  ```
+
+## Performance
+
+### Result
+
+Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+| model | **data** | Top1/Top5 |
+| :---------------: | :-------: | :-------------: |
+| offline Inference | 50000 images | 71.0 %/ 89.8% |
+
+## Reference
+[1] https://github.com/tensorflow/models/tree/master/research/slim
\ No newline at end of file
diff --git a/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README.md b/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README.md
index a3f4ddad2e5d13300eacf96cdb97c6e4d8c9511e..02fc84b41ef01739a64fead1ac1abc7cad21eeec 100644
--- a/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README.md
+++ b/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README.md
@@ -1,106 +1,88 @@
+中文|[English](README_EN.md)

-# YOLOv2 Inference for Tensorflow
+# YOLOv2 TensorFlow离线推理

-This repository provides a script and recipe to Inference of the YOLOv2 model.
+此链接提供YOLOv2 TensorFlow模型在NPU上离线推理的脚本和方法

-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**

-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。

| Conditions | Need |
| --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |

-## Quick Start Guide
+## 快速指南

-### 1. Clone the respository
+### 1. 拷贝代码

```shell
git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL
```

-### 2. Download and preprocess the dataset
+### 2. 下载数据集和预处理

-1. Download the VOC2007 test dataset by yourself, then extract **VOCtest_06-Nov-2007.tar**.
+1. 请自行下载VOC2007测试数据集

-2. Move VOC2007 test dataset to **'scripts/VOC2007'** like this:
-```
-VOC2007
-|----Annotations
-|----ImageSets
-|----JPEGImages
-|----SegmentationClass
-|----SegmentationObject
-```
-
-3. Images Preprocess:
-```
-cd scripts
-mkdir input_bins
-python3 preprocess.py ./VOC2007/JPEGImages/ ./input_bins/
-```
- The pictures will be preprocessed to bin files.
-
-
-4. Convert Groundtruth labels to text format
-```
-python3 xml2txt.py ./VOC2007/Annotations/ ./yolov2_postprocess/groundtruths/
-```

+### 3. 离线推理
-### 3. Offline Inference

-**Convert pb to om.**
+**离线模型转换**

-- configure the env
+- 环境变量设置

+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量

-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```

-- convert pb to om
+- Pb模型转换为om模型

-  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov2_tf.pb)
+  [pb模型下载链接](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov2_tf.pb)

  以batchsize=1为例:

+  For Ascend310:
  ```
  atc --model=./yolov2.pb --input_shape='Placeholder:1,416,416,3' --input_format=NHWC --output=./yolov2_tf_1batch --soc_version=Ascend310 --framework=3
  ```
+  For Ascend310P3:
+  ```
+  atc --model=./yolov2.pb --input_shape='Placeholder:1,416,416,3' --input_format=NHWC --output=./yolov2_tf_1batch --soc_version=Ascend310P3 --framework=3
+  ```

-- Build the program
+- 编译程序

+  For Ascend310:
+  ```
+  bash build.sh
+  ```
+  For Ascend310P3:
  ```
+  export ASCEND310P3_DVPP=1
  bash build.sh
  ```

-- Run the program:
+- 开始运行:

  ```
  cd scripts
  bash benchmark_tf.sh
  ```

-## Performance
+## 推理结果

-### Result
+### 结果

-Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。

-#### Inference accuracy results
+#### 推理精度结果

| model | **data** | mAP |
| :---------------: | :-------: | :-------------: |
| offline Inference | 4952 images | 59.43% |

-## Reference
+## 参考

[1] https://github.com/KOD-Chen/YOLOv2-Tensorflow

diff --git a/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..0abea07d4b2852f4d350fdfb8a46bca9d60708f1
--- /dev/null
+++ b/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL/README_EN.md
@@ -0,0 +1,82 @@
+English|[中文](README.md)
+
+# YOLOv2 Inference for TensorFlow
+
+This repository provides a script and recipe for offline inference of the YOLOv2 model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the inference may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/cv/YOLOv2_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Download the VOC2007 test dataset by yourself
+
+
+
+
+### 3. Offline Inference
+
+**Convert pb to om.**
+
+- configure the env
+
+  Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+
+
+- convert pb to om
+
+  [pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/yolov2_tf.pb)
+
+  Take batchsize=1 as an example:
+
+  ```
+  atc --model=./yolov2.pb --input_shape='Placeholder:1,416,416,3' --input_format=NHWC --output=./yolov2_tf_1batch --soc_version=Ascend310 --framework=3
+  ```
+
+- Build the program
+
+  ```
+  bash build.sh
+  ```
+
+- Run the program:
+
+  ```
+  cd scripts
+  bash benchmark_tf.sh
+  ```
+
+## Performance
+
+### Result
+
+Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+| model | **data** | mAP |
+| :---------------: | :-------: | :-------------: |
+| offline Inference | 4952 images | 59.43% |
+
+## Reference
+[1] https://github.com/KOD-Chen/YOLOv2-Tensorflow
+
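Both READMEs in this patch feed the benchmark binary with raw `.bin` inputs produced by `vgg19_preprocessing.py` / `preprocess.py`, but those scripts are not part of the diff. The sketch below only illustrates what such a JPEG-to-bin dump typically looks like for the `input:1,224,224,3` shape used in the Vgg19 ATC command; the file name, the plain bilinear resize, and the assumption that mean/scale normalization is left to the AIPP config (`vgg19_tf_aipp.cfg`) are illustrative guesses, not the repository's actual implementation.

```python
# preprocess_sketch.py -- illustrative only; the repository's own
# vgg19_preprocessing.py / preprocess.py scripts are authoritative.
import os
import sys

import numpy as np
from PIL import Image


def jpeg_dir_to_bins(src_dir, dst_dir, height=224, width=224):
    """Resize every JPEG in src_dir and dump it as a raw uint8 .bin file.

    The HWC uint8 layout matches the "input:1,224,224,3" shape in the ATC
    command; normalization is assumed to be done by AIPP, so none is applied.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img = img.resize((width, height), Image.BILINEAR)   # assumed resize policy
        arr = np.asarray(img, dtype=np.uint8)                # shape (H, W, 3)
        out_name = os.path.splitext(name)[0] + ".bin"
        arr.tofile(os.path.join(dst_dir, out_name))


if __name__ == "__main__":
    # Mirrors the README invocation, e.g.:
    #   python3 preprocess_sketch.py ./ILSVRC2012val/ ./input_bins/
    jpeg_dir_to_bins(sys.argv[1], sys.argv[2])
```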