diff --git a/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README.md b/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README.md index b476c1d158dba9bfdce7a4f75f10d05126051fd3..9c642007af860c5e6df887dbd9aa27e745dc7a60 100644 --- a/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README.md +++ b/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README.md @@ -1,83 +1,74 @@ +中文|[English](README_EN.md) # - -# KGAT Inference for TensorFlow +# KGAT TensorFlow离线推理 *** -This repository provides a script and recipe to Inference the KGAT Inference +此链接提供KGAT TensorFlow模型在NPU上离线推理的脚本和方法 -* [x] KGAT Inference, based on [knowledge_graph_attention_network](https://github.com/xiangwang1223/knowledge_graph_attention_network) +* [x] KGAT 推理, 基于 [knowledge_graph_attention_network](https://github.com/xiangwang1223/knowledge_graph_attention_network) *** -## Notice -**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.** +## 注意 +**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。** -Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure. +在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。 | Conditions | Need | | --- | --- | -| CANN Version | >=5.0.3 | -| Chip Platform| Ascend310/Ascend310P3 | -| 3rd Party Requirements| Please follow the 'requirements.txt' | +| CANN版本 | >=5.0.3 | +| 芯片平台| Ascend310/Ascend310P3 | +| 第三方依赖| 请参考 'requirements.txt' | -## Quick Start Guide +## 快速指南 -### 1. Clone the respository +### 1. 拷贝代码 ```shell git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL ``` -### 2. Download and preprocess the dataset +### 2. 下载数据集和预处理 -Download the dataset by yourself, more details see: [amazon-book](./Data/README.md) +请自行下载测试数据集,详情见: [amazon-book](./Data/README.md) -### 3. Obtain the pb model +### 3. 
获取pb模型 -Obtain the pb model, more details see: [pb](./Model/pb_model/README.md) +获取pb模型, 详情见: [pb](./Model/pb_model/README.md) -### 4. Build the program -Build the inference application, more details see: [xacl_fmk](./xacl_fmk/README.md) +### 4. 编译程序 +编译推理应用程序, 详情见: [xacl_fmk](./xacl_fmk/README.md) -### 5. Offline Inference +### 5. 离线推理 **KGAT** *** -* KGAT in KGAT_for_ACL use static batch size, set predict_batch_size=2048 as input parameter, so we throw away the last batch of test data(batch size=959) -* The following commands are executed in the ./Model directory +* KGAT 在 KGAT_for_ACL 中 使用静态batch, 设置 predict_batch_size=2048 作为输入参数, 所以我们舍弃了最后一批测试数据(batch size=959) +* 在./Model目录中执行以下命令 *** -**Configure the env** -``` -ASCEND_HOME=/usr/local/Ascend -PYTHON_HOME=/usr/local/python3.7 -export PATH=$PATH:$PYTHON_HOME/bin:$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$ASCEND_HOME/toolkit/bin/ -export LD_LIBRARY_PATH=$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/toolkit/lib64:$ASCEND_HOME/add-ons:$ASCEND_HOME/opp/op_proto/built-in/:$ASCEND_HOME/opp/framework/built-in/tensorflow/:$ASCEND_HOME/opp/op_impl/built-in/ai_core/tbe/op_tiling -export PYTHONPATH=$ASCEND_HOME/atc/python/site-packages/auto_tune.egg:$ASCEND_HOME/atc/python/site-packages/schedule_search.egg:/caffe/python/:$ASCEND_HOME/ops/op_impl/built-in/ai_core/tbe/ -export ASCEND_OPP_PATH=$ASCEND_HOME/opp -export SOC_VERSION=Ascend310 -# HOST_TYPE in Ascend310 support Atlas300 and MiniRC -export HOST_TYPE=Atlas300 -``` +**环境变量设置** + + 请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量 -**PreProcess** +**预处理** ```Bash python3 offline_inference/data_preprocess.py ``` -The generated bin file is in the Model/input_bin directory +在Model/input_bin目录中生成bin文件 -**Convert pb to om** +**Pb模型转换为om模型** -[pb download 
link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/recommendation/KGAT_for_ACL.zip)
+[pb模型下载链接](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/recommendation/KGAT_for_ACL.zip)
 
 ```Bash
 atc --model=KGAT_tf.pb --framework=3 --output=ID1376_KGAT_tf_gpu --soc_version=Ascend310 --input_shape="Placeholder:2048;Placeholder_1:24915;Placeholder_4:3" --log=info
 ```
 
-Put the converted om file in the Model directory.
+将转换后的om文件放入Model目录
 
-**Run the inference and PostProcess**
+**运行推理与后处理**
 ```Bash
 python3 offline_inference/xacl_inference.py
 ```
@@ -114,13 +105,13 @@ output_bin
 ```
 
-### 6. Performance
+### 6. 性能
 
-### Result
+### 结果
 
-Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。
 
-#### Inference accuracy results
+#### 推理精度结果
 
 | | ascend310 |
 |----------------|--------|
diff --git a/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README_EN.md b/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..1164c1ec3268dc10a581f9c47fb5d8abcd6e7862
--- /dev/null
+++ b/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL/README_EN.md
@@ -0,0 +1,121 @@
+English|[中文](README.md)
+#
+# KGAT Inference for TensorFlow
+
+***
+This repository provides a script and recipe to run offline inference of the KGAT model
+
+* [x] KGAT Inference, based on [knowledge_graph_attention_network](https://github.com/xiangwang1223/knowledge_graph_attention_network)
+
+***
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the inference may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/built-in/recommendation/KGAT_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+Download the dataset by yourself; for more details, see: [amazon-book](./Data/README.md)
+
+### 3. Obtain the pb model
+
+Obtain the pb model; for more details, see: [pb](./Model/pb_model/README.md)
+
+### 4. Build the program
+Build the inference application; for more details, see: [xacl_fmk](./xacl_fmk/README.md)
+
+### 5. Offline Inference
+
+**KGAT**
+***
+* KGAT in KGAT_for_ACL uses a static batch size; predict_batch_size=2048 is set as the input parameter, so the last batch of test data (batch size 959) is discarded
+* The following commands are executed in the ./Model directory
+***
+**Configure the env**
+
+ Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs
+
+**PreProcess**
+```Bash
+python3 offline_inference/data_preprocess.py
+```
+
+The generated bin files are in the Model/input_bin directory
+
+
+**Convert pb to om**
+
+[pb download link](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/modelzoo/Official/recommendation/KGAT_for_ACL.zip)
+
+```Bash
+atc --model=KGAT_tf.pb --framework=3 --output=ID1376_KGAT_tf_gpu --soc_version=Ascend310 --input_shape="Placeholder:2048;Placeholder_1:24915;Placeholder_4:3" --log=info
+```
+
+Put the converted om file in the Model directory.
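Because the om model above is built with a fixed batch of 2048, inference fails if any preprocessed bin file has an unexpected size, so it can help to sanity-check the input files first. A minimal sketch (the directory layout and the int64 dtype of the user-id input are assumptions; adjust them to your preprocessing output):

```shell
# check_bins: verify that every .bin file in a directory has the expected
# byte size, so a wrong batch size is caught before running inference.
# Usage: check_bins <dir> <expected_bytes>
check_bins() {
  dir="$1"; expected="$2"; status=0
  for f in "$dir"/*.bin; do
    [ -e "$f" ] || continue                     # glob matched nothing
    actual=$(wc -c < "$f" | tr -d ' ')          # file size in bytes
    if [ "$actual" -ne "$expected" ]; then
      echo "size mismatch: $f ($actual bytes, expected $expected)"
      status=1
    fi
  done
  return $status
}

# Assuming 2048 int64 user ids per batch, i.e. 2048 * 8 bytes per file:
# check_bins input_bin/input1 $((2048 * 8))
```

Run from the ./Model directory before the inference step; a non-zero exit status means at least one file does not match the static batch shape.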
+ +**Run the inference and PostProcess** +```Bash +python3 offline_inference/xacl_inference.py +``` + +```log +2021-08-19 19:38:24.124 - I - [XACL]: Om model file is: ID1376_KGAT_tf_gpu.om +2021-08-19 19:38:24.124 - I - [XACL]: Input files are: input_bin/input1,input_bin/input2,input_bin/input3 +2021-08-19 19:38:24.124 - I - [XACL]: Output file prefix is: output_bin/kgat_output_bin +2021-08-19 19:38:24.124 - I - [XACL]: Input type is director +2021-08-19 19:38:24.272 - I - [XACL]: Init acl interface success +2021-08-19 19:38:24.866 - I - [XACL]: Load acl model interface success +2021-08-19 19:38:24.866 - I - [XACL]: Create description interface success +2021-08-19 19:38:24.866 - I - [XACL]: The input file: input_bin/input1/users_00000.bin is checked +2021-08-19 19:38:24.866 - I - [XACL]: The input file: input_bin/input2/pos_items_00000.bin is checked +2021-08-19 19:38:24.866 - I - [XACL]: The input file: input_bin/input3/node_dropout_00000.bin is checked +... +2021-08-19 19:41:22.743 - I - [XACL]: The input file: input_bin/input1/users_00033.bin is checked +2021-08-19 19:41:22.743 - I - [XACL]: The input file: input_bin/input2/pos_items_00033.bin is checked +2021-08-19 19:41:22.743 - I - [XACL]: The input file: input_bin/input3/node_dropout_00033.bin is checked +2021-08-19 19:41:22.743 - I - [XACL]: Create input data interface success +2021-08-19 19:41:22.782 - I - [XACL]: Create output data interface success +2021-08-19 19:41:27.705 - I - [XACL]: Run acl model success +2021-08-19 19:41:27.705 - I - [XACL]: Loop 0, start timestamp 1629373282783, end timestamp 1629373287705, cost time 4922.59ms +2021-08-19 19:41:27.914 - I - [XACL]: Dump output 0 to file success +2021-08-19 19:41:27.914 - I - [XACL]: Single batch average NPU inference time of 1 loops: 4922.59 ms 0.20 fps +2021-08-19 19:41:27.914 - I - [XACL]: Destroy input data success +2021-08-19 19:41:28.134 - I - [XACL]: Destroy output data success +2021-08-19 19:41:28.460 - I - [XACL]: Start to finalize acl, 
aclFinalize interface adds 2s delay to upload device logs +2021-08-19 19:41:30.197 - I - [XACL]: Finalize acl success +2021-08-19 19:41:30.197 - I - [XACL]: 34 samples average NPU inference time of 34 batches: 4931.11 ms 0.20 fps +output_bin +[INFO] 推理结果生成结束 +{'precision': array([0.01522007, 0.01111792, 0.00916784, 0.00797109, 0.00710147]), 'recall': array([0.14694857, 0.20585731, 0.2472628 , 0.28113915, 0.30843164]), 'ndcg': array([0.09972443, 0.12063131, 0.13402407, 0.14439348, 0.15252858]), 'hit_ratio': array([0.25184514, 0.34177161, 0.39911603, 0.44301682, 0.47649134]), 'auc': 0.0} +``` + + +### 6. Performance + +### Result + +Our result was obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide. + +#### Inference accuracy results + +| | ascend310 | +|----------------|--------| +| precision | [0.01522007, 0.01111792, 0.00916784, 0.00797109, 0.00710147] | +| recall | [0.14694857, 0.20585731, 0.2472628 , 0.28113915, 0.30843164] | +| ndcg | [0.09972443, 0.12063131, 0.13402407, 0.14439348, 0.15252858] | +| hit_ratio | [0.25184514, 0.34177161, 0.39911603, 0.44301682, 0.47649134] | diff --git a/ACL_TensorFlow/contrib/cv/FasterRCNN_for_ACL/README.md b/ACL_TensorFlow/contrib/cv/FasterRCNN_for_ACL/README.md index 1172ce2da264e60512d02838a5912a67602b1c41..6b150fe35606f7e1292d902382a67c35fd3765aa 100644 --- a/ACL_TensorFlow/contrib/cv/FasterRCNN_for_ACL/README.md +++ b/ACL_TensorFlow/contrib/cv/FasterRCNN_for_ACL/README.md @@ -46,7 +46,7 @@ cd Modelzoo-TensorFlow/ACL/Official/cv/FasterRCNN_for_ACL 4. 创建两个数据集文件夹,一个是用于“image_info”和“images”文件的your_data_path,另一个是“source_ids”文件的your_datasourceid_path。将bin文件移动到正确的目录; 5. 将“instances_val2017.json”复制到FasterRCNN_for_ACL/scripts 目录下; -### 3. Offline Inference +### 3. 
离线推理

**Pb模型转换为om模型**

- 访问 "FasterRCNN_for_ACL" 文件夹
@@ -60,7 +60,7 @@ cd Modelzoo-TensorFlow/ACL/Official/cv/FasterRCNN_for_ACL
 ```
 atc --model=/your/pb/path/your_fast_pb_name.pb --framework=3 --output=your_fastom_name --output_type=FP32 --soc_version=Ascend310P3 --input_shape="image:1,1024,1024,3;image_info:1,5" --keep_dtype=./keeptype.cfg --precision_mode=force_fp16 --out_nodes="generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:3;generate_detections/denormalize_box/concat:0;generate_detections/add:0;generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:1"
 ```
-注意: 替换模型参数, 输出, 环境变量
+注意:请替换 model、output、soc_version 参数的取值

 - 编译程序

diff --git a/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README.md b/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README.md
index f3f64ed351c2ae61c5809fd637cc620b046a5cdb..cb84d296327f808f4fbd6b6b16378756d0f74178 100644
--- a/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README.md
+++ b/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README.md
@@ -1,75 +1,68 @@
+中文|[English](README_EN.md)
+# MaskRCNN TensorFlow离线推理

-# MaskRCNN Inference for Tensorflow
+此链接提供MaskRCNN TensorFlow模型在NPU上离线推理的脚本和方法

-This repository provides a script and recipe to Inference the MaskRCNN model.
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**

-## Notice
-**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
-
-Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure.
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。

| Conditions | Need |
| --- | --- |
-| CANN Version | >=5.0.3 |
-| Chip Platform| Ascend310/Ascend310P3 |
-| 3rd Party Requirements| Please follow the 'requirements.txt' |
-
-## Quick Start Guide
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |

-### 1. Clone the respository
+## 快速指南
+### 1. 
拷贝代码
```shell
git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
cd Modelzoo-TensorFlow/ACL/Official/cv/MaskRCNN_for_ACL
```

-### 2. Download and preprocess the dataset
+### 2. 下载数据集和预处理

-1. Access to the "datapreprocess" directory.
-2. Download and build TFRecords of the dataset,[COCO 2017](http://cocodataset.org/#download).
+1. 访问“datapreprocess”目录
+2. 下载并生成TFRecords数据集,[COCO 2017](http://cocodataset.org/#download).

```
   bash download_and_preprocess_mscoco.sh
```
-   Note: Data will be downloaded, preprocessed to tfrecords format and saved in the directory (on the host). Or if you have downloaded and created the TFRecord file (TFRecord generated based on the official tpu script of tensorflow), skip this step.
-   Or if you have downloaded the COCO images, run the following command to convert them to TFRecord.
+   注意:数据将被下载,预处理为tfrecords格式,并保存在目录中(主机上)。如果您已经下载并创建了TFRecord文件(根据tensorflow的官方tpu脚本生成的TFRecord),请跳过此步骤。 如果您已经下载了COCO图片,请运行以下命令将其转换为TFRecord格式

-   ```
+   python3 object_detection/dataset_tools/create_coco_tf_record.py --include_masks=True --val_image_dir=/your/val_tfrecord_file/path --val_annotations_file=/your/val_annotations_file/path/instances_val2017.json --output_dir=/your/tfrecord_file/out/path
-   ```
+

-2. Transfer to Bin file.
+3. 将数据集转成bin文件
```
   python3 data_2_bin.py --validation_file_pattern /your/val_tfrecord_file/path/val_file_prefix* --binfilepath /your/bin_file_out_path
```
-4. Create 2 dataset folders, one is your_data_path for "image_info" and "images" files, and one is your_datasourceid_path for "source_ids" files. Move your bin files to the correct directory.
-5. Copy the "instances_val2017.json" to the MaskRCNN_for_ACL/scripts.
+4. 创建两个数据集文件夹,一个是用于“image_info”和“images”文件的your_data_path,另一个是“source_ids”文件的your_datasourceid_path。将bin文件移动到正确的目录;
+5. 将“instances_val2017.json”复制到MaskRCNN_for_ACL/scripts目录下;


-### 3. Offline Inference
+### 3. 离线推理

-**Convert pb to om.**
+**Pb模型转换为om模型**

-- Access to the "MaskRCNN_for_ACL" directory. 
-- Configure the env
-
-  ```
-  export install_path=/usr/local/Ascend
-  export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
-  export PYTHONPATH=${install_path}/opp/op_impl/built-in/ai_core/tbe:$PYTHONPATH
-  export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64/:$LD_LIBRARY_PATH
-  export ASCEND_OPP_PATH=${install_path}/opp
-  ```
+- 环境变量设置
+
+  请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量

-- Convert pb to om
+- Pb模型转换为om模型

  ```
  atc --model=/your/pb/path/your_maskpb_name.pb --framework=3 --output=your_maskom_name --output_type=FP32 --soc_version=Ascend310P3 --input_shape="image:1,1024,1024,3;image_info:1,5" --keep_dtype=./keeptype.cfg --precision_mode=force_fp16 --out_nodes="generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:3;generate_detections/denormalize_box/concat:0;generate_detections/add:0;generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:1"
  ```
-Notes: Replace the values of model, output, soc_version
+注意:请替换 model、output、soc_version 参数的取值

-- Build the program
+- 编译程序

  ```
  bash build.sh
  ```
@@ -81,17 +74,17 @@ Notes: Replace the values of model, output, soc_version
  cd scripts
  ./benchmark_tf.sh --batchSize=1 --modelType=maskrcnn --outputType=fp32 --deviceId=0 --modelPath=/your/maskom/path/your_maskom_name.om --dataPath=/your/data/path --innum=2 --suffix1=image_info.bin --suffix2=images.bin --imgType=raw --sourceidpath=/your/datasourceid/path
  ```
-Notes: Replace the values of modelPath, dataPath, and sourceidpath. Use an absolute path.
+注意:替换modelPath、dataPath和sourceidpath的参数。使用绝对路径。

-## Accuracy
+## 精度

-### Result
+### 结果

-Our result were obtained by running the applicable training script. 
To achieve the same results, follow the steps in the Quick Start Guide.
+本结果是通过运行上面适配的推理脚本获得的。要获得相同的结果,请按照《快速指南》中的步骤操作。

-#### Inference accuracy results
+#### 推理精度结果

| model | **data** | Bbox/Segm |
| :---------------: | :-------: | :---------------: |
diff --git a/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README_EN.md b/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..a066ac7b41699128169e694c832d84297e7a8055
--- /dev/null
+++ b/ACL_TensorFlow/contrib/cv/MaskRCNN_for_ACL/README_EN.md
@@ -0,0 +1,93 @@
+English|[中文](README.md)
+
+# MaskRCNN Inference for Tensorflow
+
+This repository provides a script and recipe to run inference of the MaskRCNN model.
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the inference may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL/Official/cv/MaskRCNN_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+1. Access the "datapreprocess" directory.
+2. Download and build TFRecords of the dataset, [COCO 2017](http://cocodataset.org/#download).
+
+```
+   bash download_and_preprocess_mscoco.sh
+```
+   Note: Data will be downloaded, preprocessed to tfrecords format and saved in the directory (on the host). If you have already downloaded and created the TFRecord file (TFRecord generated based on the official tpu script of tensorflow), skip this step.
+   Or if you have downloaded the COCO images, run the following command to convert them to TFRecord. 
+ + + python3 object_detection/dataset_tools/create_coco_tf_record.py --include_masks=True --val_image_dir=/your/val_tfrecord_file/path --val_annotations_file=/your/val_annotations_file/path/instances_val2017.json --output_dir=/your/tfrecord_file/out/path + + +2. Transfer to Bin file. +``` + python3 data_2_bin.py --validation_file_pattern /your/val_tfrecord_file/path/val_file_prefix* --binfilepath /your/bin_file_out_path +``` +4. Create 2 dataset folders, one is your_data_path for "image_info" and "images" files, and one is your_datasourceid_path for "source_ids" files. Move your bin files to the correct directory. +5. Copy the "instances_val2017.json" to the MaskRCNN_for_ACL/scripts. + + +### 3. Offline Inference + +**Convert pb to om.** + +- Access to the "MaskRCNN_for_ACL" directory. +- Configure the env + + Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the envs + +- Convert pb to om + + ``` + atc --model=/your/pb/path/your_maskpb_name.pb --framework=3 --output=your_maskom_name --output_type=FP32 --soc_version=Ascend310P3 --input_shape="image:1,1024,1024,3;image_info:1,5" --keep_dtype=./keeptype.cfg --precision_mode=force_fp16 --out_nodes="generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:3;generate_detections/denormalize_box/concat:0;generate_detections/add:0;generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:1" + ``` +Notes: Replace the values of model, output, soc_version + +- Build the program + + ``` + bash build.sh + ``` + +- Run the program: + + ``` + cd scripts + ./benchmark_tf.sh --batchSize=1 --modelType=maskrcnn --outputType=fp32 --deviceId=0 --modelPath=/your/maskom/path/your_maskom_name.om --dataPath=/your/data/path --innum=2 --suffix1=image_info.bin --suffix2=images.bin --imgType=raw 
--sourceidpath=/your/datasourceid/path
+  ```
+Notes: Replace the values of modelPath, dataPath, and sourceidpath. Use an absolute path.
+
+
+
+## Accuracy
+
+### Result
+
+Our results were obtained by running the applicable inference script. To achieve the same results, follow the steps in the Quick Start Guide.
+
+#### Inference accuracy results
+
+| model | **data** | Bbox/Segm |
+| :---------------: | :-------: | :---------------: |
+| offline Inference | 5000 images | 33.1%/30.2 |
+
diff --git a/ACL_TensorFlow/contrib/nlp/GRU4Rec_for_ACL/README.md b/ACL_TensorFlow/contrib/nlp/GRU4Rec_for_ACL/README.md
index f65196bc878779e63556792c66c9c68bc528a3e5..6cfe2fbd95509ae675ac518ae1a789a3f79934db 100644
--- a/ACL_TensorFlow/contrib/nlp/GRU4Rec_for_ACL/README.md
+++ b/ACL_TensorFlow/contrib/nlp/GRU4Rec_for_ACL/README.md
@@ -39,7 +39,7 @@ cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/nlp/GRU4Rec_for_ACL
 ### 4. 编译程序
 编译推理程序, 详情见: [xacl_fmk](./xacl_fmk/README.md)
-将xacl放到当前文件夹.
+将xacl放到当前文件夹
 
### 5. 
离线推理 @@ -70,7 +70,7 @@ python3 xnlp_fmk.py \ **冻结Pb模型** * --output_dir:在此路径下,冻结脚本会把checkpoint文件转成Pb模型 -* --checkpoint_dir:checkpoint文件, 包括 'checkpoint', 'ckpt.data', 'ckpt.index' and 'ckpt.meta' +* --checkpoint_dir:checkpoint文件, 包括 'checkpoint', 'ckpt.data', 'ckpt.index' 和 'ckpt.meta' * --pb_model_file:pb模型文件名 * --predict_batch_size:与实际批量相比,仅测试了50个 * 保持其他参数与上一步相同 @@ -90,7 +90,7 @@ python3 xnlp_fmk.py \ ** * --om_model_file:om模型名 * --soc_version, --in_nodes, --out_nodes :根据实际情况传参 -* 添加额外需要的atc 参数,例如: --precision_mode +* 添加额外需要的atc参数,例如: --precision_mode * --predict_batch_size :实际batch, 当前仅支持静态batch * 保持其他参数与上一步相同 * 将“logits”保存为'GRU4Rec_full.txt','logits'是'SoftmaxV2'的输出节点,如果pb模型使用其他名称,请更改; diff --git a/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README.md b/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README.md index 0c44c439204888c095b9b50f224278d6e3e7e514..b95fdd00d8854ee1c376771daaec2187152287a6 100644 --- a/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README.md +++ b/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README.md @@ -1,210 +1,206 @@ -# - -# LSTM Inference for TensorFlow - -*** -This repository provides a script and recipe to Inference the LSTM Inference - -* [x] LSTM Inference, based on [Sentiment Analysis with Word Embedding](https://github.com/HqWei/Sentiment-Analysis) - -*** - -## Notice -**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.** - -Before starting, please pay attention to the following adaptation conditions. If they do not match, may leading in failure. - -| Conditions | Need | -| --- | --- | -| CANN Version | >=5.0.3 | -| Chip Platform| Ascend310/Ascend310P3 | -| 3rd Party Requirements| Please follow the 'requirements.txt' | - -## Quick Start Guide - -### 1. Clone the respository -```shell -git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git -cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL -``` - -### 2. 
Download and preprocess the dataset - -Download the dataset by yourself, more details see: [IMDB](./data/IMDB/README.md) - -### 3. Obtain the fine-tuned checkpoint files or pb model - -Obtain the fine-tuned checkpoint files or pb model, more details see: [ckpt](./save/ckpt/README.md) or [models](./save/model/README.md) - -### 4. Build the program -Build the inference application, more details see: [xacl_fmk](./xacl_fmk/README.md) -Put xacl to the current dictory. - -### 5. Offline Inference - -**LSTM** -*** -* LSTM use lstm for model_name parameter, imdb for task_name -* LSTM in LSTM_for_ACL max_seq_len=250 -* LSTM in LSTM_for_ACL use static batch size, set predict_batch_size=24 as input parameter -*** -**Configure the env** -``` -export install_path=/usr/local/Ascend -export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH -export PYTHONPATH=${install_path}/atc/python/site-packages:${install_path}/atc/python/site-packages/auto_tune.egg/auto_tune:${install_path}/atc/python/site-packages/schedule_search.egg:$PYTHONPATH -export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH -export ASCEND_OPP_PATH=${install_path}/opp -``` - -**PreProcess** -* Change --data_dir to the real path of each downstream task dataset, and make sure the **predict** file under the path -* Change --output_dir to the same with --data_dir, and preprocess script will convert text to bin files under this path -* Keep the --model_name=lstm when do the LSTM tasks -* Change --task_name to the task you want to do, only support imdb tasks -* More datasets and tasks details like download link see README.md in each datasets' path -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/IMDB \ - --output_dir=./data/IMDB \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=preprocess - -``` - -**Freeze pb model** -* Change --output_dir to the real path, and freeze script will convert checkpoint files to pb model 
file under this path -* Change --checkpoint_dir to the real path of checkpoint files, include 'checkpoint', 'ckpt.data', 'ckpt.index' and 'ckpt.meta' -* Rename --pb_model_file to the real pb model file name -* Change --predict_batch_size to the real batch size, only 24 has been tested -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/IMDB \ - --output_dir=./save/model \ - --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \ - --checkpoint_dir=./save/ckpt/lstm_imdb \ - --predict_batch_size=24 \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=freeze - -``` - -**Convert pb to om** -* Rename --om_model_file to the real om model file name -* Change the --soc_version, --in_nodes, --out_nodes according to the actual situation -* Add additional atc parameters if you need, e.g., --precision_mode -* Change --predict_batch_size to the real batch size, currently only support static batch size -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/IMDB \ - --output_dir=./save/model \ - --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \ - --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \ - --predict_batch_size=24 \ - --soc_version="Ascend310" \ - --in_nodes="\"input_ids:24,250\"" \ - --out_nodes="\"logits:0\"" \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=atc - -``` - -**Run the inference** -* Change --output_dir to the real path and script will save the output bin file under this path -* Build the inference application and put it under current path, more details see: [xacl_fmk](./xacl_fmk/README.md) -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/IMDB \ - --output_dir=./save/output \ - --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \ - --predict_batch_size=24 \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=npu - -``` - -**PostProcess** -* Change 
--output_dir to the real path and script will save the precision result file under this path -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/IMDB \ - --output_dir=./save/output \ - --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \ - --predict_batch_size=24 \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=postprocess - -``` - -## Other Usages -**Convert pb to pbtxt** -* Change --output_dir to the real path, and convert script will convert pb model file to pbtxt model file under this path -* Rename --pb_model_file to the real pb model file name -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --output_dir=./save/model \ - --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=pbtxt - -``` - -**Run the inference by pb model** -* Change the --in_nodes, --out_nodes according to the actual situation -* Keep other parameters the same as the previous step -```Bash -python3 xnlp_fmk.py \ - --data_dir=./data/GAD \ - --output_dir=./save/output \ - --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \ - --predict_batch_size=24 \ - --in_nodes="\"input_ids:24,250\"" \ - --out_nodes="\"logits:0\"" \ - --model_name=lstm \ - --task_name=imdb \ - --action_type=cpu - -``` - -## Reference - -[1] https://arxiv.org/abs/1810.04805 - -[2] https://github.com/google-research/bert - -[3] https://github.com/kyzhouhzau/BERT-NER - -[4] https://github.com/zjy-ucas/ChineseNER - -[5] https://github.com/hanxiao/bert-as-service - -[6] https://github.com/macanv/BERT-BiLSTM-CRF-NER - -[7] https://github.com/tensorflow/tensor2tensor - -[8] https://github.com/google-research/albert - -[9] https://github.com/brightmart/albert_zh - -[10] https://github.com/HqWei/Sentiment-Analysis - -[11] https://gitee.com/wang-bain/xacl_fmk - -[12] https://github.com/brightmart/roberta_zh - -[13] https://github.com/dmis-lab/biobert - -[14] 
https://github.com/Songweiping/GRU4Rec_TensorFlow
-
-#
+中文|[English](README_EN.md)
+#
+
+# LSTM TensorFlow离线推理
+
+***
+此链接提供LSTM TensorFlow模型在NPU上离线推理的脚本和方法
+
+* [x] LSTM 推理, 基于 [Sentiment Analysis with Word Embedding](https://github.com/HqWei/Sentiment-Analysis)
+
+***
+
+## 注意
+**此案例仅为您学习Ascend软件栈提供参考,不用于商业目的。**
+
+在开始之前,请注意以下适配条件。如果不匹配,可能导致运行失败。
+
+| Conditions | Need |
+| --- | --- |
+| CANN版本 | >=5.0.3 |
+| 芯片平台| Ascend310/Ascend310P3 |
+| 第三方依赖| 请参考 'requirements.txt' |
+
+## 快速指南
+
+### 1. 拷贝代码
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL
+```
+
+### 2. 下载数据集和预处理
+
+请自行下载测试数据集, 详情见: [IMDB](./data/IMDB/README.md)
+
+### 3. 获取checkpoint文件或pb模型
+
+获取checkpoint文件或pb模型, 详情见: [ckpt](./save/ckpt/README.md) 或 [models](./save/model/README.md)
+
+### 4. 编译程序
+编译推理程序, 详情见: [xacl_fmk](./xacl_fmk/README.md)
+将xacl放到当前文件夹
+
+### 5. 离线推理
+
+**LSTM**
+***
+* LSTM 将lstm用于model_name参数,imdb用于task_name参数
+* LSTM 在 LSTM_for_ACL 中: max_seq_len=250
+* LSTM 在 LSTM_for_ACL 中:使用静态batch,将predict_batch_size=24设置为输入参数
+***
+**环境变量设置**
+
+ 请参考[说明](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719),设置环境变量
+
+**预处理**
+* --data_dir:每个任务数据集的实际路径, 并且确保 **predict** 文件在当前路径下
+* --output_dir 的传参与--data_dir相同, 预处理脚本会将文本转换为该路径下的bin文件
+* --model_name:当进行LSTM任务,参数为 lstm
+* --task_name:任务名, 仅支持imdb任务
+* 更多数据集和任务的详细信息(如下载链接),请参阅每个数据集路径下的README.md
+```Bash
+python3 xnlp_fmk.py \
+    --data_dir=./data/IMDB \
+    --output_dir=./data/IMDB \
+    --model_name=lstm \
+    --task_name=imdb \
+    --action_type=preprocess
+
+```
+
+**冻结Pb模型**
+* --output_dir:在此路径下,冻结脚本会把checkpoint文件转成Pb模型
+* --checkpoint_dir:checkpoint文件, 包括 'checkpoint', 'ckpt.data', 'ckpt.index' 和 'ckpt.meta'
+* --pb_model_file:pb模型文件名
+* 
--predict_batch_size:设置为实际的batch size,目前仅测试过24
+* 保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --checkpoint_dir=./save/ckpt/lstm_imdb \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=freeze
+
+```
+
+**Pb模型转换为om模型**
+* --om_model_file:om模型文件名
+* --soc_version、--in_nodes、--out_nodes:根据实际情况传参
+* 如有需要,可添加额外的atc参数,例如:--precision_mode
+* --predict_batch_size:实际batch size, 当前仅支持静态batch
+* 保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --soc_version="Ascend310" \
+ --in_nodes="\"input_ids:24,250\"" \
+ --out_nodes="\"logits:0\"" \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=atc
+
+```
+
+**运行推理**
+* --output_dir:脚本将在该路径下保存输出bin文件
+* 构建推理应用程序并将其置于当前路径下,详情见: [xacl_fmk](./xacl_fmk/README.md)
+* 保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=npu
+
+```
+
+**后处理**
+* --output_dir:脚本将在该路径下保存精度结果文件
+* 保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=postprocess
+
+```
+
+## 其他用途
+**将pb模型转换为pbtxt**
+* --output_dir:在此路径下,脚本会将pb模型转为pbtxt模型文件
+* --pb_model_file:pb模型文件名
+* 保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=pbtxt
+
+```
+
+**通过pb模型运行推理**
+* --in_nodes, --out_nodes:根据实际情况传参
+* 
保持其他参数与上一步相同
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --predict_batch_size=24 \
+ --in_nodes="\"input_ids:24,250\"" \
+ --out_nodes="\"logits:0\"" \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=cpu
+
+```
+
+## Reference
+
+[1] https://arxiv.org/abs/1810.04805
+
+[2] https://github.com/google-research/bert
+
+[3] https://github.com/kyzhouhzau/BERT-NER
+
+[4] https://github.com/zjy-ucas/ChineseNER
+
+[5] https://github.com/hanxiao/bert-as-service
+
+[6] https://github.com/macanv/BERT-BiLSTM-CRF-NER
+
+[7] https://github.com/tensorflow/tensor2tensor
+
+[8] https://github.com/google-research/albert
+
+[9] https://github.com/brightmart/albert_zh
+
+[10] https://github.com/HqWei/Sentiment-Analysis
+
+[11] https://gitee.com/wang-bain/xacl_fmk
+
+[12] https://github.com/brightmart/roberta_zh
+
+[13] https://github.com/dmis-lab/biobert
+
+[14] https://github.com/Songweiping/GRU4Rec_TensorFlow
+
+#
diff --git a/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README_EN.md b/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8c7ad43a6592929efeb5b366c4c7f93f1f6165b
--- /dev/null
+++ b/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL/README_EN.md
@@ -0,0 +1,207 @@
+English|[中文](README.md)
+#
+
+# LSTM Inference for TensorFlow
+
+***
+This repository provides a script and recipe for offline inference with the LSTM model
+
+* [x] LSTM Inference, based on [Sentiment Analysis with Word Embedding](https://github.com/HqWei/Sentiment-Analysis)
+
+***
+
+## Notice
+**This sample only provides reference for you to learn the Ascend software stack and is not for commercial purposes.**
+
+Before starting, please pay attention to the following adaptation conditions. If they do not match, the inference may fail.
+
+| Conditions | Need |
+| --- | --- |
+| CANN Version | >=5.0.3 |
+| Chip Platform| Ascend310/Ascend310P3 |
+| 3rd Party Requirements| Please follow the 'requirements.txt' |
+
+## Quick Start Guide
+
+### 1. Clone the repository
+```shell
+git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
+cd Modelzoo-TensorFlow/ACL_TensorFlow/contrib/nlp/LSTM_for_ACL
+```
+
+### 2. Download and preprocess the dataset
+
+Download the dataset by yourself, more details see: [IMDB](./data/IMDB/README.md)
+
+### 3. Obtain the fine-tuned checkpoint files or pb model
+
+Obtain the fine-tuned checkpoint files or pb model, more details see: [ckpt](./save/ckpt/README.md) or [models](./save/model/README.md)
+
+### 4. Build the program
+Build the inference application, more details see: [xacl_fmk](./xacl_fmk/README.md)
+Put xacl in the current directory.
+
+### 5. Offline Inference
+
+**LSTM**
+***
+* LSTM: use lstm for the model_name parameter and imdb for task_name
+* LSTM in LSTM_for_ACL uses max_seq_len=250
+* LSTM in LSTM_for_ACL uses a static batch size; set predict_batch_size=24 as the input parameter
+***
+**Configure the env**
+
+ Please follow the [guide](https://gitee.com/ascend/ModelZoo-TensorFlow/wikis/02.%E7%A6%BB%E7%BA%BF%E6%8E%A8%E7%90%86%E6%A1%88%E4%BE%8B/Ascend%E5%B9%B3%E5%8F%B0%E6%8E%A8%E7%90%86%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E8%AE%BE%E7%BD%AE?sort_id=6458719) to set the environment variables
+
+**PreProcess**
+* Change --data_dir to the real path of each downstream task dataset, and make sure the **predict** file is under the path
+* Change --output_dir to the same path as --data_dir, and the preprocess script will convert text to bin files under this path
+* Keep --model_name=lstm when doing the LSTM tasks
+* Change --task_name to the task you want to do; only the imdb task is supported
+* For more dataset and task details, such as download links, see the README.md in each dataset's path
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./data/IMDB \
+ --model_name=lstm \
+ --task_name=imdb \
+ 
--action_type=preprocess
+
+```
+
+**Freeze pb model**
+* Change --output_dir to the real path, and the freeze script will convert the checkpoint files to a pb model file under this path
+* Change --checkpoint_dir to the real path of the checkpoint files, including 'checkpoint', 'ckpt.data', 'ckpt.index' and 'ckpt.meta'
+* Rename --pb_model_file to the real pb model file name
+* Change --predict_batch_size to the real batch size; only 24 has been tested
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --checkpoint_dir=./save/ckpt/lstm_imdb \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=freeze
+
+```
+
+**Convert pb to om**
+* Rename --om_model_file to the real om model file name
+* Change the --soc_version, --in_nodes, --out_nodes according to the actual situation
+* Add additional atc parameters if needed, e.g., --precision_mode
+* Change --predict_batch_size to the real batch size; currently only a static batch size is supported
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --soc_version="Ascend310" \
+ --in_nodes="\"input_ids:24,250\"" \
+ --out_nodes="\"logits:0\"" \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=atc
+
+```
+
+**Run the inference**
+* Change --output_dir to the real path and the script will save the output bin files under this path
+* Build the inference application and put it under the current path, more details see: [xacl_fmk](./xacl_fmk/README.md)
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ 
--om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=npu
+
+```
+
+**PostProcess**
+* Change --output_dir to the real path and the script will save the precision result file under this path
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ --om_model_file=./save/model/LSTM_IMDB_BatchSize_24.om \
+ --predict_batch_size=24 \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=postprocess
+
+```
+
+## Other Usages
+**Convert pb to pbtxt**
+* Change --output_dir to the real path, and the convert script will convert the pb model file to a pbtxt model file under this path
+* Rename --pb_model_file to the real pb model file name
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --output_dir=./save/model \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=pbtxt
+
+```
+
+**Run the inference by pb model**
+* Change the --in_nodes, --out_nodes according to the actual situation
+* Keep other parameters the same as the previous step
+```Bash
+python3 xnlp_fmk.py \
+ --data_dir=./data/IMDB \
+ --output_dir=./save/output \
+ --pb_model_file=./save/model/LSTM_IMDB_BatchSize_24.pb \
+ --predict_batch_size=24 \
+ --in_nodes="\"input_ids:24,250\"" \
+ --out_nodes="\"logits:0\"" \
+ --model_name=lstm \
+ --task_name=imdb \
+ --action_type=cpu
+
+```
+
+## Reference
+
+[1] https://arxiv.org/abs/1810.04805
+
+[2] https://github.com/google-research/bert
+
+[3] https://github.com/kyzhouhzau/BERT-NER
+
+[4] https://github.com/zjy-ucas/ChineseNER
+
+[5] https://github.com/hanxiao/bert-as-service
+
+[6] https://github.com/macanv/BERT-BiLSTM-CRF-NER
+
+[7] https://github.com/tensorflow/tensor2tensor
+
+[8] https://github.com/google-research/albert
+
+[9] https://github.com/brightmart/albert_zh
+
+[10] 
https://github.com/HqWei/Sentiment-Analysis + +[11] https://gitee.com/wang-bain/xacl_fmk + +[12] https://github.com/brightmart/roberta_zh + +[13] https://github.com/dmis-lab/biobert + +[14] https://github.com/Songweiping/GRU4Rec_TensorFlow + +#
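Both READMEs above require a static input shape for the om model (for LSTM: `predict_batch_size=24`, `max_seq_len=250`, i.e. `input_ids:24,250`), and the KGAT README likewise notes that the final partial batch of test data is dropped. The sketch below is a hypothetical illustration of that static-batch rule only — it is not the repository's actual `xnlp_fmk.py`/`data_preprocess.py` code; the function names and pad id are assumptions for the example.

```python
# Hypothetical sketch of static-batch preprocessing (NOT the repo's script):
# pad/truncate token-id lists to a fixed sequence length, then keep only
# full batches, dropping the trailing partial batch as the READMEs describe.

def to_fixed_len(ids, max_seq_len=250, pad_id=0):
    """Truncate or right-pad a token-id list to exactly max_seq_len."""
    ids = ids[:max_seq_len]
    return ids + [pad_id] * (max_seq_len - len(ids))

def make_static_batches(samples, batch_size=24, max_seq_len=250):
    """Group samples into full batches only; a trailing partial batch is dropped."""
    rows = [to_fixed_len(s, max_seq_len) for s in samples]
    n_full = len(rows) // batch_size          # e.g. 50 samples -> 2 full batches
    return [rows[i * batch_size:(i + 1) * batch_size] for i in range(n_full)]

if __name__ == "__main__":
    data = [[1, 2, 3]] * 50                   # 50 dummy samples
    batches = make_static_batches(data)
    print(len(batches), len(batches[0]), len(batches[0][0]))  # prints: 2 24 250
```

Each batch then matches the om model's fixed `--in_nodes` shape exactly, which is why 2 of the 50 samples are discarded here rather than sent as a short batch.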