diff --git a/lite/tutorials/source_en/use/benchmark_tool.md b/lite/tutorials/source_en/use/benchmark_tool.md
index 48151bda5ed26753441a911ff0832bdb1fe7c239..ff7f54fb89501c882e6add51b9bdef08be03b6ca 100644
--- a/lite/tutorials/source_en/use/benchmark_tool.md
+++ b/lite/tutorials/source_en/use/benchmark_tool.md
@@ -47,6 +47,7 @@ The following describes the parameters in detail.
 | `--modelPath=` | Mandatory | Specifies the file path of the MindSpore Lite model for benchmark testing. | String | Null | - |
 | `--accuracyThreshold=` | Optional | Specifies the accuracy threshold. | Float | 0.5 | - |
 | `--calibDataPath=` | Optional | Specifies the file path of the benchmark data. The benchmark data, as the comparison output of the tested model, is output from the forward inference of the tested model under other deep learning frameworks using the same input. | String | Null | - |
+| `--calibDataType=` | Optional | Specifies the data type of the benchmark data. | String | FLOAT | FLOAT or INT8 |
 | `--cpuBindMode=` | Optional | Specifies the type of the CPU core bound to the model inference program. | Integer | 1 | −1: medium core<br>1: large core<br>0: not bound |
 | `--device=` | Optional | Specifies the type of the device on which the model inference program runs. | String | CPU | CPU or GPU |
 | `--help` | Optional | Displays the help information about the `benchmark` command. | - | - | - |
@@ -92,3 +93,10 @@ Mean bias of node age_out : 0%
 Mean bias of all nodes: 0%
 =======================================================
 ```
+
+When the original model's input or output data type is uint8, the data needs to be reduced by 128 and converted to the int8 type before it can be used as benchmark data to verify accuracy. When the output data type is INT8, you also need to specify `calibDataType` as INT8 in the parameters.
+
+```bash
+./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
+```
+
diff --git a/lite/tutorials/source_zh_cn/use/benchmark_tool.md b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
index 52ac1654a38de1495ebc8181539213255b5e7e43..d984fb2c2181dcd3726e1f3e61e083ee6ac100e4 100644
--- a/lite/tutorials/source_zh_cn/use/benchmark_tool.md
+++ b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
@@ -47,6 +47,7 @@ Benchmark工具是一款可以对MindSpore Lite模型进行基准测试的工具
 | `--modelPath=` | 必选 | 指定需要进行基准测试的MindSpore Lite模型文件路径。 | String | null | - |
 | `--accuracyThreshold=` | 可选 | 指定准确度阈值。 | Float | 0.5 | - |
 | `--calibDataPath=` | 可选 | 指定标杆数据的文件路径。标杆数据作为该测试模型的对比输出,是该测试模型使用相同输入并由其它深度学习框架前向推理而来。 | String | null | - |
+| `--calibDataType=` | 可选 | 指定标杆数据类型。 | String | FLOAT | FLOAT、INT8 |
 | `--cpuBindMode=` | 可选 | 指定模型推理程序运行时绑定的CPU核类型。 | Integer | 1 | -1:表示中核<br>1:表示大核<br>0:表示不绑定 |
 | `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、GPU |
 | `--help` | 可选 | 显示`benchmark`命令的帮助信息。 | - | - | - |
@@ -92,3 +93,10 @@ Mean bias of node age_out : 0%
 Mean bias of all nodes: 0%
 =======================================================
 ```
+
+原模型输入输出数据类型为uint8时,需要将其减128转换为int8类型后才能作为标杆数据验证精度,输出数据类型为INT8时需要在参数中指定calibDataType为INT8。
+
+```bash
+./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
+```
+
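
As a worked illustration of the "reduce by 128 and convert to int8" step described in the added paragraphs above, the following is a minimal sketch, not taken from the MindSpore Lite documentation. It assumes the tensor values are stored as raw uint8 bytes; the file names `uint8_tensor.bin` and `int8_tensor.bin` are hypothetical placeholders, and the actual file layout expected by `--calibDataPath` may differ.

```python
# Minimal sketch (assumption: tensor values are stored as raw uint8 bytes).
# Shifts every value down by 128 so that 0..255 maps to -128..127 (int8),
# matching the uint8-to-int8 conversion described above.
import numpy as np

def uint8_to_int8(src_path, dst_path):
    """Read raw uint8 bytes, subtract 128, and write them back as raw int8 bytes."""
    data = np.fromfile(src_path, dtype=np.uint8)
    # Widen to int16 first so the subtraction cannot wrap around in uint8.
    shifted = (data.astype(np.int16) - 128).astype(np.int8)
    shifted.tofile(dst_path)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    uint8_to_int8("uint8_tensor.bin", "int8_tensor.bin")
```

The converted data can then be passed to the benchmark tool together with `--calibDataType=INT8`, as in the example command shown in the patch above.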