diff --git a/docs/lite/docs/source_en/mindir/benchmark_tool.md b/docs/lite/docs/source_en/mindir/benchmark_tool.md
index 0ab57251143fa84f2ca4757fbdc8d3db3a1e82ed..d366fb5e62777a233f3063b186c8b383d6b2ab74 100644
--- a/docs/lite/docs/source_en/mindir/benchmark_tool.md
+++ b/docs/lite/docs/source_en/mindir/benchmark_tool.md
@@ -12,7 +12,7 @@ Before performing inference after converting the model, you can use the Benchmar
 
 To use the Benchmark tool, you need to do the following environment preparation work.
 
-- Compile: The Benchmark tool code is in the `mindspore-lite/tools/benchmark` directory of the MindSpore source code. Refer to the build documentation for [Environment requirements](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html#environment-requirements) and [Compilation Examples](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html#compilation-examples) in the build documentation to perform the compilation.
+- Compile: The Benchmark tool code is in the `mindspore-lite/tools/benchmark` directory of the MindSpore Lite source code. Refer to [Environment requirements](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html#environment-requirements) and [Compilation Examples](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html#compilation-examples) in the build documentation to perform the compilation.
 
 - Run: Refer to [compilation output](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html#directory-structure) in the build documentation to get the `benchmark` tool from the compiled package.
 
@@ -54,7 +54,7 @@ Detailed parameter descriptions are provided below.
 | `--accuracyThreshold=` | Optional | Specify the accuracy threshold. | Float | 0.5 | - |
 | `--benchmarkDataFile=` | Optional | Specify the file path to the benchmark data. The benchmark data is used as the comparison output for this test model, which is derived from the same input and forward inference from other deep learning frameworks. | String | null | - |
 | `--benchmarkDataType=` | Optional | Specify the benchmark data type. | String | FLOAT | FLOAT, INT32, INT8, UINT8 |
-| `--device=` | Optional | Specify the type of device on which the model inference program runs. | String | CPU | CPU, GPU, NPU, Ascend |
+| `--device=` | Optional | Specify the type of device on which the model inference program runs. | String | CPU | CPU, NPU, Ascend |
 | `--help` | Optional | Display help information for the `benchmark` command. | - | - | - |
 | `--inDataFile=` | Optional | Specify the file path to the test model input data. If not set, random input is used. | String | null | - |
 | `--loopCount=` | Optional | Specify the number of forward inference runs for the test model when Benchmark tool performs benchmarking, with a positive integer value. | Integer | 10 | - |
@@ -115,5 +115,5 @@ If you need to specify the dimension of the input data (e.g. input dimension is
 If the model is encryption model, inference is performed after both `decryptKey` and `cryptoLibPath` are configured to decrypt the model.
 For example:
 ```bash
-./benchmark --modelFile=/path/to/encry_model.mindir --decryptKey=30313233343536373839414243444546 --cryptoLibPath=/root/anaconda3/bin/openssl
+./benchmark --modelFile=/path/to/encry_model.mindir --decryptKey=********************* --cryptoLibPath=/root/anaconda3/bin/openssl
 ```
\ No newline at end of file
diff --git a/docs/lite/docs/source_zh_cn/mindir/benchmark_tool.md b/docs/lite/docs/source_zh_cn/mindir/benchmark_tool.md
index 5b993712b140f594aa0cf3b71420c0dc3f1fad1f..158891b8837e5a74f37d376f0fbd8f382e34e4d7 100644
--- a/docs/lite/docs/source_zh_cn/mindir/benchmark_tool.md
+++ b/docs/lite/docs/source_zh_cn/mindir/benchmark_tool.md
@@ -12,7 +12,7 @@
 
 使用Benchmark工具,需要进行如下环境准备工作。
 
-- 编译:Benchmark工具代码在MindSpore源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html#环境准备)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html#编译示例)执行编译。
+- 编译:Benchmark工具代码在MindSpore Lite源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html#环境准备)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html#编译示例)执行编译。
 
 - 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html#目录结构),从编译出来的包中获得`benchmark`工具。
 
@@ -54,7 +54,7 @@
 | `--accuracyThreshold=` | 可选 | 指定准确度阈值。 | Float | 0.5 | - |
 | `--benchmarkDataFile=` | 可选 | 指定标杆数据的文件路径。标杆数据作为该测试模型的对比输出,是该测试模型使用相同输入并由其他深度学习框架前向推理而来。 | String | null | - |
 | `--benchmarkDataType=` | 可选 | 指定标杆数据类型。 | String | FLOAT | FLOAT、INT32、INT8、UINT8 |
-| `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、GPU、NPU、Ascend |
+| `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、NPU、Ascend |
 | `--help` | 可选 | 显示`benchmark`命令的帮助信息。 | - | - | - |
 | `--inDataFile=` | 可选 | 指定测试模型输入数据的文件路径。如果未设置,则使用随机输入。 | String | null | - |
 | `--loopCount=` | 可选 | 指定Benchmark工具进行基准测试时,测试模型的前向推理运行次数,其值为正整数。 | Integer | 10 | - |
@@ -115,5 +115,5 @@ Mean bias of all nodes: 0%
 如果输入的模型是加密模型,需要同时配置`decryptKey`和`cryptoLibPath`对模型解密后进行推理,使用如下命令:
 
 ```bash
-./benchmark --modelFile=/path/to/encry_model.mindir --decryptKey=30313233343536373839414243444546 --cryptoLibPath=/root/anaconda3/bin/openssl
+./benchmark --modelFile=/path/to/encry_model.mindir --decryptKey=********************* --cryptoLibPath=/root/anaconda3/bin/openssl
 ```
\ No newline at end of file
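
As context for the parameter table patched above, here is a minimal sketch of an accuracy-benchmark invocation once the GPU option is dropped. The paths are placeholders and only flags listed in the table are used; adjust them to your actual model and benchmark data.

```bash
# Hypothetical accuracy run on an Ascend device: the model's forward-inference
# output is compared against benchmark data produced by another framework from
# the same input, using the default 0.5 accuracy threshold and 10 loop runs.
./benchmark \
  --modelFile=/path/to/model.mindir \
  --device=Ascend \
  --inDataFile=/path/to/input.bin \
  --benchmarkDataFile=/path/to/expected_output.out \
  --benchmarkDataType=FLOAT \
  --accuracyThreshold=0.5 \
  --loopCount=10
```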