diff --git a/docs/lite/docs/source_en/troubleshooting_guide.md b/docs/lite/docs/source_en/troubleshooting_guide.md
index 2a743a711af35aaa6766a1c1f64df42799171367..7d302bb60f20ac859a46ad9f4fd58cbf55670aa9 100644
--- a/docs/lite/docs/source_en/troubleshooting_guide.md
+++ b/docs/lite/docs/source_en/troubleshooting_guide.md
@@ -275,7 +275,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In
         Run MarkAccuracy error: -1
         ```
 
-    - If the accuracy of the entire network inference performed by MindSpore Lite is incorrect, you can use the [Dump function](https://mindspore.cn/lite/docs/en/r2.1/use/benchmark_tool.html#dump) of the benchmark tool to save the output of the operator layer and compare the output with the inference result of the original framework to further locate the operator with incorrect accuracy.
+    - If the accuracy of the entire network inference performed by MindSpore Lite is incorrect, you can use the [Dump function](https://mindspore.cn/lite/docs/en/r2.1/use/benchmark_tool.html#dump) of the benchmark tool to save the output of the operator layer and compare the output with the inference result of the original framework to further locate the operator with abnormal feature values.
     - For operators with accuracy issues, you can download the [MindSpore source code](https://gitee.com/mindspore/mindspore) to check the operator implementation and construct the corresponding single-operator network for debugging and fault locating. You can also [commit an issue](https://gitee.com/mindspore/mindspore/issues) in the MindSpore community to MindSpore Lite developers for troubleshooting.
 
 2. What do I do if the FP32 inference result is correct but the FP16 inference result contains the NaN or Inf value?
diff --git a/docs/lite/docs/source_en/use/benchmark_tool.md b/docs/lite/docs/source_en/use/benchmark_tool.md
index 89f0445a323a47603ba728025abc041095d84961..7a5e5f4a3c92b2762235a1009730cc611dc4b6ff 100644
--- a/docs/lite/docs/source_en/use/benchmark_tool.md
+++ b/docs/lite/docs/source_en/use/benchmark_tool.md
@@ -198,7 +198,7 @@ When `perfEvent` is set as `CACHE`, the columns will be `cache ref(k)`/`cache re
 
 ### Dump
 
-Benchmark tool provides Dump function (currently only supports `CPU` and mobile `GPU` operators), which saves the input and output data of the operator in the model to a disk file. These files can be used to locate the problem of abnormal accuracy during the model inference process.
+Benchmark tool provides Dump function (currently only supports `CPU` and mobile `GPU` operators), which saves the input and output data of the operator in the model to a disk file. These files can be used to locate the problem of abnormal feature values during the model inference process.
 
 #### Dump Step
diff --git a/docs/lite/docs/source_zh_cn/troubleshooting_guide.md b/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
index ff701e933f977cdba499c71ffde6b8224371025c..0111ea896a72102a2e093070a70af3e3b76ae81a 100644
--- a/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
+++ b/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
@@ -275,7 +275,7 @@
         Run MarkAccuracy error: -1
         ```
 
-    - 若MindSpore Lite进行整网推理存在精度问题,可以通过benchmark工具的[Dump功能](https://mindspore.cn/lite/docs/zh-CN/r2.1/use/benchmark_tool.html#dump功能) 保存算子层输出,和原框架推理结果进行对比进一步定位出现精度异常的算子。
+    - 若MindSpore Lite进行整网推理存在精度问题,可以通过benchmark工具的[Dump功能](https://mindspore.cn/lite/docs/zh-CN/r2.1/use/benchmark_tool.html#dump功能) 保存算子层输出,和原框架推理结果进行对比进一步定位出现特征值检测异常的算子。
     - 针对存在精度问题的算子,可以下载[MindSpore源码](https://gitee.com/mindspore/mindspore) 检查算子实现并构造相应单算子网络进行调试与问题定位;也可以在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 给MindSpore Lite的开发人员处理。
 
 2. MindSpore Lite使用fp32推理结果正确,但是fp16推理结果出现NaN或者Inf值怎么办?
diff --git a/docs/lite/docs/source_zh_cn/use/benchmark_tool.md b/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
index f0a6c05b29784afd737cf25165300631d4064081..c0936ba33326548740f9b2f181d480392ee5a9d9 100644
--- a/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
@@ -198,7 +198,7 @@ Model = model.ms, NumThreads = 1, MinRunTime = 0.104000 ms, MaxRunTime = 0.17900
 
 ### Dump功能
 
-Benchmark工具提供Dump功能(目前仅支持`CPU`和移动端`GPU`算子),将模型中的算子的输入输出数据保存到磁盘文件中,可用于定位模型推理过程中精度异常的问题。
+Benchmark工具提供Dump功能(目前仅支持`CPU`和移动端`GPU`算子),将模型中的算子的输入输出数据保存到磁盘文件中,可用于定位模型推理过程中特征值检测异常的问题。
 
 #### Dump操作步骤
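
Both troubleshooting guides touched above point readers at the benchmark Dump function without showing an invocation. Below is a minimal sketch of how that workflow is typically driven, assuming the JSON configuration format and the `MINDSPORE_DUMP_CONFIG` environment variable described in the Dump section of `benchmark_tool.md`; all paths, field values, and the model file name are illustrative placeholders rather than values taken from this diff.

```bash
# Sketch: dump per-operator tensors with the benchmark tool so they can be
# compared layer by layer against the original framework's outputs.
# Field names mirror the Dump section of benchmark_tool.md; values and paths
# are placeholders. dump_mode 0 is assumed to dump every operator.
cat > /tmp/dump_config.json << 'EOF'
{
    "common_dump_settings": {
        "dump_mode": 0,
        "path": "/tmp/lite_dump",
        "net_name": "model",
        "input_output": 0,
        "kernels": []
    }
}
EOF

# The benchmark tool reads the dump configuration from this variable.
export MINDSPORE_DUMP_CONFIG=/tmp/dump_config.json

# Run inference once; each operator's input/output tensors are written
# under the configured "path" for offline comparison.
./benchmark --modelFile=/path/to/model.ms
```

The first operator whose dumped output diverges from the original framework's result is the one to rebuild as a single-operator network for further debugging, as both troubleshooting guides suggest.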