diff --git a/docs/mindspore/source_en/api_python/env_var_list.rst b/docs/mindspore/source_en/api_python/env_var_list.rst
index 8b8de69a2207f451a5736a7b329bf0f77d12b53f..670c736602d4472223404218e850186080119a73
--- a/docs/mindspore/source_en/api_python/env_var_list.rst
+++ b/docs/mindspore/source_en/api_python/env_var_list.rst
@@ -323,6 +323,8 @@ Graph Compilation and Execution
 
       enable_debug_mode: Insert synchronization points before and after the graph kernel mod launch, and print debugging information if the launch fails. This is supported only for the GPU backend. Default value: `False`.
 
+      exact_precision_mode: Enable strict precision mode. Default value: `True`. Some optimizations for fused operators are based on the associative law, the distributive law, algebraic simplification of nonlinear functions, etc. However, according to the IEEE-754 standard, these operations may lead to precision loss in floating-point arithmetic. If you want to enable these optimizations, you can manually set this option to `False`.
+
       path: use specified json file. When this option is set, the above options are ignored.
 
     - Refer to the `Custom Fusion `_
diff --git a/docs/mindspore/source_zh_cn/api_python/env_var_list.rst b/docs/mindspore/source_zh_cn/api_python/env_var_list.rst
index 1d9b616a1c888d5f39f10f61a6525fd95dffd5c5..e5cc1016cedced6c0a72b3c239d87799504077e6
--- a/docs/mindspore/source_zh_cn/api_python/env_var_list.rst
+++ b/docs/mindspore/source_zh_cn/api_python/env_var_list.rst
@@ -323,6 +323,8 @@
 
       enable_debug_mode:在图算kernelmod launch前后插同步,并在launch失败时打印调试信息,仅支持GPU后端。默认值: `False` 。
 
+      exact_precision_mode:使能严格精度模式。默认值 `True`。融合算子的一些优化基于结合律、分配律、非线性函数化简等。但根据IEEE-754标准,这些操作对浮点数运算会有精度损失。如果你想使能这些优化,可以手动将此选项设置为 `False`。
+
       path:指定读取json配置。当设置该选项时,忽略以上选项。
 
     - 详细说明参考 `自定义融合 `_
diff --git a/tutorials/source_en/custom_program/fusion_pass.md b/tutorials/source_en/custom_program/fusion_pass.md
index b2a670c47530845d344da3d5c8b5400e534b5d4c..b293001c4d71a4ddff3cd95ca184c0fbd9c52140
--- a/tutorials/source_en/custom_program/fusion_pass.md
+++ b/tutorials/source_en/custom_program/fusion_pass.md
@@ -48,6 +48,7 @@ The environment variable `MS_DEV_GRAPH_KERNEL_FLAGS` provides controlling the sw
 - **enable_cluster_ops_only**: Allow only the specified operators to participate in the fusion set. When this option is set, the above two options are ignored.
 - **disable_fusion_pattern**: Prevent the specified fusion pattern from participating in the fusion set. The list of default fusion pattern can be found in Appendix 4.
 - **enable_fusion_pattern_only**: Allow only the specified fusion pattern to participate in the fusion set. When this option is set, the above option is ignored.
+- **exact_precision_mode**: Enable strict precision mode. Default value: `True`. Some optimizations for fused operators are based on the associative law, the distributive law, algebraic simplification of nonlinear functions, etc. However, according to the IEEE-754 standard, these operations may lead to precision loss in floating-point arithmetic. If you want to enable these optimizations, you can manually set this option to `False`.
 
 ### Enabling or Disabling Automatic/Manual Fusion Pass
 
diff --git a/tutorials/source_zh_cn/custom_program/fusion_pass.md b/tutorials/source_zh_cn/custom_program/fusion_pass.md
index 3769428ee04faa80c05a9ae896b7b513c03a164d..f6cb496325785b30f1ff157b0a4489118cf2c468 100644
--- a/tutorials/source_zh_cn/custom_program/fusion_pass.md
+++ b/tutorials/source_zh_cn/custom_program/fusion_pass.md
@@ -47,6 +47,7 @@
 - **enable_cluster_ops_only**:仅允许对应算子加入参与融合的算子集合。当设置该选项时,忽略以上两个选项。
 - **disable_fusion_pattern**:禁止对应融合pattern参与融合。融合pattern名单见附录4。
 - **enable_fusion_pattern_only**:仅允许对应融合pattern参与融合。当设置该选项时,忽略以上选项。
+- **exact_precision_mode**:使能严格精度模式。默认值 `True`。融合算子的一些优化基于结合律、分配律、非线性函数化简等。但根据IEEE-754标准,这些操作对浮点数运算会有精度损失。如果你想使能这些优化,可以手动将此选项设置为 `False`。
 
 ### 指定自动/手动融合pass是否使能
 
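A minimal usage sketch of the option added by this change follows. It assumes that `exact_precision_mode`, like the other graph kernel options documented above, is passed through the `MS_DEV_GRAPH_KERNEL_FLAGS` environment variable in `--key=value` form and that graph kernel fusion is otherwise enabled; the exact flag spelling should be confirmed against the merged documentation.

```python
import os

# Hedged sketch: assume the new option follows the same "--key=value" syntax as the
# other MS_DEV_GRAPH_KERNEL_FLAGS options described above. Setting it to False permits
# the associativity/distributivity-based fusion optimizations at the cost of strict
# IEEE-754 reproducibility. Set the variable before MindSpore is imported so the
# graph kernel passes can pick it up.
os.environ["MS_DEV_GRAPH_KERNEL_FLAGS"] = "--exact_precision_mode=False"

import mindspore as ms

# Graph kernel fusion itself still has to be enabled (how depends on backend and
# MindSpore version); otherwise the precision-mode flag has nothing to act on.
ms.set_context(mode=ms.GRAPH_MODE)
```

Equivalently, the variable can be exported in the shell before launching the training script.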