diff --git a/docs/migration_guide/source_zh_cn/script_development.md b/docs/migration_guide/source_zh_cn/script_development.md
index 37f30aea4dc1527e8986bb361d8b258177bb8849..161dc0d11df21e653a14128ec9934e57c24bbf31 100644
--- a/docs/migration_guide/source_zh_cn/script_development.md
+++ b/docs/migration_guide/source_zh_cn/script_development.md
@@ -444,7 +444,7 @@
return out
```
- PyTorch和MindSpore在一些基础API的定义上比较相似,比如[mindspore.nn.SequentialCell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.SequentialCell.html?highlight=sequentialcell#mindspore.nn.SequentialCell)和[torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html?highlight=sequential#torch.nn.Sequential),另外,一些算子API可能不尽相同,此处列举一些常见的API对照,更多信息可以参考MindSpore官网的[算子列表](https://www.mindspore.cn/doc/note/zh-CN/master/index.html#operator_api)。
+ PyTorch和MindSpore在一些基础API的定义上比较相似,比如[mindspore.nn.SequentialCell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.SequentialCell.html#mindspore.nn.SequentialCell)和[torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential),另外,一些算子API可能不尽相同,此处列举一些常见的API对照,更多信息可以参考MindSpore官网的[算子列表](https://www.mindspore.cn/doc/note/zh-CN/master/index.html#operator_api)。
| PyTorch | MindSpore |
| :-------------------------------: | :------------------------------------------------: |
diff --git a/docs/note/source_en/env_var_list.md b/docs/note/source_en/env_var_list.md
index d8d93cfb9c70613dcd0837c79d1484a23f37eb2c..24f6c54c61580b62fc641865c36be957296f8dcf 100644
--- a/docs/note/source_en/env_var_list.md
+++ b/docs/note/source_en/env_var_list.md
@@ -21,7 +21,7 @@ MindSpore environment variables are as follows:
|RANK_TABLE_FILE|MindSpore|Specifies the file to which a path points, including `DEVICE_IP`s corresponding to multiple Ascend AI Processor `DEVICE_ID`s. |String|File path, which can be a relative path or an absolute path.|This variable is used together with RANK_SIZE. |Mandatory (when the Ascend AI Processor is used)|
|RANK_SIZE|MindSpore|Specifies the number of Ascend AI Processors to be called during deep learning. |Integer|The number of Ascend AI Processors to be called ranges from 1 to 8. | This variable is used together with RANK_TABLE_FILE |Mandatory (when the Ascend AI Processor is used) |
|RANK_ID|MindSpore|Specifies the logical ID of the Ascend AI Processor called during deep learning.|Integer|The value ranges from 0 to 7. When multiple servers are running concurrently, `DEVICE_ID`s in different servers may be the same. RANK_ID can be used to avoid this problem. (RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID) |None|Optional|
-|MS_SUBMODULE_LOG_v|MindSpore| For details about the function and usage, see [MS_SUBMODULE_LOG_v](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html?highlight=ms_submodule_log_v#log-related-environment-variables-and-configurations)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|None | Optional
+|MS_SUBMODULE_LOG_v|MindSpore| For details about the function and usage, see [MS_SUBMODULE_LOG_v](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModule: COMMON, MD, DEBUG, DEVICE, IR...|None | Optional
|OPTION_PROTO_LIB_PATH|MindSpore|Specifies the RPOTO dependent library path. |String|File path, which can be a relative path or an absolute path.|None|Optional|
|GE_USE_STATIC_MEMORY|GraphEngine| When a network model has too many layers, the intermediate computing data of a feature map may exceed 25 GB, for example, on the BERT24 network. In the multi-device scenario, to ensure efficient memory collaboration, set this variable to 1, indicating that static memory allocation mode is used. For other networks, dynamic memory allocation mode is used by default.
In static memory allocation mode, the default allocation is 31 GB, which is determined by the sum of graph_memory_max_size and variable_memory_max_size. In dynamic memory allocation mode, the allocation is within the sum of graph_memory_max_size and variable_memory_max_size. |Integer|1: static memory allocation mode
0: dynamic memory allocation mode|None|Optional|
|DUMP_GE_GRAPH|GraphEngine|Outputs the graph description information of each phase in the entire process to a file. This environment variable controls contents of the dumped graph. |Integer|1: full dump
2: basic dump without data such as weight
3: simplified dump with only node relationships displayed|None|Optional|
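The RANK_ID formula quoted in the table above (RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID) can be checked with a short sketch; the `rank_id` helper and the 8-devices-per-server default are illustrative, not part of MindSpore:

```python
# Hypothetical helper illustrating the RANK_ID formula from the table:
#   RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID
def rank_id(server_id: int, device_id: int, device_num: int = 8) -> int:
    """Map a (server, device) pair to a globally unique logical rank."""
    return server_id * device_num + device_id

# Two servers with 8 devices each: DEVICE_ID 3 repeats across servers,
# but the derived RANK_ID does not, which is exactly the problem the
# table says RANK_ID avoids.
print(rank_id(0, 3))  # 3
print(rank_id(1, 3))  # 11
```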
diff --git a/docs/note/source_zh_cn/env_var_list.md b/docs/note/source_zh_cn/env_var_list.md
index cbbc400691ba2f9e4205ebb91310a9c6758bb603..e985a9789beee3d457895406beac296133b681f3 100644
--- a/docs/note/source_zh_cn/env_var_list.md
+++ b/docs/note/source_zh_cn/env_var_list.md
@@ -21,7 +21,7 @@
|RANK_TABLE_FILE|MindSpore|路径指向文件,包含指定多Ascend AI处理器环境中Ascend AI处理器的"device_id"对应的"device_ip"。|String|文件路径,支持相对路径与绝对路径|与RANK_SIZE配合使用|必选(使用Ascend AI处理器时)|
|RANK_SIZE|MindSpore|指定深度学习时调用Ascend AI处理器的数量|Integer|1~8,调用Ascend AI处理器的数量|与RANK_TABLE_FILE配合使用|必选(使用Ascend AI处理器时)|
|RANK_ID|MindSpore|指定深度学习时调用Ascend AI处理器的逻辑ID|Integer|0~7,多机并行时不同server中DEVICE_ID会有重复,使用RANK_ID可以避免这个问题(多机并行时 RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID|无|可选|
-|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选
+|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModule: COMMON, MD, DEBUG, DEVICE, IR...|无|可选
|OPTION_PROTO_LIB_PATH|MindSpore|RPOTO依赖库库路径|String|文件路径,支持相对路径与绝对路径|无|可选|
|GE_USE_STATIC_MEMORY|GraphEngine|当网络模型层数过大时,特征图中间计算数据可能超过25G,例如BERT24网络。多卡场景下为保证通信内存高效协同,需要配置为1,表示使用内存静态分配方式,其他网络暂时无需配置,默认使用内存动态分配方式。
静态内存默认配置为31G,如需要调整可以通过网络运行参数graph_memory_max_size和variable_memory_max_size的总和指定;动态内存是动态申请,最大不会超过graph_memory_max_size和variable_memory_max_size的总和。|Integer|1:使用内存静态分配方式
0:使用内存动态分配方式|无|可选|
|DUMP_GE_GRAPH|GraphEngine|把整个流程中各个阶段的图描述信息打印到文件中,此环境变量控制dump图的内容多少|Integer|1:全量dump
2:不含有权重等数据的基本版dump
3:只显示节点关系的精简版dump|无|可选|
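As a hedged sketch of the `MS_SUBMODULE_LOG_v` value format described in both tables (a `{SubModule:LogLevel,...}` string), the variable can be set from Python before MindSpore is imported; the module names chosen here are examples from the table, and this snippet only demonstrates the string format, not the logging effect:

```python
import os

# Per the table: LogLevel 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR.
# Set WARNING for the COMMON submodule and INFO for MD (data module).
# This must happen before MindSpore is first imported in the process,
# since the library reads the variable at import time.
os.environ["MS_SUBMODULE_LOG_v"] = "{COMMON:2,MD:1}"

print(os.environ["MS_SUBMODULE_LOG_v"])  # {COMMON:2,MD:1}
```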
diff --git a/docs/programming_guide/source_en/numpy.md b/docs/programming_guide/source_en/numpy.md
index dfeed88c32e76064085c1f1ee9cfece3e9e2d9d4..37615b559b09d02f0cdb7e9b20190847bcf8c614 100644
--- a/docs/programming_guide/source_en/numpy.md
+++ b/docs/programming_guide/source_en/numpy.md
@@ -362,7 +362,7 @@ from mindspore import ms_function
forward_compiled = ms_function(forward)
```
-> Currently, static graph cannot run in command line mode and not all python types can be passed into functions decorated with `ms_function`. For details about the static graph syntax support, see [Syntax Support](https://www.mindspore.cn/doc/note/en/master/static_graph_syntax_support.html). For details about how to use `ms_function`, see [API: ms_function](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.html?highlight=ms_function#mindspore.ms_function).
+> Currently, static graphs cannot run in command line mode, and not all Python types can be passed into functions decorated with `ms_function`. For details about the static graph syntax support, see [Syntax Support](https://www.mindspore.cn/doc/note/en/master/static_graph_syntax_support.html). For details about how to use `ms_function`, see [API: ms_function](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.html#mindspore.ms_function).
### Use GradOperation to compute deratives
diff --git a/docs/programming_guide/source_zh_cn/numpy.md b/docs/programming_guide/source_zh_cn/numpy.md
index 51b2193677f1bc2b222bc83f4647e22f74f5fd36..a3caf3ef0015ecede8a1a45df54b70e15a2c6424 100644
--- a/docs/programming_guide/source_zh_cn/numpy.md
+++ b/docs/programming_guide/source_zh_cn/numpy.md
@@ -366,7 +366,7 @@ from mindspore import ms_function
forward_compiled = ms_function(forward)
```
-> 目前静态图不支持在命令行模式中运行,并且有部分语法限制。`ms_function`的更多信息可参考[API: ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html?highlight=ms_function#mindspore.ms_function)。
+> 目前静态图不支持在命令行模式中运行,并且有部分语法限制。`ms_function`的更多信息可参考[API: ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.ms_function)。
### GradOperation使用示例
diff --git a/tutorials/training/source_en/advanced_use/summary_record.md b/tutorials/training/source_en/advanced_use/summary_record.md
index 36dfa12db2958a3896e63ad47b12a06df7ccf1b5..fec6bdc328d3eff8e30876486063435bb2137a8c 100644
--- a/tutorials/training/source_en/advanced_use/summary_record.md
+++ b/tutorials/training/source_en/advanced_use/summary_record.md
@@ -139,7 +139,7 @@ if __name__ == '__main__':
```
> 1. When using summary, it is recommended that you set `dataset_sink_mode` argument of `model.train` to `False`. Please see notices for more information.
-> 2. When using summary, you need to run the code in `if __name__ == "__main__"`. For more detail, refer to [Python tutorial](https://docs.python.org/3.7/library/multiprocessing.html?highlight=multiprocess#multiprocessing-programming)
+> 2. When using summary, you need to run the code in `if __name__ == "__main__"`. For more details, refer to the [Python tutorial](https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing-programming).
### Method two: Custom collection of network data with summary operators and SummaryCollector
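A minimal sketch of why the `if __name__ == "__main__"` guard in the note above matters: `multiprocessing` may re-import the main module inside each worker process, so any code that spawns processes must sit behind the guard or it would recursively start new workers. The `square` and `run_pool` names are illustrative, not part of any MindSpore API:

```python
import multiprocessing as mp

def square(x):
    return x * x

def run_pool():
    # The pool spawns worker processes. With import-based start methods
    # (e.g. "spawn", the default on Windows), each worker re-imports the
    # main module, so this call must only be reached under the
    # `if __name__ == "__main__"` guard below.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(run_pool())  # [1, 4, 9]
```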
@@ -315,7 +315,7 @@ In the saved files, `ms_output_after_hwopt.pb` is the computational graph after
If you are not using the `Model` interface provided by MindSpore, you can implement a method by imitating `train` method of `Model` interface to control the number of iterations. You can imitate the `SummaryCollector` and record the summary operator data in the following manner. For a detailed custom training cycle tutorial, please [refer to the tutorial on the official website](https://www.mindspore.cn/doc/programming_guide/en/master/train.html#customizing-a-training-cycle).
-The following example demonstrates how to record data in a custom training cycle using the summary operator and the `add_value` interface of `SummaryRecord`. For more tutorials about `SummaryRecord`, [refer to the Python API documentation](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html?highlight=summaryrecord#mindspore.train.summary.SummaryRecord). Please note that `SummaryRecord` will not record computational graph automatically. If you need to record the computational graph, please manually pass the instance of network that inherits from Cell. The recorded computational graph only includes the code and functions used in the construct method.
+The following example demonstrates how to record data in a custom training cycle using the summary operators and the `add_value` interface of `SummaryRecord`. For more tutorials about `SummaryRecord`, [refer to the Python API documentation](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord). Please note that `SummaryRecord` does not record the computational graph automatically. If you need to record it, manually pass in the network instance, which inherits from `Cell`. The recorded computational graph only includes the code and functions used in the `construct` method.
```python
from mindspore import nn
diff --git a/tutorials/training/source_zh_cn/advanced_use/summary_record.md b/tutorials/training/source_zh_cn/advanced_use/summary_record.md
index 94ea6b9f1868700d79db589a1d0b05b9febb0ed9..0aa71bdaf391d68017c0aad4dd318393f76eae4b 100644
--- a/tutorials/training/source_zh_cn/advanced_use/summary_record.md
+++ b/tutorials/training/source_zh_cn/advanced_use/summary_record.md
@@ -142,7 +142,7 @@ if __name__ == '__main__':
```
> 1. 使用summary功能时,建议将`model.train`的`dataset_sink_mode`参数设置为`False`。请参考文末的注意事项。
-> 2. 使用summary功能时,需要将代码放置到`if __name__ == "__main__"`中运行。详情请[参考Python官网介绍](https://docs.python.org/zh-cn/3.7/library/multiprocessing.html?highlight=multiprocess#multiprocessing-programming)。
+> 2. 使用summary功能时,需要将代码放置到`if __name__ == "__main__"`中运行。详情请[参考Python官网介绍](https://docs.python.org/zh-cn/3.7/library/multiprocessing.html#multiprocessing-programming)。
### 方式二:结合Summary算子和SummaryCollector,自定义收集网络中的数据