diff --git a/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md b/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
index f4b247b6a97fbbbc9c82063553a12b0ff2b69aed..3de61893249e085bcea092c008dab7e29a34523c 100644
--- a/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
+++ b/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
@@ -4,7 +4,7 @@
## Privacy Protection Background
-Federated learning enables client users to take part in training the global model without uploading their raw datasets: each participant uploads only the new model obtained from local training, or the update information of that model, thereby bridging data silos. This ordinary federated learning scenario corresponds to the default scheme of the MindSpore federated learning framework (when [cloud-side deployment](https://www.mindspore.cn/federated/docs/zh-CN/master/deploy_federated_server.html#id5) starts the `server`, the `encrypt_type` switch defaults to `not_encrypt`; both `Installation and Deployment` and `Application Practice` in the federated learning tutorials use this mode by default). It is a plain federated averaging scheme without any privacy protection such as encryption or perturbation; for ease of description, `not_encrypt` below refers specifically to this default scheme.
+Federated learning enables client users to take part in training the global model without uploading their raw datasets: each participant uploads only the new model obtained from local training, or the update information of that model, thereby bridging data silos. This ordinary federated learning scenario corresponds to the default scheme of the MindSpore federated learning framework (when [cloud-side deployment](https://www.mindspore.cn/federated/docs/zh-CN/master/deploy_federated_server.html#云侧部署) starts the `server`, the `encrypt_type` switch defaults to `not_encrypt`; both `Installation and Deployment` and `Application Practice` in the federated learning tutorials use this mode by default). It is a plain federated averaging scheme without any privacy protection such as encryption or perturbation; for ease of description, `not_encrypt` below refers specifically to this default scheme.
This federated learning scheme is not free of privacy leakage: when training with the `not_encrypt` scheme above, the server can still reconstruct users' training data from the client-trained models it receives by means of certain attack methods [1], leaking user privacy, so the `not_encrypt` scheme needs further privacy protection.
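The uploaded update itself is the attack surface here. SignDS-style local differential privacy shrinks that surface by having each client upload only the signs of a few selected dimensions of its update, randomized under a privacy budget. The following is a conceptual NumPy sketch of the idea; it is illustrative only, not the actual SignDS algorithm nor the MindSpore federated interface, and `k` and `eps` are made-up parameters.

```python
import numpy as np

def signds_like_update(update, k=10, eps=1.0, rng=None):
    """Conceptual sketch: upload only the signs of k selected dimensions.

    Illustrates the privacy idea only; the real SignDS dimension
    selection and budget accounting are more involved.
    """
    rng = rng or np.random.default_rng()
    flat = update.ravel()
    # Candidate set: the k largest-magnitude dimensions of the update.
    topk = np.argsort(np.abs(flat))[-k:]
    signs = np.sign(flat[topk])
    # Randomized response: keep each sign with probability e^eps / (e^eps + 1).
    keep_prob = np.exp(eps) / (np.exp(eps) + 1.0)
    flips = rng.random(k) >= keep_prob
    signs[flips] *= -1.0
    return topk, signs  # the client uploads indices and signs only

indices, signs = signds_like_update(np.random.randn(1000))
print(indices, signs)
```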
diff --git a/docs/lite/docs/source_en/use/converter_register.md b/docs/lite/docs/source_en/use/converter_register.md
index a60bca725db8a00ed3effdcb951829b0f095a109..c074e941975a62c7c3203be8f8417fb716d24b4e 100644
--- a/docs/lite/docs/source_en/use/converter_register.md
+++ b/docs/lite/docs/source_en/use/converter_register.md
@@ -77,7 +77,7 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // register scheduling log
For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspore/tree/r1.7/mindspore/lite/examples/converter_extend/pass).
-> In the offline conversion phase, we infer the basic information of the output tensors of each node of the model, including their format, data type, and shape. Therefore, users need to provide the inference process of the self-defined operator in this phase; for this, refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/r1.7/use/runtime_cpp.html#id19).
+> In the offline conversion phase, we infer the basic information of the output tensors of each node of the model, including their format, data type, and shape. Therefore, users need to provide the inference process of the self-defined operator in this phase; for this, refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/r1.7/use/runtime_cpp.html#operator-infershape-extension).
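The registration itself goes through the C++ converter extension API linked above; conceptually, the hook only has to map input tensor metadata to output tensor metadata. Below is a hypothetical Python illustration of that contract for a matmul-like custom operator; the function name and signature are invented for illustration and are not a MindSpore Lite API.

```python
def infer_custom_matmul(input_shapes, input_dtypes):
    """Hypothetical infer-shape hook for a matmul-like custom operator."""
    a, b = input_shapes
    if a[-1] != b[-2]:
        raise ValueError(f"incompatible shapes {a} and {b}")
    out_shape = list(a[:-1]) + [b[-1]]
    # One output: shape derived from the inputs, dtype propagated from input 0.
    return [out_shape], [input_dtypes[0]]

shapes, dtypes = infer_custom_matmul([[4, 8], [8, 3]], ["float32", "float32"])
print(shapes, dtypes)  # [[4, 3]] ['float32']
```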
## Example
@@ -92,7 +92,7 @@ The sample code, please refer to [pass](https://gitee.com/mindspore/mindspore/tr
- Compilation preparation
- The release package of MindSpore Lite doesn't provide serialized files of other frameworks, so users need to compile and obtain them by themselves. For this, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r1.7/use/converter_register.html#id1).
+ The release package of MindSpore Lite doesn't provide serialized files of other frameworks, so users need to compile and obtain them by themselves. For this, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r1.7/use/converter_register.html#overview).
This case uses a TFLite model; users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/r1.7/cmake/external_libs/flatbuffers.cmake) and combine it with the [TFLITE Proto File](https://gitee.com/mindspore/mindspore/blob/r1.7/mindspore/lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file.
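For orientation, once `flatc --python schema.fbs` has generated the Python package (assumed here to be named `tflite`, after the schema), the serialized model can be inspected as follows; the converter itself consumes the C++ header generated from the same schema.

```python
# Assumes `flatc --python schema.fbs` produced a `tflite` package on sys.path.
from tflite.Model import Model

with open("model.tflite", "rb") as f:
    buf = bytearray(f.read())

model = Model.GetRootAsModel(buf, 0)
print("schema version:", model.Version())
print("subgraphs:", model.SubgraphsLength())
print("operator codes:", model.OperatorCodesLength())
```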
diff --git a/docs/lite/docs/source_en/use/nnie.md b/docs/lite/docs/source_en/use/nnie.md
index aab242ea2b470ddd8f3bce51575400358e533ad1..0872be892863b09efadf8eefeefea7a588478b49 100644
--- a/docs/lite/docs/source_en/use/nnie.md
+++ b/docs/lite/docs/source_en/use/nnie.md
@@ -337,7 +337,7 @@ During model conversion, the `nnie.cfg` file declared by the NNIE_CONFIG_PATH en
When converting an NNIE model, MindSpore Lite fuses most operators into the binary file that NNIE runs, so users cannot view the output of intermediate operators. In this case, you can add the `_report` suffix to the top domain; during graph conversion, the output of the intermediate operator is then added to the output of the fused layer. If the operator already has an output (that is, it is not fused), that output remains unchanged.
- During inference, you can obtain the output of the intermediate operator by referring to [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/r1.7/use/runtime_cpp.html#id15).
+ During inference, you can obtain the output of the intermediate operator by referring to [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/r1.7/use/runtime_cpp.html#using-c-interface-to-perform-inference).
MindSpore Lite parses the corresponding `_report` rules and resolves conflicts with the [Inplace Mechanism](#inplace-mechanism). For details, see the definitions in the HiSVP Development Guide.
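Marking an intermediate output amounts to renaming a layer's top in the Caffe prototxt before conversion. Below is a hedged text-processing sketch of that step; it is not part of the MindSpore Lite toolchain and assumes tops are written as `top: "name"` (only the converter, not plain Caffe, gives the `_report` suffix any meaning).

```python
import re

def mark_report(prototxt_text, top_name):
    """Append the _report suffix to one top name in a Caffe prototxt."""
    pattern = rf'(top:\s*"){re.escape(top_name)}(")'
    return re.sub(pattern, rf"\g<1>{top_name}_report\g<2>", prototxt_text)

sample = 'layer { name: "conv1" type: "Convolution" top: "conv1" }'
print(mark_report(sample, "conv1"))
# layer { name: "conv1" type: "Convolution" top: "conv1_report" }
```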
diff --git a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
index 855358a8a60211b96c0b3cc5c74a69b84a7d52b0..9f25f935d968713004ec855175bbd3049818678d 100644
--- a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
+++ b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
@@ -606,7 +606,7 @@ Xie, Z., Sato, I., & Sugiyama, M. (2020). A Diffusion Theory For Deep Learning D
### Handling Hyperparameter Issues
-Hyperparameters in AI training include the global learning rate, epoch, batch, and so on. To visualize the training process under different hyperparameters, see [Visualized Hyperparameter Tuning](https://www.mindspore.cn/mindinsight/docs/zh-CN/r1.7/hyper_parameters_auto_tuning.html); to configure a dynamic learning-rate hyperparameter, see [Optimization Algorithms for the Learning Rate](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/network/optim.html#id5).
+Hyperparameters in AI training include the global learning rate, epoch, batch, and so on. To visualize the training process under different hyperparameters, see [Visualized Hyperparameter Tuning](https://www.mindspore.cn/mindinsight/docs/zh-CN/r1.7/hyper_parameters_auto_tuning.html); to configure a dynamic learning-rate hyperparameter, see [Optimization Algorithms for the Learning Rate](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/network/optim.html#学习率).
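For reference, a dynamic learning rate in MindSpore r1.7 can be handed to an optimizer either as a schedule Cell or as a precomputed per-step list; a minimal sketch:

```python
import mindspore.nn as nn

# Schedule Cell: the rate decays by a factor of 0.9 every 100 steps from 0.1.
exp_lr = nn.ExponentialDecayLR(learning_rate=0.1, decay_rate=0.9, decay_steps=100)

# Equivalent list form: one precomputed value per training step.
step_lr = nn.exponential_decay_lr(0.1, 0.9, total_step=1000,
                                  step_per_epoch=100, decay_epoch=1)

net = nn.Dense(10, 2)
# Either exp_lr or step_lr can be passed as learning_rate.
opt = nn.Momentum(net.trainable_params(), learning_rate=exp_lr, momentum=0.9)
```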
### Handling Model Structure Issues
diff --git a/docs/mindspore/source_en/faq/data_processing.md b/docs/mindspore/source_en/faq/data_processing.md
index d341b254552189e5b7545f40d09e91b786572a16..747143e234dfeaa540e60adc0657ce462c90dc97 100644
--- a/docs/mindspore/source_en/faq/data_processing.md
+++ b/docs/mindspore/source_en/faq/data_processing.md
@@ -187,7 +187,7 @@ ds.GeneratorDataset(..., num_shards=8, shard_id=7, ...)
A: The data schema can be defined as follows: `cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}`
Note: A label is a NumPy array that stores the label values 1, 1, 0, 1, 0, 1. These label values all correspond to the same piece of data, that is, the binary of the same image.
-For details, see [Converting Dataset to MindRecord](https://www.mindspore.cn/tutorials/en/r1.7/advanced/dataset/record.html#id3).
+For details, see [Converting Dataset to MindRecord](https://www.mindspore.cn/tutorials/en/r1.7/advanced/dataset/record.html#converting-dataset-to-mindrecord).
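A minimal `FileWriter` sketch using that schema (the MindRecord file name and image path are made up):

```python
import numpy as np
from mindspore.mindrecord import FileWriter

cv_schema_json = {"label": {"type": "int32", "shape": [-1]},
                  "data": {"type": "bytes"}}

writer = FileWriter(file_name="demo.mindrecord", shard_num=1)
writer.add_schema(cv_schema_json, "image with multiple labels")

with open("image.jpg", "rb") as f:
    img_bytes = f.read()

# Several label values attached to the binary of the same image.
sample = {"label": np.array([1, 1, 0, 1, 0, 1], dtype=np.int32),
          "data": img_bytes}
writer.write_raw_data([sample])
writer.commit()
```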
diff --git a/docs/mindspore/source_en/migration_guide/neural_network_debug.md b/docs/mindspore/source_en/migration_guide/neural_network_debug.md
index 79e9449f91e6edebb444968826cf8410f971c8fa..cb191f555217730e641de5ac0f6df22377c2cda7 100644
--- a/docs/mindspore/source_en/migration_guide/neural_network_debug.md
+++ b/docs/mindspore/source_en/migration_guide/neural_network_debug.md
@@ -45,7 +45,7 @@ During the network process debugging, if you need to get more information about
- Using pdb for debugging in PyNative mode, printing the relevant stack and context information to help locate problems.
- Using the Print operator to print more context information. Related examples can be found in [Print Operator Features](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#print).
-- Adjusting the log level to get more error information. MindSpore can easily adjust the log level through environment variables. Related examples can be found in [Logging-related Environment Variables And Configurations](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#id6).
+- Adjusting the log level to get more error information. MindSpore can easily adjust the log level through environment variables. Related examples can be found in [Logging-related Environment Variables And Configurations](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#log-related-environment-variables-and-configurations).
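For quick reference, MindSpore reads the log level from the `GLOG_v` environment variable (0 DEBUG, 1 INFO, 2 WARNING, 3 ERROR; the default is 2), and the variable must be set before MindSpore is imported:

```python
import os

# 0: DEBUG, 1: INFO, 2: WARNING (default), 3: ERROR.
os.environ["GLOG_v"] = "1"

import mindspore  # imported after setting the variable so the level applies
print(mindspore.__version__)
```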
#### Common Errors
@@ -127,7 +127,7 @@ If the loss errors are large, the problem locating can be done by using followin
- [Callback Function](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#callback)
- MindSpore provides ModelCheckpoint, LossMonitor, SummaryCollector, and other Callback classes for saving model parameters, monitoring loss values, saving training process information, and so on. Users can also customize Callback functions to run actions at the start and end of each epoch and step (a minimal sketch follows below); refer to [Custom Callback](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#id3) for specific examples.
+ MindSpore provides ModelCheckpoint, LossMonitor, SummaryCollector, and other Callback classes for saving model parameters, monitoring loss values, saving training process information, and so on. Users can also customize Callback functions to run actions at the start and end of each epoch and step (a minimal sketch follows below); refer to [Custom Callback](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#custom-callback) for specific examples.
- [MindSpore Metrics Function](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/custom_debug.html#mindspore-metrics)
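A minimal custom Callback sketch against the r1.7 interface (model and dataset wiring omitted):

```python
from mindspore.train.callback import Callback

class LossLogger(Callback):
    """Print the loss after every step and mark the end of every epoch."""

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        print(f"step {cb_params.cur_step_num}: loss = {cb_params.net_outputs}")

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        print(f"epoch {cb_params.cur_epoch_num} finished")

# Usage: model.train(epoch=1, train_dataset=ds, callbacks=[LossLogger()])
```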
diff --git a/docs/mindspore/source_zh_cn/design/sharding_propagation.md b/docs/mindspore/source_zh_cn/design/sharding_propagation.md
index f2e322957350e0ccad975f469a1bf03898c496f0..d4766461def8418d6c732c2c1a674480dc7ec4f0 100644
--- a/docs/mindspore/source_zh_cn/design/sharding_propagation.md
+++ b/docs/mindspore/source_zh_cn/design/sharding_propagation.md
@@ -66,7 +66,7 @@
>
-The directory structure is as follows, where `rank_table_8pcs.json` is the networking information file for configuring the current multi-card Ascend environment (for a description of this configuration file, see [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/parallel/train_ascend.html#id4)), `train.py` is the model definition script, and `run.sh` is the execution script.
+The directory structure is as follows, where `rank_table_8pcs.json` is the networking information file for configuring the current multi-card Ascend environment (for a description of this configuration file, see [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/parallel/train_ascend.html#配置环境变量)), `train.py` is the model definition script, and `run.sh` is the execution script.
```text
└─sample_code
diff --git a/docs/mindspore/source_zh_cn/faq/data_processing.md b/docs/mindspore/source_zh_cn/faq/data_processing.md
index 0790fab174b5484b68a83a487dd3a24c54ca336f..40f62171c1d0bba03c9ab3aa42fe199f9e00aeb2 100644
--- a/docs/mindspore/source_zh_cn/faq/data_processing.md
+++ b/docs/mindspore/source_zh_cn/faq/data_processing.md
@@ -36,7 +36,7 @@ A: 可以参考如下几个步骤来降低CPU占用,进一步提升性能,
**Q: In `GeneratorDataset`, there is a `shuffle` parameter, but when running tasks I find no difference between `shuffle=True` and `shuffle=False`. Why is that?**
-A: To enable `shuffle`, the `Dataset` passed in must support random access (for example, a custom `Dataset` that implements `__getitem__`). Data returned from a custom `Dataset` via `yield` does not support random access. For details, see the [Dataset Loading](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/dataset.html#id5) section of the tutorial.
+A: To enable `shuffle`, the `Dataset` passed in must support random access (for example, a custom `Dataset` that implements `__getitem__`). Data returned from a custom `Dataset` via `yield` does not support random access. For details, see the [Custom Dataset](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html) section of the tutorial.
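In short, a random-access source implements `__getitem__`/`__len__`, so `shuffle=True` can take effect, while a `yield`-based source is consumed sequentially, so it cannot. A minimal sketch:

```python
import numpy as np
import mindspore.dataset as ds

class RandomAccessSource:
    """Supports random access, so shuffle=True takes effect."""
    def __init__(self):
        self._data = np.arange(10, dtype=np.int32)

    def __getitem__(self, index):
        return (self._data[index],)

    def __len__(self):
        return len(self._data)

def generator_source():
    """yield-based source: sequential only, cannot be shuffled."""
    for i in range(10):
        yield (np.int32(i),)

shuffled = ds.GeneratorDataset(RandomAccessSource(),
                               column_names=["col"], shuffle=True)
sequential = ds.GeneratorDataset(generator_source,
                                 column_names=["col"], shuffle=False)
```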
@@ -156,7 +156,7 @@ A: 你可以参考yolov3对于此场景的使用,里面有对于图像的不
A: [build_seg_data.py](https://gitee.com/mindspore/models/blob/r1.7/official/cv/deeplabv3/src/data/build_seg_data.py) is the script that generates a MindRecord dataset; you can use it directly or adapt it to your dataset. Alternatively, if you want to implement dataset reading yourself, you can load a custom dataset with `GeneratorDataset`.
-[GeneratorDataset example](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/dataset.html#id5)
+[GeneratorDataset example](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/dataset/custom.html)
[GeneratorDataset API description](https://www.mindspore.cn/docs/zh-CN/r1.7/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
diff --git a/docs/mindspore/source_zh_cn/migration_guide/optim.md b/docs/mindspore/source_zh_cn/migration_guide/optim.md
index ba13e37e0b6d8290e891a44739a823f47e5ed515..d21cc3f91a46987a31766d08b0a0795daf5e56de 100644
--- a/docs/mindspore/source_zh_cn/migration_guide/optim.md
+++ b/docs/mindspore/source_zh_cn/migration_guide/optim.md
@@ -275,7 +275,7 @@ optimizer = torch.optim.SGD([
#### Mixed Precision
-In mixed-precision scenarios in MindSpore, if `FixedLossScaleManager` is used for overflow detection and `drop_overflow_update` is False, the optimizer must be given a `loss_scale` value, and that `loss_scale` must equal the one of `FixedLossScaleManager`. For detailed usage, refer to [Mixed-Precision Configuration of the Optimizer](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/network/optim.html#id12). PyTorch's mixed-precision settings are not passed as optimizer arguments.
+In mixed-precision scenarios in MindSpore, if `FixedLossScaleManager` is used for overflow detection and `drop_overflow_update` is False, the optimizer must be given a `loss_scale` value, and that `loss_scale` must equal the one of `FixedLossScaleManager`. For detailed usage, refer to [Mixed-Precision Configuration of the Optimizer](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/network/optim.html#配置优化器). PyTorch's mixed-precision settings are not passed as optimizer arguments.
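A sketch of the consistent configuration described above (the network, loss, and numbers are arbitrary placeholders):

```python
import mindspore.nn as nn
from mindspore import Model, FixedLossScaleManager

net = nn.Dense(16, 10)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)

# drop_overflow_update=False leaves overflow handling to the optimizer,
# so the optimizer's loss_scale must match the manager's.
loss_scale = 1024.0
manager = FixedLossScaleManager(loss_scale, drop_overflow_update=False)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01,
                  momentum=0.9, loss_scale=loss_scale)

model = Model(net, loss_fn=loss_fn, optimizer=opt,
              loss_scale_manager=manager, amp_level="O2")
```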
### Methods Supported by the Base Class
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
index 478e0ed9a6bc5caf75d3f2ad14f75ab723ea6281..d30d78554a1db4424cf3a2aec51e285cbb825f00 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
@@ -27,7 +27,7 @@ mindspore.nn.ProximalAdagrad(
Common usage scenarios:
-- MindSpore: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `mindspore.model` high-level API for training (see the code example for usage); alternatively, you can use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell` (see the sketch after this list). For the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/train/train_eval.html#id5).
+- MindSpore: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `mindspore.model` high-level API for training (see the code example for usage); alternatively, you can use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell` (see the sketch after this list). For the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/train/train_eval.html#自定义训练网络).
- TensorFlow: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `tf.keras.models.Model` high-level API for training; alternatively, you can call the `minimize()` method (which combines `compute_gradients()` and `apply_gradients()`) to execute a single step.
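A minimal sketch of the custom-training-network path from the MindSpore item above (network and data are placeholders):

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

net = nn.Dense(16, 10)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.ProximalAdagrad(net.trainable_params(), learning_rate=0.01)

loss_net = nn.WithLossCell(net, loss_fn)        # forward pass + loss
train_net = nn.TrainOneStepCell(loss_net, opt)  # backward pass + update
train_net.set_train()

data = ms.Tensor(np.random.randn(32, 16).astype(np.float32))
label = ms.Tensor(np.random.randint(0, 10, (32,)).astype(np.int32))
loss = train_net(data, label)  # one training step, returns the loss
```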
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
index 872fc993aeb5f8be14697656aa9050adf4e38749..5bbf23598b7f0dfcb3ffa3467580d5fd9ba7e27b 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
@@ -27,7 +27,7 @@ mindspore.nn.RMSProp(
Common usage scenarios:
-- MindSpore: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `mindspore.model` high-level API for training (see the code example for usage, and the sketch after this list); alternatively, you can use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`. For the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/train/train_eval.html#id5).
+- MindSpore: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `mindspore.model` high-level API for training (see the code example for usage, and the sketch after this list); alternatively, you can use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`. For the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/r1.7/advanced/train/train_eval.html#自定义训练网络).
- TensorFlow: In general, after instantiating an optimizer subclass, you pass it as an input argument to the `tf.keras.models.Model` high-level API for training; alternatively, you can call the `minimize()` method (which combines `compute_gradients()` and `apply_gradients()`) to execute a single step.
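And a minimal sketch of the high-level-API path with `nn.RMSProp` (network and data are placeholders):

```python
import numpy as np
import mindspore.dataset as ds
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.RMSProp(net.trainable_params(), learning_rate=0.01)

def source():
    for _ in range(8):
        yield (np.random.randn(16).astype(np.float32),
               np.int32(np.random.randint(0, 10)))

train_ds = ds.GeneratorDataset(source, column_names=["data", "label"]).batch(4)

model = Model(net, loss_fn=loss_fn, optimizer=opt)
model.train(epoch=1, train_dataset=train_ds, dataset_sink_mode=False)
```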