diff --git a/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md b/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
index f4b247b6a97fbbbc9c82063553a12b0ff2b69aed..3de61893249e085bcea092c008dab7e29a34523c 100644
--- a/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
+++ b/docs/federated/docs/source_zh_cn/local_differential_privacy_training_signds.md
@@ -4,7 +4,7 @@
## Background on Privacy Protection
-Federated learning lets participants upload only the newly trained local model, or the update information of that model, so that client users can take part in global model training without uploading their original datasets, breaking down data silos. This ordinary federated learning scenario corresponds to the default scheme in the MindSpore federated learning framework (when the `server` is started in [Cloud-side Deployment](https://www.mindspore.cn/federated/docs/zh-CN/master/deploy_federated_server.html#id5), the `encrypt_type` switch defaults to `not_encrypt`; both `Installation and Deployment` and `Application Practice` in the federated learning tutorials use this mode by default). It is a plain federated averaging scheme without any privacy-preserving processing such as encryption or perturbation; for ease of description, the text below uses `not_encrypt` to refer specifically to this default scheme.
+Federated learning lets participants upload only the newly trained local model, or the update information of that model, so that client users can take part in global model training without uploading their original datasets, breaking down data silos. This ordinary federated learning scenario corresponds to the default scheme in the MindSpore federated learning framework (when the `server` is started in [Cloud-side Deployment](https://www.mindspore.cn/federated/docs/zh-CN/master/deploy_federated_server.html#云侧部署), the `encrypt_type` switch defaults to `not_encrypt`; both `Installation and Deployment` and `Application Practice` in the federated learning tutorials use this mode by default). It is a plain federated averaging scheme without any privacy-preserving processing such as encryption or perturbation; for ease of description, the text below uses `not_encrypt` to refer specifically to this default scheme.
This federated learning scheme is not free of privacy leakage: when training with the above `not_encrypt` scheme, the server can still reconstruct users' training data from the trained models received from clients through certain attack methods [1], leaking user privacy. Therefore, the `not_encrypt` scheme needs further user privacy protection.
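For context, a hypothetical sketch of how the `encrypt_type` switch might be toggled when launching the server; the launcher script name and the alternative value below are assumptions, not the confirmed CLI — only the `encrypt_type` switch itself comes from the passage above.

```python
# Hypothetical launch sketch: script name and flag spelling are assumed.
import subprocess

encrypt_type = "NOT_ENCRYPT"   # the default "not_encrypt" scheme: no privacy processing
# encrypt_type = "SIGNDS"      # assumed value for switching to a privacy-preserving scheme

subprocess.run(
    ["python", "run_cloud.py", f"--encrypt_type={encrypt_type}"],  # assumed launcher
    check=True,
)
```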
diff --git a/docs/lite/docs/source_en/use/converter_register.md b/docs/lite/docs/source_en/use/converter_register.md
index 95281c85cbc601c5faca77dca9e9f43af99130ae..b7ce1242c5a2199f13ce275dd60901c00d6a4750 100644
--- a/docs/lite/docs/source_en/use/converter_register.md
+++ b/docs/lite/docs/source_en/use/converter_register.md
@@ -77,7 +77,7 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // register scheduling log
For sample code, please refer to [pass](https://gitee.com/mindspore/mindspore/tree/master/mindspore/lite/examples/converter_extend/pass).
-> In the offline phase of conversion, we will infer the basic information of output tensors of each node of the model, including the format, data type and shape. So, in this phase, users need to provide the inferring process of self-defined operator. Here, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/master/use/runtime_cpp.html#id19)。
+> In the offline phase of conversion, we will infer the basic information of the output tensors of each node of the model, including the format, data type, and shape. So, in this phase, users need to provide the inference process of the self-defined operator. For this, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/master/use/runtime_cpp.html#operator-infershape-extension).
## Example
@@ -92,7 +92,7 @@ The sample code, please refer to [pass](https://gitee.com/mindspore/mindspore/tr
- Compilation preparation
- The release package of MindSpore Lite doesn't provide serialized files of other frameworks, therefore, users need to compile and obtain by yourselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/master/use/converter_register.html#id1).
+ The release package of MindSpore Lite doesn't provide serialized files of other frameworks; therefore, users need to compile and obtain them by themselves. For details, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/master/use/converter_register.html#overview).
The case is a tflite model; users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/master/cmake/external_libs/flatbuffers.cmake) and combine it with the [TFLITE Proto File](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file.
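As a rough sketch of that last step, assuming `flatc` has been built from the linked cmake recipe and `schema.fbs` sits in the working directory (both paths are illustrative):

```python
# Generate the C++ serialization header (schema_generated.h) from the TFLite schema
# with a locally built flatc; both paths below are illustrative.
import subprocess

subprocess.run(["./flatbuffers/bin/flatc", "--cpp", "schema.fbs"], check=True)
```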
diff --git a/docs/lite/docs/source_en/use/nnie.md b/docs/lite/docs/source_en/use/nnie.md
index 570bfc4a0dcdc8e5696463dbb3bb7cc59b353df4..9b78121d50a2a5ce8bdb56e9626c4e1f8f7d5b46 100644
--- a/docs/lite/docs/source_en/use/nnie.md
+++ b/docs/lite/docs/source_en/use/nnie.md
@@ -337,7 +337,7 @@ During model conversion, the `nnie.cfg` file declared by the NNIE_CONFIG_PATH en
When converting the NNIE model, MindSpore Lite fuses most operators into the binary file for NNIE running, so users cannot view the output of the intermediate operators. In this case, you can add the `_report` suffix to the top domain; during graph conversion, the output of the intermediate operator is then added to the output of the fused layer. If the operator already has output (it is not fused), the output remains unchanged.
- During the inference running, you can obtain the output of the intermediate operator by referring to [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/master/use/runtime_cpp.html#id15).
+ During inference, you can obtain the output of the intermediate operator by referring to [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/master/use/runtime_cpp.html#using-c-interface-to-perform-inference).
MindSpore Lite parses the corresponding rules of `_report` and resolves its conflict with the [Inplace Mechanism](#inplace-mechanism). For details, see the definition in the HiSVP Development Guide.
diff --git a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
index b19a9a175390c2606cc396edfb485c15c76cdee5..0c257b581a9c3b67d586b71a623537d911279ee2 100644
--- a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
+++ b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
@@ -606,7 +606,7 @@ Xie, Z., Sato, I., & Sugiyama, M. (2020). A Diffusion Theory For Deep Learning D
### Handling Hyperparameter Issues
-Hyperparameters in AI training include the global learning rate, epoch, batch, and so on. To visualize the training process under different hyperparameters, see [Visualized Hyperparameter Tuning](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/hyper_parameters_auto_tuning.html); to set a dynamic learning-rate hyperparameter, see [Optimization Algorithms for the Learning Rate](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/network/optim.html#id5).
+Hyperparameters in AI training include the global learning rate, epoch, batch, and so on. To visualize the training process under different hyperparameters, see [Visualized Hyperparameter Tuning](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/hyper_parameters_auto_tuning.html); to set a dynamic learning-rate hyperparameter, see [Optimization Algorithms for the Learning Rate](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/network/optim.html#学习率).
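For the dynamic learning-rate case, a minimal sketch using MindSpore's built-in decay helper (the network is a toy placeholder):

```python
# Minimal dynamic learning-rate sketch: a per-step decayed LR list fed to an optimizer.
import mindspore.nn as nn

# 0.1 decayed by a factor of 0.9 every 2 epochs, for 10 epochs of 5 steps each.
lr = nn.exponential_decay_lr(learning_rate=0.1, decay_rate=0.9,
                             total_step=10 * 5, step_per_epoch=5, decay_epoch=2)

net = nn.Dense(16, 10)  # toy network for illustration
optimizer = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=0.9)
```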
### Handling Model Structure Issues
diff --git a/docs/mindspore/source_en/faq/data_processing.md b/docs/mindspore/source_en/faq/data_processing.md
index f4ba6a738ef4704f66a1bae9187a808916ac20e8..ee26ce08802468b37c07b7860dc8651efed327c8 100644
--- a/docs/mindspore/source_en/faq/data_processing.md
+++ b/docs/mindspore/source_en/faq/data_processing.md
@@ -187,7 +187,7 @@ ds.GeneratorDataset(..., num_shards=8, shard_id=7, ...)
A: The data schema can be defined as follows: `cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}`
Note: the label is a NumPy array that stores the label values 1, 1, 0, 1, 0, 1. These label values all correspond to the same data, that is, the binary value of the same image.
-For details, see [Converting Dataset to MindRecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html#id3).
+For details, see [Converting Dataset to MindRecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html#converting-dataset-to-mindrecord).
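A minimal sketch of writing one such sample with `FileWriter` (file names are illustrative):

```python
# Write one multi-label image sample using the schema above; paths are illustrative.
import numpy as np
from mindspore.mindrecord import FileWriter

cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}

writer = FileWriter(file_name="test.mindrecord", shard_num=1)
writer.add_schema(cv_schema_json, "multi-label image schema")

with open("image.jpg", "rb") as f:  # assumed sample image on disk
    sample = {"label": np.array([1, 1, 0, 1, 0, 1], dtype=np.int32),
              "data": f.read()}     # all six labels refer to this one image

writer.write_raw_data([sample])
writer.commit()
```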
diff --git a/docs/mindspore/source_en/migration_guide/neural_network_debug.md b/docs/mindspore/source_en/migration_guide/neural_network_debug.md
index f2fd0d58b221274a60bf7b0a581926303f74d76e..06fd30577a014978d011dfb9512e4d777bcae9f8 100644
--- a/docs/mindspore/source_en/migration_guide/neural_network_debug.md
+++ b/docs/mindspore/source_en/migration_guide/neural_network_debug.md
@@ -45,7 +45,7 @@ During the network process debugging, if you need to get more information about
- Using pdb for debugging in PyNative mode, printing the relevant stack and contextual information to help locate the problem.
- Using Print operator to print more contextual information. Related examples can be found in [Print Operator Features](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#print).
-- Adjusting the log level to get more error information. MindSpore can easily adjust the log level through environment variables. Related examples can be found in [Logging-related Environment Variables And Configurations](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#id6).
+- Adjusting the log level to get more error information. MindSpore can easily adjust the log level through environment variables. Related examples can be found in [Logging-related Environment Variables And Configurations](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#log-related-environment-variables-and-configurations).
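For the last bullet, a minimal sketch of adjusting the log level (the `GLOG_v` variable takes effect only if set before `mindspore` is imported; 0=DEBUG, 1=INFO, 2=WARNING, 3=ERROR):

```python
# Raise MindSpore's log verbosity via the GLOG_v environment variable.
import os

os.environ["GLOG_v"] = "1"  # INFO and above; must be set before importing mindspore

import mindspore  # noqa: E402 -- imported after the variable takes effect
```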
#### Common Errors
@@ -127,7 +127,7 @@ If the loss errors are large, the problem locating can be done by using followin
- [Callback Function](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#callback)
- MindSpore has provided ModelCheckpoint, LossMonitor, SummaryCollector and other Callback classes for saving model parameters, monitoring loss values, saving training process information, etc. Users can also customize Callback functions to implement starting and ending runs at each epoch and step, and please refer to [Custom Callback](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#id3) for specific examples.
+ MindSpore provides ModelCheckpoint, LossMonitor, SummaryCollector, and other Callback classes for saving model parameters, monitoring loss values, saving training process information, and so on. Users can also customize Callback functions to run logic at the start and end of each epoch and step; please refer to [Custom Callback](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#custom-callback) for specific examples (a minimal sketch follows this list).
- [MindSpore Metrics Function](https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debug.html#mindspore-metrics)
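A minimal custom Callback sketch for the bullet above; the monitor class and its name are illustrative:

```python
# Custom Callback that logs the loss at the end of every epoch.
from mindspore.train.callback import Callback

class EpochLossMonitor(Callback):
    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        print(f"epoch {cb_params.cur_epoch_num}: loss = {cb_params.net_outputs}")

# Usage, assuming a Model and training dataset already exist:
# model.train(epoch=10, train_dataset=ds_train, callbacks=[EpochLossMonitor()])
```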
diff --git a/docs/mindspore/source_zh_cn/design/sharding_propagation.md b/docs/mindspore/source_zh_cn/design/sharding_propagation.md
index a201aede3b2718eeef00fe9b88fd839ead1561c9..4c76f7393194297d23f90ebd406d252f0bbda722 100644
--- a/docs/mindspore/source_zh_cn/design/sharding_propagation.md
+++ b/docs/mindspore/source_zh_cn/design/sharding_propagation.md
@@ -66,7 +66,7 @@
>
> 。
-The directory structure is as follows, where `rank_table_8pcs.json` is the networking information file that configures the current Ascend multi-device environment (for a description of this configuration file, see [here](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#id4)), `train.py` is the model definition script, and `run.sh` is the execution script.
+The directory structure is as follows, where `rank_table_8pcs.json` is the networking information file that configures the current Ascend multi-device environment (for a description of this configuration file, see [here](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置环境变量)), `train.py` is the model definition script, and `run.sh` is the execution script.
```text
└─sample_code
diff --git a/docs/mindspore/source_zh_cn/faq/data_processing.md b/docs/mindspore/source_zh_cn/faq/data_processing.md
index f5173635cdbf52d37b597175f65a59b094acfbdc..b658876062a63122b6c5a89577a815ead8309636 100644
--- a/docs/mindspore/source_zh_cn/faq/data_processing.md
+++ b/docs/mindspore/source_zh_cn/faq/data_processing.md
@@ -36,7 +36,7 @@ A: 可以参考如下几个步骤来降低CPU占用,进一步提升性能,
**Q: In `GeneratorDataset`, there is a parameter `shuffle`; when running tasks, I found that `shuffle=True` and `shuffle=False` make no difference. Why?**
-A: To enable `shuffle`, the `Dataset` passed in must support random access (for example, a custom `Dataset` with a `__getitem__` method); data returned via `yield` from a custom `Dataset` does not support random access. For details, see the [Dataset Loading](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html#id5) section of the tutorial.
+A: To enable `shuffle`, the `Dataset` passed in must support random access (for example, a custom `Dataset` with a `__getitem__` method); data returned via `yield` from a custom `Dataset` does not support random access. For details, see the [Custom Dataset](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html) section of the tutorial.
@@ -156,7 +156,7 @@ A: 你可以参考yolov3对于此场景的使用,里面有对于图像的不
A: [build_seg_data.py](https://gitee.com/mindspore/models/blob/master/official/cv/deeplabv3/src/data/build_seg_data.py) is the script that converts the dataset into MindRecord; you can use it directly or adapt it to your dataset. Alternatively, if you want to implement the dataset reading yourself, you can use `GeneratorDataset` to load a custom dataset.
-[GeneratorDataset example](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html#id5)
+[GeneratorDataset example](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html)
[GeneratorDataset API description](https://www.mindspore.cn/docs/zh-CN/master/api_python/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
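A minimal sketch contrasting the two source styles from the answers above: a random-access source (shuffle takes effect) versus a plain generator (shuffle cannot take effect). The class and data are toy placeholders.

```python
# Random-access source: __getitem__/__len__ make shuffle=True effective.
import numpy as np
import mindspore.dataset as ds

class MyDataset:
    def __init__(self):
        self.data = np.arange(10).astype(np.int32)

    def __getitem__(self, index):
        return (self.data[index],)

    def __len__(self):
        return len(self.data)

dataset = ds.GeneratorDataset(MyDataset(), column_names=["col"], shuffle=True)
for item in dataset.create_tuple_iterator(output_numpy=True):
    print(item)

# A plain generator (yield) has no random access, so shuffle cannot take effect:
# def my_generator():
#     for value in np.arange(10).astype(np.int32):
#         yield (value,)
```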
diff --git a/docs/mindspore/source_zh_cn/migration_guide/optim.md b/docs/mindspore/source_zh_cn/migration_guide/optim.md
index 0feec9fb4a3fa11e9bcb58a1ded5710ecb12505d..669992d86a765616b97f1e35d8e24df0efdab672 100644
--- a/docs/mindspore/source_zh_cn/migration_guide/optim.md
+++ b/docs/mindspore/source_zh_cn/migration_guide/optim.md
@@ -275,7 +275,7 @@ optimizer = torch.optim.SGD([
#### Mixed Precision
-In MindSpore's mixed-precision scenario, if `FixedLossScaleManager` is used for overflow detection and `drop_overflow_update` is False, the optimizer needs to set the `loss_scale` value, and this value must be the same as that of `FixedLossScaleManager`; for detailed usage, refer to [Mixed-Precision Configuration of the Optimizer](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/network/optim.html#id12). PyTorch's mixed-precision settings are not passed as optimizer arguments.
+In MindSpore's mixed-precision scenario, if `FixedLossScaleManager` is used for overflow detection and `drop_overflow_update` is False, the optimizer needs to set the `loss_scale` value, and this value must be the same as that of `FixedLossScaleManager`; for detailed usage, refer to [Mixed-Precision Configuration of the Optimizer](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/network/optim.html#配置优化器). PyTorch's mixed-precision settings are not passed as optimizer arguments.
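A minimal sketch of the matching `loss_scale` configuration just described; the network and loss are toy placeholders:

```python
# Keep loss_scale identical between FixedLossScaleManager (drop_overflow_update=False)
# and the optimizer.
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

loss_scale = 1024.0
manager = FixedLossScaleManager(loss_scale, drop_overflow_update=False)

net = nn.Dense(16, 10)  # toy network for illustration
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1,
                        momentum=0.9, loss_scale=loss_scale)  # same value as the manager

model = Model(net, loss_fn=loss_fn, optimizer=optimizer,
              loss_scale_manager=manager, amp_level="O2")
```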
### Methods Supported by the Base Class
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
index 3b43d90c7737f187dfc5801829bd98cf7e33f2dd..a341597495ff8265f7d31d1f54930c7bcb4fe54b 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/ProximalAdagrad.md
@@ -27,7 +27,7 @@ mindspore.nn.ProximalAdagrad(
Typical usage scenarios:
-- MindSpore: Generally, after instantiating an optimizer subclass, pass it to the high-level API `mindspore.model` as an input argument to take part in training (see the code example for usage); or use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`; for the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/train_eval.html#id5).
+- MindSpore: Generally, after instantiating an optimizer subclass, pass it to the high-level API `mindspore.model` as an input argument to take part in training (see the code example for usage); or use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`; for the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/train_eval.html#自定义训练网络) (a minimal sketch follows this list).
- TensorFlow: Generally, after instantiating an optimizer subclass, pass it to the high-level API `tf.keras.models.Model` as an input argument to take part in training; or call `minimize()` (which includes `compute_gradients()` and `apply_gradients()`) to execute a single step.
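A minimal sketch of the custom training network mentioned in the MindSpore bullet, using a toy network and random data:

```python
# WithLossCell wraps the network and loss; TrainOneStepCell adds the optimizer update.
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

net = nn.Dense(16, 10)  # toy network for illustration
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
optimizer = nn.ProximalAdagrad(net.trainable_params(), learning_rate=0.01)

loss_net = nn.WithLossCell(net, loss_fn)
train_net = nn.TrainOneStepCell(loss_net, optimizer)
train_net.set_train()

data = Tensor(np.random.randn(32, 16).astype(np.float32))
label = Tensor(np.random.randint(0, 10, (32,)).astype(np.int32))
loss = train_net(data, label)  # one optimization step; returns the loss
```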
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
index 282868cab155c517cf148f200ad76343f821042b..9ccf32b4b3d93619f8e0b81d29fc98f63f8e9f5d 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/tensorflow_diff/RMSProp.md
@@ -27,7 +27,7 @@ mindspore.nn.RMSProp(
Typical usage scenarios:
-- MindSpore: Generally, after instantiating an optimizer subclass, pass it to the high-level API `mindspore.model` as an input argument to take part in training (see the code example for usage); or use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`; for the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/train_eval.html#id5).
+- MindSpore: Generally, after instantiating an optimizer subclass, pass it to the high-level API `mindspore.model` as an input argument to take part in training (see the code example for usage); or use `mindspore.nn.TrainOneStepCell` to build a custom training network by passing in the optimizer and an instance of `mindspore.nn.WithLossCell`; for the concrete implementation, refer to the [official tutorial](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/train_eval.html#自定义训练网络).
- TensorFlow: Generally, after instantiating an optimizer subclass, pass it to the high-level API `tf.keras.models.Model` as an input argument to take part in training; or call `minimize()` (which includes `compute_gradients()` and `apply_gradients()`) to execute a single step.
diff --git a/docs/xai/docs/source_en/using_benchmarks.md b/docs/xai/docs/source_en/using_benchmarks.md
index 4961f067db79bb271bf256e1e83648d4dc12eb71..f9b3a6fe425a894712fcd393068fd73f8bbadc7c 100644
--- a/docs/xai/docs/source_en/using_benchmarks.md
+++ b/docs/xai/docs/source_en/using_benchmarks.md
@@ -10,7 +10,7 @@ Benchmarks are algorithms evaluating the goodness of saliency maps from explaine
The tutorial below references [using_benchmarks.py](https://gitee.com/mindspore/xai/blob/master/examples/using_benchmarks.py).
-Please follow the [Downloading Data Package](https://www.mindspore.cn/xai/docs/en/master/using_explainers.html#id4) instructions to download the necessary files for the tutorial.
+Please follow the [Downloading Data Package](https://www.mindspore.cn/xai/docs/en/master/using_explainers.html#downloading-data-package) instructions to download the necessary files for the tutorial.
With the tutorial package, we have to get the sample image, trained classifier, explainer and optionally the saliency map ready:
diff --git a/docs/xai/docs/source_en/using_mindinsight.md b/docs/xai/docs/source_en/using_mindinsight.md
index e5f838c546dd8355ffc814d7e0f8c8c2e5ddbb27..b419f8fd156f9c644f1d34e00e305b610a3ad32f 100644
--- a/docs/xai/docs/source_en/using_mindinsight.md
+++ b/docs/xai/docs/source_en/using_mindinsight.md
@@ -27,7 +27,7 @@
### Downloading Data Package
-Please follow the [Downloading Data Package](https://www.mindspore.cn/xai/docs/en/master/using_explainers.html#id4) instructions to download the necessary files for the tutorial.
+Please follow the [Downloading Data Package](https://www.mindspore.cn/xai/docs/en/master/using_explainers.html#downloading-data-package) instructions to download the necessary files for the tutorial.
### Preparing the Script
diff --git a/tutorials/experts/source_en/debug/dataset_autotune.md b/tutorials/experts/source_en/debug/dataset_autotune.md
index 92c25d41a181057b78dbe968d5ff332c7a4addbc..50ac1339b568bfd9a5036d8b46ec1f8a69f9fe98 100644
--- a/tutorials/experts/source_en/debug/dataset_autotune.md
+++ b/tutorials/experts/source_en/debug/dataset_autotune.md
@@ -61,7 +61,7 @@ print("tuning interval:", ds.config.get_autotune_interval())
- Dataset Profiling and Dataset AutoTune cannot both be enabled concurrently; otherwise, Dataset AutoTune or Profiling will not work. If both are enabled at the same time, a warning message will prompt the user to check whether there is a mistake. Please make sure Profiling is disabled when using Dataset AutoTune.
- When [Offload for Dataset](https://www.mindspore.cn/docs/en/master/design/dataset_offload.html) and Dataset AutoTune are enabled simultaneously, if any dataset node has been offloaded for hardware acceleration, the optimized dataset pipeline configuration file will not be stored and a warning will be logged, because the dataset pipeline that is actually running is not the predefined one.
- If the dataset pipeline contains a node that does not support deserialization (e.g. user-defined Python functions, GeneratorDataset), any attempt to deserialize the saved optimized dataset pipeline configuration file will report an error. In this case, it is recommended to modify the dataset pipeline script manually, based on the contents of the tuning configuration files, to achieve the acceleration.
-- In distributed training scenario, `set_enable_autotune()` must be called after cluster communication has been initialized (mindspore.communication.management.init()), otherwise AutoTune can only detect device with id 0 and and create only one tuned file (expected tuned files number equal to device number), see the following example:
+- In the distributed training scenario, `set_enable_autotune()` must be called after cluster communication has been initialized (`mindspore.communication.management.init()`); otherwise, AutoTune can only detect the device with id 0 and create only one tuned file (the number of tuned files is expected to equal the number of devices). See the following example:
Code in the distributed training scenario must be:
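For reference, a minimal sketch of the required call ordering; the output path prefix passed to `set_enable_autotune` is illustrative:

```python
# Initialize cluster communication first, then enable Dataset AutoTune,
# so each device produces its own tuned file.
import mindspore.dataset as ds
from mindspore.communication.management import init

init()                                               # must come first
ds.config.set_enable_autotune(True, "autotune_out")  # illustrative output prefix
```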
diff --git a/tutorials/experts/source_zh_cn/parallel/introduction.md b/tutorials/experts/source_zh_cn/parallel/introduction.md
index 5600f43c4bd44aea57992012f88939710d87aa43..ea2f94faefe663619ba309081fa52063126d060a 100644
--- a/tutorials/experts/source_zh_cn/parallel/introduction.md
+++ b/tutorials/experts/source_zh_cn/parallel/introduction.md
@@ -150,7 +150,7 @@ model.train(*args, **kwargs)
model.train(*args, **kwargs)
```
-When the device matrices of adjacent operators are inconsistent, [redistribution](https://www.mindspore.cn/docs/zh-CN/master/design/distributed_training_design.html?highlight=%E9%87%8D%E6%8E%92%E5%B8%83#id4) is automatically inserted to ensure that the `tensor`'s sharding state meets the input requirements of the next operator. For example, in single-machine eight-device training, consider the following sample code:
+When the device matrices of adjacent operators are inconsistent, [redistribution](https://www.mindspore.cn/docs/zh-CN/master/design/distributed_training_design.html?highlight=%E9%87%8D%E6%8E%92%E5%B8%83#自动并行) is automatically inserted to ensure that the `tensor`'s sharding state meets the input requirements of the next operator. For example, in single-machine eight-device training, consider the following sample code:
```python
import numpy as np
diff --git a/tutorials/source_zh_cn/advanced/dataset/enhanced_text_data.ipynb b/tutorials/source_zh_cn/advanced/dataset/enhanced_text_data.ipynb
index a577c5c63b1780c22862ff03ac25653b033f317e..a53559cc8ca22d19dec5c0ffbc0b706cedecd995 100644
--- a/tutorials/source_zh_cn/advanced/dataset/enhanced_text_data.ipynb
+++ b/tutorials/source_zh_cn/advanced/dataset/enhanced_text_data.ipynb
@@ -20,7 +20,7 @@
"\n",
"## 加载文本数据\n",
"\n",
- "下面我们以从TXT文件中读取数据为例,介绍`TextFileDataset`的使用方式,更多文本数据集加载相关信息可参考[API文档](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html#id2)。\n",
+ "下面我们以从TXT文件中读取数据为例,介绍`TextFileDataset`的使用方式,更多文本数据集加载相关信息可参考[API文档](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html#文本)。\n",
"\n",
"1. 准备文本数据,内容如下:\n",
"\n",