diff --git a/.jenkins/check/config/filter_linklint.txt b/.jenkins/check/config/filter_linklint.txt
new file mode 100644
index 0000000000000000000000000000000000000000..bbd5911cd0e8ee2f07f021bc1eb3d14c872b6bd4
--- /dev/null
+++ b/.jenkins/check/config/filter_linklint.txt
@@ -0,0 +1,2 @@
+http://www.vision.caltech.edu/visipedia/CUB-200-2011.html
+http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth
\ No newline at end of file
diff --git a/benchmark/ascend/bert/README.md b/benchmark/ascend/bert/README.md
index 5d9cceade88af70fd632ef6bf763cc1dd7372567..8c46ef6a690ab7c6127a826ac33474b51bf2e9bb 100644
--- a/benchmark/ascend/bert/README.md
+++ b/benchmark/ascend/bert/README.md
@@ -209,8 +209,6 @@ Please follow the instructions in the link below to create an hccl.json file in
For distributed training among multiple machines, training command should be executed on each machine in a small time interval. Thus, an hccl.json is needed on each machine. [merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl) is a tool to create hccl.json for multi-machine case.
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
-
```text
For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
diff --git a/benchmark/ascend/resnet/README.md b/benchmark/ascend/resnet/README.md
index c1e12fc6e83c8ed161538e8611883ebccc487954..f8d916c772e519e0d5a83a74f65efab8a0baaff8 100644
--- a/benchmark/ascend/resnet/README.md
+++ b/benchmark/ascend/resnet/README.md
@@ -97,7 +97,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data types, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/benchmark/ascend/resnet/README_CN.md b/benchmark/ascend/resnet/README_CN.md
index 9663c783ef355d18b21274e50f5d093936e1d4a4..3df9996ecf1abc624f932e36789d63bdb8241cc3 100644
--- a/benchmark/ascend/resnet/README_CN.md
+++ b/benchmark/ascend/resnet/README_CN.md
@@ -103,7 +103,7 @@ ResNet的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/audio/melgan/README.md b/official/audio/melgan/README.md
index fbe180bef70aa8be678b054a7e3867b13080d5d7..edab4f65f2a54a8ab36599d57557a2d5ec168199 100644
--- a/official/audio/melgan/README.md
+++ b/official/audio/melgan/README.md
@@ -73,7 +73,7 @@ Dataset used: [LJ Speech]()
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/audio/melgan/README_CN.md b/official/audio/melgan/README_CN.md
index 6e5727b3f8316be1d5c5f2b57402b1e3077affaf..ee2279296b4e8fa90aca9ba31474fdee883509cc 100644
--- a/official/audio/melgan/README_CN.md
+++ b/official/audio/melgan/README_CN.md
@@ -70,7 +70,7 @@ MelGAN模型是非自回归全卷积模型。它的参数比同类模型少得
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/FCN8s/README.md b/official/cv/FCN8s/README.md
index 1ce8edd2a6240de9a5bb178e98d4c77f1c80b23a..8e5bfc381998cf3884046642d6f39de9b2fad938 100644
--- a/official/cv/FCN8s/README.md
+++ b/official/cv/FCN8s/README.md
@@ -471,7 +471,7 @@ python export.py
### 教程
-如果你需要在不同硬件平台(如GPU,Ascend 910 或者 Ascend 310)使用训练好的模型,你可以参考这个 [Link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。以下是一个简单例子的步骤介绍:
+如果你需要在不同硬件平台(如GPU,Ascend 910 或者 Ascend 310)使用训练好的模型,你可以参考这个 [Link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。以下是一个简单例子的步骤介绍:
- Running on Ascend
diff --git a/official/cv/c3d/README.md b/official/cv/c3d/README.md
index fb6a298ecb342e39b91ada9539bfb059a7ca6a28..dbee0dabbb14d2e112484e9186786d970e72fc38 100644
--- a/official/cv/c3d/README.md
+++ b/official/cv/c3d/README.md
@@ -324,7 +324,7 @@ epoch time: 150760.797 ms, per step time: 252.954 ms
#### Distributed training on Ascend
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> For details on RANK_TABLE_FILE, refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html); the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection-check timeout from the default 120 seconds to 600 seconds; otherwise, the connection could time out, since compilation time increases with model size.
>
```text
diff --git a/official/cv/cnnctc/README.md b/official/cv/cnnctc/README.md
index 7606795b7c0beb0c5e73a4bac8f11c875ed24bd9..f05825fed774b44f69d4d450cb4f08752e8271c7 100644
--- a/official/cv/cnnctc/README.md
+++ b/official/cv/cnnctc/README.md
@@ -1,4 +1,4 @@
-# Contents
+# Contents
- [CNNCTC Description](#CNNCTC-description)
- [Model Architecture](#model-architecture)
@@ -94,7 +94,7 @@ This takes around 75 minutes.
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -517,7 +517,7 @@ accuracy: 0.8533
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Follow the steps below for a simple example:
- Running on Ascend
diff --git a/official/cv/cnnctc/README_CN.md b/official/cv/cnnctc/README_CN.md
index 31b0e62578eb0cefd576a035a84b473750a1cf96..4125a7ec33cdcb1eefa043a27be49e923d24a34c 100644
--- a/official/cv/cnnctc/README_CN.md
+++ b/official/cv/cnnctc/README_CN.md
@@ -95,7 +95,7 @@ python src/preprocess_dataset.py
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -250,7 +250,7 @@ bash scripts/run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_CKPT(o
> 注意:
- RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+ RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
### 训练结果
@@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
### 推理
-如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理,请参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。以下为简单示例:
+如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理,请参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。以下为简单示例:
- Ascend处理器环境运行
diff --git a/official/cv/crnn/README.md b/official/cv/crnn/README.md
index b76d1730ba4ed359d4c88a1129a09ab4243d9072..237142ab15a570bfb6efcc26846e14cd9f6c541a 100644
--- a/official/cv/crnn/README.md
+++ b/official/cv/crnn/README.md
@@ -238,7 +238,7 @@ Parameters for both training and evaluation can be set in default_config.yaml.
## [Training Process](#contents)
-- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including the learning rate and other network hyperparameters. See the [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
### [Training](#contents)
diff --git a/official/cv/crnn_seq2seq_ocr/README.md b/official/cv/crnn_seq2seq_ocr/README.md
index 1707ba078635aa1c366d8396a3f9fa1ee644444a..01d056e7b695497731be3e18c1efc6de006af123 100644
--- a/official/cv/crnn_seq2seq_ocr/README.md
+++ b/official/cv/crnn_seq2seq_ocr/README.md
@@ -229,7 +229,7 @@ Parameters for both training and evaluation can be set in config.py.
## [Training Process](#contents)
-- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including the learning rate and other network hyperparameters. See the [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
### [Training](#contents)
diff --git a/official/cv/cspdarknet53/README.md b/official/cv/cspdarknet53/README.md
index cbfd2038d38eaa4113767c51b661ab62819d7b86..e8670cb1295b458af421090859b59f15386614c2 100644
--- a/official/cv/cspdarknet53/README.md
+++ b/official/cv/cspdarknet53/README.md
@@ -49,7 +49,7 @@ Dataset used can refer to paper.
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -206,7 +206,7 @@ bash run_distribute_train.sh [RANK_TABLE_FILE] [DATA_DIR] (option)[PATH_CHECKPOI
bash run_standalone_train.sh [DEVICE_ID] [DATA_DIR] (option)[PATH_CHECKPOINT]
```
-> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV3, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> Notes: For details on RANK_TABLE_FILE, refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html); the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV3, it's better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection-check timeout from the default 120 seconds to 600 seconds; otherwise, the connection could time out, since compilation time increases with model size.
>
> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
diff --git a/official/cv/ctpn/README.md b/official/cv/ctpn/README.md
index a89a39c8fd9bed810eb450c2393d53dcd697e702..58be8b7bab65bb3f1fdebc948857fde8d04d56e4 100644
--- a/official/cv/ctpn/README.md
+++ b/official/cv/ctpn/README.md
@@ -231,7 +231,7 @@ imagenet_cfg = edict({
Then you can train it with ImageNet2012.
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> For details on RANK_TABLE_FILE, refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html); the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection-check timeout from the default 120 seconds to 600 seconds; otherwise, the connection could time out, since compilation time increases with model size.
>
> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
>
diff --git a/official/cv/darknet53/README.md b/official/cv/darknet53/README.md
index 80448851539ce94429ef2eded1f7035db7f07118..168662486f42fbe5137847a3535382203e4e1b2e 100644
--- a/official/cv/darknet53/README.md
+++ b/official/cv/darknet53/README.md
@@ -58,7 +58,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data types, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/cv/deeplabv3/README.md b/official/cv/deeplabv3/README.md
index e8dc4d9f9150d0128fa4c3cf704e9cc90b9305cc..185eda20a7a842bb85f193591a817be50b710630 100644
--- a/official/cv/deeplabv3/README.md
+++ b/official/cv/deeplabv3/README.md
@@ -86,7 +86,7 @@ You can also generate the list file automatically by run script: `python get_dat
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data types, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/cv/deeplabv3/README_CN.md b/official/cv/deeplabv3/README_CN.md
index 742956f1058f252fa290f151dbc9e52d3b6e285a..8017d3f9c8278f9a6afcb46792c969e05a80cf0e 100644
--- a/official/cv/deeplabv3/README_CN.md
+++ b/official/cv/deeplabv3/README_CN.md
@@ -93,7 +93,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/deeplabv3plus/README_CN.md b/official/cv/deeplabv3plus/README_CN.md
index 9985407b66d88224e7e9f3e387e5f6929ce95cae..29c133a23e2882a338e335991634eab85eba1f33 100644
--- a/official/cv/deeplabv3plus/README_CN.md
+++ b/official/cv/deeplabv3plus/README_CN.md
@@ -83,7 +83,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/deeptext/README.md b/official/cv/deeptext/README.md
index d30cccba70a15b411147eb69af1b2d3dd2c13a96..dfc9863acafa37f6a9e9c68462ff37e586041b8a 100644
--- a/official/cv/deeptext/README.md
+++ b/official/cv/deeptext/README.md
@@ -133,7 +133,7 @@ sh run_eval_gpu.sh [IMGS_PATH] [ANNOS_PATH] [CHECKPOINT_PATH] [COCO_TEXT_PARSER_
```
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> For details on RANK_TABLE_FILE, refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html); the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection-check timeout from the default 120 seconds to 600 seconds; otherwise, the connection could time out, since compilation time increases with model size.
>
> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
>
diff --git a/official/cv/densenet/README.md b/official/cv/densenet/README.md
index 97dca3522806dfc466ebe5fadbe05e18765e5563..6e7f3532c4e34e45037049fde5b0bd4f69e3bbcf 100644
--- a/official/cv/densenet/README.md
+++ b/official/cv/densenet/README.md
@@ -79,7 +79,7 @@ The default configuration of the Dataset are as follows:
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
diff --git a/official/cv/densenet/README_CN.md b/official/cv/densenet/README_CN.md
index 74c86403e899e40a594bf5635236f918718cc66d..8e6e8b9f8d4bae6c5c2034d10def8d5fd2b3781a 100644
--- a/official/cv/densenet/README_CN.md
+++ b/official/cv/densenet/README_CN.md
@@ -83,7 +83,7 @@ DenseNet-100使用的数据集: Cifar-10
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/depthnet/README.md b/official/cv/depthnet/README.md
index 5af2a58f5cca2974a563194d7bd1af4e72317cbf..d1868a4e74a5b657e6eb097ea9ffef46c2563142 100644
--- a/official/cv/depthnet/README.md
+++ b/official/cv/depthnet/README.md
@@ -74,7 +74,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/dpn/README.md b/official/cv/dpn/README.md
index 106d98cd27ea1ff38364bd70c8a7591fc81140a9..f2ec942bb4dfe709559c292118ad3c8859cc0ff2 100644
--- a/official/cv/dpn/README.md
+++ b/official/cv/dpn/README.md
@@ -67,7 +67,7 @@ All the models in this repository are trained and validated on ImageNet-1K. The
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates deep neural network training by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/cv/east/README.md b/official/cv/east/README.md
index 3686d8fdfb9c56a34376ee0c5f7d9163c0fedcbf..76c1e00e6983101bc2c586476e0f1e4cad67f81c 100644
--- a/official/cv/east/README.md
+++ b/official/cv/east/README.md
@@ -130,7 +130,7 @@ bash run_eval_gpu.sh [DATASET_PATH] [CKPT_PATH] [DEVICE_ID]
```
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> For details on RANK_TABLE_FILE, refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html); the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection-check timeout from the default 120 seconds to 600 seconds; otherwise, the connection could time out, since compilation time increases with model size.
>
> This is a processor core binding operation based on `device_num` and the total number of processor cores. If you do not want it, remove the `taskset` operations in `scripts/run_distribute_train.sh`
>
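The timeout advice above amounts to exporting one environment variable in the shell before launching the distributed training script (a minimal sketch; 600 is the value suggested in the note):

```shell
# Extend the HCCL connection-check window from the default 120 s to 600 s,
# so that graph compilation of a large model does not trip the timeout.
export HCCL_CONNECT_TIMEOUT=600
```

The variable must be set in the same shell (or launch script) that starts `run_distribute_train.sh`, so that every training process inherits it.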
diff --git a/official/cv/essay-recogination/README_CN.md b/official/cv/essay-recogination/README_CN.md
index 6456047afd5f0399ef7676436217a2ffcd557a99..064425d29e11440b6ed2f94ffe5fca03ee828949 100644
--- a/official/cv/essay-recogination/README_CN.md
+++ b/official/cv/essay-recogination/README_CN.md
@@ -111,7 +111,7 @@ train.valInterval = 100 #边训练边推
## 训练过程
-- 在`parameters/hwdb.gin`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html),了解更多信息。
+- 在`parameters/hwdb.gin`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html),了解更多信息。
### 训练
diff --git a/official/cv/googlenet/README.md b/official/cv/googlenet/README.md
index 3fb862acb26bbd0954331d2494d5c8469c7d3f36..04708b39f50d41079eb64ca0435082a4330a6753 100644
--- a/official/cv/googlenet/README.md
+++ b/official/cv/googlenet/README.md
@@ -1,4 +1,4 @@
-# Contents
+# Contents
[查看中文](./README_CN.md)
@@ -71,7 +71,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -595,7 +595,7 @@ Current batch_ Size can only be set to 1.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Follow the steps below for a simple example:
- Running on Ascend
diff --git a/official/cv/googlenet/README_CN.md b/official/cv/googlenet/README_CN.md
index d8f0e88856f6ca09d541eb6e7c3af7ad385c5c73..569ed28faa1282b1bcb8225face0d766c4a8b7ae 100644
--- a/official/cv/googlenet/README_CN.md
+++ b/official/cv/googlenet/README_CN.md
@@ -73,7 +73,7 @@ GoogleNet由多个inception模块串联起来,可以更加深入。 降维的
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -596,7 +596,7 @@ python export.py --config_path [CONFIG_PATH]
### 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
- Ascend处理器环境运行
diff --git a/official/cv/inceptionv3/README.md b/official/cv/inceptionv3/README.md
index 3c5bd47fdb64ea0872664a92a71d5531db46327e..e445e702241703a3da1304e7b25c07f7b370c723 100644
--- a/official/cv/inceptionv3/README.md
+++ b/official/cv/inceptionv3/README.md
@@ -65,7 +65,7 @@ Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
diff --git a/official/cv/inceptionv3/README_CN.md b/official/cv/inceptionv3/README_CN.md
index ff3189a674da6dccaaca54f5d818f3a7ccdc27ea..cb1910b17b3a137367a716a6edb501030160c6d7 100644
--- a/official/cv/inceptionv3/README_CN.md
+++ b/official/cv/inceptionv3/README_CN.md
@@ -69,7 +69,7 @@ InceptionV3的总体网络架构如下:
## 混合精度(Ascend)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
diff --git a/official/cv/inceptionv4/README.md b/official/cv/inceptionv4/README.md
index 4cf8250d42fbf62d5355551f4caed5505b6513e7..7cf88d279d8e4226093cdc423ed2ff5ef7a989f1 100644
--- a/official/cv/inceptionv4/README.md
+++ b/official/cv/inceptionv4/README.md
@@ -44,7 +44,7 @@ Dataset used can refer to paper.
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -263,7 +263,7 @@ bash scripts/run_standalone_train_ascend.sh [DEVICE_ID] [DATA_DIR]
```
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size.
>
> This is a processor core binding operation based on `device_num` and the total number of processor cores. If you do not want it, remove the `taskset` operations in `scripts/run_distribute_train.sh`
diff --git a/official/cv/maskrcnn/README.md b/official/cv/maskrcnn/README.md
index 3689e43e166cb4b57b2a3779969d61614f2378a7..cc3be810288457cc8e242eed0466b85a637f8206 100644
--- a/official/cv/maskrcnn/README.md
+++ b/official/cv/maskrcnn/README.md
@@ -544,7 +544,7 @@ Usage: bash run_standalone_train.sh [PRETRAINED_MODEL] [DATA_PATH]
## [Training Process](#contents)
-- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
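As an illustration of the kind of options involved, the following is a hypothetical excerpt of such a configuration; the key names and values are illustrative assumptions, not the model's actual schema.

```python
# Hypothetical training-option excerpt, in the spirit of `config.py`.
config = {
    "loss_scale": 256,   # static loss scale used with mixed-precision training
    "base_lr": 0.02,     # initial learning rate
    "warmup_step": 500,  # linear warm-up steps before the base schedule
    "epoch_size": 12,    # total training epochs
}

# A static loss scale multiplies the loss before backpropagation so small
# FP16 gradients do not underflow; gradients are divided back before update.
scaled_loss = 1.5e-4 * config["loss_scale"]
```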
### [Training](#content)
diff --git a/official/cv/maskrcnn/README_CN.md b/official/cv/maskrcnn/README_CN.md
index cbe608e85cccbef5b49d5ce6b779ef718c737f0c..fcca9b9d0acdac977d664df459eeff5b0f753dfd 100644
--- a/official/cv/maskrcnn/README_CN.md
+++ b/official/cv/maskrcnn/README_CN.md
@@ -526,7 +526,7 @@ bash run_eval.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH] [DATA_PATH]
## 训练过程
-- 在`config.py`中设置配置项,包括loss_scale、学习率和网络超参。单击[此处](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html)获取更多数据集相关信息.
+- 在`config.py`中设置配置项,包括loss_scale、学习率和网络超参。单击[此处](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html)获取更多数据集相关信息.
### 训练
diff --git a/official/cv/maskrcnn_mobilenetv1/README.md b/official/cv/maskrcnn_mobilenetv1/README.md
index 57e5ecd320c893c202820d9932e0770988402d2b..3d8b112692eca0c76bb48dfe421eb825d61c1c3f 100644
--- a/official/cv/maskrcnn_mobilenetv1/README.md
+++ b/official/cv/maskrcnn_mobilenetv1/README.md
@@ -1,4 +1,4 @@
-# Contents
+# Contents
- [MaskRCNN Description](#maskrcnn-description)
- [Model Architecture](#model-architecture)
@@ -521,7 +521,7 @@ Usage: bash run_distribute_train_gpu.sh [DATA_PATH] [PRETRAINED_PATH] (optional)
## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
### [Training](#content)
diff --git a/official/cv/mobilenetv1/README.md b/official/cv/mobilenetv1/README.md
index ce1a3c4b052d49f593220a8208772cf780ddb78f..5f0771154c392a2742025644320d9f6a80bfff91 100644
--- a/official/cv/mobilenetv1/README.md
+++ b/official/cv/mobilenetv1/README.md
@@ -73,7 +73,7 @@ Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)
### Mixed Precision(Ascend)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
## Environment Requirements
diff --git a/official/cv/mobilenetv2/README.md b/official/cv/mobilenetv2/README.md
index 454cce4bb9b4e6159750248b9b7b88f53ab3da08..e7b2b046a42a9e986fc26a8beae47b408ed0a876 100644
--- a/official/cv/mobilenetv2/README.md
+++ b/official/cv/mobilenetv2/README.md
@@ -59,7 +59,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/cv/mobilenetv2/README_CN.md b/official/cv/mobilenetv2/README_CN.md
index 35af3e3d4714d9094943f39c571768f508e8f814..88caa2261ec6d26791c7f16bf6b0781da3f4a538 100644
--- a/official/cv/mobilenetv2/README_CN.md
+++ b/official/cv/mobilenetv2/README_CN.md
@@ -55,7 +55,7 @@ MobileNetV2总体网络架构如下:
## 混合精度(Ascend)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/nima/README.md b/official/cv/nima/README.md
index 0ddce65593c50457cd5f6c704e157662f623f75d..47485334c45be48d36793a027d5a688ef7fa19d5 100644
--- a/official/cv/nima/README.md
+++ b/official/cv/nima/README.md
@@ -84,7 +84,7 @@ python ./src/dividing_label.py --config_path=~/config.yaml
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/openpose/README.md b/official/cv/openpose/README.md
index 3bf5319a69a12a8cf144b2934d2d844b35d4568f..43387c844bb7239e02799aa3e4e711d66f5638de 100644
--- a/official/cv/openpose/README.md
+++ b/official/cv/openpose/README.md
@@ -69,7 +69,7 @@ In the currently provided training script, the coco2017 data set is used as an e
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/official/cv/patchcore/README_CN.md b/official/cv/patchcore/README_CN.md
index 98018773ca7560afd24bc0649d49ecc0fa21e0d8..353c7beff968b46cf5cb1771485f89867c4372e3 100644
--- a/official/cv/patchcore/README_CN.md
+++ b/official/cv/patchcore/README_CN.md
@@ -93,7 +93,7 @@ PatchCore使用预训练的WideResNet50作为Encoder, 并去除layer3之后的
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/predrnn++/README.md b/official/cv/predrnn++/README.md
index 0018d700f0384f587f57c1b07786cd7baa644407..6e569bc90ebab582348c0822938db501c4909f7f 100644
--- a/official/cv/predrnn++/README.md
+++ b/official/cv/predrnn++/README.md
@@ -140,7 +140,7 @@ device_id: 0 # id of NPU used
## [Training Process](#contents)
-- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
### [Training](#contents)
diff --git a/official/cv/psenet/README.md b/official/cv/psenet/README.md
index 87e844ece2606903c5c784aa730e5fb6d4390fdd..c297a2637af7a57c079e4aa356b176cf4857c457 100644
--- a/official/cv/psenet/README.md
+++ b/official/cv/psenet/README.md
@@ -427,7 +427,7 @@ The `res` folder is generated in the upper-level directory. For details about th
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Follow the steps below for a simple example:
```python
# Load unseen dataset for inference
diff --git a/official/cv/psenet/README_CN.md b/official/cv/psenet/README_CN.md
index 9a3061b8ed770851fc6d84b8557761f4a6301d69..9225e812752e5c76c17be464a0fc341f1bacfb0b 100644
--- a/official/cv/psenet/README_CN.md
+++ b/official/cv/psenet/README_CN.md
@@ -364,7 +364,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
### 推理
-如果您需要使用已训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考[此处](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。操作示例如下:
+如果您需要使用已训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考[此处](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。操作示例如下:
```python
# 加载未知数据集进行推理
diff --git a/official/cv/pvnet/README.md b/official/cv/pvnet/README.md
index 7b5579ebb544c609758173ae623957e008b9e80b..c462410d340ed31c10e328fa7382b05b6124510a 100644
--- a/official/cv/pvnet/README.md
+++ b/official/cv/pvnet/README.md
@@ -62,7 +62,7 @@ PvNet是一种Encode-Decode的网络结构,通过输入一张rgb图,输出
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/resnet/README.md b/official/cv/resnet/README.md
index 116a9cc2a1b4e24bd839059ce433979907aae189..a88ab183a8e57a54a102af89e1e898eb3e1b87b1 100644
--- a/official/cv/resnet/README.md
+++ b/official/cv/resnet/README.md
@@ -107,7 +107,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -456,7 +456,7 @@ bash run_eval_gpu_resnet_benchmark.sh [DATASET_PATH] [CKPT_PATH] [BATCH_SIZE](op
For distributed training, a hostfile configuration needs to be created in advance.
-Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html).
+Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html).
#### Running parameter server mode training
diff --git a/official/cv/resnet/README_CN.md b/official/cv/resnet/README_CN.md
index 9663c783ef355d18b21274e50f5d093936e1d4a4..3df9996ecf1abc624f932e36789d63bdb8241cc3 100644
--- a/official/cv/resnet/README_CN.md
+++ b/official/cv/resnet/README_CN.md
@@ -103,7 +103,7 @@ ResNet的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/resnext/README.md b/official/cv/resnext/README.md
index 6c5a0985fef059df51309db8a8beb36c776e1fc0..d2e356b7656f70bfa1b51ed3b11c766126a42a03 100644
--- a/official/cv/resnext/README.md
+++ b/official/cv/resnext/README.md
@@ -54,7 +54,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
diff --git a/official/cv/resnext/README_CN.md b/official/cv/resnext/README_CN.md
index 09699a0a9ef299c7211e8392e5a1d681d258f37f..fce417685d7b3844225877f7b3d1e701bd91169f 100644
--- a/official/cv/resnext/README_CN.md
+++ b/official/cv/resnext/README_CN.md
@@ -54,7 +54,7 @@ ResNeXt整体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
diff --git a/official/cv/retinanet/README_CN.md b/official/cv/retinanet/README_CN.md
index 7e6d09379074d9e559ae8c5ae74b03cea0e7867f..36b50b7dbdf80f7d34f9386055f5434ce315a125 100644
--- a/official/cv/retinanet/README_CN.md
+++ b/official/cv/retinanet/README_CN.md
@@ -189,7 +189,7 @@ bash scripts/run_single_train.sh DEVICE_ID MINDRECORD_DIR PRE_TRAINED(optional)
> 注意:
- RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+ RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
#### 运行
diff --git a/official/cv/semantic_human_matting/README.md b/official/cv/semantic_human_matting/README.md
index 68ca05edd05bf229da6ed45ad9acb7c969bfce31..4649c946b5918fc6d301413172459bb8761404c1 100644
--- a/official/cv/semantic_human_matting/README.md
+++ b/official/cv/semantic_human_matting/README.md
@@ -78,7 +78,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索`reduce precision`查看精度降低的算子。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索`reduce precision`查看精度降低的算子。
# 环境要求
diff --git a/official/cv/simple_pose/README.md b/official/cv/simple_pose/README.md
index 9a78c5ca07ce3312d9c734ab7fca30fff3e2f32d..f22d647e205dc029b6f7201f6efd44ff6a0da1f4 100644
--- a/official/cv/simple_pose/README.md
+++ b/official/cv/simple_pose/README.md
@@ -57,7 +57,7 @@ Dataset used: COCO2017
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it at reduced precision. Users can check which operators ran at reduced precision by enabling the INFO log and searching for ‘reduce precision’.
# [Environment Requirements](#contents)
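The FP16 behaviour described in this hunk can be illustrated with a short, MindSpore-independent sketch: the helper below merely round-trips a value through IEEE 754 half precision (via the standard library's `struct` module) to show what a reduced-precision operator sees.

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE 754 half precision,
    # i.e. the representation an FP16 operator would compute with.
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(0.1))      # 0.0999755859375 -> FP16 keeps only ~3 decimal digits
print(to_fp16(65504.0))  # 65504.0 is the largest finite FP16 value
```

This is why mixed precision halves memory traffic but needs the framework to keep precision-sensitive operators in FP32.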
diff --git a/official/cv/squeezenet/README.md b/official/cv/squeezenet/README.md
index 6b405c2ccc2ff4aac2bb4ac00cd1ef3d572bac3a..8b4637b8d020ef707c8abe7555009cca9da1bbfa 100644
--- a/official/cv/squeezenet/README.md
+++ b/official/cv/squeezenet/README.md
@@ -62,7 +62,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
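The log check mentioned above can be sketched as follows; the log line here is fabricated for illustration, and only the `GLOG_v=1` verbosity setting and the `reduce precision` search string come from the README text.

```shell
# MindSpore reads log verbosity from GLOG_v; 1 enables INFO-level output.
export GLOG_v=1
# In practice the log comes from training, e.g.:
#   python train.py > train.log 2>&1
# A fabricated line stands in for real trainer output here:
echo "[INFO] kernel Conv2D will reduce precision from float32 to float16" > train.log
grep -c "reduce precision" train.log
rm train.log
```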
@@ -687,7 +687,7 @@ Inference result is saved in current path, you can find result like this in acc.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). The steps below give a simple example:
- Running on Ascend
diff --git a/official/cv/squeezenet/modelarts/README.md b/official/cv/squeezenet/modelarts/README.md
index d8136687b522234d23f526025a215a204c54a0f0..ddb66f9e2c2c58f74641792c482cbef46d1a5d21 100644
--- a/official/cv/squeezenet/modelarts/README.md
+++ b/official/cv/squeezenet/modelarts/README.md
@@ -62,7 +62,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -687,7 +687,7 @@ Inference result is saved in current path, you can find result like this in acc.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). The steps below give a simple example:
- Running on Ascend
diff --git a/official/cv/srcnn/README_CN.md b/official/cv/srcnn/README_CN.md
index 564fa8855caf915b695563d76b7ddc115a1976fc..3fb5fd37577d936932bd28a41978d2c865d73084 100644
--- a/official/cv/srcnn/README_CN.md
+++ b/official/cv/srcnn/README_CN.md
@@ -71,7 +71,7 @@ SRCNN首先使用双三次(bicubic)插值将低分辨率图像放大成目标尺
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/ssd/README.md b/official/cv/ssd/README.md
index 7b4bee4da815c48dad4f0a8879d4295e59ff4768..3ab719b6b5beade3056d0d6e7ff1b40bee4c916e 100644
--- a/official/cv/ssd/README.md
+++ b/official/cv/ssd/README.md
@@ -306,7 +306,7 @@ Then you can run everything just like on ascend.
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on Ascend
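For orientation, a MindRecord file is generated from a schema that maps field names to types. A hypothetical schema for detection-style records might look like the fragment below; the field names and shapes are illustrative, not necessarily the ones `train.py` actually writes.

```json
{
  "image": {"type": "bytes"},
  "annotation": {"type": "int32", "shape": [-1, 6]}
}
```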
diff --git a/official/cv/ssd/README_CN.md b/official/cv/ssd/README_CN.md
index fdbdd254b9b1cbb00ad25e296403a71829e2d30d..40fed4347d7856fc20070b9d2f7cc338b997b00f 100644
--- a/official/cv/ssd/README_CN.md
+++ b/official/cv/ssd/README_CN.md
@@ -246,7 +246,7 @@ bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] [CONFIG_PATH]
## 训练过程
-运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
+运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
### Ascend上训练
diff --git a/official/cv/ssim-ae/README_CN.md b/official/cv/ssim-ae/README_CN.md
index 1e954bf4cb0515b9ae6cd7fb4896c53ca48f546a..6a34cca874cdda7bb7a836a57b0c5a089ae7de8a 100644
--- a/official/cv/ssim-ae/README_CN.md
+++ b/official/cv/ssim-ae/README_CN.md
@@ -108,7 +108,7 @@ MVTec AD数据集
## 混合精度
-采用 [混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用 [混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/cv/tinydarknet/README_CN.md b/official/cv/tinydarknet/README_CN.md
index 12023a750ee68b5bab03fdcb986e5f19fc2ac639..c648fd1a1cefa506db0ef38fc55a62102e4fad49 100644
--- a/official/cv/tinydarknet/README_CN.md
+++ b/official/cv/tinydarknet/README_CN.md
@@ -64,7 +64,7 @@ Tiny-DarkNet是Joseph Chet Redmon等人提出的一个16层的针对于经典的
-
+
# [环境要求](#目录)
diff --git a/official/cv/unet/README.md b/official/cv/unet/README.md
index 627c4c7146f28a885531fe09e5d1ace89b95b54a..3093c54e5b20b6dfb11251848a366e03ca1495ff 100644
--- a/official/cv/unet/README.md
+++ b/official/cv/unet/README.md
@@ -504,7 +504,7 @@ The above python command will run in the background. You can view the results th
### Inference
If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you
-can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following
+can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following
the steps below, this is a simple example:
#### Running on Ascend 310
diff --git a/official/cv/unet/README_CN.md b/official/cv/unet/README_CN.md
index fce8e75dd5cde32fc66f0f6886e8915d66a2a33b..1bba434d8cb5e31e715b63ff47129a9acdc94f36 100644
--- a/official/cv/unet/README_CN.md
+++ b/official/cv/unet/README_CN.md
@@ -503,7 +503,7 @@ bash scripts/run_distribute_train_gpu.sh [RANKSIZE] [DATASET] [CONFIG_PATH]
#### 推理
-如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是一个简单的操作步骤示例:
+如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是一个简单的操作步骤示例:
##### Ascend 310环境运行
diff --git a/official/cv/unet3d/README.md b/official/cv/unet3d/README.md
index 49968f8687a59291b37a97f3ec7404f01fb1248c..ecd8796e5100fb99bb7f2edd2fd3f0120365f176 100644
--- a/official/cv/unet3d/README.md
+++ b/official/cv/unet3d/README.md
@@ -288,7 +288,7 @@ After training, you'll get some checkpoint files under the `train_parallel_fp[32
#### Distributed training on Ascend
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as shown in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection-checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size.
>
```shell
diff --git a/official/cv/vgg16/README.md b/official/cv/vgg16/README.md
index ea8971d8940240001521cf46991932e8f33215bd..e47a112fb56e63c7605ee125f8466efce876e7a7 100644
--- a/official/cv/vgg16/README.md
+++ b/official/cv/vgg16/README.md
@@ -94,7 +94,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
### Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -462,7 +462,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
...
```
-> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training.html).
+> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html).
> **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh`
##### Run vgg16 on GPU
diff --git a/official/cv/vgg16/README_CN.md b/official/cv/vgg16/README_CN.md
index d1423e1e1a75da8567411bf8bdb2e233bada6213..62a4695255b16a02ddafbf7fd52b5278902ae2e3 100644
--- a/official/cv/vgg16/README_CN.md
+++ b/official/cv/vgg16/README_CN.md
@@ -95,7 +95,7 @@ VGG 16网络主要由几个基本模块(包括卷积层和池化层)和三
### 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
@@ -462,7 +462,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
...
```
-> 关于rank_table.json,可以参考[分布式并行训练](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training.html)。
+> 关于rank_table.json,可以参考[分布式并行训练](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/introduction.html)。
> **注意** 将根据`device_num`和处理器总数绑定处理器核。如果您不希望预训练中绑定处理器内核,请在`scripts/run_distribute_train.sh`脚本中移除`taskset`相关操作。
##### GPU处理器环境运行VGG16
diff --git a/official/cv/vit/README.md b/official/cv/vit/README.md
index 7da8ba2bf9622c2927f94d60c4ecf740e45c221c..0b304c75c128d1a6eddba48eb4310d3a16a12b8c 100644
--- a/official/cv/vit/README.md
+++ b/official/cv/vit/README.md
@@ -65,7 +65,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -444,7 +444,7 @@ Current batch_ Size can only be set to 1.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). The steps below give a simple example:
- Running on Ascend
diff --git a/official/cv/vit/README_CN.md b/official/cv/vit/README_CN.md
index 12b1b79d17665e3b21df1120f9f18d56853659f4..db969068feeffa4b23e66d573be043179ce13b44 100644
--- a/official/cv/vit/README_CN.md
+++ b/official/cv/vit/README_CN.md
@@ -68,7 +68,7 @@ Vit是基于多个transformer encoder模块串联起来,由多个inception模
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -450,7 +450,7 @@ python export.py --config_path=[CONFIG_PATH]
### 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
- Ascend处理器环境运行
diff --git a/official/cv/warpctc/README.md b/official/cv/warpctc/README.md
index d8554d92e237e56d7307f12c4102ac5be6f6227e..783c2596b53d4f7ca9057389d03b56f192d2a603 100644
--- a/official/cv/warpctc/README.md
+++ b/official/cv/warpctc/README.md
@@ -254,7 +254,7 @@ save_checkpoint_path: "./checkpoint" # path to save checkpoint
### [Training Process](#contents)
-- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including the learning rate and other network hyperparameters. Click the [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
#### [Training](#contents)
diff --git a/official/cv/warpctc/README_CN.md b/official/cv/warpctc/README_CN.md
index 4e8750d109ce181854dce7014385fb15f022d9ba..6ead399acac185b3c4158db8a65e150ea1bc61d8 100644
--- a/official/cv/warpctc/README_CN.md
+++ b/official/cv/warpctc/README_CN.md
@@ -257,7 +257,7 @@ save_checkpoint_path: "./checkpoints" # 检查点保存路径,相对于t
## 训练过程
-- 在`default_config.yaml`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html),了解更多信息。
+- 在`default_config.yaml`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html),了解更多信息。
### 训练
diff --git a/official/cv/xception/README.md b/official/cv/xception/README.md
index 5ae40e616c7d60a48f44a5231d21ef42b07331cb..6dc790198d2e38e32543acfbec814865950e6c15 100644
--- a/official/cv/xception/README.md
+++ b/official/cv/xception/README.md
@@ -54,7 +54,7 @@ Dataset used can refer to paper.
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -193,7 +193,7 @@ bash run_eval_gpu.sh DEVICE_ID DATASET_PATH CHECKPOINT_PATH
bash run_infer_310.sh MINDIR_PATH DATA_PATH LABEL_FILE DEVICE_ID
```
-> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as shown in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
### Launch
diff --git a/official/cv/yolov3_resnet18/README.md b/official/cv/yolov3_resnet18/README.md
index 46281a40f148790e388644b73c7ad60a2ebfcfac..14e4425575a5c255cc2aaec2327eae7e6ed3935e 100644
--- a/official/cv/yolov3_resnet18/README.md
+++ b/official/cv/yolov3_resnet18/README.md
@@ -270,7 +270,7 @@ After installing MindSpore via the official website, you can start training and
### Training on Ascend
-To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it wil generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) file by `image_dir` and `anno_path`(the absolute image path is joined by the `image_dir` and the relative path in `anno_path`). **Note if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.**
+To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files from `image_dir` and `anno_path` (the absolute image path is formed by joining `image_dir` with the relative path in `anno_path`). **Note: if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.**
- Stand alone mode
@@ -311,7 +311,7 @@ Note the results is two-classification(person and face) used our own annotations
### Evaluation on Ascend
-To eval, run `eval.py` with the dataset `image_dir`, `anno_path`(eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of [checkpoint](https://www.mindspore.cn/docs/programming_guide/en/master/save_model.html) file.
+To eval, run `eval.py` with the dataset `image_dir`, `anno_path`(eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of [checkpoint](https://www.mindspore.cn/tutorials/en/master/advanced/train/save.html) file.
```bash
bash run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt
diff --git a/official/cv/yolov3_resnet18/README_CN.md b/official/cv/yolov3_resnet18/README_CN.md
index 6b0719df85514e8c86e3603e9932a0a9b26338db..6dd798f1a82e0bad702613b56f88e34710fa68c9 100644
--- a/official/cv/yolov3_resnet18/README_CN.md
+++ b/official/cv/yolov3_resnet18/README_CN.md
@@ -269,7 +269,7 @@ YOLOv3整体网络架构如下:
### Ascend上训练
-训练模型运行`train.py`,使用数据集`image_dir`、`anno_path`和`mindrecord_dir`。如果`mindrecord_dir`为空,则通过`image_dir`和`anno_path`(图像绝对路径由`image_dir`和`anno_path`中的相对路径连接)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果`mindrecord_dir`不为空,将使用`mindrecord_dir`而不是`image_dir`和`anno_path`。**
+训练模型运行`train.py`,使用数据集`image_dir`、`anno_path`和`mindrecord_dir`。如果`mindrecord_dir`为空,则通过`image_dir`和`anno_path`(图像绝对路径由`image_dir`和`anno_path`中的相对路径连接)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果`mindrecord_dir`不为空,将使用`mindrecord_dir`而不是`image_dir`和`anno_path`。**
- 单机模式
@@ -310,7 +310,7 @@ YOLOv3整体网络架构如下:
### Ascend评估
-运行`eval.py`,数据集为`image_dir`、`anno_path`(评估TXT)、`mindrecord_dir`和`ckpt_path`。`ckpt_path`是[检查点](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/save_model.html)文件的路径。
+运行`eval.py`,数据集为`image_dir`、`anno_path`(评估TXT)、`mindrecord_dir`和`ckpt_path`。`ckpt_path`是[检查点](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/save.html)文件的路径。
```shell script
bash run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt
diff --git a/official/nlp/bert/README.md b/official/nlp/bert/README.md
index e0f4f38e13d1bd07d62c33d2b1e02873c4249a15..72f6bb9d5e8edea2869d9ecc6a290b8f4870021e 100644
--- a/official/nlp/bert/README.md
+++ b/official/nlp/bert/README.md
@@ -209,8 +209,6 @@ Please follow the instructions in the link below to create an hccl.json file in
For distributed training among multiple machines, training command should be executed on each machine in a small time interval. Thus, an hccl.json is needed on each machine. [merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl) is a tool to create hccl.json for multi-machine case.
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
-
```text
For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
diff --git a/official/nlp/cpm/README.md b/official/nlp/cpm/README.md
index a309cd6a62f9f097f950d5738a1b41276830005a..33afd01d05c724fa47dc2a00c07eed4fac067eeb 100644
--- a/official/nlp/cpm/README.md
+++ b/official/nlp/cpm/README.md
@@ -309,7 +309,7 @@ After processing, the mindrecord file of training and reasoning is generated in
### Finetune Training Process
-- Set options in `src/config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `src/config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
- Run `run_distribute_train_ascend_single_machine.sh` for distributed and single machine training of CPM model.
diff --git a/official/nlp/cpm/README_CN.md b/official/nlp/cpm/README_CN.md
index bfa87f8adf50a3869d9c39e07449e7cc241f7492..f6bc6ad1b17eea2faba86fa928ab0a0b1b328068 100644
--- a/official/nlp/cpm/README_CN.md
+++ b/official/nlp/cpm/README_CN.md
@@ -309,7 +309,7 @@ Parameters for dataset and network (Training/Evaluation):
### Finetune训练过程
-- 在`src/config.py`中设置,包括模型并行、batchsize、学习率和网络超参数。点击[这里](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html)查看更多数据集信息。
+- 在`src/config.py`中设置,包括模型并行、batchsize、学习率和网络超参数。点击[这里](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html)查看更多数据集信息。
- 运行`run_distribute_train_ascend_single_machine.sh`,进行CPM模型的单机8卡分布式训练。
diff --git a/official/nlp/duconv/README_CN.md b/official/nlp/duconv/README_CN.md
index 95047b1b306fd2ea958fbad0e25c81b655dffd87..afb773f9c20ba2cc9757a8a75d05bf8070b4839a 100644
--- a/official/nlp/duconv/README_CN.md
+++ b/official/nlp/duconv/README_CN.md
@@ -85,7 +85,7 @@ Proactive Conversation模型包含四个部分:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/official/nlp/mass/README.md b/official/nlp/mass/README.md
index 98e0c045d52e234daf9707d8852d96764e474e66..d421207e629e924aae71919625fca3aefd53a43c 100644
--- a/official/nlp/mass/README.md
+++ b/official/nlp/mass/README.md
@@ -501,7 +501,7 @@ subword-nmt
rouge
```
-
+
# Get started
@@ -563,7 +563,7 @@ Get the log and output files under the path `./train_mass_*/`, and the model fil
## Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html).
For inference, config the options in `default_config.yaml` firstly:
- Assign the `default_config.yaml` under `data_path` node to the dataset path.
diff --git a/official/nlp/mass/README_CN.md b/official/nlp/mass/README_CN.md
index 3020cf77c1a262f2717b1d24601bfca1b2b81621..fc8f203e79a99ec6b960fd3005855629ce4ad4d6 100644
--- a/official/nlp/mass/README_CN.md
+++ b/official/nlp/mass/README_CN.md
@@ -505,7 +505,7 @@ subword-nmt
rouge
```
-
+
# 快速上手
@@ -567,7 +567,7 @@ bash run_gpu.sh -t t -n 1 -i 1
## 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。
推理时,请先配置`config.json`中的选项:
- 将`default_config.yaml`节点下的`data_path`配置为数据集路径。
diff --git a/official/nlp/pangu_alpha/README.md b/official/nlp/pangu_alpha/README.md
index d885a1feae3b412512ef834063e7aae7968e9d47..bed9c6d0e77af6661385e1908d15fa1263bffbcb 100644
--- a/official/nlp/pangu_alpha/README.md
+++ b/official/nlp/pangu_alpha/README.md
@@ -1,4 +1,4 @@
-# Contents
+# Contents
- [Contents](#contents)
- [PanGu-Alpha Description](#pangu-alpha-description)
@@ -45,7 +45,7 @@ with our parallel setting. We summarized the training tricks as followings:
2. Pipeline Model Parallelism
3. Optimizer Model Parallelism
-The above features can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/auto_parallel.html).
+The above features can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html).
More amazing features are still under developing.
The technical report and checkpoint file can be found [here](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-AIpha).
@@ -151,7 +151,7 @@ bash scripts/run_distribute_train.sh /data/pangu_30_step_ba64/ /root/hccl_8p.jso
The above command involves some `args` described below:
- DATASET: The path to the mindrecord files's parent directory . For example: `/home/work/mindrecord/`.
-- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html). It's a json file describes the `device id`, `service ip` and `rank`.
+- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html). It is a JSON file that describes the `device id`, `service ip` and `rank`.
- RANK_SIZE: The device number. This can be your total device numbers. For example, 8, 16, 32 ...
- TYPE: The param init type. The parameters will be initialized with float32. Or you can replace it with `fp16`. This will save a little memory used on the device.
- MODE: The configure mode. This mode will set the `hidden size` and `layers` to make the parameter number near 2.6 billions. The other mode can be `13B` (`hidden size` 5120 and `layers` 40, which needs at least 16 cards to train.) and `200B`.
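For reference, the RANK_TABLE file described above generally looks like the sketch below — a single server with two devices, where every ID and IP address is a placeholder; the linked tutorial and the [hccl_tools](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools) script are authoritative for the exact format:

```json
{
    "version": "1.0",
    "server_count": "1",
    "server_list": [
        {
            "server_id": "10.0.0.1",
            "device": [
                {"device_id": "0", "device_ip": "192.1.27.6", "rank_id": "0"},
                {"device_id": "1", "device_ip": "192.2.27.6", "rank_id": "1"}
            ],
            "host_nic_ip": "reserve"
        }
    ],
    "status": "completed"
}
```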
@@ -189,7 +189,7 @@ device0/log0.log).
The script will launch the GPU training through `mpirun`, the user can run the following command on any machine to start training.
Note when start training multi-node, the variables `NCCL_SOCKET_IFNAME` `NCCL_IB_HCA` may be different on some servers. If you meet some errors and
-strange phenomenon, please unset or set the NCCL variables. Details can be checked on this [link](https://www.mindspore.cn/docs/faq/zh-CN/master/distributed_configure.html).
+strange phenomena, please unset or set the NCCL variables. Details can be checked on this [link](https://www.mindspore.cn/docs/zh-CN/master/faq/distributed_configure.html).
```bash
# The following variables are optional.
@@ -200,7 +200,7 @@ bash scripts/run_distributed_train_gpu.sh RANK_SIZE HOSTFILE DATASET PER_BATCH M
```
- RANK_SIZE: The device number. This can be your total device numbers. For example, 8, 16, 32 ...
-- HOSTFILE: It's a text file describes the host ip and its devices. Please see our [tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html) or [OpenMPI](https://www.open-mpi.org/) for more details.
+- HOSTFILE: It is a text file that describes the host IP and its devices. Please see our [tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html) or [OpenMPI](https://www.open-mpi.org/) for more details.
- DATASET: The path to the mindrecord files's parent directory . For example: `/home/work/mindrecord/`.
- PER_BATCH: The batch size for each data parallel-way.
- MODE: Can be `1.3B` `2.6B`, `13B` and `200B`.
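The HOSTFILE and RANK_SIZE described above can be sketched as follows; the IP addresses are placeholders, and the `awk` line merely illustrates how RANK_SIZE relates to the `slots` entries:

```shell
# Hypothetical hostfile for two nodes with 8 GPUs each
# (IP addresses are placeholders; see the OpenMPI docs for the exact syntax).
cat > /tmp/hostfile <<'EOF'
192.168.0.10 slots=8
192.168.0.11 slots=8
EOF

# RANK_SIZE is the total device count, i.e. the sum of the slots entries.
RANK_SIZE=$(awk -F'slots=' '{s += $2} END {print s}' /tmp/hostfile)
echo "$RANK_SIZE"   # 16
```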
@@ -222,7 +222,7 @@ bash scripts/run_distribute_train_moe_host_device.sh DATASET RANK_TABLE RANK_SIZ
The above command involves some `args` described below:
- DATASET: The path to the mindrecord files's parent directory . For example: `/home/work/mindrecord/`.
-- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html). It's a json file describes the `device id`, `service ip` and `rank`.
+- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html). It is a JSON file that describes the `device id`, `service ip` and `rank`.
- RANK_SIZE: The device number. This can be your total device numbers. For example, 8, 16, 32 ...
- TYPE: The param init type. The parameters will be initialized with float32. Or you can replace it with `fp16`. This will save a little memory used on the device.
- MODE: The configure mode. This mode will set the `hidden size` and `layers` to make the parameter number near 2.6 billions. The other mode can be `13B` (`hidden size` 5120 and `layers` 40, which needs at least 16 cards to train.) and `200B`.
diff --git a/official/nlp/transformer/README.md b/official/nlp/transformer/README.md
index 3e35c37849c251dfdc71090a831d4ada95e866f8..4fec4896dc1191540f5f6823995ddaade8457d70 100644
--- a/official/nlp/transformer/README.md
+++ b/official/nlp/transformer/README.md
@@ -342,7 +342,7 @@ Parameters for learning rate:
## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset.
- Run `run_standalone_train.sh` for non-distributed training of Transformer model.
diff --git a/official/nlp/transformer/README_CN.md b/official/nlp/transformer/README_CN.md
index be21a0a8c893329070c6daded46dfcd0c081a7bb..913aafe5629bf1cf65b92ea5a4584b15a39ab09c 100644
--- a/official/nlp/transformer/README_CN.md
+++ b/official/nlp/transformer/README_CN.md
@@ -341,7 +341,7 @@ Parameters for learning rate:
### 训练过程
-- 在`default_config.yaml`中设置选项,包括loss_scale、学习率和网络超参数。点击[这里](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html)查看更多数据集信息。
+- 在`default_config.yaml`中设置选项,包括loss_scale、学习率和网络超参数。点击[这里](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html)查看更多数据集信息。
- 运行`run_standalone_train.sh`,进行Transformer模型的非分布式训练。
diff --git a/official/recommend/ncf/README.md b/official/recommend/ncf/README.md
index f12d20935fc3339f19e5330d277d9859f01a0de6..72b829054014dc3e343d4e85556a18506b8d6064 100644
--- a/official/recommend/ncf/README.md
+++ b/official/recommend/ncf/README.md
@@ -73,7 +73,7 @@ In both datasets, the timestamp is represented in seconds since midnight Coordin
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
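As a rough illustration of the memory saving that half precision brings, the snippet below compares FP32 and FP16 storage for the same tensor shape — NumPy is used here purely for demonstration, since MindSpore performs the FP16 handling inside its own backend:

```python
import numpy as np

# The same 1024x1024 tensor stored in single vs. half precision.
fp32 = np.zeros((1024, 1024), dtype=np.float32)  # 4 bytes per element
fp16 = np.zeros((1024, 1024), dtype=np.float16)  # 2 bytes per element

print(fp32.nbytes)  # 4194304 bytes
print(fp16.nbytes)  # 2097152 bytes, i.e. half the memory for the same shape
```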
@@ -335,9 +335,9 @@ Inference result is saved in current path, you can find result like this in acc.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). The following is a simple example of the steps:
-
+
```python
# Load unseen dataset for inference
diff --git a/research/audio/fcn-4/README.md b/research/audio/fcn-4/README.md
index 663a5e305310d4daa0a106ef3ea99b8a1a477783..34e07d0c6ded4b4ae8f8e5c5653460bf8ea4bfd8 100644
--- a/research/audio/fcn-4/README.md
+++ b/research/audio/fcn-4/README.md
@@ -41,7 +41,7 @@ FCN-4 is a convolutional neural network architecture, its name FCN-4 comes from
### Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
## [Environment Requirements](#contents)
diff --git a/research/audio/speech_transformer/README.md b/research/audio/speech_transformer/README.md
index 246ff43707039e70986529ae0ca38b7499fdc3d9..e665fba093ee874bce63a4b90bf6dd9dff6a0d24 100644
--- a/research/audio/speech_transformer/README.md
+++ b/research/audio/speech_transformer/README.md
@@ -187,7 +187,7 @@ Dataset is preprocessed using `Kaldi` and converts kaldi binaries into Python pi
## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset.
- Run `run_standalone_train_gpu.sh` for non-distributed training of Transformer model.
diff --git a/research/cv/3D_DenseNet/README.md b/research/cv/3D_DenseNet/README.md
index e3ad419a9d6542372bc9236d1f15ebfe92c5bd18..68a648a4699085fbf464144b4ce68e6c42ebe8c1 100644
--- a/research/cv/3D_DenseNet/README.md
+++ b/research/cv/3D_DenseNet/README.md
@@ -222,7 +222,7 @@ Dice Coefficient (DC) for 9th subject (9 subjects for training and 1 subject for
|-------------------|:-------------------:|:---------------------:|:-----:|:--------------:|
|3D-SkipDenseSeg | 93.66| 90.80 | 90.65 | 91.70 |
-Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools) For large models like InceptionV4, it's better to export an external environment variable export HCCL_CONNECT_TIMEOUT=600 to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size. To avoid ops error,you should change the code like below:
+Notes: RANK_TABLE_FILE can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as described in this [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable `HCCL_CONNECT_TIMEOUT=600` to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size. To avoid operator errors, you should change the code as below:
in train.py:
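Independently of the train.py change, the HCCL_CONNECT_TIMEOUT variable from the notes above can simply be exported before launching the training scripts:

```shell
# Extend the HCCL connection-check window (value in seconds); compilation
# of large models can exceed the default 120 s and trip the check.
export HCCL_CONNECT_TIMEOUT=600
echo "$HCCL_CONNECT_TIMEOUT"   # 600
```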
diff --git a/research/cv/3D_DenseNet/README_CN.md b/research/cv/3D_DenseNet/README_CN.md
index 2f81477c8caddf4c845958c2f95fed499699730d..e9d5bd111f9c6835e1bdc4aa1b0aeac8b1eb060d 100644
--- a/research/cv/3D_DenseNet/README_CN.md
+++ b/research/cv/3D_DenseNet/README_CN.md
@@ -1,5 +1,3 @@
-
-
# 目录
[View English](./README.md)
@@ -214,7 +212,7 @@ bash run_eval.sh 3D-DenseSeg-20000_36.ckpt data/data_val
|-------------------|:-------------------:|:---------------------:|:-----:|:--------------:|
|3D-SkipDenseSeg | 93.66| 90.80 | 90.65 | 91.70 |
-Notes: 分布式训练需要一个RANK_TABLE_FILE,文件的删除方式可以参考该链接[Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) ,device_ip的设置参考该链接 [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools) 对于像InceptionV4这样的大模型来说, 最好导出一个外部环境变量,export HCCL_CONNECT_TIMEOUT=600,以将hccl连接检查时间从默认的120秒延长到600秒。否则,连接可能会超时,因为编译时间会随着模型大小的增加而增加。在1.3.0版本下,3D算子可能存在一些问题,您可能需要更改context.set_auto_parallel_context的部分代码:
+Notes: 分布式训练需要一个RANK_TABLE_FILE,文件的生成方式可以参考该链接[Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) ,device_ip的设置参考该链接 [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools) 对于像InceptionV4这样的大模型来说, 最好导出一个外部环境变量,export HCCL_CONNECT_TIMEOUT=600,以将hccl连接检查时间从默认的120秒延长到600秒。否则,连接可能会超时,因为编译时间会随着模型大小的增加而增加。在1.3.0版本下,3D算子可能存在一些问题,您可能需要更改context.set_auto_parallel_context的部分代码:
in train.py:
diff --git a/research/cv/APDrawingGAN/README_CN.md b/research/cv/APDrawingGAN/README_CN.md
index 292a446ab81c7df6e49219e458bd6af0141417bf..15d62ce12227419456796318903840746c3f90c5 100644
--- a/research/cv/APDrawingGAN/README_CN.md
+++ b/research/cv/APDrawingGAN/README_CN.md
@@ -86,7 +86,7 @@ auxiliary.ckpt文件获取:从 https://cg.cs.tsinghua.edu.cn/people/~Yongjin/A
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/AlignedReID++/README_CN.md b/research/cv/AlignedReID++/README_CN.md
index b44ae71671b8b7b7ead395c9badc1ef8d9dad0d3..53bdce3e0ca6843efaa910a6645edd3be1c8a515 100644
--- a/research/cv/AlignedReID++/README_CN.md
+++ b/research/cv/AlignedReID++/README_CN.md
@@ -61,7 +61,7 @@ AlignedReID++采用resnet50作为backbone,重新命名了AlignedReID中提出
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -403,7 +403,7 @@ market1501上评估AlignedReID++
### 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
在进行推理之前我们需要先导出模型,mindir可以在本地环境上导出。batch_size默认为1。
diff --git a/research/cv/AlphaPose/README_CN.md b/research/cv/AlphaPose/README_CN.md
index eb28099979e9c30e67b4a41ca5e7f8e3589d4ad7..39c521465088f360161ff646021447ef399e8ec1 100644
--- a/research/cv/AlphaPose/README_CN.md
+++ b/research/cv/AlphaPose/README_CN.md
@@ -55,7 +55,7 @@ AlphaPose的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/DDRNet/README_CN.md b/research/cv/DDRNet/README_CN.md
index e0ef16c494a4199e6ac042d4f35f9c7cc8cf887e..0723bcef7e61968152d44436bcdd78a2b85877a3 100644
--- a/research/cv/DDRNet/README_CN.md
+++ b/research/cv/DDRNet/README_CN.md
@@ -53,7 +53,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/EDSR/README_CN.md b/research/cv/EDSR/README_CN.md
index 1af53d936dbfcbd26d0b497517a039e3b2d748b8..cec64c247d50b053fe0c747ca988d7da1ec12e72 100644
--- a/research/cv/EDSR/README_CN.md
+++ b/research/cv/EDSR/README_CN.md
@@ -97,7 +97,7 @@ EDSR是由多个优化后的residual blocks串联而成,相比原始版本的r
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html?highlight=%E6%B7%B7%E5%90%88%E7%B2%BE%E5%BA%A6)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html?highlight=%E6%B7%B7%E5%90%88%E7%B2%BE%E5%BA%A6)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/EGnet/README_CN.md b/research/cv/EGnet/README_CN.md
index e945d0a66f739accd65eed77f62236c6020327c1..8f9c9f0e5258f87c4e3c3412b1c28f392ba4c39a 100644
--- a/research/cv/EGnet/README_CN.md
+++ b/research/cv/EGnet/README_CN.md
@@ -359,7 +359,7 @@ bash run_standalone_train_gpu.sh
bash run_distribute_train.sh 8 [RANK_TABLE_FILE]
```
-线下运行分布式训练请参照[mindspore分布式并行训练基础样例(Ascend)](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html)
+线下运行分布式训练请参照[mindspore分布式并行训练基础样例(Ascend)](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html)
- 线上modelarts分布式训练
diff --git a/research/cv/GENet_Res50/README_CN.md b/research/cv/GENet_Res50/README_CN.md
index 726cf0d2f8233cd5b8f2017cebb7416d1f19779f..0a773687370ee880ecd624391e51cc93aee7d7d0 100644
--- a/research/cv/GENet_Res50/README_CN.md
+++ b/research/cv/GENet_Res50/README_CN.md
@@ -64,7 +64,7 @@ Imagenet 2017和Imagenet 2012 数据集一致
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/LightCNN/README.md b/research/cv/LightCNN/README.md
index c2d524a5ad416acbe940c0601d84bfb704990b3d..21f59b7fbd0b7ee68d1799da018dea3341e8b29e 100644
--- a/research/cv/LightCNN/README.md
+++ b/research/cv/LightCNN/README.md
@@ -119,7 +119,7 @@ Dataset structure:
## [Mixed Precision](#mixedprecision)
-The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training
+The [mixed-precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training
method uses single-precision and half-precision data to improve the training speed of deep learning neural networks,
while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training
increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches
@@ -139,7 +139,7 @@ reduce precision" to view the operators with reduced precision.
- Generate config json file for 8-card training
- [Simple tutorial](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)
- For detailed configuration method, please refer to
- the [official website tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html#configuring-distributed-environment-variables).
+ the [official website tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#configuring-distributed-environment-variables).
# [Quick start](#Quickstart)
@@ -637,7 +637,7 @@ Please check the official [homepage](https://gitee.com/mindspore/models).
[5]: https://pan.baidu.com/s/1eR6vHFO
-[6]: https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html
+[6]: https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html
[7]: http://www.cbsr.ia.ac.cn/users/scliao/projects/blufr/BLUFR.zip
diff --git a/research/cv/LightCNN/README_CN.md b/research/cv/LightCNN/README_CN.md
index 97e91e01043b49c58905a7ea2c6dc889dda439ed..4866de2f29572ff973e9e14b8c789e9cf8a25eab 100644
--- a/research/cv/LightCNN/README_CN.md
+++ b/research/cv/LightCNN/README_CN.md
@@ -107,7 +107,7 @@ LightCNN适用于有大量噪声的人脸识别数据集,提出了maxout 的
- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
- 生成config json文件用于8卡训练。
- [简易教程](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)
- - 详细配置方法请参照[官网教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html#配置分布式环境变量)。
+ - 详细配置方法请参照[官网教程](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置分布式环境变量)。
# 快速入门
@@ -516,7 +516,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
[3]: https://drive.google.com/file/d/0ByNaVHFekDPRbFg1YTNiMUxNYXc/view?usp=sharing
[4]: https://hyper.ai/datasets/5543
[5]: https://pan.baidu.com/s/1eR6vHFO
-[6]: https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html
+[6]: https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html
[7]: http://www.cbsr.ia.ac.cn/users/scliao/projects/blufr/BLUFR.zip
[8]: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/code/lfw_pairs.mat
[9]: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/results/LightenedCNN_B_lfw.mat
diff --git a/research/cv/ManiDP/Readme.md b/research/cv/ManiDP/Readme.md
index 2f4302712e41d70ffaaf7f3eafe0e28070067904..403094c0f82b9e8f75e113a308ecc4095473e6ea 100644
--- a/research/cv/ManiDP/Readme.md
+++ b/research/cv/ManiDP/Readme.md
@@ -40,7 +40,7 @@ Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/NFNet/README_CN.md b/research/cv/NFNet/README_CN.md
index d46125b2b0d7ebda8f1d8f4fde4424ec918beb7d..ee4fdad5ee12e981c7b55881fe7b8946d900f384 100644
--- a/research/cv/NFNet/README_CN.md
+++ b/research/cv/NFNet/README_CN.md
@@ -57,7 +57,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/RefineDet/README_CN.md b/research/cv/RefineDet/README_CN.md
index 3645326d5569484cd8c7d472f13509531676b48c..92353c907e1c0724422f5413779a2d2f0db028ed 100644
--- a/research/cv/RefineDet/README_CN.md
+++ b/research/cv/RefineDet/README_CN.md
@@ -211,7 +211,7 @@ sh run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
## 训练过程
-运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
+运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
### Ascend上训练
diff --git a/research/cv/RefineNet/README.md b/research/cv/RefineNet/README.md
index 413b7e363b23169f7f8af97fbb5e897800df9afc..fb8c1e4dbbe2095fc6c59550640c1b7df6821ced 100644
--- a/research/cv/RefineNet/README.md
+++ b/research/cv/RefineNet/README.md
@@ -84,7 +84,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
diff --git a/research/cv/SE-Net/README.md b/research/cv/SE-Net/README.md
index dd8993efe011a227a534e22d187ffbb0e0242b98..6c981e56cd7092ff8bc8777ebaaa1430900049ef 100644
--- a/research/cv/SE-Net/README.md
+++ b/research/cv/SE-Net/README.md
@@ -67,7 +67,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/SE_ResNeXt50/README_CN.md b/research/cv/SE_ResNeXt50/README_CN.md
index e4d3136abcb3f0bad4d0bb88ff46071937fc2ed1..e9e54a23c4c7c98c0fdd233ebc280a1c24c767e8 100644
--- a/research/cv/SE_ResNeXt50/README_CN.md
+++ b/research/cv/SE_ResNeXt50/README_CN.md
@@ -56,7 +56,7 @@ SE-ResNeXt的总体网络架构如下: [链接](https://arxiv.org/abs/1709.015
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# 环境要求
diff --git a/research/cv/TNT/README_CN.md b/research/cv/TNT/README_CN.md
index bf21f0efc612360e1ea7ee3dde4999028b2b90dc..cf8699f34db3cb67a87ea6fa45047bae87d7a182 100644
--- a/research/cv/TNT/README_CN.md
+++ b/research/cv/TNT/README_CN.md
@@ -53,7 +53,7 @@ Transformer是一种最初用于NLP任务的基于自注意力的神经网络。
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/cct/README_CN.md b/research/cv/cct/README_CN.md
index f67e02896c6af6ad981b3692aff4dc9ab3d53f64..b61064ab0bbb1b832c926802780cb4f88d75d0c3 100644
--- a/research/cv/cct/README_CN.md
+++ b/research/cv/cct/README_CN.md
@@ -51,7 +51,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/convnext/README_CN.md b/research/cv/convnext/README_CN.md
index eec99f77397119d555d6939baffc00b25e1de5b5..09a9026289a82e0b000f629fc76c24daefb6783d 100644
--- a/research/cv/convnext/README_CN.md
+++ b/research/cv/convnext/README_CN.md
@@ -53,7 +53,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/dcgan/README.md b/research/cv/dcgan/README.md
index 5fd8c0e66e001a6f66372604fb5ee2ddb414d91f..cca8544677b5c74e3ee253c0da0119453618adec 100644
--- a/research/cv/dcgan/README.md
+++ b/research/cv/dcgan/README.md
@@ -137,7 +137,7 @@ dcgan_cifar10_cfg {
## [Training Process](#contents)
-- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset.
### [Training](#content)
diff --git a/research/cv/deeplabv3plus/README_CN.md b/research/cv/deeplabv3plus/README_CN.md
index 38a80416f19b82d4d0ff9945d255e0b700f92958..a404b5c28ee57e9d1b33a1059a4b5f8c3712fd57 100644
--- a/research/cv/deeplabv3plus/README_CN.md
+++ b/research/cv/deeplabv3plus/README_CN.md
@@ -85,7 +85,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/dlinknet/README.md b/research/cv/dlinknet/README.md
index 06cf9bdedb7b7e6d44f5c2eeb92913573e887e36..d272c06d96897538edfdc68948595a1476d87103 100644
--- a/research/cv/dlinknet/README.md
+++ b/research/cv/dlinknet/README.md
@@ -316,7 +316,7 @@ bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH]
#### inference
If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you
-can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following
+can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following
the steps below, this is a simple example:
##### running-on-ascend-310
diff --git a/research/cv/dlinknet/README_CN.md b/research/cv/dlinknet/README_CN.md
index 2cdb3ed0cfbdbad612eb4432068b42b75d0b94be..2e43b9cb74ea35540b6b2b14e73e8dda03b6f81e 100644
--- a/research/cv/dlinknet/README_CN.md
+++ b/research/cv/dlinknet/README_CN.md
@@ -320,7 +320,7 @@ bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH]
#### 推理
-如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是一个简单的操作步骤示例:
+如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是一个简单的操作步骤示例:
##### Ascend 310环境运行
diff --git a/research/cv/efficientnetv2/README_CN.md b/research/cv/efficientnetv2/README_CN.md
index 9e90c4a99c125551acf93af51c935f43ca38df45..75dea2a674dcb7e988b3690e0bc4ca7591d0a39c 100644
--- a/research/cv/efficientnetv2/README_CN.md
+++ b/research/cv/efficientnetv2/README_CN.md
@@ -51,7 +51,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/fairmot/README.md b/research/cv/fairmot/README.md
index ff3f565fa2a4f66f91bc2130ad469b81f33e677a..c75f79a657a54850918b4eb32403ab2b854ee3fd 100644
--- a/research/cv/fairmot/README.md
+++ b/research/cv/fairmot/README.md
@@ -46,7 +46,7 @@ Dataset used: ETH, CalTech, MOT17, CUHK-SYSU, PRW, CityPerson
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/fishnet99/README_CN.md b/research/cv/fishnet99/README_CN.md
index 7129785a41cc6e5cd1c6b27dcd50b57d67f4cd08..86aae8c3412467fc5631ba291a826a4d2ebd9c10 100644
--- a/research/cv/fishnet99/README_CN.md
+++ b/research/cv/fishnet99/README_CN.md
@@ -63,7 +63,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# 环境要求
diff --git a/research/cv/glore_res/README_CN.md b/research/cv/glore_res/README_CN.md
index ead07cb539af3114f1a5976e5b296df0115b0db8..4a7afb8ba75056e0dd6d9edba0bc0140269e3009 100644
--- a/research/cv/glore_res/README_CN.md
+++ b/research/cv/glore_res/README_CN.md
@@ -81,7 +81,7 @@ glore_res200网络模型的backbone是ResNet200, 在Stage2, Stage3中分别均
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/glore_res200/README_CN.md b/research/cv/glore_res200/README_CN.md
index 6c81a1be65a1805631d5179aa1f547795d8bbdc3..cd0bf75fa9600e1c1c49eaeca3e03e65ea8d6d90 100644
--- a/research/cv/glore_res200/README_CN.md
+++ b/research/cv/glore_res200/README_CN.md
@@ -72,7 +72,7 @@
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/glore_res50/README.md b/research/cv/glore_res50/README.md
index 39e47cae98db1df413630c3e0452ae068b3bbbd2..bc80ce7d18f2c1fff629fabf858c86e47dcfbd61 100644
--- a/research/cv/glore_res50/README.md
+++ b/research/cv/glore_res50/README.md
@@ -61,7 +61,7 @@ glore_res的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/hardnet/README_CN.md b/research/cv/hardnet/README_CN.md
index d1b901770c83e5c25d31a229c4888d0dd7ef1c58..7b44eef5597383941430dcea6659837f9f7cf667 100644
--- a/research/cv/hardnet/README_CN.md
+++ b/research/cv/hardnet/README_CN.md
@@ -60,7 +60,7 @@ HarDNet指的是Harmonic DenseNet: A low memory traffic network,其突出的
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -419,7 +419,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
### 推理
-如果您需要使用此训练模型在Ascend 910上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在Ascend 910上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
- Ascend处理器环境运行
@@ -456,7 +456,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
print("==============Acc: {} ==============".format(acc))
```
-如果您需要使用此训练模型在GPU上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
- GPU处理器环境运行
diff --git a/research/cv/inception_resnet_v2/README.md b/research/cv/inception_resnet_v2/README.md
index cf199c606e2f95b76f84d5efb031e2a662b7675c..3852562defc0cc520377ec27e0639caa2c42d2fc 100644
--- a/research/cv/inception_resnet_v2/README.md
+++ b/research/cv/inception_resnet_v2/README.md
@@ -44,7 +44,7 @@ The dataset used is [ImageNet](https://image-net.org/download.php).
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -122,7 +122,7 @@ bash scripts/run_standalone_train_ascend.sh DEVICE_ID DATA_DIR
```
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection-checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size.
>
> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
diff --git a/research/cv/inception_resnet_v2/README_CN.md b/research/cv/inception_resnet_v2/README_CN.md
index 9fadab9ea7b505c4da2ffc320e5a0fd61c841a4e..8be29bd5cb7f82f8ccfba4e19486768016ecdce1 100644
--- a/research/cv/inception_resnet_v2/README_CN.md
+++ b/research/cv/inception_resnet_v2/README_CN.md
@@ -56,7 +56,7 @@ Inception_ResNet_v2的总体网络架构如下:
## 混合精度(Ascend)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
@@ -133,7 +133,7 @@ bash scripts/run_distribute_train_ascend.sh RANK_TABLE_FILE DATA_DIR
bash scripts/run_standalone_train_ascend.sh DEVICE_ID DATA_DIR
```
-> 注:RANK_TABLE_FILE可参考[链接]( https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html)。device_ip可以通过[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)获取
+> 注:RANK_TABLE_FILE可参考[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html)。device_ip可以通过[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)获取
- GPU:
diff --git a/research/cv/mae/README_CN.md b/research/cv/mae/README_CN.md
index 5c8f9a266fbaedece9b487ae4e0e01fc333ba8af..ee67f0afa6b16237a6d225e62b87caf175d7add8 100644
--- a/research/cv/mae/README_CN.md
+++ b/research/cv/mae/README_CN.md
@@ -63,7 +63,7 @@ This is a MindSpore/NPU re-implementation of the paper [Masked Autoencoders Are
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
@@ -390,7 +390,7 @@ This is a MindSpore/NPU re-implementation of the paper [Masked Autoencoders Are
### 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
- Ascend处理器环境运行
diff --git a/research/cv/metric_learn/README_CN.md b/research/cv/metric_learn/README_CN.md
index 6c95794fd9b6a2745423921bc0d0fadfa926104f..1588e0afaa800b983dbf6664caf2eaa0c79df58b 100644
--- a/research/cv/metric_learn/README_CN.md
+++ b/research/cv/metric_learn/README_CN.md
@@ -80,7 +80,7 @@ cd Stanford_Online_Products && head -n 1048 test.txt > test_tiny.txt
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/midas/README.md b/research/cv/midas/README.md
index 4ce4c8ffd710790c75a8012ed75a36588f559940..b5535338368be95ebe9c4e71ea2a3a7d547dec84 100644
--- a/research/cv/midas/README.md
+++ b/research/cv/midas/README.md
@@ -55,7 +55,7 @@ Midas的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/nas-fpn/README_CN.md b/research/cv/nas-fpn/README_CN.md
index 1fd7d580f2f748dfcb347be7388531785689ba09..8706b3600fb5105879c041b238e8ca2c62a66777 100644
--- a/research/cv/nas-fpn/README_CN.md
+++ b/research/cv/nas-fpn/README_CN.md
@@ -161,7 +161,7 @@ bash scripts/run_single_train.sh DEVICE_ID MINDRECORD_DIR PRE_TRAINED(optional)
```
> 注意:
-RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
#### 运行
diff --git a/research/cv/ntsnet/README.md b/research/cv/ntsnet/README.md
index 20a42e7c3c11e129ca9d87f068ce2432c9416d75..f53854d337784083bc7ec23c99fe96ccbf9e62f0 100644
--- a/research/cv/ntsnet/README.md
+++ b/research/cv/ntsnet/README.md
@@ -133,7 +133,7 @@ Usage: bash run_standalone_train_ascend.sh [DATA_URL] [TRAIN_URL]
## [Training Process](#contents)
-- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including the learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about datasets.
- Get ResNet50 pretrained model from [Mindspore Hub](https://www.mindspore.cn/resources/hub/details?MindSpore/ascend/v1.2/resnet50_v1.2_imagenet2012)
### [Training](#content)
diff --git a/research/cv/osnet/README.md b/research/cv/osnet/README.md
index 15c7ea6a68f5521a92530c19f73a1cca9ce8ee8a..f449b8b59d3e2272bacffa36a5fdf093a9742ba1 100644
--- a/research/cv/osnet/README.md
+++ b/research/cv/osnet/README.md
@@ -155,7 +155,7 @@ bash run_eval_ascend.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```
> Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as described in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection-checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size.
>
> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_train_distribute_ascend.sh`
>
diff --git a/research/cv/ras/README.md b/research/cv/ras/README.md
index c2a18eb1bb50d10068be5ad2e3959ad082c97e2f..1dc300f6e9db4b0c801f643de802cc843d07e461 100644
--- a/research/cv/ras/README.md
+++ b/research/cv/ras/README.md
@@ -73,7 +73,7 @@ RAS总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/renas/Readme.md b/research/cv/renas/Readme.md
index 3a862f9876e30d68b41f79aaa9d583df20136ff6..f76c2c8edc9ff99de023467e74a670a6a031ab01 100644
--- a/research/cv/renas/Readme.md
+++ b/research/cv/renas/Readme.md
@@ -39,7 +39,7 @@ An effective and efficient architecture performance evaluation scheme is essenti
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/res2net/README.md b/research/cv/res2net/README.md
index d971e0ee4a7ef41ddbf85e84982a42d0d05d4b56..199f6fa24df8a79b8e716f0074940817f88e5a48 100644
--- a/research/cv/res2net/README.md
+++ b/research/cv/res2net/README.md
@@ -82,7 +82,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/res2net_deeplabv3/README.md b/research/cv/res2net_deeplabv3/README.md
index 4632c1d4f2247849042ebf8de60da9bd36b1b90d..478034d000c0136bdec8431ce45b6bfcd97b39b4 100644
--- a/research/cv/res2net_deeplabv3/README.md
+++ b/research/cv/res2net_deeplabv3/README.md
@@ -85,7 +85,7 @@ You can also generate the list file automatically by run script: `python get_dat
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/resnet3d/README_CN.md b/research/cv/resnet3d/README_CN.md
index 3410ec5ca004b87e4853854f97fec3744d8d7eb1..5ed6d25aad4439ee700833e5bf4b5fe436789572 100644
--- a/research/cv/resnet3d/README_CN.md
+++ b/research/cv/resnet3d/README_CN.md
@@ -105,7 +105,7 @@ python3 generate_video_jpgs.py --video_path ~/dataset/hmdb51/videos/ --target_pa
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/resnet50_bam/README.md b/research/cv/resnet50_bam/README.md
index 170f2124ca41cf276d15c04aa8781b47ff4a2772..9367f89f3bdcea82793d32481271a6aa77b5883d 100644
--- a/research/cv/resnet50_bam/README.md
+++ b/research/cv/resnet50_bam/README.md
@@ -56,7 +56,7 @@ Data set used: [ImageNet2012](http://www.image-net.org/)
## Mixed precision
-The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches of training on specific hardware.
+The [mixed-precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches of training on specific hardware.
# Environmental requirements
diff --git a/research/cv/resnet50_bam/README_CN.md b/research/cv/resnet50_bam/README_CN.md
index 5b7ea5b2680e343d0797af3796b79e6eb0963dbe..d4a8c28f60cd4ef798012a5fcbc87225cbae33cd 100644
--- a/research/cv/resnet50_bam/README_CN.md
+++ b/research/cv/resnet50_bam/README_CN.md
@@ -56,7 +56,7 @@ resnet50_bam的作者提出了一个简单但是有效的Attention模型——BA
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# 环境要求
diff --git a/research/cv/resnext152_64x4d/README.md b/research/cv/resnext152_64x4d/README.md
index 3320bcf355262524316df6cd28d9d2917df69621..61fb3a324028bfbf3100a2c6f122bd5403b88e0c 100644
--- a/research/cv/resnext152_64x4d/README.md
+++ b/research/cv/resnext152_64x4d/README.md
@@ -54,7 +54,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
diff --git a/research/cv/resnext152_64x4d/README_CN.md b/research/cv/resnext152_64x4d/README_CN.md
index 8b6a05b0e52ac346c99c4c8b47c6836a48479fec..2a6e580969d1df2fb12a29463f52169be2c7e1e9 100644
--- a/research/cv/resnext152_64x4d/README_CN.md
+++ b/research/cv/resnext152_64x4d/README_CN.md
@@ -54,7 +54,7 @@ ResNeXt整体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
diff --git a/research/cv/retinanet_resnet101/README.md b/research/cv/retinanet_resnet101/README.md
index d20bef0b343e07d3b6f9cbbbaab988c43dd2c025..c2e896644d940acdabdc5bbba1acd8ebe874e55b 100644
--- a/research/cv/retinanet_resnet101/README.md
+++ b/research/cv/retinanet_resnet101/README.md
@@ -287,7 +287,7 @@ bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABL
bash run_single_train.sh [DEVICE_ID] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```
-> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
- GPU
diff --git a/research/cv/retinanet_resnet101/README_CN.md b/research/cv/retinanet_resnet101/README_CN.md
index 8f86a05d88aed353f3f775553a5adeaa1924f0c9..d62df255a5bae5ef36d8337ce6634208bb4f0a5b 100644
--- a/research/cv/retinanet_resnet101/README_CN.md
+++ b/research/cv/retinanet_resnet101/README_CN.md
@@ -292,7 +292,7 @@ bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABL
bash run_single_train.sh [DEVICE_ID] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```
-> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
- GPU
diff --git a/research/cv/retinanet_resnet152/README.md b/research/cv/retinanet_resnet152/README.md
index 23e04a27d042f38df5eb8b0a8f77ee027868a683..d1be441cb249c3164e374402a25222d6af732767 100644
--- a/research/cv/retinanet_resnet152/README.md
+++ b/research/cv/retinanet_resnet152/README.md
@@ -291,7 +291,7 @@ bash run_distribute_train.sh DEVICE_NUM EPOCH_SIZE LR DATASET RANK_TABLE_FILE PR
bash run_distribute_train.sh DEVICE_ID EPOCH_SIZE LR DATASET PRE_TRAINED(optional) PRE_TRAINED_EPOCH_SIZE(optional)
```
-> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html),
+> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html),
> for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
- GPU:
diff --git a/research/cv/retinanet_resnet152/README_CN.md b/research/cv/retinanet_resnet152/README_CN.md
index 1dda1c52d4e38e6877f87ae0a08dc3b2c47e4279..f3c709496a8db1accc53fac7532900d81c41a74b 100644
--- a/research/cv/retinanet_resnet152/README_CN.md
+++ b/research/cv/retinanet_resnet152/README_CN.md
@@ -285,7 +285,7 @@ bash run_distribute_train.sh DEVICE_NUM EPOCH_SIZE LR DATASET RANK_TABLE_FILE PR
bash run_distribute_train.sh DEVICE_ID EPOCH_SIZE LR DATASET PRE_TRAINED(optional) PRE_TRAINED_EPOCH_SIZE(optional)
```
-> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html),
+> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html),
> 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
- GPU:
diff --git a/research/cv/siamRPN/README_CN.md b/research/cv/siamRPN/README_CN.md
index d7937fa68290760d3de87e1c7daa63dbfe4ac91b..3376c0cd3472f4547b275c1f9b89a90325e95999 100644
--- a/research/cv/siamRPN/README_CN.md
+++ b/research/cv/siamRPN/README_CN.md
@@ -51,7 +51,7 @@ Siam-RPN提出了一种基于RPN的孪生网络结构。由孪生子网络和RPN
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/simple_baselines/README_CN.md b/research/cv/simple_baselines/README_CN.md
index 1a228c7e719154eb80bd627d1a241fe2d6c8b8f9..3eb48ea04db3a879f4af6f2b38649bce8e055b9f 100644
--- a/research/cv/simple_baselines/README_CN.md
+++ b/research/cv/simple_baselines/README_CN.md
@@ -53,7 +53,7 @@ simple_baselines的总体网络架构如下:
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html))的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/single_path_nas/README.md b/research/cv/single_path_nas/README.md
index f4899b7ae70006667cc6c6ccbc8d89eb3e8b0e0f..ae660b649592fc8998de928da8c2c3622efa6051 100644
--- a/research/cv/single_path_nas/README.md
+++ b/research/cv/single_path_nas/README.md
@@ -70,7 +70,7 @@ Dataset used:[ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+The [mixed-precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html)
training method uses single-precision and half-precision data to improve the training speed of
deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training.
Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or
diff --git a/research/cv/single_path_nas/README_CN.md b/research/cv/single_path_nas/README_CN.md
index 3c71cfe5396fdacd71cbfa3d2e9e0f09b7b94ebb..62c6d04c6d95599f77e88920ea26721e403b443f 100644
--- a/research/cv/single_path_nas/README_CN.md
+++ b/research/cv/single_path_nas/README_CN.md
@@ -57,7 +57,7 @@ single-path-nas的作者用一个7x7的大卷积,来代表3x3、5x5和7x7的
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# 环境要求
diff --git a/research/cv/sknet/README.md b/research/cv/sknet/README.md
index 60f7315da8edb8ec98624133c19ff1c7b7a8b0c0..6e581761a127c4e7d2bad3d51b505e9f2a7c6a03 100644
--- a/research/cv/sknet/README.md
+++ b/research/cv/sknet/README.md
@@ -74,7 +74,7 @@ Dataset used: [CIFAR10](https://www.kaggle.com/c/cifar-10)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/cv/squeezenet/README.md b/research/cv/squeezenet/README.md
index 7a045e9481ed1d73ece6dda28bb35875a686ba9a..de1f902c6d404d0124146269d5aa9041c55b5bf0 100644
--- a/research/cv/squeezenet/README.md
+++ b/research/cv/squeezenet/README.md
@@ -74,7 +74,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
@@ -512,7 +512,7 @@ result: {'top_1_accuracy': 0.6094950384122919, 'top_5_accuracy': 0.8263244238156
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
- Running on Ascend
diff --git a/research/cv/squeezenet1_1/README.md b/research/cv/squeezenet1_1/README.md
index 5042d64b3e8158b818e574a32111122a42cf3628..ee112140dedf149f1fa180cb032aaf311db7adb9 100644
--- a/research/cv/squeezenet1_1/README.md
+++ b/research/cv/squeezenet1_1/README.md
@@ -304,7 +304,7 @@ Inference result is saved in current path, you can find result like this in acc.
### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
- Running on Ascend
diff --git a/research/cv/ssd_ghostnet/README.md b/research/cv/ssd_ghostnet/README.md
index cbc408763bb443922f7358b5d15b40a438b76eeb..1e8b82af26f0cc72b62d381472b6ccd11ecd0144 100644
--- a/research/cv/ssd_ghostnet/README.md
+++ b/research/cv/ssd_ghostnet/README.md
@@ -210,7 +210,7 @@ If you want to run in modelarts, please check the official documentation of [mod
### Training on Ascend
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset) or `iamge_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
- Distribute mode
diff --git a/research/cv/ssd_inception_v2/README.md b/research/cv/ssd_inception_v2/README.md
index 0a55f166352a428288c076ee3b9c5829d355b6c2..cd0916115e99ff7cde2c0d01735d194c48ba1562 100644
--- a/research/cv/ssd_inception_v2/README.md
+++ b/research/cv/ssd_inception_v2/README.md
@@ -213,7 +213,7 @@ bash scripts/docker_start.sh ssd:20.1.0 [DATA_DIR] [MODEL_DIR]
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on GPU
diff --git a/research/cv/ssd_inceptionv2/README_CN.md b/research/cv/ssd_inceptionv2/README_CN.md
index f1b0298ee30ae65eb6a49d2795b65f6cc90be4b7..fcf4a26d86475e5a93d1c4cf5d6dd1599319d018 100644
--- a/research/cv/ssd_inceptionv2/README_CN.md
+++ b/research/cv/ssd_inceptionv2/README_CN.md
@@ -171,7 +171,7 @@ bash run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDREC
## 训练过程
-运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
+运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
### Ascend上训练
diff --git a/research/cv/ssd_mobilenetV2/README.md b/research/cv/ssd_mobilenetV2/README.md
index 3987cbdddc338cc7d2c5ede4e17a8ba8d6a0a5c0..7b2ca8caffdfa539d94a1c06acfb6f42d94082ea 100644
--- a/research/cv/ssd_mobilenetV2/README.md
+++ b/research/cv/ssd_mobilenetV2/README.md
@@ -221,7 +221,7 @@ bash scripts/run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on Ascend
diff --git a/research/cv/ssd_mobilenetV2_FPNlite/README.md b/research/cv/ssd_mobilenetV2_FPNlite/README.md
index 0190650aa03c8713e32bdd058c02ef53fb9f6f84..6f2cdd29907940dd8f2d9eb80ecd3b4f2f1a6ed7 100644
--- a/research/cv/ssd_mobilenetV2_FPNlite/README.md
+++ b/research/cv/ssd_mobilenetV2_FPNlite/README.md
@@ -233,7 +233,7 @@ bash run_eval_gpu.sh [CONFIG_FILE] [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on Ascend
diff --git a/research/cv/ssd_resnet34/README.md b/research/cv/ssd_resnet34/README.md
index e3938abeca8a1a347f98fe611596ea34c0a1937b..8ce22a5ffd307a2ed2a43ad6906664adffc3cd02 100644
--- a/research/cv/ssd_resnet34/README.md
+++ b/research/cv/ssd_resnet34/README.md
@@ -202,7 +202,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on Ascend
diff --git a/research/cv/ssd_resnet34/README_CN.md b/research/cv/ssd_resnet34/README_CN.md
index 96326775359a826db9e994afdad73ff5f5c6a8cf..2aab91733928c6764b181726b7b5fd6d4b8e3c91 100644
--- a/research/cv/ssd_resnet34/README_CN.md
+++ b/research/cv/ssd_resnet34/README_CN.md
@@ -169,7 +169,7 @@ sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [M
## 训练过程
-运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
+运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
### Ascend上训练
diff --git a/research/cv/ssd_resnet50/README.md b/research/cv/ssd_resnet50/README.md
index 116c1abb008c6260d9d3308dfaf4e364b87dabb2..9075c7e95ff0f6111bece202048b71e700340641 100644
--- a/research/cv/ssd_resnet50/README.md
+++ b/research/cv/ssd_resnet50/README.md
@@ -204,7 +204,7 @@ Then you can run everything just like on ascend.
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on Ascend
diff --git a/research/cv/ssd_resnet50/README_CN.md b/research/cv/ssd_resnet50/README_CN.md
index 0f2d4067ef3b6b34559ea91c9b26c6284e2548ea..4f7d5d16760cf1ad8bcb7ec69d2ee7423d512494 100644
--- a/research/cv/ssd_resnet50/README_CN.md
+++ b/research/cv/ssd_resnet50/README_CN.md
@@ -163,7 +163,7 @@ bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
## 训练过程
-运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
+运行`train.py`训练模型。如果`mindrecord_dir`为空,则会通过`coco_root`(coco数据集)或`image_dir`和`anno_path`(自己的数据集)生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意,如果mindrecord_dir不为空,将使用mindrecord_dir代替原始图像。**
### Ascend上训练
diff --git a/research/cv/ssd_resnet_34/README.md b/research/cv/ssd_resnet_34/README.md
index 6fde21e6fc17d710b37ef42046b62eab0938407e..1704cb8ebba08945d807674dd4aae21d9b4be540 100644
--- a/research/cv/ssd_resnet_34/README.md
+++ b/research/cv/ssd_resnet_34/README.md
@@ -204,7 +204,7 @@ Major parameters in train.py and config.py for Multi GPU train:
### [Training Process](#contents)
-To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
+To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.**
#### Training on GPU
diff --git a/research/cv/swin_transformer/README_CN.md b/research/cv/swin_transformer/README_CN.md
index 7d0a842c99069534073e75ac37bc7f03738e6c58..23ed2d54c7853a99ccc62ad161473a51eb3cc66a 100644
--- a/research/cv/swin_transformer/README_CN.md
+++ b/research/cv/swin_transformer/README_CN.md
@@ -53,7 +53,7 @@ SwinTransformer是新型的视觉Transformer,它可以用作计算机视觉的
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
的训练方法,使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
# [环境要求](#目录)
diff --git a/research/cv/tsm/README_CN.md b/research/cv/tsm/README_CN.md
index 1df30c8db6482c57b46d90ba87ebc1bdab211bf9..9f66040d213504ddf8e2d9e8d2cbf6dd7ce492d6 100644
--- a/research/cv/tsm/README_CN.md
+++ b/research/cv/tsm/README_CN.md
@@ -59,7 +59,7 @@ TSM应用了一种通用而有效的时间转移模块。 时间转移模块将
## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# 环境要求
diff --git a/research/cv/vgg19/README.md b/research/cv/vgg19/README.md
index 35bad1b67a2fcbb8cb82596af9204b45a7282e0a..b48fc9c79c5568ccd4ed4319b18cd31cd4dfeeb5 100644
--- a/research/cv/vgg19/README.md
+++ b/research/cv/vgg19/README.md
@@ -440,7 +440,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
...
```
-> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training.html).
+> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html).
> **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh`
##### Run vgg19 on GPU
diff --git a/research/cv/vgg19/README_CN.md b/research/cv/vgg19/README_CN.md
index b4afd312fc8c901146294917986c0387cf36cec7..9c99d010afa6c31c57edc3024bec75be5e0236ec 100644
--- a/research/cv/vgg19/README_CN.md
+++ b/research/cv/vgg19/README_CN.md
@@ -87,7 +87,7 @@ VGG 19网络主要由几个基本模块(包括卷积层和池化层)和三
### 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
@@ -459,7 +459,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579
...
```
-> 关于rank_table.json,可以参考[分布式并行训练](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training.html)。
+> 关于rank_table.json,可以参考[分布式并行训练](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/introduction.html)。
> **注意** 将根据`device_num`和处理器总数绑定处理器核。如果您不希望预训练中绑定处理器内核,请在`scripts/run_distribute_train.sh`脚本中移除`taskset`相关操作。
##### GPU处理器环境运行VGG19
diff --git a/research/cv/vnet/README_CN.md b/research/cv/vnet/README_CN.md
index 4e8c4148c8336f0836af088e427656e8001cd0dc..dd25398eae625d87e55568b0515eb58fa3354de2 100644
--- a/research/cv/vnet/README_CN.md
+++ b/research/cv/vnet/README_CN.md
@@ -101,7 +101,7 @@ VNet适用于医学图像分割,使用3D卷积,能够处理3D MR图像数据
- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
- 生成config json文件用于多卡训练。
- [简易教程](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)
- - 详细配置方法请参照[官网教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html#配置分布式环境变量)。
+ - 详细配置方法请参照[官网教程](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置分布式环境变量)。
# 快速入门
diff --git a/research/cv/wideresnet/README.md b/research/cv/wideresnet/README.md
index 80b40d4bb8bf3306aa952bae778cc6ab84ff1500..f6defab27efcd0fe60843690cc867fda79d69713 100644
--- a/research/cv/wideresnet/README.md
+++ b/research/cv/wideresnet/README.md
@@ -208,7 +208,7 @@ bash run_standalone_train_gpu.sh [DATASET_PATH] [CONFIG_PATH] [EXPERIMENT_LABEL]
For distributed training, a hostfile configuration needs to be created in advance.
-Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html).
+Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html).
##### Evaluation while training
diff --git a/research/cv/wideresnet/README_CN.md b/research/cv/wideresnet/README_CN.md
index 6340647562bf3e0f662cd839fc4722b8f0835e2d..12e2ea27e81ed6ec25d1a37542e154cd4a2a1440 100644
--- a/research/cv/wideresnet/README_CN.md
+++ b/research/cv/wideresnet/README_CN.md
@@ -211,7 +211,7 @@ bash run_standalone_train_gpu.sh [DATASET_PATH] [CONFIG_PATH] [EXPERIMENT_LABEL]
对于分布式培训,需要提前创建主机文件配置。
-请按照链接中的说明操作 [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html).
+请按照链接中的说明操作 [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html).
## 培训时的评估
diff --git a/research/hpc/pinns/README.md b/research/hpc/pinns/README.md
index 9ad24330a6473df29338c137c915125b6f5609a0..6ea8de4f7c64dfba1e9b1fe7202f3ed1ff0bcee1 100644
--- a/research/hpc/pinns/README.md
+++ b/research/hpc/pinns/README.md
@@ -1,4 +1,4 @@
-# Contents
+# Contents
[查看中文](./README_CN.md)
@@ -72,7 +72,7 @@ Dataset used:[cylinder nektar wake](https://github.com/maziarraissi/PINNs/tree
## [Mixed Precision](#Contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)
diff --git a/research/hpc/pinns/README_CN.md b/research/hpc/pinns/README_CN.md
index d080e0adfd5fb9e8c50116c387f0f92e60152f9c..79cf1a90036d93e1efa8d559d05824679e57f8e9 100644
--- a/research/hpc/pinns/README_CN.md
+++ b/research/hpc/pinns/README_CN.md
@@ -70,7 +70,7 @@ Navier-Stokes方程是流体力学中描述粘性牛顿流体的方程。针对N
## [混合精度](#目录)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
# [环境要求](#目录)
diff --git a/research/nlp/albert/README.md b/research/nlp/albert/README.md
index 303e363ba61df28c1f43dd51c142b78f5f971c03..4943392e7d70899ec311cb9a939b03da762fd911 100644
--- a/research/nlp/albert/README.md
+++ b/research/nlp/albert/README.md
@@ -181,10 +181,9 @@ If you want to run in modelarts, please check the official documentation of [mod
```
For distributed training, an hccl configuration file with JSON format needs to be created in advance.
-Please follow the instructions in the link below:
-https:gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools.
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
+Please follow the instructions in the link below:
+[https://gitee.com/mindspore/models/tree/master/utils/hccl_tools](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
```text
For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
diff --git a/research/nlp/atae_lstm/README.md b/research/nlp/atae_lstm/README.md
index 34aadc780637dbfcbe21abbac54cb773402c360f..59a313be5bae53612647fbf9f586c8614926efef 100644
--- a/research/nlp/atae_lstm/README.md
+++ b/research/nlp/atae_lstm/README.md
@@ -54,7 +54,7 @@ AttentionLSTM模型的输入由aspect和word向量组成,输入部分输入单
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
Taking the FP16 operator as an example, if the input data type is FP32, the MindSpore backend will automatically lower the precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators with reduced precision.
# Environment Requirements
diff --git a/research/nlp/rotate/README_CN.md b/research/nlp/rotate/README_CN.md
index 2c8ae6dd10ccf2b9123184dc966aa528eb7d1d25..dc5e24da7b4d6ec5615e44bf82860e483972069f 100644
--- a/research/nlp/rotate/README_CN.md
+++ b/research/nlp/rotate/README_CN.md
@@ -86,7 +86,7 @@ bash run_infer_310.sh [MINDIR_HEAD_PATH] [MINDIR_TAIL_PATH] [DATASET_PATH] [NEED
For distributed training in a bare-metal environment (with Ascend 910 AI processors available locally), a networking information file for the current multi-device environment needs to be configured.
Please follow the instructions in the link below to create the JSON file:
-<https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools>
+<https://gitee.com/mindspore/models/tree/master/utils/hccl_tools>
- Running on GPU
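The hccl_tools utility referenced above generates the rank-table file automatically; real deployments should use the tool. As a hedged sketch of the kind of JSON it emits, the snippet below builds a single-server rank table (field names follow the Ascend rank-table v1.0 layout as commonly documented; the server and device IPs are placeholders, not real addresses):

```python
import json

def make_hccl_json(server_ip, device_ips, path="hccl.json"):
    """Build a single-server rank table mapping each Ascend device to a rank."""
    table = {
        "version": "1.0",
        "server_count": "1",
        "server_list": [{
            "server_id": server_ip,
            "device": [
                # one entry per NPU: device_id on the host, its NIC IP, and its rank
                {"device_id": str(i), "device_ip": ip, "rank_id": str(i)}
                for i, ip in enumerate(device_ips)
            ],
            "host_nic_ip": "reserve",
        }],
        "status": "completed",
    }
    with open(path, "w") as f:
        json.dump(table, f, indent=4)
    return table

# Two devices on one server -> ranks 0 and 1.
table = make_hccl_json("10.0.0.1", ["192.168.100.101", "192.168.100.102"])
print(table["server_list"][0]["device"][1]["rank_id"])  # prints "1"
```

For multi-machine training, each machine needs its own copy of the merged file; the merge_hccl helper in the same utils directory combines per-machine tables.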
diff --git a/research/nlp/seq2seq/README_CN.md b/research/nlp/seq2seq/README_CN.md
index 99c995590197fa209c301189e578c5bfcd9856f9..45dc01a073e1e0430c41b073dc57a11e96d0857c 100644
--- a/research/nlp/seq2seq/README_CN.md
+++ b/research/nlp/seq2seq/README_CN.md
@@ -33,7 +33,7 @@ bash wmt14_en_fr.sh
## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
Taking the FP16 operator as an example, if the input data type is FP32, the MindSpore backend will automatically lower the precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators with reduced precision.
# Environment Requirements