diff --git a/docs/lite/docs/source_en/quick_start/quick_start_python.md b/docs/lite/docs/source_en/quick_start/quick_start_python.md
index a28f00a04fd4f075c783662e1187bd74423902a9..38a33e2f1baa91699f73ebbe8fabfdeb2325d5d4 100644
--- a/docs/lite/docs/source_en/quick_start/quick_start_python.md
+++ b/docs/lite/docs/source_en/quick_start/quick_start_python.md
@@ -10,7 +10,7 @@ The following is an example of how to use the Python Simplified Inference Demo o
- One-click installation of inference-related model files, MindSpore Lite and its required dependencies. See the [One-click installation](#one-click-installation) section for details.
-- Execute the Python Simplified Inference Demo. See the [Execute Demo](#execute-demo) section for details.
+- Execute the Python Simplified Inference Demo. See the [Execute Demo](#executing-demo) section for details.
- For a description of the Python Simplified Inference Demo content, see the [Demo Content Description](#demo-content-description) section for details.
@@ -67,7 +67,7 @@ Performing inference with MindSpore Lite consists of the following main steps:
2. [Model Loading and Compilation](#model-loading-and-compilation): Before executing inference, you need to call [build_from_file](https://www.mindspore.cn/lite/api/en/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.build_from_file) interface of `Model` for model loading and model compilation, and to configure the Context obtained in the previous step into the Model. The model loading phase parses the file cache into a runtime model. The model compilation phase mainly carries out the process of operator selection scheduling, subgraph slicing, etc. This phase will consume more time, so it is recommended that `Model` be loaded once, compiled once, and inferenced for several times.
3. [Input Data](#inputting-data): The model needs to fill the `Input Tensor` with data before executing inference.
4. [Execute Inference](#executing-inference): Use the [predict](https://www.mindspore.cn/lite/api/en/master/mindspore_lite/mindspore_lite.Model.html#mindspore_lite.Model.predict) interface of `Model` for model inference.
-5. [Get Output](#getting-output): After the model finishes performing inference, you can get the inference result by `Output Tensor`.
+5. [Get Output](#getting-outputs): After the model finishes performing inference, you can get the inference result by `Output Tensor`.
For more advanced usage and examples of Python interfaces, please refer to the [Python API](https://www.mindspore.cn/lite/api/en/master/mindspore_lite.html).
diff --git a/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md b/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
index ee3db21f0cc281c9e3a6c90110d68336722659ca..abe09324c0ef7de68d4fd710c8ad060645c1c6c6 100644
--- a/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
+++ b/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
@@ -349,7 +349,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
出现溢出问题后常见的解决措施如下:
-1. 使能动态loss scale功能,或者是合理设置静态loss scale的值,请参考[LossScale](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#损失缩放原理)。需要注意的是,直接将GPU场景中的静态loss scale用于Ascend上的训练时,可能会导致不期望的频繁溢出,影响收敛。loss scale使能后,可能需要多次实验以调整loss scale的初始值init_loss_scale、调整比例scale_factor、调整窗口scale_window等参数,直到训练中浮点溢出非常少,请参考[DynamicLossScaleManager](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#dynamiclossscalemanager)以了解这些参数的含义。
+1. 使能动态loss scale功能,或者是合理设置静态loss scale的值,请参考[LossScale](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#损失缩放)。需要注意的是,直接将GPU场景中的静态loss scale用于Ascend上的训练时,可能会导致不期望的频繁溢出,影响收敛。loss scale使能后,可能需要多次实验以调整loss scale的初始值init_loss_scale、调整比例scale_factor、调整窗口scale_window等参数,直到训练中浮点溢出非常少,请参考[DynamicLossScaleManager](https://www.mindspore.cn/docs/zh-CN/master/api_python/amp/mindspore.amp.DynamicLossScaleManager.html)以了解这些参数的含义。
2. 溢出问题对精度有关键影响且无法规避的,将相应的API调整为FP32 API(调整后可能对性能有较大影响)。
检查结论:
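The tuning knobs named in the hunk above (`init_loss_scale`, `scale_factor`, `scale_window`) can be illustrated with a minimal pure-Python sketch of the dynamic loss scale update rule. This is an illustrative reimplementation of the behavior described for `DynamicLossScaleManager`, not MindSpore code; the class and method names here are hypothetical.

```python
class DynamicLossScaler:
    """Illustrative sketch of a dynamic loss scale update rule.

    Mirrors the documented behavior: on overflow, shrink the scale by
    `scale_factor`; after `scale_window` consecutive overflow-free steps,
    grow it again to reduce gradient underflow.
    """

    def __init__(self, init_loss_scale=2.0 ** 24, scale_factor=2.0, scale_window=2000):
        self.loss_scale = init_loss_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self.good_steps = 0

    def update(self, overflow):
        if overflow:
            # Shrink the scale so the next iteration is less likely to overflow.
            self.loss_scale = max(self.loss_scale / self.scale_factor, 1.0)
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps >= self.scale_window:
                # A long overflow-free streak: try a larger scale again.
                self.loss_scale *= self.scale_factor
                self.good_steps = 0
        return self.loss_scale


scaler = DynamicLossScaler(init_loss_scale=1024.0, scale_factor=2.0, scale_window=3)
scaler.update(True)        # overflow: scale shrinks to 512.0
for _ in range(3):
    scaler.update(False)   # three clean steps: scale grows back to 1024.0
print(scaler.loss_scale)
```

In practice, tuning means choosing these three values so that overflow steps become rare while the scale stays large enough to keep small gradients representable.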
@@ -360,7 +360,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
检查方法:
-在使用[混合精度](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html)时,一般应确认使能了[DynamicLossScaleManager](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#dynamiclossscalemanager)或[FixedLossScaleManager](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#fixedlossscalemanager),推荐优先使用DynamicLossScaleManager。可以先使用DynamicLossScaleManager或FixedLossScaleManager的默认参数值进行训练,若产生溢出的迭代过多,影响最终精度时,应根据主要的溢出现象,针对性调整loss_scale的值。当主要溢出现象为梯度上溢时,应减小loss_scale的值(可以尝试将原loss_scale值除以2);当主要溢出现象为梯度下溢时,应增大loss_scale的值(可以尝试将原loss_scale值乘以2)。对于Ascend AI处理器上的训练,其在大部分情况下为混合精度训练。由于Ascend AI处理器计算特性与GPU混合精度计算特性存在差异,LossScaleManager超参也可能需要根据训练情况调整为与GPU上不同的值以保证精度。
+在使用[混合精度](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html)时,一般应确认使能了[DynamicLossScaleManager](https://www.mindspore.cn/docs/zh-CN/master/api_python/amp/mindspore.amp.DynamicLossScaleManager.html)或[FixedLossScaleManager](https://www.mindspore.cn/docs/zh-CN/master/api_python/amp/mindspore.amp.FixedLossScaleManager.html),推荐优先使用DynamicLossScaleManager。可以先使用DynamicLossScaleManager或FixedLossScaleManager的默认参数值进行训练,若产生溢出的迭代过多,影响最终精度时,应根据主要的溢出现象,针对性调整loss_scale的值。当主要溢出现象为梯度上溢时,应减小loss_scale的值(可以尝试将原loss_scale值除以2);当主要溢出现象为梯度下溢时,应增大loss_scale的值(可以尝试将原loss_scale值乘以2)。对于Ascend AI处理器上的训练,其在大部分情况下为混合精度训练。由于Ascend AI处理器计算特性与GPU混合精度计算特性存在差异,LossScaleManager超参也可能需要根据训练情况调整为与GPU上不同的值以保证精度。
检查结论:
@@ -370,7 +370,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
检查方法:
-梯度裁剪(gradient clip)是指当梯度大于某个阈值时,强制调整梯度使其变小的技术。梯度裁剪对RNN网络中的梯度爆炸问题有较好的效果。如果同时使用了[loss scale](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#损失缩放原理)和梯度裁剪,需要进行本检查。请对照代码检查确认梯度裁剪的应用对象是除以loss scale后得到的原始梯度值。
+梯度裁剪(gradient clip)是指当梯度大于某个阈值时,强制调整梯度使其变小的技术。梯度裁剪对RNN网络中的梯度爆炸问题有较好的效果。如果同时使用了[loss scale](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html#损失缩放)和梯度裁剪,需要进行本检查。请对照代码检查确认梯度裁剪的应用对象是除以loss scale后得到的原始梯度值。
检查结论:
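The check described in this hunk, that gradient clipping must be applied to the raw gradients obtained after dividing by the loss scale, can be sketched in pure Python. The helper below is a standard global-norm clip written for illustration; it is not a MindSpore API.

```python
import math

def clip_by_global_norm(grads, clip_norm):
    # Standard global-norm clipping: scale all gradients down together
    # when their combined L2 norm exceeds clip_norm.
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm > clip_norm:
        ratio = clip_norm / global_norm
        grads = [g * ratio for g in grads]
    return grads

loss_scale = 1024.0
scaled_grads = [3072.0, 4096.0]        # gradients of (loss * loss_scale)

# Correct order: divide by the loss scale first, then clip.
raw_grads = [g / loss_scale for g in scaled_grads]   # [3.0, 4.0], norm 5.0
clipped = clip_by_global_norm(raw_grads, clip_norm=1.0)

# Wrong order: clipping the still-scaled gradients and unscaling afterwards
# collapses them by an extra factor of loss_scale.
wrong = [g / loss_scale for g in clip_by_global_norm(scaled_grads, clip_norm=1.0)]
print(clipped, wrong)
```

The correct order preserves the gradient direction at the intended clipped norm, while the wrong order effectively clips with a threshold `loss_scale` times too small.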
diff --git a/docs/mindspore/source_en/faq/implement_problem.md b/docs/mindspore/source_en/faq/implement_problem.md
index fa60778cdc263a19c9905d54e3d1901de5671fac..9193e921d5c4ca919c06d3a367a30e8073051202 100644
--- a/docs/mindspore/source_en/faq/implement_problem.md
+++ b/docs/mindspore/source_en/faq/implement_problem.md
@@ -439,7 +439,7 @@ A: This issue is a memory shortage problem caused by too much memory usage, whic
- Set the value of `batch_size` too large. Solution: Reduce the value of `batch_size`.
- Introduce the abnormally large `parameter`, for example, a single data shape is [640,1024,80,81]. The data type is float32, and the single data size is over 15G. In this way, the two data with the similar size are added together, and the memory occupied is over 3*15G, which easily causes `Out of Memory`. Solution: Check the `shape` of the parameter. If it is abnormally large, the shape can be reduced.
-- If the following operations cannot solve the problem, you can raise the problem on the [official forum](https://bbs.huaweicloud.com/forum/forum-1076-1.html), and there are dedicated technical personnels for help.
+- If the preceding operations do not solve the problem, you can raise it on the [official forum](https://www.hiascend.com/forum/forum-0106101385921175002-1.html), where dedicated technical personnel will help.
diff --git a/docs/mindspore/source_en/migration_guide/sample_code.md b/docs/mindspore/source_en/migration_guide/sample_code.md
index 577dcd314fe5616be9a971ff0f00f9b3c394aa2d..6b5e315f5af7ab662dc7b036f7adc71a46f3e8b7 100644
--- a/docs/mindspore/source_en/migration_guide/sample_code.md
+++ b/docs/mindspore/source_en/migration_guide/sample_code.md
@@ -808,7 +808,7 @@ MindSpore has three methods to use mixed precision:
1. Use `Cast` to convert the network input `cast` into `float16` and the loss input `cast` into `float32`.
2. Use the `to_float` method of `Cell`. For details, see [Network Entity and Loss Construction](https://www.mindspore.cn/docs/en/master/migration_guide/model_development/model_and_loss.html).
-3. Use the `amp_level` interface of the `Model` to perform mixed precision. For details, see [Automatic Mixed-Precision](https://www.mindspore.cn/tutorials/en/master/advanced/mixed_precision.html#mixed-precision).
+3. Use the `amp_level` interface of the `Model` to perform mixed precision. For details, see [Automatic Mixed-Precision](https://www.mindspore.cn/tutorials/en/master/advanced/mixed_precision.html#automatic-mix-precision).
Use the third method to set `amp_level` in `Model` to `O3` and check the profiler result.
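The reason all three mixed-precision methods keep the loss in `float32` is that half precision cannot represent very small values. A stdlib-only sketch (using `struct`'s IEEE-754 half-precision format `'e'` to mimic a `float16` cast) shows the underflow that loss scaling guards against; this is an illustration, not MindSpore code.

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE-754 half precision
    # (struct format 'e'), mimicking a cast to float16 and back.
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8                    # a tiny fp32 gradient
print(to_fp16(grad))           # below the smallest fp16 subnormal: flushes to 0.0

loss_scale = 1024.0
scaled = to_fp16(grad * loss_scale)
print(scaled > 0.0)            # the scaled gradient survives in fp16
```

Multiplying the loss (and hence all gradients) by a scale before the `float16` backward pass, then dividing it back out in `float32`, keeps such gradients from vanishing.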
diff --git a/docs/mindspore/source_zh_cn/faq/implement_problem.md b/docs/mindspore/source_zh_cn/faq/implement_problem.md
index ff204508a25d0ab60f95243efb3e2af1d884ccc1..4d01786706308592ef65b0b1adbc19d4750cd720 100644
--- a/docs/mindspore/source_zh_cn/faq/implement_problem.md
+++ b/docs/mindspore/source_zh_cn/faq/implement_problem.md
@@ -416,7 +416,7 @@ A: 此问题属于内存占用过多导致的内存不够问题,可能原因
- `batch_size`的值设置过大。解决办法: 将`batch_size`的值设置减小。
- 引入了异常大的`Parameter`,例如单个数据shape为[640,1024,80,81],数据类型为float32,单个数据大小超过15G,这样差不多大小的两个数据相加时,占用内存超过3*15G,容易造成`Out of Memory`。解决办法: 检查参数的`shape`,如果异常过大,减少shape。
-- 如果以上操作还是未能解决,可以上[官方论坛](https://bbs.huaweicloud.com/forum/forum-1076-1.html)发帖提出问题,将会有专门的技术人员帮助解决。
+- 如果以上操作还是未能解决,可以上[官方论坛](https://www.hiascend.com/forum/forum-0106101385921175002-1.html)发帖提出问题,将会有专门的技术人员帮助解决。
diff --git a/tutorials/experts/source_en/optimize/images/mix_precision_fp16.png b/tutorials/experts/source_en/optimize/images/mix_precision_fp16.png
deleted file mode 100644
index 2c8771445dfaaf320ace866cb14f5ae581164560..0000000000000000000000000000000000000000
Binary files a/tutorials/experts/source_en/optimize/images/mix_precision_fp16.png and /dev/null differ
diff --git a/tutorials/experts/source_zh_cn/optimize/images/loss_scale1.png b/tutorials/experts/source_zh_cn/optimize/images/loss_scale1.png
deleted file mode 100644
index ccd751b5366004a5e9995af0117cba968bfca882..0000000000000000000000000000000000000000
Binary files a/tutorials/experts/source_zh_cn/optimize/images/loss_scale1.png and /dev/null differ
diff --git a/tutorials/experts/source_zh_cn/optimize/images/loss_scale2.png b/tutorials/experts/source_zh_cn/optimize/images/loss_scale2.png
deleted file mode 100644
index e279c9deba13982fe17446df854ee5302196dea6..0000000000000000000000000000000000000000
Binary files a/tutorials/experts/source_zh_cn/optimize/images/loss_scale2.png and /dev/null differ
diff --git a/tutorials/experts/source_zh_cn/optimize/images/loss_scale3.png b/tutorials/experts/source_zh_cn/optimize/images/loss_scale3.png
deleted file mode 100644
index 47eb0235785ebd7476a3cea914b487c4cf86e17e..0000000000000000000000000000000000000000
Binary files a/tutorials/experts/source_zh_cn/optimize/images/loss_scale3.png and /dev/null differ
diff --git a/tutorials/source_en/beginner/introduction.md b/tutorials/source_en/beginner/introduction.md
index 66eb43b835dbd5a302c1dd00783fd1e16ca37383..037668e7ef723030f4f65e7fc5cfad62ebf187a0 100644
--- a/tutorials/source_en/beginner/introduction.md
+++ b/tutorials/source_en/beginner/introduction.md
@@ -118,4 +118,4 @@ Welcome every developer to the MindSpore community and contribute to this all-sc
- [MindSpore Github](https://github.com/mindspore-ai/mindspore): MindSpore code image of Gitee. Developers who are accustomed to using GitHub can learn MindSpore and view the latest code implementation here.
-- **MindSpore forum**: We are dedicated to serving every developer. You can find your voice in MindSpore, regardless of whether you are an entry-level developer or a master. Let's learn and grow together. ([Learn more](https://bbs.huaweicloud.com/forum/forum-1076-1.html))
+- **MindSpore forum**: We are dedicated to serving every developer. You can find your voice in MindSpore, regardless of whether you are an entry-level developer or a master. Let's learn and grow together. ([Learn more](https://www.hiascend.com/forum/forum-0106101385921175002-1.html))
diff --git a/tutorials/source_zh_cn/beginner/introduction.ipynb b/tutorials/source_zh_cn/beginner/introduction.ipynb
index e4fbfac4ad8e58e16ea867269ccc172e3a24e077..0925106d4cd322275c894927e4fd48d315b8fd94 100644
--- a/tutorials/source_zh_cn/beginner/introduction.ipynb
+++ b/tutorials/source_zh_cn/beginner/introduction.ipynb
@@ -139,7 +139,7 @@
"\n",
" - [MindSpore Github](https://github.com/mindspore-ai/mindspore):Gitee的MindSpore代码镜像,习惯用github的开发者可以在这里进行MindSpore的学习,查看最新代码实现!\n",
"\n",
- "- **昇思MindSpore 论坛**:我们努力地服务好每一个开发者,在昇思MindSpore中,无论是入门开发者还是高手大咖都能找到知音,共同学习,共同成长!([了解更多](https://bbs.huaweicloud.com/forum/forum-1076-1.html))"
+ "- **昇思MindSpore 论坛**:我们努力地服务好每一个开发者,在昇思MindSpore中,无论是入门开发者还是高手大咖都能找到知音,共同学习,共同成长!([了解更多](https://www.hiascend.com/forum/forum-0106101385921175002-1.html))"
]
}
],