diff --git a/docs/mindspore/source_en/migration_guide/model_development/learning_rate_and_optimizer.md b/docs/mindspore/source_en/migration_guide/model_development/learning_rate_and_optimizer.md
index cb10fdacb3a98e74505f7eb72cd4628158844a9c..13ae97d9181056d90013e1de6ec8a9e0a1189b67 100644
--- a/docs/mindspore/source_en/migration_guide/model_development/learning_rate_and_optimizer.md
+++ b/docs/mindspore/source_en/migration_guide/model_development/learning_rate_and_optimizer.md
@@ -2,7 +2,7 @@
 
-Before reading this chapter, please read the official MindSpore tutorial [Optimizer](https://mindspore.cn/tutorials/en/master/advanced/modules/optim.html).
+Before reading this chapter, please read the official MindSpore tutorial [Optimizer](https://mindspore.cn/tutorials/en/master/advanced/modules/optimizer.html).
 
 The chapter of official tutorial optimizer in MindSpore is already detailed, so here is an introduction to some special ways of using MindSpore optimizer and the principle of learning rate decay strategy.