diff --git a/tutorials/experts/source_en/parallel/train_ascend.md b/tutorials/experts/source_en/parallel/train_ascend.md
index 9af9e80ac94e5210872e249a9916615a09838488..28cc3ba2319d9788289d0fb0eee7985ddf0c40ec 100644
--- a/tutorials/experts/source_en/parallel/train_ascend.md
+++ b/tutorials/experts/source_en/parallel/train_ascend.md
@@ -551,7 +551,7 @@ bash run_cluster.sh /path/dataset /path/rank_table.json 16 8
 
 ## Running the Script through OpenMPI
 
-Currently MindSpore also supports `mpirun`of OpenMPI for distributed training on Ascend hardware platform without environment variable `RANK_TABLE_FILE`.
+Currently, MindSpore also supports `mpirun` of OpenMPI for distributed training on the Ascend hardware platform without the environment variable `RANK_TABLE_FILE`. Users can refer to [Multi-Card Startup Method](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html#multi-card-startup-method) to check which multi-card startup methods are supported on each platform.
 
 ### Single-host Training
 
diff --git a/tutorials/experts/source_en/parallel/train_gpu.md b/tutorials/experts/source_en/parallel/train_gpu.md
index f41485982f8df0ec51c47c9e8235ce0e29baa82a..4fa0612400928b56791685d05c471ae54a775f99 100644
--- a/tutorials/experts/source_en/parallel/train_gpu.md
+++ b/tutorials/experts/source_en/parallel/train_gpu.md
@@ -393,7 +393,7 @@ When performing distributed training on a GPU, the method of saving and loading
 
 ## Training without Relying on OpenMPI
 
-Due to training safety and reliability requirements, MindSpore GPUs also support **distributed training without relying on OpenMPI**.
+Due to training safety and reliability requirements, MindSpore on GPU also supports **distributed training without relying on OpenMPI**. Users can refer to [Multi-Card Startup Method](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html#multi-card-startup-method) to check which multi-card startup methods are supported on each platform.
 
 OpenMPI plays the role of synchronizing data and inter-process networking on the Host side in distributed training scenarios. MindSpore replaces openMPI capabilities by **reusing the Parameter Server mode training architecture**.
 
diff --git a/tutorials/source_en/beginner/quick_start.md b/tutorials/source_en/beginner/quick_start.md
index 2d776aec91122d94c1a887273c67caf7cce819c7..54b194dac5691157d1240f13f293a64d151867c5 100644
--- a/tutorials/source_en/beginner/quick_start.md
+++ b/tutorials/source_en/beginner/quick_start.md
@@ -2,7 +2,7 @@
 
 [Introduction](https://www.mindspore.cn/tutorials/en/master/beginner/introduction.html) || **Quick Start** || [Tensor](https://www.mindspore.cn/tutorials/en/master/beginner/tensor.html) || [Dataset](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html) || [Transforms](https://www.mindspore.cn/tutorials/en/master/beginner/transforms.html) || [Model](https://www.mindspore.cn/tutorials/en/master/beginner/model.html) || [Autograd](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html) || [Train](https://www.mindspore.cn/tutorials/en/master/beginner/train.html) || [Save and load](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html) || [Infer](https://www.mindspore.cn/tutorials/en/master/beginner/infer.html)
 
-# Quick Start: Linear Fitting
+# Quick Start
 
 This section quickly implements a simple deep learning model through MindSpore APIs. For a deeper understanding of how to use MindSpore, see the reference links provided at the end of each section.
 
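
For concreteness, a minimal sketch of the two launch modes that the added cross-links point to is given below. The script name `train.py`, the process count, and the scheduler address/port are illustrative placeholders rather than values from the patch, and the OpenMPI-free variant assumes the dynamic-cluster environment variables (`MS_WORKER_NUM`, `MS_SCHED_HOST`, `MS_SCHED_PORT`, `MS_ROLE`) described in train_gpu.md; verify the exact names against the current documentation before use.

```bash
# OpenMPI launch: mpirun starts 8 training processes and assigns ranks itself,
# so no RANK_TABLE_FILE is needed. The training script is expected to call
# mindspore.communication.init() before building the network.
mpirun -n 8 python train.py

# OpenMPI-free launch (GPU): start one scheduler and 8 workers by hand;
# the processes discover each other through these environment variables.
export MS_WORKER_NUM=8          # total number of worker processes
export MS_SCHED_HOST=127.0.0.1  # scheduler address (placeholder)
export MS_SCHED_PORT=8118       # scheduler port (placeholder)

export MS_ROLE=MS_SCHED         # the single scheduler process
python train.py > scheduler.log 2>&1 &

export MS_ROLE=MS_WORKER        # the worker processes
for i in $(seq 0 7); do
    python train.py > worker_$i.log 2>&1 &
done
```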