From b0ed3d0323c658418a06515bc1a1a8486f151573 Mon Sep 17 00:00:00 2001
From: huanxiaoling <3174348550@qq.com>
Date: Thu, 3 Nov 2022 10:02:37 +0800
Subject: [PATCH] modify the en files

---
 tutorials/experts/source_en/parallel/train_ascend.md | 2 +-
 tutorials/experts/source_en/parallel/train_gpu.md | 2 +-
 tutorials/source_en/beginner/quick_start.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tutorials/experts/source_en/parallel/train_ascend.md b/tutorials/experts/source_en/parallel/train_ascend.md
index 9af9e80ac9..28cc3ba231 100644
--- a/tutorials/experts/source_en/parallel/train_ascend.md
+++ b/tutorials/experts/source_en/parallel/train_ascend.md
@@ -551,7 +551,7 @@ bash run_cluster.sh /path/dataset /path/rank_table.json 16 8
 
 ## Running the Script through OpenMPI
 
-Currently MindSpore also supports `mpirun`of OpenMPI for distributed training on Ascend hardware platform without environment variable `RANK_TABLE_FILE`.
+Currently, MindSpore also supports `mpirun` of OpenMPI for distributed training on the Ascend hardware platform without the environment variable `RANK_TABLE_FILE`. Users can click [Multi-Card Startup Method](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html#multi-card-startup-method) to check which multi-card startup methods are supported on each platform.
 
 ### Single-host Training
 
diff --git a/tutorials/experts/source_en/parallel/train_gpu.md b/tutorials/experts/source_en/parallel/train_gpu.md
index f41485982f..4fa0612400 100644
--- a/tutorials/experts/source_en/parallel/train_gpu.md
+++ b/tutorials/experts/source_en/parallel/train_gpu.md
@@ -393,7 +393,7 @@ When performing distributed training on a GPU, the method of saving and loading
 
 ## Training without Relying on OpenMPI
 
-Due to training safety and reliability requirements, MindSpore GPUs also support **distributed training without relying on OpenMPI**.
+Due to training safety and reliability requirements, MindSpore on GPU also supports **distributed training without relying on OpenMPI**. Users can click [Multi-Card Startup Method](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html#multi-card-startup-method) to check which multi-card startup methods are supported on each platform.
 
 OpenMPI plays the role of synchronizing data and inter-process networking on the Host side in distributed training scenarios. MindSpore replaces openMPI capabilities by **reusing the Parameter Server mode training architecture**.
 
diff --git a/tutorials/source_en/beginner/quick_start.md b/tutorials/source_en/beginner/quick_start.md
index 2d776aec91..54b194dac5 100644
--- a/tutorials/source_en/beginner/quick_start.md
+++ b/tutorials/source_en/beginner/quick_start.md
@@ -2,7 +2,7 @@
 
 [Introduction](https://www.mindspore.cn/tutorials/en/master/beginner/introduction.html) || **Quick Start** || [Tensor](https://www.mindspore.cn/tutorials/en/master/beginner/tensor.html) || [Dataset](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html) || [Transforms](https://www.mindspore.cn/tutorials/en/master/beginner/transforms.html) || [Model](https://www.mindspore.cn/tutorials/en/master/beginner/model.html) || [Autograd](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html) || [Train](https://www.mindspore.cn/tutorials/en/master/beginner/train.html) || [Save and load](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html) || [Infer](https://www.mindspore.cn/tutorials/en/master/beginner/infer.html)
 
-# Quick Start: Linear Fitting
+# Quick Start
 
 This section quickly implements a simple deep learning model through MindSpore APIs. For a deeper understanding of how to use MindSpore, see the reference links provided at the end of each section.
 
--
Gitee
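
For reference, below is a minimal sketch of the `mpirun`-based data-parallel launch that the `train_ascend.md` section touched by this patch refers to. The script name `train.py`, the 8-device count, and the exact context settings are illustrative assumptions, not part of the patch or the tutorial text, and assume a recent MindSpore version where these top-level APIs are available.

```python
# Hypothetical launch command (one host, 8 Ascend devices):
#   mpirun -n 8 python train.py
import mindspore as ms
from mindspore.communication import init, get_rank

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
init()  # sets up the communication group for the processes spawned by mpirun
ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL,
                             gradients_mean=True)
print("this process is rank", get_rank())
# ...build the network, dataset, and Model, then call model.train() as in the tutorial
```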