diff --git a/tutorials/source_en/parallel/distributed_case.rst b/tutorials/source_en/parallel/distributed_case.rst
index aca997e988c9e56f96a1db2c3ba7d2cbd60e986c..a7887c21b1e3c03a28e356940cb897be10cf24d0 100644
--- a/tutorials/source_en/parallel/distributed_case.rst
+++ b/tutorials/source_en/parallel/distributed_case.rst
@@ -8,5 +8,4 @@ Distributed High-Level Configuration Case
 .. toctree::
   :maxdepth: 1
 
-  multiple_mixed
-  ms_operator
\ No newline at end of file
+  multiple_mixed
\ No newline at end of file
diff --git a/tutorials/source_en/parallel/overview.md b/tutorials/source_en/parallel/overview.md
index 164ef39f41041abc9bfdb873261a27f9b380c0ae..3c2d7918484dcbf6ef27d536fa0d4ef646777910 100644
--- a/tutorials/source_en/parallel/overview.md
+++ b/tutorials/source_en/parallel/overview.md
@@ -66,4 +66,3 @@ If there is a requirement for performance, throughput, or scale, or if you don't
 ## Distributed High-Level Configuration Examples
 
 - **Multi-dimensional Hybrid Parallel Case Based on Double Recursive Search**: Multi-dimensional hybrid parallel based on double recursive search means that the user can configure optimization methods such as recomputation, optimizer parallel, pipeline parallel. Based on the user configurations, the operator-level strategy is automatically searched by the double recursive strategy search algorithm, which generates the optimal parallel strategy. For details, please refer to the [Multi-dimensional Hybrid Parallel Case Based on Double Recursive Search](https://www.mindspore.cn/tutorials/en/r2.7.0rc1/parallel/multiple_mixed.html).
-- **Performing Distributed Training on K8S Clusters**: MindSpore Operator is a plugin that follows Kubernetes' Operator pattern (based on the CRD-Custom Resource Definition feature) and implements distributed training on Kubernetes. MindSpore Operator defines Scheduler, PS, Worker three roles in CRD, and users can easily use MindSpore on K8S for distributed training through simple YAML file configuration. The code repository of mindSpore Operator is described in: [ms-operator](https://gitee.com/mindspore/ms-operator/). For details, please refer to the [Performing Distributed Training on K8S Clusters](https://www.mindspore.cn/tutorials/en/r2.7.0rc1/parallel/ms_operator.html).
diff --git a/tutorials/source_zh_cn/parallel/distributed_case.rst b/tutorials/source_zh_cn/parallel/distributed_case.rst
index fc09a6810098eff245e2ae6ca4217230c74d20ad..da4891d22d2c1bd4c8d7d72b852d87e6c9621c70 100644
--- a/tutorials/source_zh_cn/parallel/distributed_case.rst
+++ b/tutorials/source_zh_cn/parallel/distributed_case.rst
@@ -9,4 +9,3 @@
   :maxdepth: 1
 
   multiple_mixed
-  ms_operator
diff --git a/tutorials/source_zh_cn/parallel/overview.md b/tutorials/source_zh_cn/parallel/overview.md
index 7e5ef7331154c31c1f2fb9a60a79362170a6a629..437d215ea8e120cbf4d4dadb848b25d51e335b31 100644
--- a/tutorials/source_zh_cn/parallel/overview.md
+++ b/tutorials/source_zh_cn/parallel/overview.md
@@ -66,4 +66,3 @@ MindSpore提供两种粒度的算子级并行能力:算子级并行和高阶
 ## 分布式高阶配置案例
 
 - **基于双递归搜索的多维混合并行案例**:基于双递归搜索的多维混合并行是指用户可以配置重计算、优化器并行、流水线并行等优化方法,在用户配置的基础上,通过双递归策略搜索算法进行算子级策略自动搜索,进而生成最优的并行策略。详细可参考[基于双递归搜索的多维混合并行案例](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0rc1/parallel/multiple_mixed.html)。
-- **在K8S集群上进行分布式训练**:MindSpore Operator是遵循Kubernetes的Operator模式(基于CRD-Custom Resource Definition功能),实现的在Kubernetes上进行分布式训练的插件。其中,MindSpore Operator在CRD中定义了Scheduler、PS、Worker三种角色,用户只需通过简单的YAML文件配置,就可以轻松地在K8S上进行分布式训练。MindSpore Operator的代码仓详见:[ms-operator](https://gitee.com/mindspore/ms-operator/)。详细可参考[在K8S集群上进行分布式训练](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0rc1/parallel/ms_operator.html)。
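For context on what the removed pages covered: the deleted overview bullets describe ms-operator as a Kubernetes Operator built on the CRD mechanism, defining Scheduler, PS, and Worker roles that users drive through a YAML manifest. Below is a minimal sketch of what such a manifest could look like. The `MSJob` kind, API group, and all field names are assumptions for illustration only, not the verified CRD schema; consult the [ms-operator](https://gitee.com/mindspore/ms-operator/) repository for the actual definition.

```yaml
# Hypothetical manifest, for illustration only: the kind, apiVersion, and
# field names are assumed, not taken from the actual ms-operator CRD schema.
apiVersion: mindspore.gitee.com/v1        # assumed API group/version
kind: MSJob                               # assumed CRD kind
metadata:
  name: msjob-example
spec:
  replicaSpecs:                           # the removed docs name three CRD roles
    Scheduler:
      replicas: 1
      template:
        spec:
          containers:
            - name: mindspore
              image: mindspore/mindspore-gpu:latest   # placeholder image
    PS:
      replicas: 2
      template:
        spec:
          containers:
            - name: mindspore
              image: mindspore/mindspore-gpu:latest
    Worker:
      replicas: 4
      template:
        spec:
          containers:
            - name: mindspore
              image: mindspore/mindspore-gpu:latest
              command: ["python", "/train.py"]        # placeholder entrypoint
```

Once such a manifest is applied (for example with `kubectl apply -f msjob.yaml`), the operator's controller is responsible for creating the pods for each role; that lifecycle is the subject the removed tutorial page covered.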