From ee253ed95e74f58133efa91ace77307580b9f369 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=AE=A6=E6=99=93=E7=8E=B2?= <3174348550@qq.com>
Date: Mon, 22 Sep 2025 17:11:48 +0800
Subject: [PATCH] modify image format

---
 docs/mindspore/source_en/features/parallel/auto_parallel.md    | 2 +-
 docs/mindspore/source_zh_cn/features/parallel/auto_parallel.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/mindspore/source_en/features/parallel/auto_parallel.md b/docs/mindspore/source_en/features/parallel/auto_parallel.md
index 941ac02b2f..ac505df803 100644
--- a/docs/mindspore/source_en/features/parallel/auto_parallel.md
+++ b/docs/mindspore/source_en/features/parallel/auto_parallel.md
@@ -83,7 +83,7 @@ The following figure illustrates an example process of applying Sharding Propaga
 2. Next, it enumerates possible strategies and the Tensor Redistribution costs for each edge. Demonstrated in figure (c), the strategy for an edge is defined as a pair [*s_strategy*, *t_strategy*], where *s_strategy* and *t_strategy* denote Sharding Strategy for source operator and target operator, respectively.
 3. Finally, starting from the configured operator, it determines the next operator's Sharding Strategy, such that the communication cost in Tensor Redistribution is minimized. The propagation ends when the Sharding Strategies for all operators are settled, as shown in figure (d).
 
-[![An example process of applying Sharding Propagation](./images/sharding_propagation.png)](./images/sharding_propagation.png)
+![image](./images/sharding_propagation.png)
 
 ### Double Recursive Strategy Search Algorithm
 
diff --git a/docs/mindspore/source_zh_cn/features/parallel/auto_parallel.md b/docs/mindspore/source_zh_cn/features/parallel/auto_parallel.md
index 8ced6dc818..5db7bccb64 100644
--- a/docs/mindspore/source_zh_cn/features/parallel/auto_parallel.md
+++ b/docs/mindspore/source_zh_cn/features/parallel/auto_parallel.md
@@ -83,7 +83,7 @@ MindSpore将单机版本的程序转换成并行版本的程序。该转换是
 2. 为每条边枚举重排布策略和相应的代价。如下图(c)所示，这里的重排布策略定义为二元组[*s_strategy* , *t_strategy* ]，其中 *s_strategy*表示的是源算子（下图(c)中的ReLU）的切分策略， *t_strategy*表示的是目的算子（下图(c)中的MatMul）的切分策略。
 3. 当沿着一条边传播到下一个算子时（如图中ReLU切分策略已确定，为[2, 4]，下一步要决定MatMul算子的切分策略），在表中选择引起通信代价最小的目的算子策略（即为MatMul选择[[2,4], [4, 1]]）。最后，所有算子的切分策略都被确定，如下图(d)所示。
 
-[![切分策略传播的流程实例](./images/sharding_propagation_zh.png)](./images/sharding_propagation_zh.png)
+![image](./images/sharding_propagation_zh.png)
 
 ### 双递归策略搜索算法
 
--
Gitee